Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory | Accidental Deletions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/accidental-deletions.md | The Azure AD provisioning service includes a feature to help avoid accidental de The Azure AD provisioning service includes a feature to help avoid accidental deletions. This feature ensures that users aren't disabled or deleted in the target tenant unexpectedly. ::: zone-end -The feature lets you specify a deletion threshold, above which an admin -needs to explicitly choose to allow the deletions to be processed. +You use accidental deletions to specify a deletion threshold. Anything above the threshold that you set requires an admin to explicitly allow the processing of the deletions. ## Configure accidental deletion prevention threshold. 5. Ensure the **Notification Email** address is completed. - If the deletion threshold is met, an email will be sent. + If the deletion threshold is met, an email is sent. 6. Select **Save** to save the changes. -When the deletion threshold is met, the job will go into quarantine and a notification email will be sent. The quarantined job can then be allowed or rejected. To learn more about quarantine behavior, see [Application provisioning in quarantine status](application-provisioning-quarantine-status.md). +When the deletion threshold is met, the job goes into quarantine, and a notification email is sent. The quarantined job can then be allowed or rejected. To learn more about quarantine behavior, see [Application provisioning in quarantine status](application-provisioning-quarantine-status.md). ## Recovering from an accidental deletion-If you encounter an accidental deletion, you'll see it on the provisioning status page. It will say **Provisioning has been quarantined. See quarantine details for more information**. +When you encounter an accidental deletion, you see it on the provisioning status page. It says `Provisioning has been quarantined. See quarantine details for more information`. You can click either **Allow deletes** or **View provisioning logs**. ### Allowing deletions -The **Allow deletes** action will delete the objects that triggered the accidental delete threshold. Use the following procedure to accept the deletes. +The **Allow deletes** action deletes the objects that triggered the accidental delete threshold. Use the procedure to accept the deletions. 1. Select **Allow deletes**. 2. Click **Yes** on the confirmation to allow the deletions.-3. You'll see confirmation that the deletions were accepted and the status will return to healthy with the next cycle. +3. View the confirmation that the deletions were accepted. The status returns to healthy with the next cycle. ### Rejecting deletions -If you don't want to allow the deletions, you need to do the following: +Investigate and reject deletions as necessary: - Investigate the source of the deletions. You can use the provisioning logs for details. - Prevent the deletion by assigning the user / group to the application (or configuration) again, restoring the user / group, or updating your provisioning configuration. - Once you've made the necessary changes to prevent the user / group from being deleted, restart provisioning. Don't restart provisioning until you've made the necessary changes to prevent the users / groups from being deleted. 
### Test deletion prevention-You can test the feature by triggering disable / deletion events by setting the threshold to a low number, for example 3, and then changing scoping filters, un-assigning users, and deleting users from the directory (see common scenarios in next section). +You can test the feature by triggering disable / deletion events by setting the threshold to a low number, for example 3, and then changing scoping filters, unassigning users, and deleting users from the directory (see common scenarios in next section). -Let the provisioning job run (20 – 40 mins) and navigate back to the provisioning page. You'll see the provisioning job in quarantine and can choose to allow the deletions or review the provisioning logs to understand why the deletions occurred. +Let the provisioning job run (20 – 40 mins) and navigate back to the provisioning page. Check the provisioning job in quarantine and choose to allow the deletions or review the provisioning logs to understand why the deletions occurred. ## Common deprovisioning scenarios to test - Delete a user / put them into the recycle bin. To learn more about deprovisioning scenarios, see [How Application Provisioning ## Frequently Asked Questions ### What scenarios count toward the deletion threshold?-When a user is set to be removed from the target application (or target tenant), it will be counted against the +When a user is set for removal from the target application (or target tenant), it's counted against the deletion threshold. Scenarios that could lead to a user being removed from the target application (or target tenant) could include: unassigning the user from the application (or configuration) and soft / hard deleting a user in the directory. Groups evaluated for deletion count towards the deletion threshold. In addition to deletions, the same functionality also works for disables. ### What is the interval that the deletion threshold is evaluated on? It's evaluated each cycle. If the number of deletions doesn't exceed the threshold during a -single cycle, the "circuit breaker" won't be triggered. If multiple cycles are needed to reach a -steady state, the deletion threshold will be evaluated per cycle. +single cycle, the "circuit breaker" isn't triggered. If multiple cycles are needed to reach a +steady state, the deletion threshold is evaluated per cycle. ### How are these deletion events logged? You can find users that should be disabled / deleted but haven't due to the deletion threshold. |
active-directory | Export Import Provisioning Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/export-import-provisioning-configuration.md | |
active-directory | On Premises Powershell Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-powershell-connector.md | The capabilities tab defines the behavior and functionality of the connector. Th |No Reference Values In First Export Pass|Unchecked|When checked, reference attributes are exported in a second export pass. | |Enable Object Rename|Unchecked|When checked, distinguished names can be modified. | |Delete-Add As Replace|Checked|Not supported. This will be ignored.|-|Enable Export Password in First Pass|Checked|Not supported. This will be ignored.| +|Enable Export Password in First Pass|Unchecked|Not supported. This will be ignored.| ### Global Parameters The Global Parameters tab enables you to configure the Windows PowerShell script |Partition Script|\<Blank>| |Hierarchy Script|\<Blank>| |Begin Import Script|\<Blank>|-|Import Script|Paste ImportData code as value| +|Import Script|[Paste the import script as the value](https://github.com/microsoft/MIMPowerShellConnectors/blob/master/src/ECMA2HostCSV/Scripts/Import%20Scripts.ps1)| |End Import Script|\<Blank>|-|Begin Export Script|Paste Begin export code as value| -|Export Script|Paste ExportData code as value| +|Begin Export Script|\<Blank>| +|Export Script|[Paste the export script as the value](https://github.com/microsoft/MIMPowerShellConnectors/blob/master/src/ECMA2HostCSV/Scripts/Export%20Script.ps1)| |End Export Script|\<Blank>| |Begin Password Script|\<Blank>| |Password Extension Script|\<Blank>| |
active-directory | On Premises Web Services Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-web-services-connector.md | + + Title: Azure AD provisioning to applications via web services connector +description: This document describes how to configure Azure AD to provision users with external systems that offer web services based APIs. +++++++ Last updated : 05/11/2023++++++# Provisioning with the web services connector +The following documentation provides information about the generic web services connector. Microsoft Entra Identity Governance supports provisioning accounts into various applications such as SAP ECC, Oracle eBusiness Suite, and line of business applications that expose REST or SOAP APIs. Customers that have previously deployed MIM to connect to these applications can easily switch to using the lightweight Azure AD provisioning agent, while reusing the same web services connector built for MIM. ++## Capabilities supported ++> [!div class="checklist"] +> - Create users in your application. +> - Remove users in your application when they don't need access anymore. +> - Keep user attributes synchronized between Azure AD and your application. +> - Discover the schema for your application. ++The web services connector implements the following functionalities: ++- SOAP Discovery: Allows the administrator to enter the WSDL path exposed by the target web service. Discovery will produce a tree structure of its hosted web services with their inner endpoint(s)/operations along with the operation's Meta data description. There's no limit to the number of discovery operations that can be done (step by step). The discovered operations are used later to configure the flow of operations that implement the connector's operations against the data-source (as Import/Export). ++- REST Discovery: Allows the administrator to enter Restful service details i.e. Service Endpoint, Resource Path, Method and Parameter details. A user can add an unlimited number of Restful services. The rest services information will be stored in the ```discovery.xml``` file of the ```wsconfig``` project. They'll be used later by the user to configure the Rest Web Service activity in the workflow. ++- Connector Space Schema configuration: Allows the administrator to configure the connector space schema. The schema configuration will include a listing of Object Types and attributes for a specific implementation. The administrator can specify the object types that will be supported by the Web Service MA. The administrator may also choose here the attributes that will be part of the Connector space Schema. ++- Operation Flow configuration: Workflow designer UI for configuring the implementation of FIM operations (Import/Export) per object type through exposed web service operations functions such as: ++ - Assignment of parameters from connector space to web service functions. + - Assignment of parameters from web service functions to the connector space. +++## Documentation for popular applications +Integrations with popular applications such as SAP ECC and Oracle eBusiness Suite can be found [here](https://www.microsoft.com/download/details.aspx?id=51495). You can also configure a template to connect to your own [rest or SOAP API](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-ma-ws). 
+++For more information, see [the Overview of the generic Web Service connector](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-ma-ws) in the MIM documentation library. ++## Next steps ++- [App provisioning](user-provisioning.md) +- [ECMA Connector Host generic SQL connector](tutorial-ecma-sql-connector.md) +- [ECMA Connector Host LDAP connector](on-premises-ldap-connector-configure.md) |
active-directory | Workday Retrieve Pronoun Information | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/workday-retrieve-pronoun-information.md | |
active-directory | Active Directory Optional Claims | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-optional-claims.md | The set of optional claims available by default for applications to use are list | `xms_tpl` | Tenant preferred language| JWT | | The resource tenant's preferred language, if set. Formatted LL ("en"). | | `ztdid` | Zero-touch Deployment ID | JWT | | The device identity used for [Windows AutoPilot](/windows/deployment/windows-autopilot/windows-10-autopilot) | +> [!WARNING] +> Never use `email` or `upn` claim values to store or determine whether the user in an access token should have access to data. Mutable claim values like these can change over time, making them insecure and unreliable for authorization. + ## v2.0-specific optional claims set These claims are always included in v1.0 Azure AD tokens, but not included in v2.0 tokens unless requested. These claims are only applicable for JWTs (ID tokens and Access Tokens). |
active-directory | Custom Extension Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-extension-get-started.md | Next, you register the custom extension. You register the custom extension by as "@odata.type": "#microsoft.graph.azureAdTokenAuthentication", "resourceId": "{functionApp_IdentifierUri}" },- "clientConfiguration": { - "timeoutInMilliseconds": 2000, - "maximumRetries": 1 - }, "claimsForTokenConfiguration": [ { "claimIdInApiResponse": "DateOfBirth" |
active-directory | Howto Create Service Principal Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-create-service-principal-portal.md | Title: Create an Azure AD app and service principal in the portal description: Create a new Azure Active Directory app and service principal to manage access to resources with role-based access control in Azure Resource Manager. -+ Previously updated : 02/01/2023 Last updated : 05/12/2023 When programmatically signing in, pass the tenant ID and the application ID in y ## Set up authentication -There are two types of authentication available for service principals: password-based authentication (application secret) and certificate-based authentication. *We recommend using a certificate*, but you can also create an application secret. +There are two types of authentication available for service principals: password-based authentication (application secret) and certificate-based authentication. *We recommend using a trusted certificate issued by a certificate authority*, but you can also create an application secret or create a self-signed certificate for testing. -### Option 1 (recommended): Create and upload a self-signed certificate +### Option 1 (recommended): Upload a trusted certificate issued by a certificate authority -You can use an existing certificate if you've one. Optionally, you can create a self-signed certificate for *testing purposes only*. To create a self-signed certificate, open Windows PowerShell and run [New-SelfSignedCertificate](/powershell/module/pki/new-selfsignedcertificate) with the following parameters to create the certificate in the user certificate store on your computer: +To upload the certificate file: ++1. Search for and select **Azure Active Directory**. +1. From **App registrations** in Azure AD, select your application. +1. Select **Certificates & secrets**. +1. Select **Certificates**, then select **Upload certificate** and then select the certificate file to upload. +1. Select **Add**. Once the certificate is uploaded, the thumbprint, start date, and expiration values are displayed. ++After registering the certificate with your application in the application registration portal, enable the [confidential client application](authentication-flows-app-scenarios.md#single-page-public-client-and-confidential-client-applications) code to use the certificate. ++### Option 2: Testing only- create and upload a self-signed certificate ++Optionally, you can create a self-signed certificate for *testing purposes only*. To create a self-signed certificate, open Windows PowerShell and run [New-SelfSignedCertificate](/powershell/module/pki/new-selfsignedcertificate) with the following parameters to create the certificate in the user certificate store on your computer: ```powershell $cert=New-SelfSignedCertificate -Subject "CN=DaemonConsoleCert" -CertStoreLocation "Cert:\CurrentUser\My" -KeyExportPolicy Exportable -KeySpec Signature To upload the certificate: 1. Select **Certificates**, then select **Upload certificate** and then select the certificate (an existing certificate or the self-signed certificate you exported). 1. Select **Add**. -After registering the certificate with your application in the application registration portal, enable the client application code to use the certificate. 
+After registering the certificate with your application in the application registration portal, enable the [confidential client application](authentication-flows-app-scenarios.md#single-page-public-client-and-confidential-client-applications) code to use the certificate. -### Option 2: Create a new application secret +### Option 3: Create a new application secret If you choose not to use a certificate, you can create a new application secret. |
active-directory | Tutorial V2 Android | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-android.md | -When you've completed this tutorial, your application will accept sign-ins of personal Microsoft accounts (including outlook.com, live.com, and others) and work or school accounts from any company or organization that uses Azure AD. +When you've completed this tutorial, your application accepts sign-ins of personal Microsoft accounts (including outlook.com, live.com, and others) and work or school accounts from any company or organization that uses Azure AD. In this tutorial: In this tutorial: - [Android Studio](https://developer.android.com/studio) - [Android documentation on generating a key](https://developer.android.com/studio/publish/app-signing#generate-key)+- [Layout resource](https://developer.android.com/guide/topics/resources/layout-resource) ## How this tutorial works  -The app in this tutorial will sign in users and get data on their behalf. This data will be accessed through a protected API (Microsoft Graph API) that requires authorization and is protected by the Microsoft identity platform. +The app in this tutorial signs in users and get data on their behalf. This data is accessed through a protected API (Microsoft Graph API) that requires authorization and is protected by the Microsoft identity platform. This sample uses the Microsoft Authentication Library (MSAL) for Android to implement Authentication: [com.microsoft.identity.client](https://javadoc.io/doc/com.microsoft.identity.client/msal). Follow these steps to create a new project if you don't already have an Android 1. In the Android Studio project window, navigate to **app** > **build.gradle** and add the following libraries in the _dependencies_ section: ```gradle- implementation 'com.microsoft.identity.client:msal:4.2.0' + implementation 'com.microsoft.identity.client:msal:4.5.0' implementation 'com.android.volley:volley:1.2.1' ``` Follow these steps to create a new project if you don't already have an Android queue.add(request); } }- ``` 1. Open _OnFragmentInteractionListener.java_ and replace the code with following code snippet to allow communication between different fragments: Follow these steps to create a new project if you don't already have an Android .show(); } }- ``` 1. Open _MainActivity.java_ and replace the code with following code snippet to manage the UI. 
Follow these steps to create a new project if you don't already have an Android OnFragmentInteractionListener{ enum AppFragment {- SingleAccount, - MultipleAccount, - B2C + SingleAccount } private AppFragment mCurrentFragment; Follow these steps to create a new project if you don't already have an Android setCurrentFragment(AppFragment.SingleAccount); } - if (id == R.id.nav_multiple_account) { - setCurrentFragment(AppFragment.MultipleAccount); - } -- if (id == R.id.nav_b2c) { - setCurrentFragment(AppFragment.B2C); - } drawer.removeDrawerListener(this); } Follow these steps to create a new project if you don't already have an Android getSupportActionBar().setTitle("Single Account Mode"); return; - case MultipleAccount: - getSupportActionBar().setTitle("Multiple Account Mode"); - return; -- case B2C: - getSupportActionBar().setTitle("B2C Mode"); - return; } } Follow these steps to create a new project if you don't already have an Android attachFragment(new com.azuresamples.msalandroidapp.SingleAccountModeFragment()); return; - case MultipleAccount: - attachFragment(new MultipleAccountModeFragment()); - return; -- case B2C: - attachFragment(new B2CModeFragment()); - return; } } Follow these steps to create a new project if you don't already have an Android .commit(); } }- ``` > [!NOTE] Follow these steps to create a new project if you don't already have an Android ### Layout -If you would like to model your UI off this tutorial, the following is a sample **activity_main.xml**. +A layout is a file that defines the visual structure and appearance of a user interface, specifying the arrangement of UI components. It's written in XML. The following XML samples are provided if you would like to model your UI off this tutorial: 1. In **app** > **src** > **main**> **res** > **layout** > **activity_main.xml**. Replace the content of **activity_main.xml** with the following code snippet to display buttons and text boxes: + ```xml + <?xml version="1.0" encoding="utf-8"?> + <androidx.drawerlayout.widget.DrawerLayout xmlns:android="http://schemas.android.com/apk/res/android" + xmlns:app="http://schemas.android.com/apk/res-auto" + xmlns:tools="http://schemas.android.com/tools" + android:id="@+id/drawer_layout" + android:layout_width="match_parent" + android:layout_height="match_parent" + android:fitsSystemWindows="true" + tools:openDrawer="start"> ++ <include + layout="@layout/app_bar_main" + android:layout_width="match_parent" + android:layout_height="match_parent" /> ++ <com.google.android.material.navigation.NavigationView + android:id="@+id/nav_view" + android:layout_width="wrap_content" + android:layout_height="match_parent" + android:layout_gravity="start" + android:fitsSystemWindows="true" + app:headerLayout="@layout/nav_header_main" + app:menu="@menu/activity_main_drawer" /> ++ </androidx.drawerlayout.widget.DrawerLayout> + ``` ++1. In **app** > **src** > **main**> **res** > **layout** > **app_bar_main.xml**. 
If you don't have **app_bar_main.xml** in your folder, create and add the following code snippet: + ```xml <?xml version="1.0" encoding="utf-8"?>- <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" + <androidx.coordinatorlayout.widget.CoordinatorLayout xmlns:android="http://schemas.android.com/apk/res/android" + xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools"- android:id="@+id/activity_main" android:layout_width="match_parent" android:layout_height="match_parent"- android:background="#FFFFFF" - android:orientation="vertical" tools:context=".MainActivity"> - <LinearLayout + <com.google.android.material.appbar.AppBarLayout android:layout_width="match_parent" android:layout_height="wrap_content"- android:orientation="horizontal" - android:paddingTop="5dp" - android:paddingBottom="5dp" - android:weightSum="10"> -- <Button - android:id="@+id/signIn" - android:layout_width="0dp" - android:layout_height="wrap_content" - android:layout_weight="5" - android:gravity="center" - android:text="Sign In"/> -- <Button - android:id="@+id/clearCache" - android:layout_width="0dp" - android:layout_height="wrap_content" - android:layout_weight="5" - android:gravity="center" - android:text="Sign Out" - android:enabled="false"/> + android:theme="@style/AppTheme.AppBarOverlay"> - </LinearLayout> - <LinearLayout - android:layout_width="match_parent" - android:layout_height="wrap_content" - android:gravity="center" - android:orientation="horizontal"> -- <Button - android:id="@+id/callGraphInteractive" - android:layout_width="0dp" - android:layout_height="wrap_content" - android:layout_weight="5" - android:text="Get Graph Data Interactively" - android:enabled="false"/> -- <Button - android:id="@+id/callGraphSilent" - android:layout_width="0dp" - android:layout_height="wrap_content" - android:layout_weight="5" - android:text="Get Graph Data Silently" - android:enabled="false"/> - </LinearLayout> + <androidx.appcompat.widget.Toolbar + android:id="@+id/toolbar" + android:layout_width="match_parent" + android:layout_height="?attr/actionBarSize" + android:background="?attr/colorPrimary" + app:popupTheme="@style/AppTheme.PopupOverlay" /> ++ </com.google.android.material.appbar.AppBarLayout> ++ <include layout="@layout/content_main" /> ++ </androidx.coordinatorlayout.widget.CoordinatorLayout> + ``` ++1. In **app** > **src** > **main**> **res** > **layout** > **content_main.xml**. If you don't have **content_main.xml** in your folder, create and add the following code snippet: ++ ```xml + <?xml version="1.0" encoding="utf-8"?> + <androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android" + android:id="@+id/content_main" + xmlns:app="http://schemas.android.com/apk/res-auto" + xmlns:tools="http://schemas.android.com/tools" + android:layout_width="match_parent" + android:layout_height="match_parent" + app:layout_behavior="@string/appbar_scrolling_view_behavior" + tools:context=".MainActivity" + tools:showIn="@layout/app_bar_main"> ++ </androidx.constraintlayout.widget.ConstraintLayout> + ``` ++1. In **app** > **src** > **main**> **res** > **layout** > **fragment_m_s_graph_request_wrapper.xml**. 
If you don't have **fragment_m_s_graph_request_wrapper.xml** in your folder, create and add the following code snippet: ++ ```xml + <?xml version="1.0" encoding="utf-8"?> + <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android" + xmlns:tools="http://schemas.android.com/tools" + android:layout_width="match_parent" + android:layout_height="match_parent" + tools:context=".MSGraphRequestWrapper"> + <!-- TODO: Update blank fragment layout --> <TextView- android:text="Getting Graph Data..." - android:textColor="#3f3f3f" android:layout_width="match_parent"- android:layout_height="wrap_content" - android:layout_marginLeft="5dp" - android:id="@+id/graphData" - android:visibility="invisible"/> + android:layout_height="match_parent" + android:text="@string/hello_blank_fragment" /> ++ </FrameLayout> + ``` ++1. In **app** > **src** > **main**> **res** > **layout** > **fragment_on_interaction_listener.xml**. If you don't have **fragment_on_interaction_listener.xml** in your folder, create and add the following code snippet: ++ ```xml + <?xml version="1.0" encoding="utf-8"?> + <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android" + xmlns:tools="http://schemas.android.com/tools" + android:layout_width="match_parent" + android:layout_height="match_parent" + tools:context=".OnFragmentInteractionListener"> + <!-- TODO: Update blank fragment layout --> <TextView- android:id="@+id/current_user" android:layout_width="match_parent"- android:layout_height="0dp" - android:layout_marginTop="20dp" - android:layout_weight="0.8" - android:text="Account info goes here..." /> + android:layout_height="match_parent" + android:text="@string/hello_blank_fragment" /> ++ </FrameLayout> + ``` ++1. In **app** > **src** > **main**> **res** > **layout** > **fragment_single_account_mode.xml**. 
If you don't have **fragment_single_account_mode.xml** in your folder, create and add the following code snippet: ++ ```xml + <?xml version="1.0" encoding="utf-8"?> + <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android" + xmlns:tools="http://schemas.android.com/tools" + android:layout_width="match_parent" + android:layout_height="match_parent" + tools:context=".SingleAccountModeFragment"> ++ <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" + android:layout_width="match_parent" + android:layout_height="match_parent" + android:orientation="vertical" + tools:context=".SingleAccountModeFragment"> ++ <LinearLayout + android:id="@+id/activity_main" + android:layout_width="match_parent" + android:layout_height="match_parent" + android:orientation="vertical" + android:paddingLeft="@dimen/activity_horizontal_margin" + android:paddingRight="@dimen/activity_horizontal_margin" + android:paddingBottom="@dimen/activity_vertical_margin"> ++ <LinearLayout + android:layout_width="match_parent" + android:layout_height="wrap_content" + android:orientation="horizontal" + android:paddingTop="5dp" + android:paddingBottom="5dp" + android:weightSum="10"> ++ <TextView + android:layout_width="0dp" + android:layout_height="wrap_content" + android:layout_weight="3" + android:layout_gravity="center_vertical" + android:textStyle="bold" + android:text="Scope" /> ++ <LinearLayout + android:layout_width="0dp" + android:layout_height="wrap_content" + android:orientation="vertical" + android:layout_weight="7"> ++ <EditText + android:id="@+id/scope" + android:layout_height="wrap_content" + android:layout_width="match_parent" + android:text="user.read" + android:textSize="12sp" /> ++ <TextView + android:layout_height="wrap_content" + android:layout_width="match_parent" + android:paddingLeft="5dp" + android:text="Type in scopes delimited by space" + android:textSize="10sp" /> ++ </LinearLayout> + </LinearLayout> ++ <LinearLayout + android:layout_width="match_parent" + android:layout_height="wrap_content" + android:orientation="horizontal" + android:paddingTop="5dp" + android:paddingBottom="5dp" + android:weightSum="10"> ++ <TextView + android:layout_width="0dp" + android:layout_height="wrap_content" + android:layout_weight="3" + android:layout_gravity="center_vertical" + android:textStyle="bold" + android:text="MSGraph Resource URL" /> ++ <LinearLayout + android:layout_width="0dp" + android:layout_height="wrap_content" + android:orientation="vertical" + android:layout_weight="7"> ++ <EditText + android:id="@+id/msgraph_url" + android:layout_height="wrap_content" + android:layout_width="match_parent" + android:textSize="12sp" /> + </LinearLayout> + </LinearLayout> ++ <LinearLayout + android:layout_width="match_parent" + android:layout_height="wrap_content" + android:orientation="horizontal" + android:paddingTop="5dp" + android:paddingBottom="5dp" + android:weightSum="10"> ++ <TextView + android:layout_width="0dp" + android:layout_height="wrap_content" + android:layout_weight="3" + android:textStyle="bold" + android:text="Signed-in user" /> ++ <TextView + android:id="@+id/current_user" + android:layout_width="0dp" + android:layout_height="wrap_content" + android:paddingLeft="5dp" + android:layout_weight="7" + android:text="None" /> + </LinearLayout> ++ <LinearLayout + android:layout_width="match_parent" + android:layout_height="wrap_content" + android:orientation="horizontal" + android:paddingTop="5dp" + android:paddingBottom="5dp" + android:weightSum="10"> ++ <TextView + 
android:layout_width="0dp" + android:layout_height="wrap_content" + android:layout_weight="3" + android:textStyle="bold" + android:text="Device mode" /> ++ <TextView + android:id="@+id/device_mode" + android:layout_width="0dp" + android:layout_height="wrap_content" + android:paddingLeft="5dp" + android:layout_weight="7" + android:text="None" /> + </LinearLayout> ++ <LinearLayout + android:layout_width="match_parent" + android:layout_height="wrap_content" + android:orientation="horizontal" + android:paddingTop="5dp" + android:paddingBottom="5dp" + android:weightSum="10"> ++ <Button + android:id="@+id/btn_signIn" + android:layout_width="0dp" + android:layout_height="wrap_content" + android:layout_weight="5" + android:gravity="center" + android:text="Sign In"/> ++ <Button + android:id="@+id/btn_removeAccount" + android:layout_width="0dp" + android:layout_height="wrap_content" + android:layout_weight="5" + android:gravity="center" + android:text="Sign Out" + android:enabled="false"/> + </LinearLayout> +++ <LinearLayout + android:layout_width="match_parent" + android:layout_height="wrap_content" + android:gravity="center" + android:orientation="horizontal"> ++ <Button + android:id="@+id/btn_callGraphInteractively" + android:layout_width="0dp" + android:layout_height="wrap_content" + android:layout_weight="5" + android:text="Get Graph Data Interactively" + android:enabled="false"/> ++ <Button + android:id="@+id/btn_callGraphSilently" + android:layout_width="0dp" + android:layout_height="wrap_content" + android:layout_weight="5" + android:text="Get Graph Data Silently" + android:enabled="false"/> + </LinearLayout> +++ <TextView + android:id="@+id/txt_log" + android:layout_width="match_parent" + android:layout_height="0dp" + android:layout_marginTop="20dp" + android:layout_weight="0.8" + android:text="Output goes here..." /> ++ </LinearLayout> + </LinearLayout> ++ </FrameLayout> + ``` ++1. In **app** > **src** > **main**> **res** > **layout** > **nav_header_main.xml**. If you don't have **nav_header_main.xml** in your folder, create and add the following code snippet: ++ ```xml + <?xml version="1.0" encoding="utf-8"?> + <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" + xmlns:app="http://schemas.android.com/apk/res-auto" + android:layout_width="match_parent" + android:layout_height="@dimen/nav_header_height" + android:background="@drawable/side_nav_bar" + android:gravity="bottom" + android:orientation="vertical" + android:paddingLeft="@dimen/activity_horizontal_margin" + android:paddingTop="@dimen/activity_vertical_margin" + android:paddingRight="@dimen/activity_horizontal_margin" + android:paddingBottom="@dimen/activity_vertical_margin" + android:theme="@style/ThemeOverlay.AppCompat.Dark"> ++ <ImageView + android:id="@+id/imageView" + android:layout_width="66dp" + android:layout_height="72dp" + android:contentDescription="@string/nav_header_desc" + android:paddingTop="@dimen/nav_header_vertical_spacing" + app:srcCompat="@drawable/microsoft_logo" /> <TextView- android:id="@+id/txt_log" android:layout_width="match_parent"- android:layout_height="0dp" - android:layout_marginTop="20dp" - android:layout_weight="0.8" - android:text="Output goes here..." 
/> + android:layout_height="wrap_content" + android:paddingTop="@dimen/nav_header_vertical_spacing" + android:text="Azure Samples" + android:textAppearance="@style/TextAppearance.AppCompat.Body1" /> ++ <TextView + android:id="@+id/textView" + android:layout_width="wrap_content" + android:layout_height="wrap_content" + android:text="MSAL Android" /> + </LinearLayout>+ ``` +1. In **app** > **src** > **main**> **res** > **values** > **dimens.xml**. Replace the content of **dimens.xml** with the following code snippet: ++ ```xml + <resources> + <dimen name="fab_margin">16dp</dimen> + <dimen name="activity_horizontal_margin">16dp</dimen> + <dimen name="activity_vertical_margin">16dp</dimen> + <dimen name="nav_header_height">176dp</dimen> + <dimen name="nav_header_vertical_spacing">8dp</dimen> + </resources> + ``` ++1. In **app** > **src** > **main**> **res** > **values** > **colors.xml**. Replace the content of **colors.xml** with the following code snippet: ++ ```xml + <?xml version="1.0" encoding="utf-8"?> + <resources> + <color name="purple_200">#FFBB86FC</color> + <color name="purple_500">#FF6200EE</color> + <color name="purple_700">#FF3700B3</color> + <color name="teal_200">#FF03DAC5</color> + <color name="teal_700">#FF018786</color> + <color name="black">#FF000000</color> + <color name="white">#FFFFFFFF</color> + <color name="colorPrimary">#008577</color> + <color name="colorPrimaryDark">#00574B</color> + <color name="colorAccent">#D81B60</color> + </resources> + ``` ++1. In **app** > **src** > **main**> **res** > **values** > **styles.xml**. If you don't have **styles.xml** in your folder, create and add the following code snippet: ++ ```xml + <resources> ++ <!-- Base application theme. --> + <style name="AppTheme" parent="Theme.AppCompat.Light.DarkActionBar"> + <!-- Customize your theme here. --> + <item name="colorPrimary">@color/colorPrimary</item> + <item name="colorPrimaryDark">@color/colorPrimaryDark</item> + <item name="colorAccent">@color/colorAccent</item> + </style> ++ <style name="AppTheme.NoActionBar"> + <item name="windowActionBar">false</item> + <item name="windowNoTitle">true</item> + </style> ++ <style name="AppTheme.AppBarOverlay" parent="ThemeOverlay.AppCompat.Dark.ActionBar" /> ++ <style name="AppTheme.PopupOverlay" parent="ThemeOverlay.AppCompat.Light" /> ++ </resources> + ``` ++1. In **app** > **src** > **main**> **res** > **values** > **themes.xml**. Replace the content of **themes.xml** with the following code snippet: ++ ```xml + <resources xmlns:tools="http://schemas.android.com/tools"> + <!-- Base application theme. --> + <style name="Theme.MSALAndroidapp" parent="Theme.MaterialComponents.DayNight.DarkActionBar"> + <!-- Primary brand color. --> + <item name="colorPrimary">@color/purple_500</item> + <item name="colorPrimaryVariant">@color/purple_700</item> + <item name="colorOnPrimary">@color/white</item> + <!-- Secondary brand color. --> + <item name="colorSecondary">@color/teal_200</item> + <item name="colorSecondaryVariant">@color/teal_700</item> + <item name="colorOnSecondary">@color/black</item> + <!-- Status bar color. --> + <item name="android:statusBarColor" tools:targetApi="21">?attr/colorPrimaryVariant</item> + <!-- Customize your theme here. 
--> + </style> ++ <style name="Theme.MSALAndroidapp.NoActionBar"> + <item name="windowActionBar">false</item> + <item name="windowNoTitle">true</item> + </style> ++ <style name="Theme.MSALAndroidapp.AppBarOverlay" parent="ThemeOverlay.AppCompat.Dark.ActionBar" /> ++ <style name="Theme.MSALAndroidapp.PopupOverlay" parent="ThemeOverlay.AppCompat.Light" /> + </resources> + ``` ++1. In **app** > **src** > **main**> **res** > **drawable** > **ic_single_account_24dp.xml**. If you don't have **ic_single_account_24dp.xml** in your folder, create and add the following code snippet: ++ ```xml + <vector xmlns:android="http://schemas.android.com/apk/res/android" + android:width="24dp" + android:height="24dp" + android:viewportWidth="24.0" + android:viewportHeight="24.0"> + <path + android:fillColor="#FF000000" + android:pathData="M12,12c2.21,0 4,-1.79 4,-4s-1.79,-4 -4,-4 -4,1.79 -4,4 1.79,4 4,4zM12,14c-2.67,0 -8,1.34 -8,4v2h16v-2c0,-2.66 -5.33,-4 -8,-4z"/> + </vector> + ``` ++1. In **app** > **src** > **main**> **res** > **drawable** > **side_nav_bar.xml**. If you don't have **side_nav_bar.xml** in your folder, create and add the following code snippet: ++ ```xml + <shape xmlns:android="http://schemas.android.com/apk/res/android" + android:shape="rectangle"> + <gradient + android:angle="135" + android:centerColor="#009688" + android:endColor="#00695C" + android:startColor="#4DB6AC" + android:type="linear" /> + </shape> + ``` ++1. In **app** > **src** > **main**> **res** > **drawable**. To the folder, add a png Microsoft logo named `microsoft_logo.png`. ++Declaring your UI in XML allows you to separate the presentation of your app from the code that controls its behavior. To learn more about Android layout, see [Layouts](https://developer.android.com/develop/ui/views/layout/declaring-layout) + ## Test your app ### Run locally |
active-directory | Multilateral Federation Solution One | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multilateral-federation-solution-one.md | -[](media/multilateral-federation-solution-one/azure-ad-cirrus-bridge.png#lightbox) +[](media/multilateral-federation-solution-one/cirrus-bridge.png#lightbox) If on-premises Active Directory is also being used, then [AD is configured](../hybrid/whatis-hybrid-identity.md) with hybrid identities. Implementing this Azure AD with Cirrus Bridge solution provides: If on-premises Active Directory is also being used, then [AD is configured](../h Implementing Azure AD with Cirrus bridge enables you to take advantage of more capabilities available in Azure AD: -* **External attribute store support** - [Azure AD custom claims provider](../develop/custom-claims-provider-overview.md) enables you to use an external attribute store (like an external LDAP Directory) to add additional claims into tokens on a per app basis. It uses a custom extension that calls an external REST API to fetch claims from external systems. +* **Custom claims provider support** - [Azure AD custom claims provider](../develop/custom-claims-provider-overview.md) enables you to use an external attribute store (like an external LDAP Directory) to add additional claims into tokens on a per app basis. It uses a custom extension that calls an external REST API to fetch claims from external systems. * **Custom security attributes** - Provides you with the ability to add custom attributes to objects in the directory and control who can read them. [Custom security attributes](../fundamentals/custom-security-attributes-overview.md) enable you to store more of your attributes directly in Azure AD. |
active-directory | Customize Workflow Email | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/customize-workflow-email.md | When customizing an email sent via Lifecycle workflows, you can choose to custom 1. Select the **Email Customization** tab. -1. On the email customization screen, enter a custom subject, message body, and the email language translation option that will be used to translate the message body of the email. +1. On the email customization screen, enter a custom subject, message body, and the email language translation option that will be used to translate the message body of the email. The custom subject and message body will not be translated. :::image type="content" source="media/customize-workflow-email/customize-workflow-email-example.png" alt-text="Screenshot of an example of a customized email from a workflow."::: 1. After making changes, select **save** to capture changes to the customized email. |
active-directory | Entitlement Management Access Package Assignments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-assignments.md | You can assign a user to an access package in PowerShell with the `New-MgEntitle ```powershell Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All" Select-MgProfile -Name "beta"-$accesspackage = Get-MgEntitlementManagementAccessPackage -DisplayNameEq "Marketing Campaign" -ExpandProperty "accessPackageAssignmentPolicies" -$policy = $accesspackage.AccessPackageAssignmentPolicies[0] +$accesspackage = Get-MgEntitlementManagementAccessPackage -DisplayNameEq "Marketing Campaign" -ExpandProperty "AssignmentPolicies" +$policy = $accesspackage.AssignmentPolicies[0] $req = New-MgEntitlementManagementAccessPackageAssignmentRequest -AccessPackageId $accesspackage.Id -AssignmentPolicyId $policy.Id -TargetId "a43ee6df-3cc5-491a-ad9d-ea964ef8e464" ``` For example, if you want to ensure all the users who are currently members of a Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All,Directory.Read.All" Select-MgProfile -Name "beta" $members = Get-MgGroupMember -GroupId "a34abd69-6bf8-4abd-ab6b-78218b77dc15"-$accesspackage = Get-MgEntitlementManagementAccessPackage -DisplayNameEq "Marketing Campaign" -ExpandProperty "accessPackageAssignmentPolicies" -$policy = $accesspackage.AccessPackageAssignmentPolicies[0] +$accesspackage = Get-MgEntitlementManagementAccessPackage -DisplayNameEq "Marketing Campaign" -ExpandProperty "AssignmentPolicies" +$policy = $accesspackage.AssignmentPolicies[0] $req = New-MgEntitlementManagementAccessPackageAssignment -AccessPackageId $accesspackage.Id -AssignmentPolicyId $policy.Id -RequiredGroupMember $members ``` If you wish to add an assignment for a user who is not yet in your directory, yo ```powershell Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All" Select-MgProfile -Name "beta"-$accesspackage = Get-MgEntitlementManagementAccessPackage -DisplayNameEq "Marketing Campaign" -ExpandProperty "accessPackageAssignmentPolicies" -$policy = $accesspackage.AccessPackageAssignmentPolicies[0] +$accesspackage = Get-MgEntitlementManagementAccessPackage -DisplayNameEq "Marketing Campaign" -ExpandProperty "AssignmentPolicies" +$policy = $accesspackage.AssignmentPolicies[0] $req = New-MgEntitlementManagementAccessPackageAssignmentRequest -AccessPackageId $accesspackage.Id -AssignmentPolicyId $policy.Id -TargetEmail "sample@example.com" ``` |
active-directory | Cloudflare Azure Ad Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/cloudflare-azure-ad-integration.md | See the [team domain](https://developers.cloudflare.com/cloudflare-one/glossary# ## Next steps - Go to developer.cloudflare.com for [Integrate SSO](https://developers.cloudflare.com/cloudflare-one/identity/idp-integration/)+- [Tutorial: Configure Conditional Access policies for Cloudflare Access](cloudflare-conditional-access-policies.md) - [Tutorial: Configure Cloudflare Web Application Firewall with Azure AD B2C](../../active-directory-b2c/partner-cloudflare.md) |
active-directory | Cloudflare Conditional Access Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/cloudflare-conditional-access-policies.md | + + Title: Tutorial to configure Conditional Access policies in Cloudflare Access +description: Configure Conditional Access to enforce application and user policies in Cloudflare Access +++++++ Last updated : 05/11/2023++++++# Tutorial: Configure Conditional Access policies in Cloudflare Access ++With Conditional Access, administrators enforce policies on application and user policies in Azure Active Directory (Azure AD). Conditional Access brings together identity-driven signals, to make decisions, and enforce organizational policies. Cloudflare Access creates access to self-hosted, software as a service (SaaS), or nonweb applications. ++Learn more: [What is Conditional Access?](../conditional-access/overview.md) ++## Prerequisites ++* An Azure AD subscription + * If you don't have one, get an [Azure free account](https://azure.microsoft.com/free/) +* An Azure AD tenant linked to the Azure AD subscription + * See, [Quickstart: Create a new tenant in Azure AD](../fundamentals/active-directory-access-create-new-tenant.md) +* Global Administrator permissions +* Configured users in the Azure AD subscription +* A Cloudflare account + * Go to dash.cloudflare.com to [Get started with Cloudflare](https://dash.cloudflare.com/sign-up?https%3A%2F%2Fone.dash.cloudflare.com%2F) ++## Scenario architecture ++* **Azure AD** - Identity Provider (IdP) that verifies user credentials and Conditional Access +* **Application** - You created for IdP integration +* **Cloudflare Access** - Provides access to applications ++## Set up an identity provider ++Go to developers.cloudflare.com to [set up Azure AD as an IdP](https://developers.cloudflare.com/cloudflare-one/identity/idp-integration/azuread/#set-up-azure-ad-as-an-identity-provider). ++ > [!NOTE] + > It's recommended you name the IdP integration in relation to the target application. For example, **Azure AD - Customer management portal**. ++## Configure Conditional Access ++1. Go to the [Azure portal](https://portal.azure.com/). +2. Select **Azure Active Directory**. +3. Under **Manage**, select **App registrations**. +4. Select the application you created. +5. Go to **Branding & properties**. +6. For **Home page URL**, enter the application hostname. ++  ++7. Under **Manage**, select **Enterprise applications**. +8. Select your application. +9. Select **Properties**. +10. For **Visible to users**, select **Yes**. This action enables the app to appear in App Launcher and in [My Apps](https://myapplications.microsoft.com/). +11. Under **Security**, select **Conditional Access**. +12. See, [Building a Conditional Access policy](../conditional-access/concept-conditional-access-policies.md). +13. Create and enable other policies for the application. ++## Create a Cloudflare Access application ++Enforce Conditional Access policies on a Cloudflare Access application. ++1. Go to dash.cloudflare.com to [sign in to Cloudflare](https://dash.cloudflare.com/login). +2. In **Zero Trust**, go to **Access**. +3. Select **Applications**. +4. See, [Add a self-hosted application](https://developers.cloudflare.com/cloudflare-one/applications/configure-apps/self-hosted-apps/). +5. In **Application domain**, enter the protected application target URL. +6. For **Identity providers**, select the IdP integration. ++  ++7. Create an Access policy. 
See, [Access policies](https://developers.cloudflare.com/cloudflare-one/policies/access/) and the following example. ++  ++ > [!NOTE] + > Reuse the IdP integration for other applications if they require the same Conditional Access policies. For example, a baseline IdP integration with a Conditional Access policy requiring multifactor authentication and a modern authentication client. If an application requires specific Conditional Access policies, set up a dedicated IdP instance for that application. ++## Next steps ++* [What is Conditional Access?](../conditional-access/overview.md) +* [Secure Hybrid Access with Azure AD partner integrations](secure-hybrid-access-integrations.md) +* [Tutorial: Configure Cloudflare with Azure AD for secure hybrid access](cloudflare-azure-ad-integration.md) |
active-directory | Pim Approval Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-approval-workflow.md | + + Title: Approve or deny requests for Azure AD roles in PIM +description: Learn how to approve or deny requests for Azure AD roles in Azure AD Privileged Identity Management (PIM). ++documentationcenter: '' +++editor: '' ++++ na + Last updated : 05/11/2023++++++# Approve or deny requests for Azure AD roles in Privileged Identity Management ++With Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, you can configure roles to require approval for activation, and choose one or multiple users or groups as delegated approvers. Delegated approvers have 24 hours to approve requests. If a request is not approved within 24 hours, then the eligible user must re-submit a new request. The 24 hour approval time window is not configurable. ++## View pending requests ++As a delegated approver, you'll receive an email notification when an Azure AD role request is pending your approval. You can view these pending requests in Privileged Identity Management. ++1. Sign in to the [Azure portal](https://portal.azure.com/). ++1. Open **Azure AD Privileged Identity Management**. ++1. Select **Approve requests**. ++  ++ In the **Requests for role activations** section, you'll see a list of requests pending your approval. ++## View pending requests using Microsoft Graph API ++### HTTP request ++````HTTP +GET https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignmentScheduleRequests/filterByCurrentUser(on='approver')?$filter=status eq 'PendingApproval' +```` ++### HTTP response ++````HTTP +{ + "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#Collection(unifiedRoleAssignmentScheduleRequest)", + "value": [ + { + "@odata.type": "#microsoft.graph.unifiedRoleAssignmentScheduleRequest", + "id": "9f2b5ddb-a50e-44a1-a6f4-f616322262ea", + "status": "PendingApproval", + "createdDateTime": "2021-07-15T19:57:17.76Z", + "completedDateTime": "2021-07-15T19:57:17.537Z", + "approvalId": "9f2b5ddb-a50e-44a1-a6f4-f616322262ea", + "customData": null, + "action": "SelfActivate", + "principalId": "d96ea738-3b95-4ae7-9e19-78a083066d5b", + "roleDefinitionId": "88d8e3e3-8f55-4a1e-953a-9b9898b8876b", + "directoryScopeId": "/", + "appScopeId": null, + "isValidationOnly": false, + "targetScheduleId": "9f2b5ddb-a50e-44a1-a6f4-f616322262ea", + "justification": "test", + "createdBy": { + "application": null, + "device": null, + "user": { + "displayName": null, + "id": "d96ea738-3b95-4ae7-9e19-78a083066d5b" + } + }, + "scheduleInfo": { + "startDateTime": null, + "recurrence": null, + "expiration": { + "type": "afterDuration", + "endDateTime": null, + "duration": "PT5H30M" + } + }, + "ticketInfo": { + "ticketNumber": null, + "ticketSystem": null + } + } + ] +} +```` ++## Approve requests ++>[!NOTE] +>Approvers are not able to approve their own role activation requests. ++1. Find and select the request that you want to approve. An approve or deny page appears. ++  ++1. In the **Justification** box, enter the business justification. ++1. Select **Approve**. You will receive an Azure notification of your approval. ++  ++## Approve pending requests using Microsoft Graph API ++### Get IDs for the steps that require approval ++For a specific activation request, this command gets all the approval steps that need approval. Multi-step approvals are not currently supported. 
++#### HTTP request ++````HTTP +GET https://graph.microsoft.com/beta/roleManagement/directory/roleAssignmentApprovals/<request-ID-GUID> +```` ++#### HTTP response ++````HTTP +{ + "@odata.context": "https://graph.microsoft.com/beta/$metadata#roleManagement/directory/roleAssignmentApprovals/$entity", + "id": "<request-ID-GUID>", + "steps@odata.context": "https://graph.microsoft.com/beta/$metadata#roleManagement/directory/roleAssignmentApprovals('<request-ID-GUID>')/steps", + "steps": [ + { + "id": "<approval-step-ID-GUID>", + "displayName": null, + "reviewedDateTime": null, + "reviewResult": "NotReviewed", + "status": "InProgress", + "assignedToMe": true, + "justification": "", + "reviewedBy": null + } + ] +} +```` ++### Approve the activation request step ++#### HTTP request ++````HTTP +PATCH +https://graph.microsoft.com/beta/roleManagement/directory/roleAssignmentApprovals/<request-ID-GUID>/steps/<approval-step-ID-GUID> +{ + "reviewResult": "Approve", + "justification": "abcdefg" +} + ```` ++#### HTTP response ++Successful PATCH calls generate an empty response. ++## Deny requests ++1. Find and select the request that you want to deny. An approve or deny page appears. ++  ++1. In the **Justification** box, enter the business justification. ++1. Select **Deny**. A notification appears with your denial. ++## Workflow notifications ++Here's some information about workflow notifications: ++- Approvers are notified by email when a request for a role is pending their review. Email notifications include a direct link to the request, where the approver can approve or deny. +- Requests are resolved by the first approver who approves or denies. +- When an approver responds to the request, all approvers are notified of the action. +- Global admins and Privileged role admins are notified when an approved user becomes active in their role. ++>[!NOTE] +>A Global Administrator or Privileged role admin who believes that an approved user should not be active can remove the active role assignment in Privileged Identity Management. Although administrators are not notified of pending requests unless they are an approver, they can view and cancel any pending requests for all users by viewing pending requests in Privileged Identity Management. ++## Next steps ++- [Email notifications in Privileged Identity Management](pim-email-notifications.md) +- [Approve or deny requests for Azure resource roles in Privileged Identity Management](pim-resource-roles-approval-workflow.md) |
active-directory | Pim Complete Roles And Resource Roles Review | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-complete-roles-and-resource-roles-review.md | + + Title: Complete an access review of Azure resource and Azure AD roles in PIM +description: Learn how to complete an access review of Azure resource and Azure AD roles Privileged Identity Management in Azure Active Directory. ++documentationcenter: '' +++editor: '' +++ na ++ Last updated : 5/11/2023+++++++# Complete an access review of Azure resource and Azure AD roles in PIM ++Privileged role administrators can review privileged access once an [access review has been started](pim-create-azure-ad-roles-and-resource-roles-review.md). Privileged Identity Management (PIM) in Azure Active Directory (Azure AD) will automatically send an email that prompts users to review their access. If a user doesn't receive an email, you can send them the instructions for [how to perform an access review](pim-perform-azure-ad-roles-and-resource-roles-review.md). ++Once the review has been created, follow the steps in this article to complete the review and see the results. ++## Complete access reviews ++1. Login to the [Azure portal](https://portal.azure.com/). For **Azure resources**, navigate to **Privileged Identity Management** and select **Azure resources** under **Manage** from the dashboard. For **Azure AD roles**, select **Azure AD roles** from the same dashboard. ++2. For **Azure resources**, select your resource under **Azure resources** and then select **Access reviews** from the dashboard. For **Azure AD roles**, proceed directly to the **Access reviews** on the dashboard. ++3. Select the access review that you want to manage. Below is a sample screenshot of the **Access Reviews** overview for both **Azure resources** and **Azure AD roles**. ++ :::image type="content" source="media/pim-complete-azure-ad-roles-and-resource-roles-review/rbac-azure-ad-roles-home-list.png" alt-text="Access reviews list showing role, owner, start date, end date, and status screenshot." lightbox="media/pim-complete-azure-ad-roles-and-resource-roles-review/rbac-azure-ad-roles-home-list.png"::: ++On the detail page, the following options are available for managing the review of **Azure resources** and **Azure AD roles**: ++ ++### Stop an access review ++All access reviews have an end date, but you can use the **Stop** button to finish it early. The **Stop** button is only selectable when the review instance is active. You cannot restart a review after it's been stopped. ++### Reset an access review ++When the review instance is active and at least one decision has been made by reviewers, you can reset the access review by selecting the **Reset** button to remove all decisions that were made on it. After you've reset an access review, all users are marked as not reviewed again. ++### Apply an access review ++After an access review is completed, either because you've reached the end date or stopped it manually, the **Apply** button removes denied users' access to the role. If a user's access was denied during the review, this is the step that will remove their role assignment. If the **Auto apply** setting is configured on review creation, this button will always be disabled because the review will be applied automatically instead of manually. ++### Delete an access review ++If you are not interested in the review any further, delete it. 
To remove the access review from the Privileged Identity Management service, select the **Delete** button. ++> [!IMPORTANT] +> You will not be required to confirm this destructive change, so verify that you want to delete that review. ++## Results ++On the **Results** page, you may view and download a list of your review results. +++> [!Note] +> **Azure AD roles** have a concept of role-assignable groups, where a group can be assigned to the role. When this happens, the group will show up in the review instead of expanding the members of the group, and a reviewer will either approve or deny the entire group. +++> [!Note] +>If a group is assigned to **Azure resource roles**, the reviewer of the Azure resource role will see the expanded list of the users in a nested group. Should a reviewer deny a member of a nested group, that deny result will not be applied successfully because the user will not be removed from the nested group. ++## Reviewers ++On the **Reviewers** page, you may view and add reviewers to your existing access review. You may also remind reviewers to complete their reviews here. ++> [!Note] +> If the reviewer type selected is user or group, you can add more users or groups as the primary reviewers at any point. You can also remove primary reviewers at any point. If the reviewer type is manager, you can add users or groups as the fallback reviewers to complete reviews on users who do not have managers. Fallback reviewers cannot be removed. +++## Next steps ++- [Create an access review of Azure resource and Azure AD roles in PIM](pim-create-azure-ad-roles-and-resource-roles-review.md) +- [Perform an access review of Azure resource and Azure AD roles in PIM](pim-perform-azure-ad-roles-and-resource-roles-review.md) |
active-directory | Pim Create Roles And Resource Roles Review | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-create-roles-and-resource-roles-review.md | + + Title: Create an access review of Azure resource and Azure AD roles in PIM +description: Learn how to create an access review of Azure resource and Azure AD roles in Azure AD Privileged Identity Management (PIM). ++documentationcenter: '' +++editor: '' ++++ Last updated : 5/11/2023++++++# Create an access review of Azure resource and Azure AD roles in PIM ++The need for access to privileged Azure resource and Azure AD roles by employees changes over time. To reduce the risk associated with stale role assignments, you should regularly review access. You can use Azure Active Directory (Azure AD) Privileged Identity Management (PIM) to create access reviews for privileged access to Azure resource and Azure AD roles. You can also configure recurring access reviews that occur automatically. This article describes how to create one or more access reviews. ++## Prerequisites +++ To create access reviews for Azure resources, you must be assigned to the [Owner](../../role-based-access-control/built-in-roles.md#owner) or the [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) role for the Azure resources. To create access reviews for Azure AD roles, you must be assigned to the [Global Administrator](../roles/permissions-reference.md#global-administrator) or the [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator) role. ++Access Reviews for **Service Principals** requires an Entra Workload Identities Premium plan in addition to Azure AD Premium P2 license. ++- Workload Identities Premium licensing: You can view and acquire licenses on the [Workload Identities blade](https://portal.azure.com/#view/Microsoft_Azure_ManagedServiceIdentity/WorkloadIdentitiesBlade) in the Azure portal. +++## Create access reviews ++1. Sign in to [Azure portal](https://portal.azure.com/) as a user that is assigned to one of the prerequisite role(s). ++2. Select **Identity Governance**. + +3. For **Azure AD roles**, select **Azure AD roles** under **Privileged Identity Management**. For **Azure resources**, select **Azure resources** under **Privileged Identity Management**. ++ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/identity-governance.png" alt-text="Select Identity Governance in the Azure portal screenshot." lightbox="./media/pim-create-azure-ad-roles-and-resource-roles-review/identity-governance.png"::: + +4. For **Azure AD roles**, select **Azure AD roles** again under **Manage**. For **Azure resources**, select the subscription you want to manage. +++5. Under Manage, select **Access reviews**, and then select **New** to create a new access review. ++ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/access-reviews.png" alt-text="Azure AD roles - Access reviews list showing the status of all reviews screenshot."::: + +6. Name the access review. Optionally, give the review a description. The name and description are shown to the reviewers. ++ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/name-description.png" alt-text="Create an access review - Review name and description screenshot."::: ++7. Set the **Start date**. 
By default, an access review occurs once, starts the same time it's created, and it ends in one month. You can change the start and end dates to have an access review start in the future and last however many days you want. ++ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/start-end-dates.png" alt-text="Start date, frequency, duration, end, number of times, and end date screenshot."::: ++8. To make the access review recurring, change the **Frequency** setting from **One time** to **Weekly**, **Monthly**, **Quarterly**, **Annually**, or **Semi-annually**. Use the **Duration** slider or text box to define how many days each review of the recurring series will be open for input from reviewers. For example, the maximum duration that you can set for a monthly review is 27 days, to avoid overlapping reviews. ++9. Use the **End** setting to specify how to end the recurring access review series. The series can end in three ways: it runs continuously to start reviews indefinitely, until a specific date, or after a defined number of occurrences has been completed. You, or another administrator who can manage reviews, can stop the series after creation by changing the date in **Settings**, so that it ends on that date. +++10. In the **Users Scope** section, select the scope of the review. For **Azure AD roles**, the first scope option is Users and Groups. Directly assigned users and [role-assignable groups](../roles/groups-concept.md) will be included in this selection. For **Azure resource roles**, the first scope will be Users. Groups assigned to Azure resource roles are expanded to display transitive user assignments in the review with this selection. You may also select **Service Principals** to review the machine accounts with direct access to either the Azure resource or Azure AD role. ++ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/users.png" alt-text="Users scope to review role membership of screenshot."::: ++11. Or, you can create access reviews only for inactive users (preview). In the *Users scope* section, set the **Inactive users (on tenant level) only** to **true**. If the toggle is set to *true*, the scope of the review will focus on inactive users only. Then, specify **Days inactive** with a number of days inactive up to 730 days (two years). Users inactive for the specified number of days will be the only users in the review. + +12. Under **Review role membership**, select the privileged Azure resource or Azure AD roles to review. ++ > [!NOTE] + > Selecting more than one role will create multiple access reviews. For example, selecting five roles will create five separate access reviews. ++ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/review-role-membership.png" alt-text="Review role memberships screenshot."::: ++13. In **assignment type**, scope the review by how the principal was assigned to the role. Choose **eligible assignments only** to review eligible assignments (regardless of activation status when the review is created) or **active assignments only** to review active assignments. Choose **all active and eligible assignments** to review all assignments regardless of type. ++ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/assignment-type-select.png" alt-text="Reviewers list of assignment types screenshot."::: ++14. In the **Reviewers** section, select one or more people to review all the users. 
Or you can select to have the members review their own access. ++ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/reviewers.png" alt-text="Reviewers list of selected users or members (self)"::: ++ - **Selected users** - Use this option to designate a specific user to complete the review. This option is available regardless of the scope of the review, and the selected reviewers can review users, groups and service principals. + - **Members (self)** - Use this option to have the users review their own role assignments. This option is only available if the review is scoped to **Users and Groups** or **Users**. For **Azure AD roles**, role-assignable groups will not be a part of the review when this option is selected. + - **Manager** - Use this option to have the user's manager review their role assignment. This option is only available if the review is scoped to **Users and Groups** or **Users**. Upon selecting Manager, you will also have the option to specify a fallback reviewer. Fallback reviewers are asked to review a user when the user has no manager specified in the directory. For **Azure AD roles**, role-assignable groups will be reviewed by the fallback reviewer if one is selected. ++### Upon completion settings ++1. To specify what happens after a review completes, expand the **Upon completion settings** section. ++ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/upon-completion-settings.png" alt-text="Upon completion settings to auto apply and should review not respond screenshot."::: ++2. If you want to automatically remove access for users that were denied, set **Auto apply results to resource** to **Enable**. If you want to manually apply the results when the review completes, set the switch to **Disable**. ++3. Use the **If reviewers don't respond** list to specify what happens for users that are not reviewed by the reviewer within the review period. This setting does not impact users who were reviewed by the reviewers. ++ - **No change** - Leave user's access unchanged + - **Remove access** - Remove user's access + - **Approve access** - Approve user's access + - **Take recommendations** - Take the system's recommendation on denying or approving the user's continued access + +4. Use the **Action to apply on denied guest users** list to specify what happens for guest users that are denied. This setting is not editable for Azure AD and Azure resource role reviews at this time; guest users, like all users, will always lose access to the resource if denied. ++ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/action-to-apply-on-denied-guest-users.png" alt-text="Upon completion settings - Action to apply on denied guest users screenshot."::: ++5. You can send notifications to additional users or groups to receive review completion updates. This feature allows for stakeholders other than the review creator to be updated on the progress of the review. To use this feature, select **Select User(s) or Group(s)** and add an additional user or group that you want to receive the status of completion. ++ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/upon-completion-settings-additional-receivers.png" alt-text="Upon completion settings - Add additional users to receive notifications screenshot."::: ++### Advanced settings ++1. To specify additional settings, expand the **Advanced settings** section. 
++ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/advanced-settings.png" alt-text="Advanced settings for show recommendations, require reason on approval, mail notifications, and reminders screenshot."::: ++1. Set **Show recommendations** to **Enable** to show the reviewers the system recommendations based on the user's access information. Recommendations are based on a 30-day interval period where users who have logged in within the past 30 days are recommended access, while users who have not are recommended denial of access. These sign-ins are irrespective of whether they were interactive. The last sign-in of the user is also displayed along with the recommendation. ++1. Set **Require reason on approval** to **Enable** to require the reviewer to supply a reason for approval. ++1. Set **Mail notifications** to **Enable** to have Azure AD send email notifications to reviewers when an access review starts, and to administrators when a review completes. ++1. Set **Reminders** to **Enable** to have Azure AD send reminders of access reviews in progress to reviewers who have not completed their review. +1. The content of the email sent to reviewers is auto-generated based on the review details, such as review name, resource name, due date, etc. If you need a way to communicate additional information such as additional instructions or contact information, you can specify these details in the **Additional content for reviewer email** box, which will be included in the invitation and reminder emails sent to assigned reviewers. The highlighted section below is where this information will be displayed. ++ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/email-info.png" alt-text="Content of the email sent to reviewers with highlights"::: ++## Manage the access review ++You can track the progress as the reviewers complete their reviews on the **Overview** page of the access review. No access rights are changed in the directory until the review is completed. Below is a screenshot showing the overview page for **Azure resources** and **Azure AD roles** access reviews. +++If this is a one-time review, then after the access review period is over or the administrator stops the access review, follow the steps in [Complete an access review of Azure resource and Azure AD roles](pim-complete-azure-ad-roles-and-resource-roles-review.md) to see and apply the results. ++To manage a series of access reviews, navigate to the access review, and you will find upcoming occurrences in Scheduled reviews, and edit the end date or add/remove reviewers accordingly. ++Based on your selections in **Upon completion settings**, auto-apply will be executed after the review's end date or when you manually stop the review. The status of the review will change from **Completed** through intermediate states such as **Applying** and finally to the **Applied** state. You should expect to see denied users, if any, being removed from roles in a few minutes. ++## Impact of groups assigned to Azure AD roles and Azure resource roles in access reviews ++- For **Azure AD roles**, role-assignable groups can be assigned to the role using [role-assignable groups](../roles/groups-concept.md). When a review is created on an Azure AD role with role-assignable groups assigned, the group name shows up in the review without expanding the group membership. The reviewer can approve or deny access of the entire group to the role. 
Denied groups will lose their assignment to the role when review results are applied. ++- For **Azure resource roles**, any security group can be assigned to the role. When a review is created on an Azure resource role with a security group assigned, the users assigned to that security group will be fully expanded and shown to the reviewer of the role. When a reviewer denies a user that was assigned to the role via the security group, the user will not be removed from the group, and therefore applying the deny result will be unsuccessful. ++> [!NOTE] +> It is possible for a security group to have other groups assigned to it. In this case, only the users assigned directly to the security group assigned to the role will appear in the review of the role. +++## Update the access review ++After one or more access reviews have been started, you may want to modify or update the settings of your existing access reviews. Here are some common scenarios that you might want to consider: ++- **Adding and removing reviewers** - When updating access reviews, you may choose to add a fallback reviewer in addition to the primary reviewer. Primary reviewers may be removed when updating an access review. However, fallback reviewers are not removable by design. ++ > [!Note] + > Fallback reviewers can only be added when reviewer type is manager. Primary reviewers can be added when reviewer type is selected user. ++- **Reminding the reviewers** - When updating access reviews, you may choose to enable the reminder option under Advanced Settings. Once enabled, users will receive an email notification at the midpoint of the review period, regardless of whether they have completed the review or not. ++ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/reminder-setting.png" alt-text="Screenshot of the reminder option under access reviews settings."::: ++- **Updating the settings** - If an access review is recurring, there are separate settings under "Current" versus under "Series". Updating the settings under "Current" will only apply changes to the current access review while updating the settings under "Series" will update the setting for all future recurrences. ++ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/current-v-series-setting.png" alt-text="Screenshot of the settings page under access reviews." lightbox="./media/pim-create-azure-ad-roles-and-resource-roles-review/current-v-series-setting.png"::: + +## Next steps ++- [Perform an access review of Azure resource and Azure AD roles in PIM](pim-perform-azure-ad-roles-and-resource-roles-review.md) +- [Complete an access review of Azure resource and Azure AD roles in PIM](pim-complete-azure-ad-roles-and-resource-roles-review.md) |
active-directory | Pim Perform Roles And Resource Roles Review | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-perform-roles-and-resource-roles-review.md | + + Title: Perform an access review of Azure resource and Azure AD roles in PIM +description: Learn how to review access of Azure resource and Azure AD roles in Azure AD Privileged Identity Management (PIM). ++documentationcenter: '' +++editor: '' +++ na ++ Last updated : 5/11/2023++++++# Perform an access review of Azure resource and Azure AD roles in PIM ++Privileged Identity Management (PIM) simplifies how enterprises manage privileged access to resources in Azure Active Directory (AD), part of Microsoft Entra, and other Microsoft online services like Microsoft 365 or Microsoft Intune. Follow the steps in this article to perform reviews of access to roles. ++If you are assigned to an administrative role, your organization's privileged role administrator may ask you to regularly confirm that you still need that role for your job. You might get an email that includes a link, or you can go straight to the [Azure portal](https://portal.azure.com) and begin. ++If you're a privileged role administrator or global administrator interested in access reviews, get more details at [How to start an access review](pim-create-azure-ad-roles-and-resource-roles-review.md). ++## Approve or deny access ++You can approve or deny access based on whether the user still needs access to the role. Choose **Approve** if you want them to stay in the role, or **Deny** if they do not need the access anymore. The users' assignment status will not change until the review closes and the administrator applies the results. Common scenarios in which certain denied users cannot have results applied to them may include the following: ++- **Reviewing members of a synced on-premises Windows AD group**: If the group is synced from an on-premises Windows AD, the group cannot be managed in Azure AD and therefore membership cannot be changed. +- **Reviewing a role with nested groups assigned**: For users who have membership through a nested group, the access review will not remove their membership to the nested group and therefore they will retain access to the role being reviewed. +- **User not found or other errors**: These may also result in an apply result not being supported. ++Follow these steps to find and complete the access review: ++1. Sign in to the [Azure portal](https://portal.azure.com/). +1. Select **Azure Active Directory** and open **Privileged Identity Management**. +1. Select **Review access**. If you have any pending access reviews, they will appear in the access reviews page. ++ :::image type="content" source="media/pim-perform-azure-ad-roles-and-resource-roles-review/rbac-access-review-azure-ad-complete.png" alt-text="Screenshot of Privileged Identity Management application, with Review access blade selected for Azure AD roles." lightbox="media/pim-perform-azure-ad-roles-and-resource-roles-review/rbac-access-review-azure-ad-complete.png"::: ++1. Select the review you want to complete. +1. Choose **Approve** or **Deny**. In the **Provide a reason box**, enter a business justification for your decision as needed. ++ :::image type="content" source="media/pim-perform-azure-ad-roles-and-resource-roles-review/rbac-access-review-azure-ad-completed.png" alt-text="Screenshot of Privileged Identity Management application, with the selected Access Review for Azure AD roles." 
lightbox="media/pim-perform-azure-ad-roles-and-resource-roles-review/rbac-access-review-azure-ad-completed.png"::: ++## Next steps ++- [Create an access review of Azure resource and Azure AD roles in PIM](pim-create-azure-ad-roles-and-resource-roles-review.md) +- [Complete an access review of Azure resource and Azure AD roles in PIM](pim-complete-azure-ad-roles-and-resource-roles-review.md) |
active-directory | Protected Actions Add | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/protected-actions-add.md | Last updated 04/21/2023 [Protected actions](./protected-actions-overview.md) in Azure Active Directory (Azure AD) are permissions that have been assigned Conditional Access polices that are enforced when a user attempts to perform an action. This article describes how to add, test, or remove protected actions. +> [!NOTE] +> You should perform these steps in the following sequence to ensure that protected actions are properly configured and enforced. If you don't follow this order, you may get unexpected behavior, such as [getting repeated requests to reauthenticate](#symptompolicy-is-never-satisfied). + ## Prerequisites To add or remove protected actions, you must have: To add or remove protected actions, you must have: - Azure AD Premium P1 or P2 license - [Conditional Access Administrator](permissions-reference.md#conditional-access-administrator) or [Security Administrator](permissions-reference.md#security-administrator) role -## Configure Conditional Access policy +## Step 1: Configure Conditional Access policy Protected actions use a Conditional Access authentication context, so you must configure an authentication context and add it to a Conditional Access policy. If you already have a policy with an authentication context, you can skip to the next section. Protected actions use a Conditional Access authentication context, so you must c :::image type="content" source="media/protected-actions-add/policy-authentication-context.png" alt-text="Screenshot of New policy page to create a new policy with an authentication context." lightbox="media/protected-actions-add/policy-authentication-context.png"::: -## Add protected actions +## Step 2: Add protected actions To add protection actions, assign a Conditional Access policy to one or more permissions using a Conditional Access authentication context. To add protection actions, assign a Conditional Access policy to one or more per The new protected actions appear in the list of protected actions -## Test protected actions +## Step 3: Test protected actions When a user performs a protected action, they'll need to satisfy Conditional Access policy requirements. This section shows the experience for a user being prompted to satisfy a policy. In this example, the user is required to authenticate with a FIDO security key before they can update Conditional Access policies. |
active-directory | Protected Actions Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/protected-actions-overview.md | Here's the initial set of permissions: ## Steps to use protected actions +> [!NOTE] +> You should perform these steps in the following sequence to ensure that protected actions are properly configured and enforced. If you don't follow this order, you may get unexpected behavior, such as [getting repeated requests to reauthenticate](./protected-actions-add.md#symptompolicy-is-never-satisfied). + 1. **Check permissions** Check that you're assigned the [Conditional Access Administrator](permissions-reference.md#conditional-access-administrator) or [Security Administrator](permissions-reference.md#security-administrator) roles. If not, check with your administrator to assign the appropriate role. 1. **Configure Conditional Access policy** - Configure a Conditional Access authentication context and an associated Conditional Access policy. Protected actions use an authentication context, which allows policy enforcement for fine-grain resources in a service, like Azure AD permissions. A good policy to start with is to require passwordless MFA and exclude an emergency account. [Learn more](./protected-actions-add.md#configure-conditional-access-policy) + Configure a Conditional Access authentication context and an associated Conditional Access policy. Protected actions use an authentication context, which allows policy enforcement for fine-grain resources in a service, like Azure AD permissions. A good policy to start with is to require passwordless MFA and exclude an emergency account. [Learn more](./protected-actions-add.md#step-1-configure-conditional-access-policy) 1. **Add protected actions** - Add protected actions by assigning Conditional Access authentication context values to selected permissions. [Learn more](./protected-actions-add.md#add-protected-actions) + Add protected actions by assigning Conditional Access authentication context values to selected permissions. [Learn more](./protected-actions-add.md#step-2-add-protected-actions) 1. **Test protected actions** - Sign in as a user and test the user experience by performing the protected action. You should be prompted to satisfy the Conditional Access policy requirements. For example, if the policy requires multi-factor authentication, you should be redirected to the sign-in page and prompted for strong authentication. [Learn more](./protected-actions-add.md#test-protected-actions) + Sign in as a user and test the user experience by performing the protected action. You should be prompted to satisfy the Conditional Access policy requirements. For example, if the policy requires multi-factor authentication, you should be redirected to the sign-in page and prompted for strong authentication. [Learn more](./protected-actions-add.md#step-3-test-protected-actions) ## What happens with protected actions and applications? |
active-directory | Asana Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/asana-tutorial.md | To get started, you need the following items: In this tutorial, you configure and test Azure AD single sign-on in a test environment. -* Asana supports **SP** initiated SSO +* Asana supports **SP** initiated SSO. -* Asana supports [**automated** user provisioning](asana-provisioning-tutorial.md) +* Asana supports [**automated** user provisioning](asana-provisioning-tutorial.md). ## Add Asana from the gallery In this section, you'll enable B.Simon to use Azure single sign-on by granting a ### Configure Asana SSO -1. In a different browser window, sign on to your Asana application. To configure SSO in Asana, access the workspace settings by clicking the workspace name on the top right corner of the screen. Then, click on **\<your workspace name\> Settings**. +1. In a different browser window, sign on to your Asana application. To configure SSO in Asana, access the admin console by clicking on the avatar on the top right corner of the screen. Then, click on **Admin Console**. -  +  -2. On the **Organization settings** window, click **Administration**. Then, click **Members must log in via SAML** to enable the SSO configuration. The perform the following steps: +2. Navigate to the **Security** tab. Then click on **SAML Authentication**. -  +  - a. In the **Sign-in page URL** textbox, paste the **Login URL**. -- b. Right click the certificate downloaded from Azure portal, then open the certificate file using Notepad or your preferred text editor. Copy the content between the begin and the end certificate title and paste it in the **X.509 Certificate** textbox. +1. Perform the following steps in the below page: + +  -3. Click **Save**. Go to [Asana guide for setting up SSO](https://asana.com/guide/help/premium/authentication#gl-saml) if you need further assistance. + a. Click on Required for all members, except guest accounts. + + b. Paste the **sign-in URL** that you copied from Azure portal into its **sign-in page URL** textbox. + c. Paste the **Certificate (Base64)** content that you copied from Azure portal into **X.509 Certificate** field. + d. Set the session duration for your members. + e. Click **Save**. + +> [!NOTE] +> Go to Asana [guide](https://asana.com/guide/help/premium/authentication#gl-saml) for setting up SSO if you need further assistance. ### Create Asana test user In this section, you create a user called Britta Simon in Asana. 1. On **Asana**, go to the **Teams** section on the left panel. Click the plus sign button. -  +  2. Type the email of the user like **britta.simon\@contoso.com** in the text box and then select **Invite**. |
active-directory | Hootsuite Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hootsuite-tutorial.md | Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Hootsuite' + Title: 'Tutorial: Azure Active Directory SSO integration with Hootsuite' description: Learn how to configure single sign-on between Azure Active Directory and Hootsuite. -In this tutorial, you'll learn how to integrate Hootsuite with Azure Active Directory (Azure AD). When you integrate Hootsuite with Azure AD, you can: +In this tutorial, you learn how to integrate Hootsuite with Azure Active Directory (Azure AD). When you integrate Hootsuite with Azure AD, you can: * Control in Azure AD who has access to Hootsuite. * Enable your users to be automatically signed-in to Hootsuite with their Azure AD accounts. Follow these steps to enable Azure AD SSO in the Azure portal.  -1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following step: +1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure. - In the **Reply URL** text box, type a URL using one of the following patterns: -- | Reply URL | - || - |`https://hootsuite.com/member/sso-complete`| - |`https://hootsuite.com/sso/<ORG_ID>`| - | --1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode: +1. Perform the following step, if you wish to configure the application in **SP** initiated mode: In the **Sign-on URL** text box, type the URL: `https://hootsuite.com/login` - > [!NOTE] - > These values are not real. Update these values with the actual Reply URL. Contact [Hootsuite Client support team](https://hootsuite.com/about/contact-us#) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. - 1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.  Follow these steps to enable Azure AD SSO in the Azure portal. ### Create an Azure AD test user -In this section, you'll create a test user in the Azure portal called B.Simon. +In this section, you create a test user in the Azure portal called B.Simon. 1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**. 1. Select **New user** at the top of the screen. In this section, you'll create a test user in the Azure portal called B.Simon. ### Assign the Azure AD test user -In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Hootsuite. +In this section, you enable B.Simon to use Azure single sign-on by granting access to Hootsuite. 1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Hootsuite**. |
active-directory | Marketo Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/marketo-tutorial.md | Title: 'Tutorial: Azure Active Directory integration with Marketo' + Title: 'Tutorial: Azure Active Directory SSO integration with Marketo' description: Learn how to configure single sign-on between Azure Active Directory and Marketo. -# Tutorial: Azure Active Directory integration with Marketo +# Tutorial: Azure Active Directory SSO integration with Marketo In this tutorial, you learn how to integrate Marketo with Azure Active Directory (Azure AD). Integrating Marketo with Azure AD provides you with the following benefits: Integrating Marketo with Azure AD provides you with the following benefits: To configure Azure AD integration with Marketo, you need the following items: -* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/) -* Marketo single sign-on enabled subscription +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Marketo single sign-on enabled subscription. ## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment. -* Marketo supports **IDP** initiated SSO +* Marketo supports **IDP** initiated SSO. > [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant. -## Adding Marketo from the gallery +## Add Marketo from the gallery To configure the integration of Marketo into Azure AD, you need to add Marketo from the gallery to your list of managed SaaS apps. To configure the integration of Marketo into Azure AD, you need to add Marketo f 1. In the **Add from the gallery** section, type **Marketo** in the search box. 1. Select **Marketo** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. - Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) + Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ## Configure and test Azure AD SSO for Marketo To configure and test Azure AD single sign-on with Marketo, perform the followin 1. **[Create Marketo test user](#create-marketo-test-user)** - to have a counterpart of Britta Simon in Marketo that is linked to the Azure AD representation of user. 3. **[Test SSO](#test-sso)** - to verify whether the configuration works. -### Configure Azure AD SSO +## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal. Follow these steps to enable Azure AD SSO in the Azure portal.  -1. On the **Basic SAML Configuration** section, enter the values for the following fields: +1. On the **Basic SAML Configuration** section, perform the following steps: a. 
In the **Identifier** text box, type the URL: `https://saml.marketo.com/sp` Follow these steps to enable Azure AD SSO in the Azure portal. > [!NOTE] > These values are not real. Update these values with the actual Reply URL and Relay State. Contact [Marketo Client support team](https://investors.marketo.com/contactus.cfm) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. -5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer. +1. Your Marketo application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example for this. The default value of **Unique User Identifier** is **user.userprincipalname** but Marketo expects this to be mapped with the user's email address. For that you can use **user.mail** attribute from the list or use the appropriate attribute value based on your organization configuration. ++  ++1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.  -6. On the **Set up Marketo** section, copy the appropriate URL(s) as per your requirement. +1. On the **Set up Marketo** section, copy the appropriate URL(s) as per your requirement.  ### Create an Azure AD test user -In this section, you'll create a test user in the Azure portal called B.Simon. +In this section, you create a test user in the Azure portal called B.Simon. 1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**. 1. Select **New user** at the top of the screen. In this section, you'll create a test user in the Azure portal called B.Simon. ### Assign the Azure AD test user -In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Marketo. +In this section, you enable B.Simon to use Azure single sign-on by granting access to Marketo. 1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Marketo**. In this section, you create a user called Britta Simon in Marketo. follow these  -3. Navigate to the **Security** menu and click **Users & Roles** +3. Navigate to the **Security** menu and click **Users & Roles**.  -4. Click the **Invite New User** link on the Users tab +4. Click the **Invite New User** link on the Users tab.  -5. In the Invite New User wizard fill the following information +5. In the Invite New User wizard, fill the following information. a. Enter the user **Email** address in the textbox  - b. Enter the **First Name** in the textbox + b. Enter the **First Name** in the textbox. - c. Enter the **Last Name** in the textbox + c. Enter the **Last Name** in the textbox. - d. Click **Next** + d. Click **Next**. -6. In the **Permissions** tab, select the **userRoles** and click **Next** +6. In the **Permissions** tab, select the **userRoles** and click **Next**.  7. Click the **Send** button to send the user invitation In this section, you create a user called Britta Simon in Marketo. follow these 8. 
User receives the email notification and has to click the link and change the password to activate the account. -### Test SSO +## Test SSO In this section, you test your Azure AD single sign-on configuration with following options. |
active-directory | Sap Customer Cloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-customer-cloud-tutorial.md | Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with SAP Cloud for Customer' + Title: 'Tutorial: Azure Active Directory SSO integration with SAP Cloud for Customer' description: Learn how to configure single sign-on between Azure Active Directory and SAP Cloud for Customer. -# Tutorial: Azure Active Directory single sign-on (SSO) integration with SAP Cloud for Customer +# Tutorial: Azure Active Directory SSO integration with SAP Cloud for Customer -In this tutorial, you'll learn how to integrate SAP Cloud for Customer with Azure Active Directory (Azure AD). When you integrate SAP Cloud for Customer with Azure AD, you can: +In this tutorial, you learn how to integrate SAP Cloud for Customer with Azure Active Directory (Azure AD). When you integrate SAP Cloud for Customer with Azure AD, you can: * Control in Azure AD who has access to SAP Cloud for Customer. * Enable your users to be automatically signed-in to SAP Cloud for Customer with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal. - ## Prerequisites To get started, you need the following items: To get started, you need the following items: In this tutorial, you configure and test Azure AD SSO in a test environment. -* SAP Cloud for Customer supports **SP** initiated SSO +* SAP Cloud for Customer supports **SP** initiated SSO. -## Adding SAP Cloud for Customer from the gallery +## Add SAP Cloud for Customer from the gallery To configure the integration of SAP Cloud for Customer into Azure AD, you need to add SAP Cloud for Customer from the gallery to your list of managed SaaS apps. To configure the integration of SAP Cloud for Customer into Azure AD, you need t 1. In the **Add from the gallery** section, type **SAP Cloud for Customer** in the search box. 1. Select **SAP Cloud for Customer** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. - Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) + Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ## Configure and test Azure AD SSO for SAP Cloud for Customer Follow these steps to enable Azure AD SSO in the Azure portal. ### Create an Azure AD test user -In this section, you'll create a test user in the Azure portal called B.Simon. +In this section, you create a test user in the Azure portal called B.Simon. 1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**. 1. Select **New user** at the top of the screen. In this section, you'll create a test user in the Azure portal called B.Simon. 
### Assign the Azure AD test user -In this section, you'll enable B.Simon to use Azure single sign-on by granting access to SAP Cloud for Customer. +In this section, you enable B.Simon to use Azure single sign-on by granting access to SAP Cloud for Customer. 1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **SAP Cloud for Customer**. In this section, you'll enable B.Simon to use Azure single sign-on by granting a 1. Open a new web browser window and sign into your SAP Cloud for Customer company site as an administrator. -2. From the left side of menu, click on **Identity Providers** > **Corporate Identity Providers** > **Add** and on the pop-up add the Identity provider name like **Azure AD**, click **Save** then click on **SAML 2.0 Configuration**. +2. Go to **Applications & Resources** > **Tenant Settings** and select **SAML 2.0 Configuration**. -  +  3. On the **SAML 2.0 Configuration** section, perform the following steps: In this section, you'll enable B.Simon to use Azure single sign-on by granting a a. Click **Browse** to upload the Federation Metadata XML file, which you have downloaded from Azure portal. - b. Once the XML file is successfully uploaded, the below values will get auto populated automatically then click **Save**. + b. Once the XML file is successfully uploaded, the below values get auto populated automatically then click **Save**. ### Create SAP Cloud for Customer test user In this section, you test your Azure AD single sign-on configuration with follow * You can use Microsoft My Apps. When you click the SAP Cloud for Customer tile in the My Apps, this will redirect to SAP Cloud for Customer Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510). - ## Next steps -Once you configure the SAP Cloud for Customer you can enforce session controls, which protects exfiltration and infiltration of your organization’s sensitive data in real time. Session controls extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad). +Once you configure the SAP Cloud for Customer you can enforce session controls, which protect exfiltration and infiltration of your organization’s sensitive data in real time. Session controls extend from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad). |
aks | Azure Cni Overlay | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md | az provider register --namespace Microsoft.ContainerService > [!NOTE] > The upgrade capability is still in preview and requires the preview AKS Azure CLI extension. You can update an existing Azure CNI cluster to Overlay if the cluster meets certain criteria. A cluster must:--- be on Kubernetes version 1.22+-- **not** be using the dynamic pod IP allocation feature-- **not** have network policies enabled-- **not** be using any Windows node pools with docker as the container runtime+> - be on Kubernetes version 1.22+ +> - **not** be using the dynamic pod IP allocation feature +> - **not** have network policies enabled +> - **not** be using any Windows node pools with docker as the container runtime The upgrade process will trigger each node pool to be re-imaged simultaneously (i.e. upgrading each node pool separately to Overlay is not supported). Any disruptions to cluster networking will be similar to a node image upgrade or Kubernetes version upgrade where each node in a node pool is re-imaged. +To update an existing Azure CNI cluster to use overlay, run the following CLI command: ++```azurecli-interactive +clusterName="myOverlayCluster" +resourceGroup="myResourceGroup" +location="westcentralus" +az aks update --name $clusterName \ +--resource-group $resourceGroup \ +--network-plugin-mode overlay \ +--pod-cidr 192.168.0.0/16 +``` ++The `--pod-cidr` parameter is required when upgrading from legacy CNI because the pods will need to get IPs from a new overlay space which does not overlap with the existing node subnet. The pod CIDR also cannot overlap with any VNet address of the node pools. For example, if your VNet address is 10.0.0.0/8 and your nodes are in the subnet 10.240.0.0/16, then the `--pod-cidr` cannot overlap with 10.0.0.0/8 or the existing service CIDR on the cluster. + > [!WARNING] -> Prior to Windows OS Build 20348.1668, there was a limitation around Windows Overlay pods incorrectly SNATing packets from host network pods, this had a more detrimental effect for clusters upgrading to Overlay. To avoid this issue, **use Windows OS Build 20348.1668** +> Prior to Windows OS Build 20348.1668, there was a limitation around Windows Overlay pods incorrectly SNATing packets from host network pods, this had a more detrimental effect for clusters upgrading to Overlay. To avoid this issue, **use Windows OS Build 20348.1668**. -This network disruption will only occur during the upgrade. Once the migration to Overlay has completed for all node pools, all Overlay pods will be able to communicate successfully with the Windows pods. ## Next steps |
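After the update completes, it can help to confirm that the cluster reports the new plugin mode and pod CIDR. The following is a minimal sketch, not part of the documented upgrade steps; it assumes the `networkPluginMode` and `podCidr` properties exposed by `az aks show` in recent CLI versions and reuses the variables from the block above.

```azurecli-interactive
# Confirm the cluster now reports overlay mode and the new pod CIDR (property names assumed).
az aks show --name $clusterName \
  --resource-group $resourceGroup \
  --query "{pluginMode:networkProfile.networkPluginMode, podCidr:networkProfile.podCidr}" \
  --output table
```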
aks | Azure Csi Disk Storage Provision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-disk-storage-provision.md | Last updated 04/11/2023 # Create and use a volume with Azure Disks in Azure Kubernetes Service (AKS) -A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. A persistent volume can be used by one or many pods, and can be dynamically or statically provisioned. This article shows you how to dynamically create persistent volumes with Azure Disks for use by a single pod in an Azure Kubernetes Service (AKS) cluster. +A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. A persistent volume can be used by one or many pods, and can be dynamically or statically provisioned. This article shows you how to dynamically create persistent volumes with Azure Disks in an Azure Kubernetes Service (AKS) cluster. > [!NOTE]-> An Azure disk can only be mounted with *Access mode* type *ReadWriteOnce*, which makes it available to one pod in AKS. If you need to share a persistent volume across multiple pods, use [Azure Files][azure-files-pvc]. +> An Azure disk can only be mounted with *Access mode* type *ReadWriteOnce*, which makes it available to one node in AKS. This access mode still allows multiple pods to access the volume when the pods run on the same node. For more information, see [Kubernetes PersistentVolume access modes][access-modes]. This article shows you how to: |
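To illustrate the *ReadWriteOnce* behavior called out above, here's a minimal sketch that dynamically provisions a disk-backed volume and mounts it in a single pod. It assumes the built-in `managed-csi` storage class that AKS provides; the claim name, pod name, and container image are placeholders.

```bash
# Create a PVC against the built-in managed-csi storage class and mount it in one pod.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-managed-disk
spec:
  accessModes:
    - ReadWriteOnce          # Azure Disks attach to a single node at a time
  storageClassName: managed-csi
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine   # placeholder image
      volumeMounts:
        - mountPath: /mnt/azure
          name: volume
  volumes:
    - name: volume
      persistentVolumeClaim:
        claimName: azure-managed-disk
EOF
```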
aks | Cluster Autoscaler | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md | You can also configure more granular details of the cluster autoscaler by changi > [!IMPORTANT] > When using the autoscaler profile, keep the following information in mind: >-> * The cluster autoscaler profile affects **all node pools** that use the cluster autoscaler. You can't set an autoscaler profile per node pool. When you est the profile, any existing node pools with the cluster autoscaler enabled immediately start using the profile. +> * The cluster autoscaler profile affects **all node pools** that use the cluster autoscaler. You can't set an autoscaler profile per node pool. When you set the profile, any existing node pools with the cluster autoscaler enabled immediately start using the profile. > * The cluster autoscaler profile requires version *2.11.1* or greater of the Azure CLI. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install]. ### Set the cluster autoscaler profile on a new cluster |
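As an example of that cluster-wide scope, the following sketch updates two standard cluster autoscaler profile settings on an existing cluster. The values shown (a 30-second scan interval and a 5-minute scale-down delay) are illustrative, not recommendations, and the resource names are placeholders.

```azurecli-interactive
# Update the autoscaler profile; the change applies to every node pool with the cluster autoscaler enabled.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --cluster-autoscaler-profile scan-interval=30s scale-down-delay-after-add=5m
```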
aks | Egress Outboundtype | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/egress-outboundtype.md | Last updated 03/28/2023 You can customize egress for an AKS cluster to fit specific scenarios. By default, AKS will provision a standard SKU load balancer to be set up and used for egress. However, the default setup may not meet the requirements of all scenarios if public IPs are disallowed or additional hops are required for egress. This article covers the various types of outbound connectivity that are available in AKS clusters. > [!NOTE] > You can now update the `outboundType` after cluster creation. This feature is in preview. See [Updating `outboundType` after cluster creation (preview)](#updating-outboundtype-after-cluster-creation-preview). For more information, see [configuring cluster egress via user-defined routing]( Changing the outbound type after cluster creation will deploy or remove resources as required to put the cluster into the new egress configuration. +The following tables show the supported migration paths between outbound types for managed and BYO virtual networks. ++### Supported Migration Paths for Managed VNet ++| | SLB | Managed NATGateway | BYO NATGateway | userDefinedNATGateway | +|-|--|--|-|--| +| SLB | N/A | Supported | Not Supported | Not Supported | +| Managed NATGateway | Supported | N/A | Not Supported | Not Supported | +| BYO NATGateway | Supported | Not Supported | N/A | Not Supported | +| User Defined NATGateway | Supported | Not Supported | Supported | N/A | ++### Supported Migration Paths for BYO VNet ++| | SLB | Managed NATGateway | BYO NATGateway | userDefinedNATGateway | +|-||--|-|--| +| SLB | N/A | Supported | Supported | Supported | +| Managed NATGateway | Supported | N/A | Not Supported | Not Supported | +| BYO NATGateway | Supported | Not Supported | N/A | Supported | +| User Defined NATGateway | Not Supported | Not Supported | Not Supported | N/A | + Migration is only supported between `loadBalancer`, `managedNATGateway` (if using a managed virtual network), and `userDefinedNATGateway` (if using a custom virtual network). > [!WARNING] |
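For example, moving a cluster that uses a managed virtual network from the default load balancer egress to a managed NAT gateway is a single update call. This is a sketch under the assumptions above: the feature is in preview, so it assumes the `aks-preview` extension is installed, and the resource names are placeholders.

```azurecli-interactive
# Migrate egress from the standard load balancer to an AKS-managed NAT gateway (preview).
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --outbound-type managedNATGateway
```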
aks | Private Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/private-clusters.md | Private cluster is available in public regions, Azure Government, and Azure Chin * If using Azure Resource Manager (ARM) or the Azure REST API, the AKS API version must be 2021-05-01 or higher. * Azure Private Link service is supported on Standard Azure Load Balancer only. Basic Azure Load Balancer isn't supported. * To use a custom DNS server, add the Azure public IP address 168.63.129.16 as the upstream DNS server in the custom DNS server, and make sure to add this public IP address as the *first* DNS server. For more information about the Azure IP address, see [What is IP address 168.63.129.16?][virtual-networks-168.63.129.16]+ * The cluster's DNS zone should be what you forward to 168.63.129.16. You can find more information on zone names in [Azure services DNS zone configuration][az-dns-zone]. ## Limitations * IP authorized ranges can't be applied to the private API server endpoint, they only apply to the public API server-* [Azure Private Link service limitations][private-link-service] apply to private clusters. +* There's no support for Azure DevOps Microsoft-hosted Agents with private clusters. Consider using [Self-hosted Agents](/azure/devops/pipelines/agents/agents). * If you need to enable Azure Container Registry to work with a private AKS cluster, [set up a private link for the container registry in the cluster virtual network][container-registry-private-link] or set up peering between the Container Registry virtual network and the private cluster's virtual network. * There's no support for converting existing AKS clusters into private clusters. For associated best practices, see [Best practices for network connectivity and [az-group-create]: /cli/azure/group#az_group_create [az-aks-create]: /cli/azure/aks#az_aks_create [az-aks-update]: /cli/azure/aks#az_aks_update+[az-dns-zone]: ../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration |
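To tie the DNS guidance above to a concrete command, the following sketch creates a private cluster that uses the system-managed private DNS zone, which is the zone a custom DNS server would forward to through 168.63.129.16. The resource names are placeholders; check `az aks create --help` for the options available in your CLI version.

```azurecli-interactive
# Create a private cluster with a system-managed private DNS zone for the API server.
az aks create \
  --resource-group myResourceGroup \
  --name myPrivateAKSCluster \
  --enable-private-cluster \
  --private-dns-zone system
```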
aks | Web App Routing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/web-app-routing.md | The Web Application Routing add-on deploys the following components: - An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free). - Azure CLI version 2.47.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. - An Azure Key Vault to store certificates.-- The `aks-preview` Azure CLI extension version 0.5.75 or later installed. If you need to install or update, see [Install or update the `aks-preview` extension](#install-or-update-the-aks-preview-azure-cli-extension).+- The `aks-preview` Azure CLI extension version 0.5.137 or later installed. If you need to install or update, see [Install or update the `aks-preview` extension](#install-or-update-the-aks-preview-azure-cli-extension). - Optionally, a DNS solution, such as [Azure DNS](../dns/dns-getstarted-portal.md). ### Install or update the `aks-preview` Azure CLI extension The following extra add-on is required: ## Retrieve the add-on's managed identity object ID -You use the managed identity in the next steps to grant permissions to manage the Azure DNS zone and retrieve certificates from the Azure Key Vault. +You use the managed identity in the next steps to grant permissions to manage the Azure DNS zone and retrieve secrets and certificates from the Azure Key Vault. - Get the add-on's managed identity object ID using the [`az aks show`][az-aks-show] command and setting the output to a variable named *MANAGEDIDENTITY_OBJECTID*. You use the managed identity in the next steps to grant permissions to manage th The Web Application Routing add-on creates a user-created managed identity in the cluster resource group. You need to grant permissions to the managed identity so it can retrieve SSL certificates from the Azure Key Vault. -- Grant `GET` permissions for the Web Application Routing add-on to retrieve certificates from Azure Key Vault using the [`az keyvault set-policy`][az-keyvault-set-policy] command.+Azure Key Vault offers [two authorization systems](../key-vault/general/rbac-access-policy.md): **Azure role-based access control (Azure RBAC)**, which operates on the management plane, and the **access policy model**, which operates on both the management plane and the data plane. To find out which system your key vault is using, you can query the `enableRbacAuthorization` property. - ```azurecli-interactive - az keyvault set-policy --name <KeyVaultName> --object-id $MANAGEDIDENTITY_OBJECTID --secret-permissions get --certificate-permissions get - ``` +```azurecli-interactive +az keyvault show --name <KeyVaultName> --query properties.enableRbacAuthorization +``` ++If Azure RBAC authorization is enabled for your key vault, you should configure permissions using Azure RBAC. Add the `Key Vault Secrets User` role assignment to the key vault. ++```azurecli-interactive +KEYVAULTID=$(az keyvault show --name <KeyVaultName> --query "id" --output tsv) +az role assignment create --role "Key Vault Secrets User" --assignee $MANAGEDIDENTITY_OBJECTID --scope $KEYVAULTID +``` ++If Azure RBAC authorization is not enabled for your key vault, you should configure permissions using the access policy model. 
Grant `GET` permissions for the Web Application Routing add-on to retrieve certificates from Azure Key Vault using the [`az keyvault set-policy`][az-keyvault-set-policy] command. ++```azurecli-interactive +az keyvault set-policy --name <KeyVaultName> --object-id $MANAGEDIDENTITY_OBJECTID --secret-permissions get --certificate-permissions get +``` ## Connect to your AKS cluster |
aks | Workload Identity Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-overview.md | The following table provides the **minimum** package version required for each l | Language | Library | Minimum Version | Example | ||-|--||-| .NET | [Azure.Identity](https://learn.microsoft.com/dotnet/api/overview/azure/identity-readme) | 1.9.0-beta.2 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/dotnet) | -| Go | [azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity) | 1.3.0-beta.1 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/go) | -| Java | [azure-identity](https://learn.microsoft.com/java/api/overview/azure/identity-readme) | 1.9.0-beta.1 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/java) | -| JavaScript | [@azure/identity](https://learn.microsoft.com/javascript/api/overview/azure/identity-readme) | 3.2.0-beta.1 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/node) | -| Python | [azure-identity](https://learn.microsoft.com/python/api/overview/azure/identity-readme) | 1.13.0b2 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/python) | +| .NET | [Azure.Identity](https://learn.microsoft.com/dotnet/api/overview/azure/identity-readme) | 1.9.0 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/dotnet) | +| Go | [azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity) | 1.3.0 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/go) | +| Java | [azure-identity](https://learn.microsoft.com/java/api/overview/azure/identity-readme) | 1.9.0 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/java) | +| JavaScript | [@azure/identity](https://learn.microsoft.com/javascript/api/overview/azure/identity-readme) | 3.2.0 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/node) | +| Python | [azure-identity](https://learn.microsoft.com/python/api/overview/azure/identity-readme) | 1.13.0 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/python) | ## Microsoft Authentication Library (MSAL) |
api-management | Compute Infrastructure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/compute-infrastructure.md | The following table summarizes the compute platforms currently used for instance > [!NOTE] > Currently, the `stv2` platform isn't available in the US Government cloud or in the following Azure regions: China East, China East 2, China North, China North 2.+> +> Also, as Qatar Central is a recently established Azure region, only the `stv2` platform is supported for API Management services deployed in this region. ## How do I know which platform hosts my API Management instance? Migration steps depend on features enabled in your API Management instance. If t ## Next steps * [Migrate an API Management instance to the stv2 platform](migrate-stv1-to-stv2.md).-* Learn more about [upcoming breaking changes](breaking-changes/overview.md) in API Management. +* Learn more about [upcoming breaking changes](breaking-changes/overview.md) in API Management. |
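To check which compute platform hosts an existing instance, one option is to query the service's `platformVersion` property. This is a sketch with placeholder names, and it assumes your Azure CLI version surfaces that property in the `az apim show` output.

```azurecli-interactive
# Returns stv1 or stv2, depending on the platform hosting the API Management instance.
az apim show \
    --name contoso-apim \
    --resource-group myResourceGroup \
    --query platformVersion \
    --output tsv
```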
api-management | Configure Custom Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/configure-custom-domain.md | Choose the steps according to the [domain certificate](#domain-certificate-optio > [!NOTE] > The process of assigning the certificate may take 15 minutes or more depending on size of deployment. Developer tier has downtime, while Basic and higher tiers do not. + ## DNS configuration Configure a CNAME record that points from your custom domain name (for example, > [!NOTE] > Some domain registrars only allow you to map subdomains when using a CNAME record, such as `www.contoso.com`, and not root names, such as `contoso.com`. For more information on CNAME records, see the documentation provided by your registrar or [IETF Domain Names - Implementation and Specification](https://tools.ietf.org/html/rfc1035). +> [!CAUTION] +> When you use the free, managed certificate and configure a CNAME record with your DNS provider, make sure that it resolves to the default API Management service hostname (`<apim-service-name>.azure-api.net`). Currently, API Management doesn't automatically renew the certificate if the CNAME record doesn't resolve to the default API Management hostname. For example, if you're using the free, managed certificate and you use Cloudflare as your DNS provider, make sure that DNS proxy isn't enabled on the CNAME record. + ### TXT record When enabling the free, managed certificate for API Management, also configure a TXT record in your DNS zone to establish your ownership of the domain name. When you use the portal to configure the free, managed certificate for your cust You can also get a domain ownership identifier by calling the [Get Domain Ownership Identifier](/rest/api/apimanagement/current-ga/api-management-service/get-domain-ownership-identifier) REST API. + ## Next steps [Upgrade and scale your service](upgrade-and-scale.md) |
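If the custom domain's zone is hosted in Azure DNS, the CNAME and TXT records discussed above could be created roughly as follows. This is a hedged sketch with placeholder zone, record, and service names; the domain ownership identifier comes from the portal or the REST API mentioned in the entry.

```azurecli-interactive
# CNAME: point api.contoso.com at the default API Management hostname so the managed certificate can renew.
az network dns record-set cname set-record \
    --resource-group dns-rg \
    --zone-name contoso.com \
    --record-set-name api \
    --cname contoso-apim.azure-api.net

# TXT: establish ownership of the domain for the free, managed certificate.
az network dns record-set txt add-record \
    --resource-group dns-rg \
    --zone-name contoso.com \
    --record-set-name <txt-record-name> \
    --value "<domain-ownership-identifier>"
```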
app-service | Upgrade To Asev3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/upgrade-to-asev3.md | description: Take the first steps toward upgrading to App Service Environment v3 Previously updated : 05/11/2023 Last updated : 05/12/2023 # Upgrade to App Service Environment v3 This page is your one-stop shop for guidance and resources to help you upgrade s |**2**|**Migrate**|Based on results of your review, either upgrade using the migration feature or follow the manual steps.<br><br>- [Use the automated migration feature](how-to-migrate.md)<br>- [Migrate manually](migration-alternatives.md)| |**3**|**Testing and troubleshooting**|Upgrading using the automated migration feature requires a 3-6 hour service window. Support teams are monitoring upgrades to ensure success. If you have a support plan and you need technical help, create a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).| |**4**|**Optimize your App Service plans**|Once your upgrade is complete, you can optimize the App Service plans for additional benefits.<br><br>Review the autoselected Isolated v2 SKU sizes and scale up or scale down your App Service plans as needed.<br><br>- [Scale down your App Service plans](../manage-scale-up.md)<br>- [App Service Environment post-migration scaling guidance](migrate.md#pricing)<br><br>Check out the pricing estimates if needed.<br><br>- [App Service pricing page](https://azure.microsoft.com/pricing/details/app-service/windows/)<br>- [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator)|-|**5**|**Learn more**|[Frequently asked questions](migrate.md#frequently-asked-questions)<br><br>[Community support](https://aka.ms/asev1v2retirement)| +|**5**|**Learn more**|Join the [free live webinar](https://msit.events.teams.microsoft.com/event/2c472562-426a-48d6-b963-21c73d6e6cb0@72f988bf-86f1-41af-91ab-2d7cd011db47) with FastTrack Architects.<br><br>Need more help? [Submit a request](https://cxp.azure.com/nominationportal/nominationform/fasttrack) to contact FastTrack.<br><br>[Frequently asked questions](migrate.md#frequently-asked-questions)<br><br>[Community support](https://aka.ms/asev1v2retirement)| ## Additional information App Service Environment v3 is the latest version of App Service Environment. It' ## Next steps > [!div class="nextstepaction"]-> [Learn about App Service Environment v3](overview.md) +> [Learn about App Service Environment v3](overview.md) |
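For step 4 above (optimizing App Service plans after the upgrade), a minimal sketch of scaling a plan down to a smaller Isolated v2 SKU, using placeholder names:

```azurecli-interactive
# Scale an App Service plan on App Service Environment v3 down to the I1v2 SKU.
az appservice plan update \
    --resource-group myResourceGroup \
    --name myAsev3Plan \
    --sku I1V2
```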
app-service | Manage Automatic Scaling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-automatic-scaling.md | Title: Automatic scaling + Title: How to enable automatic scaling description: Learn how to scale automatically in Azure App Service with zero configuration. Previously updated : 05/05/2023 Last updated : 05/12/2023 -App Service offers automatic scaling that adjusts the number of instances based on incoming HTTP requests. Automatic scaling guarantees that your web apps can manage different levels of traffic. You can adjust scaling settings, like setting the minimum and maximum number of instances per web app, to enhance performance. The platform tackles cold start issues by prewarming instances that act as a buffer when scaling out, resulting in smooth performance transitions. Billing is calculated per second using existing meters, and prewarmed instances are also charged per second. +Automatic scaling is a new scale out option that automatically handles scaling decisions for your web apps and App Service Plans. It's different from the pre-existing **[Azure autoscale](../azure-monitor/autoscale/autoscale-overview.md)**, which lets you define scaling rules based on schedules and resources. With automatic scaling, you can adjust scaling settings to improve your app's performance and avoid cold start issues. The platform prewarms instances to act as a buffer when scaling out, ensuring smooth performance transitions. You can use Application Insights [Live Metrics](../azure-monitor/app/live-stream.md) to check your current instance count, and [performanceCounters](../azure-functions/analyze-telemetry-data.md#query-telemetry-data) to see the instance count history. You're charged per second for every instance, including prewarmed instances. ++A comparison of scale out and scale in options available on App Service: ++| | **Manual** | **Autoscale** | **Automatic scaling** | +| | | | | +| Available pricing tiers | Basic and Up | Standard and Up | Premium v2 and Premium v3 | +|Rule-based scaling |No |Yes |No, the platform manages the scale out and in based on HTTP traffic. | +|Schedule-based scaling |No |Yes |No| +|Always ready instances | No, your web app runs on the number of manually scaled instances. | No, your web app runs on other instances available during the scale out operation, based on threshold defined for autoscale rules. | Yes (minimum 1) | +|Prewarmed instances |No |No |Yes (default 1) | +|Per-app maximum |No |No |Yes| ## How automatic scaling works -It's common to deploy multiple web apps to a single App Service Plan. You can enable automatic scaling for an App Service Plan and configure a range of instances for each of the web apps. As your web app starts receiving incoming HTTP traffic, App Service monitors the load and adds instances. Resources may be shared when multiple web apps within an App Service Plan are required to scale out simultaneously. +You enable automatic scaling for an App Service Plan and configure a range of instances for each of the web apps. As your web app starts receiving HTTP traffic, App Service monitors the load and adds instances. Resources may be shared when multiple web apps within an App Service Plan are required to scale out simultaneously. Here are a few scenarios where you should scale out automatically: Here are a few scenarios where you should scale out automatically: - You want your web apps within the same App Service Plan to scale differently and independently of each other. 
- Your web app is connected to a databases or legacy system, which may not scale as fast as the web app. Scaling automatically allows you to set the maximum number of instances your App Service Plan can scale to. This setting helps the web app to not overwhelm the backend. -> [!IMPORTANT] -> [`Always ON`](./configure-common.md?tabs=portal#configure-general-settings) needs to be disabled to use automatic scaling. -> - ## Enable automatic scaling __Maximum burst__ is the highest number of instances that your App Service Plan can increase to based on incoming HTTP requests. For Premium v2 & v3 plans, you can set a maximum burst of up to 30 instances. The maximum burst must be equal to or greater than the number of workers specified for the App Service Plan. +> [!IMPORTANT] +> [`Always ON`](./configure-common.md?tabs=portal#configure-general-settings) needs to be disabled to use automatic scaling. +> + #### [Azure portal](#tab/azure-portal) -To enable automatic scaling in the Azure portal, select **Scale out (App Service Plan)** in the web app's left menu. Select **Automatic (preview)**, update the __Maximum burst__ value, and select the **Save** button. +To enable automatic scaling, navigate to the web app's left menu and select **Scale out (App Service Plan)**. Select **Automatic (preview)**, update the __Maximum burst__ value, and select the **Save** button. :::image type="content" source="./media/manage-automatic-scaling/azure-portal-automatic-scaling.png" alt-text="Automatic scaling in Azure portal" ::: __Always ready instances__ is an app-level setting to specify the minimum number #### [Azure portal](#tab/azure-portal) -To set the minimum number of instances in the Azure portal, select **Scale out (App Service Plan)** in the web app's left menu, update the **Always ready instances** value, and select the **Save** button. +To set the minimum number of web app instances, navigate to the web app's left menu and select **Scale out (App Service Plan)**. Update the **Always ready instances** value, and select the **Save** button. :::image type="content" source="./media/manage-automatic-scaling/azure-portal-always-ready-instances.png" alt-text="Screenshot of always ready instances" ::: The __maximum scale limit__ sets the maximum number of instances a web app can s #### [Azure portal](#tab/azure-portal) -To set the maximum number of web app instances in the Azure portal, select **Scale out (App Service Plan)** in the web app's left menu, select **Enforce scale out limit**, update the **Maximum scale limit**, and select the **Save** button. +To set the maximum number of web app instances, navigate to the web app's left menu and select **Scale out (App Service Plan)**. Select **Enforce scale out limit**, update the **Maximum scale limit**, and select the **Save** button. :::image type="content" source="./media/manage-automatic-scaling/azure-portal-maximum-scale-limit.png" alt-text="Screenshot of maximum scale limit" ::: You can modify the number of prewarmed instances for an app using the Azure CLI. #### [Azure portal](#tab/azure-portal) -To disable automatic scaling in the Azure portal, select **Scale out (App Service Plan)** in the web app's left menu, select **Manual**, and select the **Save** button. +To disable automatic scaling, navigate to the web app's left menu and select **Scale out (App Service Plan)**. Select **Manual**, and select the **Save** button. 
:::image type="content" source="./media/manage-automatic-scaling/azure-portal-manual-scaling.png" alt-text="Screenshot of manual scaling" ::: az appservice plan update --resource-group <RESOURCE_GROUP> --name <APP_SERVICE_ -## Frequently asked questions -- [How is automatic scaling different than autoscale?](#how-is-automatic-scaling-different-than-autoscale)-- [How does automatic scaling work with existing Auto scale rules?](#how-does-automatic-scaling-work-with-existing-autoscale-rules)-- [Does automatic scaling support Azure Function apps?](#does-automatic-scaling-support-azure-function-apps)-- [How to monitor the current instance count and instance history?](#how-to-monitor-the-current-instance-count-and-instance-history)---### How is automatic scaling different than autoscale? -Automatic scaling is a new scaling option in App Service that automatically handles web app scaling decisions for you. **[Azure autoscale](../azure-monitor/autoscale/autoscale-overview.md)** is a pre-existing Azure capability for defining schedule-based and resource-based scaling rules for your App Service Plans. --A comparison of scale out and scale in options available on App Service: --| | **Manual scaling** | **Auto scaling** | **Automatic scaling** | -| | | | | -| Available pricing tiers | Basic and Up | Standard and Up | Premium v2 and Premium v3 | -|Rule-based scaling |No |Yes |No, the platform manages the scale out and in based on HTTP traffic. | -|Schedule-based scaling |No |Yes |No| -|Always ready instances | No, your web app runs on the number of manually scaled instances. | No, your web app runs on other instances available during the scale out operation, based on threshold defined for autoscale rules. | Yes (minimum 1) | -|Prewarmed instances |No |No |Yes (default 1) | -|Per-app maximum |No |No |Yes| --### How does automatic scaling work with existing autoscale rules? -Once automatic scaling is configured, existing Azure autoscale rules and schedules are disabled. Applications can use either automatic scaling, or autoscale, but not both. --### Does automatic scaling support Azure Function apps? +## Does automatic scaling support Azure Function apps? No, you can only have Azure App Service web apps in the App Service Plan where you wish to enable automatic scaling. If you have existing Azure Functions apps in the same App Service Plan, or if you create new Azure Functions apps, then automatic scaling is disabled. For Functions, it's recommended to use the [Azure Functions Premium plan](../azure-functions/functions-premium-plan.md) instead. -### How to monitor the current instance count and instance history? -Use Application Insights [Live Metrics](../azure-monitor/app/live-stream.md) to check the current instance count, and [performanceCounters](../azure-functions/analyze-telemetry-data.md#query-telemetry-data) to check the instance count history. - <a name="Next Steps"></a> ## More resources |
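The `az appservice plan update` command shown in the entry above is truncated in this digest. As a rough, hedged example of the pattern only (placeholder names, and assuming the `--elastic-scale` plan flag and per-app instance flags available in recent CLI versions), enabling automatic scaling and setting a per-app minimum might look like the following.

```azurecli-interactive
# Enable automatic scaling on the plan and cap the maximum burst at 10 instances.
az appservice plan update \
    --resource-group myResourceGroup \
    --name myAppServicePlan \
    --elastic-scale true \
    --max-elastic-worker-count 10

# Set the always ready (minimum) instance count for one web app in the plan.
az webapp update \
    --resource-group myResourceGroup \
    --name myWebApp \
    --minimum-elastic-instance-count 2
```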
application-gateway | Configuration Frontend Ip | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-frontend-ip.md | Only one public IP address and one private IP address is supported. You choose t A frontend IP address is associated to a *listener*, which checks for incoming requests on the frontend IP. >[!NOTE] -> You can create private and public listeners with the same port number (Preview feature). However, be aware of any Network Security Group (NSG) associated with the application gateway subnet. Depending on your NSG's configuration, you may need an inbound rule with **Destination IP addresses** as your application gateway's public and private frontend IPs. +> You can create private and public listeners with the same port number (Preview feature). However, be aware of any Network Security Group (NSG) associated with the application gateway subnet. Depending on your NSG's configuration, you may need an inbound rule with **Destination IP addresses** as your application gateway subnet's IP prefix. > > **Inbound Rule**: > - Source: (as per your requirement)-> - Destination IP addresses: Public and Private frontend IPs of your application gateway. +> - Destination IP addresses: IP prefix of your application gateway subnet. > - Destination Port: (as per listener configuration) > - Protocol: TCP > |
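A hedged sketch of the inbound NSG rule described in the note above, with placeholder names and address ranges; adjust the source, destination prefix, port, and priority to match your listener configuration.

```azurecli-interactive
# Allow listener traffic to the Application Gateway subnet's IP prefix.
az network nsg rule create \
    --resource-group myResourceGroup \
    --nsg-name appgw-subnet-nsg \
    --name AllowListenerTraffic \
    --priority 200 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --source-address-prefixes Internet \
    --destination-address-prefixes 10.0.0.0/24 \
    --destination-port-ranges 443
```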
applied-ai-services | Sdk Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/sdk-overview.md | Title: Form Recognizer SDKs -description: Form Recognizer software development kits (SDKs) expose Form Recognizer models, features and capabilities, using C#, Java, JavaScript, or Python programming language. +description: Form Recognizer software development kits (SDKs) expose Form Recognizer models, features and capabilities, using C#, Java, JavaScript, and Python programming language. Form Recognizer SDK supports the following languages and platforms: | Language → Azure Form Recognizer SDK version | Package| Supported API version| Platform support | |:-:|:-|:-| :-|-| [.NET/C# → 4.0.0 (latest GA release)](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.0.0/https://docsupdatetracker.net/index.html)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer)|[**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)| -|[Java → 4.0.6 (latest GA release)](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.0.0/https://docsupdatetracker.net/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.6) |[**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)| -|[JavaScript → 4.0.0 (latest GA release)](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/4.0.0/https://docsupdatetracker.net/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer)| [**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) | -|[Python → 3.2.0 (latest GA release)](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.2.0/https://docsupdatetracker.net/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.2.0/)| [**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, 
Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli) +| [.NET/C# → 4.0.0 (latest GA release)](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.0.0/https://docsupdatetracker.net/index.html)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer)|[v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)| +|[Java → 4.0.6 (latest GA release)](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.0.0/https://docsupdatetracker.net/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.6) |[v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)| +|[JavaScript → 4.0.0 (latest GA release)](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/4.0.0/https://docsupdatetracker.net/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer)| [v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) | +|[Python → 3.2.0 (latest GA release)](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.2.0/https://docsupdatetracker.net/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.2.0/)| [v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli) ## Supported Clients | Language| SDK version | API version | Supported clients| | : | :--|:- | :--|-|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 4.0.0 (latest GA release)| v3.0 / 2022-08-31 (default)| **DocumentAnalysisClient**<br>**DocumentModelAdministrationClient** | -|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 3.1.x | v2.1 (default)</br>v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** | -|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 3.0.x| v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** | -| **Python**| 3.2.x (latest GA release) | v3.0 / 2022-08-31 (default)| 
**DocumentAnalysisClient**</br>**DocumentModelAdministrationClient**| -| **Python** | 3.1.x | v2.1 (default)</br>v2.0 |**FormRecognizerClient**</br>**FormTrainingClient** | -| **Python** | 3.0.0 | v2.0 |**FormRecognizerClient**</br>**FormTrainingClient** | +|.NET/C#</br> Java</br> JavaScript</br>| 4.0.0 (latest GA release)| v3.0 / 2022-08-31 (default)| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** | +|.NET/C#</br> Java</br> JavaScript</br>| 3.1.x | v2.1 (default)</br>v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** | +|.NET/C#</br> Java</br> JavaScript</br>| 3.0.x| v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** | +| Python| 3.2.x (latest GA release) | v3.0 / 2022-08-31 (default)| DocumentAnalysisClient</br>DocumentModelAdministrationClient| +| Python | 3.1.x | v2.1 (default)</br>v2.0 |**FormRecognizerClient**</br>**FormTrainingClient** | +| Python | 3.0.0 | v2.0 |**FormRecognizerClient**</br>**FormTrainingClient** | ## Use Form Recognizer SDK in your applications |
azure-app-configuration | Howto Convert To The New Spring Boot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-convert-to-the-new-spring-boot.md | Title: Convert to the Spring Boot Library + Title: Convert to the new Spring Boot library -description: Learn how to convert to the new App Configuration Spring Boot Library from the previous version. +description: Learn how to convert to the new App Configuration library for Spring Boot from the previous version. ms.devlang: java -# Convert to new App Configuration Spring Boot library +# Convert to the new App Configuration library for Spring Boot -A new version of the App Configuration library for Spring Boot is now available. The version introduces new features such as Azure Spring global properties, but also some breaking changes. These changes aren't backwards compatible with configuration setups that were using the previous library version. For the following topics: +A new version of the Azure App Configuration library for Spring Boot is available. The version introduces new features such as Spring Cloud Azure global properties, but also some breaking changes. These changes aren't backward compatible with configuration setups that used the previous library version. -* Group and Artifact Ids -* Package path renamed -* Classes renamed -* Feature flag loading -* Possible conflicts with Azure Spring global properties +This article provides a reference on the changes and the actions needed to migrate to the new library version. -this article provides a reference on the change and actions needed to migrate to the new library version. +## Group and artifact IDs changed -## Group and Artifact ID changed --All of the Azure Spring Boot libraries have had their Group and Artifact IDs updated to match a new format. The new package names are: +All of the group and artifact IDs in the Azure libraries for Spring Boot have been updated to match a new format. The new package names are: ```xml <dependency> All of the Azure Spring Boot libraries have had their Group and Artifact IDs upd </dependency> ``` -> [!NOTE] -> The 4.7.0 version is the first 4.x version of the library. This is to match the version of the other Spring Cloud Azure libraries. +The 4.7.0 version is the first 4.x version of the library. It matches the version of the other Spring Cloud Azure libraries. -As of the 4.7.0 version, the App Configuration and Feature Management libraries are now part of the spring-cloud-azure-dependencies BOM. The BOM file makes it so that you no longer need to specify the version of the libraries in your project. The BOM automatically manages the version of the libraries. +As of the 4.7.0 version, the App Configuration and feature management libraries are part of the `spring-cloud-azure-dependencies` bill of materials (BOM). The BOM file ensures that you no longer need to specify the version of the libraries in your project. The BOM automatically manages the version of the libraries. ```xml <dependency> As of the 4.7.0 version, the App Configuration and Feature Management libraries </dependency> ``` -## Package path renamed +## Package paths renamed -The package path for the `spring-cloud-azure-feature-managment` and `spring-cloud-azure-feature-management-web` libraries have been renamed from `com.azure.spring.cloud.feature.manager` to `com.azure.spring.cloud.feature.management` and `com.azure.spring.cloud.feature.management.web`. 
+The package paths for the `spring-cloud-azure-feature-management` and `spring-cloud-azure-feature-management-web` libraries have been renamed from `com.azure.spring.cloud.feature.manager` to `com.azure.spring.cloud.feature.management` and `com.azure.spring.cloud.feature.management.web`. ## Classes renamed -* `ConfigurationClientBuilderSetup` has been renamed to `ConfigurationClientCustomizer` and its `setup` method has been renamed to `customize` -* `SecretClientBuilderSetup` has been renamed to `SecretClientCustomizer` and its `setup` method has been renamed to `customize` -* `AppConfigurationCredentialProvider` and `KeyVaultCredentialProvider` have been removed. Instead you can use [Azure Spring common configuration properties](/azure/developer/java/spring-framework/configuration) or modify the credentials using `ConfigurationClientCustomizer`/`SecretClientCustomizer`. +The following classes have changed: ++* `ConfigurationClientBuilderSetup` has been renamed to `ConfigurationClientCustomizer`. Its `setup` method has been renamed to `customize`. +* `SecretClientBuilderSetup` has been renamed to `SecretClientCustomizer`. Its `setup` method has been renamed to `customize`. +* `AppConfigurationCredentialProvider` and `KeyVaultCredentialProvider` have been removed. Instead, you can use [Spring Cloud Azure common configuration properties](/azure/developer/java/spring-framework/configuration) or modify the credentials by using `ConfigurationClientCustomizer` or `SecretClientCustomizer`. ## Feature flag loading -Feature flags now support loading using multiple key/label filters. +Feature flags now support loading via multiple key/label filters: ```properties spring.cloud.azure.appconfiguration.stores[0].feature-flags.enable spring.cloud.azure.appconfiguration.stores[0].feature-flags.selects[0].label-fil spring.cloud.azure.appconfiguration.stores[0].monitoring.feature-flag-refresh-interval ``` -> [!NOTE] -> The property `spring.cloud.azure.appconfiguration.stores[0].feature-flags.label` has been removed. Instead you can use `spring.cloud.azure.appconfiguration.stores[0].feature-flags.selects[0].label-filter` to specify a label filter. +The property `spring.cloud.azure.appconfiguration.stores[0].feature-flags.label` has been removed. Instead, you can use `spring.cloud.azure.appconfiguration.stores[0].feature-flags.selects[0].label-filter` to specify a label filter. -## Possible conflicts with Azure Spring global properties +## Possible conflicts with Spring Cloud Azure global properties -[Azure Spring common configuration properties](/azure/developer/java/spring-framework/configuration) enables you to customize your connections to Azure services. The new App Configuration library will picks up any global or app configuration setting configured with Azure Spring common configuration properties. Your connection to app configuration will change if the configurations have been set for another Azure Spring library. +[Spring Cloud Azure common configuration properties](/azure/developer/java/spring-framework/configuration) enable you to customize your connections to Azure services. The new App Configuration library will pick up any global or App Configuration setting that's configured with Spring Cloud Azure common configuration properties. Your connection to App Configuration will change if the configurations are set for another Spring Cloud Azure library. -> [!NOTE] -> You can override this by using `ConfigurationClientCustomizer`/`SecretClientCustomizer` to modify the clients. 
+You can override this behavior by using `ConfigurationClientCustomizer`/`SecretClientCustomizer` to modify the clients. > [!WARNING]-> You may now run into an issue where more than one connection method is provided as Azure Spring global properties will automatically pick up credentials, such as Environment Variables, and use them to connect to Azure services. This can cause issues if you are using a different connection method, such as Managed Identity, and the global properties are overriding it. +> Spring Cloud Azure global properties might provide more than one connection method as they automatically pick up credentials, such as environment variables, and use them to connect to Azure services. This behavior can cause problems if you're using a different connection method, such as a managed identity, and the global properties are overriding it. |
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/overview.md | You can install the Connected Machine agent manually, or on multiple machines at When you connect your machine to Azure Arc-enabled servers, you can perform many operational functions, just as you would with native Azure virtual machines. Below are some of the key supported actions for connected machines. * **Govern**:- * Assign [Azure Policy guest configurations](../../governance/machine-configuration/overview.md) to audit settings inside the machine. To understand the cost of using Azure Policy Guest Configuration policies with Arc-enabled servers, see Azure Policy [pricing guide](https://azure.microsoft.com/pricing/details/azure-policy/). + * Assign [Azure Automanage machine configurations](../../governance/machine-configuration/overview.md) to audit settings inside the machine. To understand the cost of using Azure Automanage Machine Configuration policies with Arc-enabled servers, see Azure Policy [pricing guide](https://azure.microsoft.com/pricing/details/azure-policy/). * **Protect**: * Protect non-Azure servers with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint), included through [Microsoft Defender for Cloud](../../security-center/defender-for-servers-introduction.md), for threat detection, for vulnerability management, and to proactively monitor for potential security threats. Microsoft Defender for Cloud presents the alerts and remediation suggestions from the threats detected. * Use [Microsoft Sentinel](scenario-onboard-azure-sentinel.md) to collect security-related events and correlate them with other data sources. |
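As an illustration of the **Govern** action above, assigning a machine configuration policy to a scope that contains Arc-enabled servers might look like the following sketch. The assignment name is illustrative, and the policy definition name or ID and the scope are placeholders you would replace with a real built-in or custom definition.

```azurecli-interactive
# Assign a machine configuration policy definition at resource group scope.
az policy assignment create \
    --name audit-arc-servers \
    --scope /subscriptions/<subscription-id>/resourceGroups/myResourceGroup \
    --policy <policy-definition-name-or-id>
```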
azure-arc | Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md | If two agents use the same configuration, you will encounter inconsistent behavi Azure Arc supports the following Windows and Linux operating systems. Only x86-64 (64-bit) architectures are supported. The Azure Connected Machine agent does not run on x86 (32-bit) or ARM-based architectures. -* Windows Server 2012 R2, 2016, 2019, and 2022 +* Windows Server 2008 R2 SP1, 2012 R2, 2016, 2019, and 2022 * Both Desktop and Server Core experiences are supported * Azure Editions are supported on Azure Stack HCI * Windows 10, 11 (see [client operating system guidance](#client-operating-system-guidance)) Microsoft doesn't recommend running Azure Arc on short-lived (ephemeral) servers Windows operating systems: * NET Framework 4.6 or later. [Download the .NET Framework](/dotnet/framework/install/guide-for-developers).-* Windows PowerShell 4.0 or later (already included with Windows Server 2012 R2 and later). +* Windows PowerShell 4.0 or later (already included with Windows Server 2012 R2 and later). For Windows Server 2008 R2 SP1, [Download Windows Management Framework 5.1.](https://www.microsoft.com/download/details.aspx?id=54616). Linux operating systems: |
azure-cache-for-redis | Cache Azure Active Directory For Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-azure-active-directory-for-authentication.md | + + Title: Use Azure Active Directory for cache authentication ++description: Learn how to use Azure Active Directory with Azure Cache for Redis. +++++ Last updated : 05/12/2023+++++# Use Azure Active Directory for cache authentication ++Azure Cache for Redis offers two methods to authenticate to your cache instance: ++- [access key](cache-configure.md#access-keys) ++- [Azure Active Directory token](/azure/active-directory/develop/access-tokens) ++Although access key authentication is simple, it comes with a set of challenges around security and password management. In this article, you learn how to use an Azure Active Directory (Azure AD) token for cache authentication. ++Azure Cache for Redis offers a password-free authentication mechanism by integrating with [Azure Active Directory](/azure/active-directory/fundamentals/active-directory-whatis). This integration also includes [role-based access control](/azure/role-based-access-control/) functionality provided through [access control lists (ACLs)](https://redis.io/docs/management/security/acl/) supported in open source Redis. ++> [!IMPORTANT] +> The updates to Azure Cache for Redis that enable Azure Active Directory for authentication are available only in East US region. ++To use the ACL integration, your client application must assume the identity of an Azure Active Directory entity, like service principal or managed identity, and connect to your cache. In this article, you learn how to use your service principal or managed identity to connect to your cache, and how to grant your connection predefined permissions based on the Azure AD artifact being used for the connection. ++## Scope of availability ++| **Tier** | Basic, Standard, Premium | Enterprise, Enterprise Flash | +|:--|::|:-:| +| **Availability** | Yes (preview) | No | ++## Prerequisites and limitations ++- To enable Azure AD token-based authentication for your Azure Cache for Redis instance, at least one Redis user must be configured under the **Data Access Policy** setting in the Resource menu. +- Azure AD-based authentication is supported for SSL connections and TLS 1.2 only. +- Azure AD-based authentication isn't supported on Azure Cache for Redis instances that run Redis version 4. +- Azure AD-based authentication isn't supported on Azure Cache for Redis instances that [depend on Cloud Services](./cache-faq.yml#caches-with-a-dependency-on-cloud-services--classic). +- Azure AD based authentication isn't supported in the Enterprise tiers of Azure Cache for Redis Enterprise. +- Some Redis commands are blocked. For a full list of blocked commands, see [Redis commands not supported in Azure Cache for Redis](cache-configure.md#redis-commands-not-supported-in-azure-cache-for-redis). ++> [!IMPORTANT] +> Once a connection is established using Azure AD token, client applications must periodically refresh Azure AD token before expiry, and send an `AUTH` command to Redis server to avoid disruption of connections. For more information, see [Configure your Redis client to use Azure Active Directory](#configure-your-redis-client-to-use-azure-active-directory). ++## Enable Azure AD token based authentication on your cache ++1. In the Azure portal, select the Azure Cache for Redis instance where you'd like to configure Azure AD token-based authentication. ++1. 
Select **(PREVIEW) Data Access Configuration** from the Resource menu. ++1. Select **Add** and choose **New Redis User**. ++1. On the **Access Policy** tab, select one of the available policies in the table: **Owner**, **Contributor**, or **Reader**. Then, select **Next: Redis Users**. ++ :::image type="content" source="media/cache-azure-active-directory-for-authentication/cache-new-redis-user.png" alt-text="Screenshot showing the available Access Policies."::: ++1. Choose either **User or service principal** or **Managed Identity** to determine how you want to authenticate to your Azure Cache for Redis instance. ++1. Then, select **Select members** and select **Select**. Then, select **Next: Review + Design**. + :::image type="content" source="media/cache-azure-active-directory-for-authentication/cache-select-members.png" alt-text="Screenshot showing members to add as New Redis Users."::: ++1. From the Resource menu, select **Advanced settings**. ++1. Check the box labeled **(PREVIEW) Enable Azure AD Authorization** and select **OK**. Then, select **Save**. ++ :::image type="content" source="media/cache-azure-active-directory-for-authentication/cache-azure-ad-access-authorization.png" alt-text="Screenshot of Azure AD access authorization."::: ++1. A dialog box notifies you that upgrading is permanent and might cause a brief connection blip. Select **Yes**. ++ > [!IMPORTANT] + > Once the enable operation is complete, the nodes in your cache instance reboot to load the new configuration. We recommend performing this operation during your maintenance window or outside your peak business hours. The operation can take up to 30 minutes. ++## Configure your Redis client to use Azure Active Directory ++Because most Azure Cache for Redis clients assume that a password/access key is used for authentication, you likely need to update your client workflow to support authentication using Azure AD. In this section, you learn how to configure your client applications to connect to Azure Cache for Redis using an Azure AD token. ++<!-- Conceptual Art goes here. --> ++### Azure AD Client Workflow ++1. Configure your client application to acquire an Azure AD token for your application using the [Microsoft Authentication Library (MSAL)](/azure/active-directory/develop/msal-overview). ++ <!-- (ADD code snippet) --> ++1. Update your Redis connection logic to use the following `UserName` and `Password`: ++ - `UserName` = Object ID of your managed identity or service principal ++ - `Password` = Azure AD token that you acquired using MSAL ++ <!-- (ADD code snippet) --> ++1. Ensure that your client executes a Redis [AUTH command](https://redis.io/commands/auth/) automatically before your Azure AD token expires using: ++ 1. `UserName` = Object ID of your managed identity or service principal ++ 1. `Password` = Azure AD token refreshed periodically ++ <!-- (ADD code snippet) --> ++### Client library support ++The library `Microsoft.Azure.StackExchangeRedis` is an extension of `StackExchange.Redis` that enables you to use Azure Active Directory to authenticate connections from a Redis client application to an Azure Cache for Redis instance. The extension manages the authentication token, including proactively refreshing tokens before they expire to maintain persistent Redis connections over multiple days. 
++This [code sample](https://www.nuget.org/packages/Microsoft.Azure.StackExchangeRedis) demonstrates how to use the `Microsoft.Azure.StackExchangeRedis` NuGet package to connect to your Azure Cache for Redis instance using Azure Active Directory. ++<!-- +The following table includes links to code samples, which demonstrate how to connect to your Azure Cache for Redis instance using an Azure AD token. A wide variety of client libraries are included in multiple languages. ++| **Client library** | **Language** | **Link to sample code**| +|-|-|-| +| StackExchange.Redis | C#/.NET | StackExchange.Redis extension as a NuGet package | +| Python | Python | [Python code Sample](https://aka.ms/redis/aad/sample-code/python) | +| Jedis | Java | [Jedis code sample](https://aka.ms/redis/aad/sample-code/java-jedis) | +| Lettuce | Java | [Lettuce code sample](https://aka.ms/redis/aad/sample-code/java-lettuce) | +| Redisson | Java | [Redisson code sample](https://aka.ms/redis/aad/sample-code/java-redisson) | +| ioredis | Node.js | [ioredis code sample](https://aka.ms/redis/aad/sample-code/js-ioredis) | +| Node-redis | Node.js | [noredis code sample](https://aka.ms/redis/aad/sample-code/js-noderedis) | ++--> ++### Best practices for Azure AD authentication ++- Configure private links or firewall rules to protect your cache from a Denial of Service attack. ++- Ensure that your client application sends a new Azure AD token at least 3 minutes before token expiry to avoid connection disruption. ++- When calling the Redis server `AUTH` command periodically, consider adding a jitter so that the `AUTH` commands are staggered, and your Redis server doesn't receive lot of `AUTH` commands at the same time. ++## Next steps ++- [Configure role-based access control with Data Access Policy](cache-configure-role-based-access-control.md) |
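In the client workflow above, `UserName` must be the object ID of the identity making the connection. One hedged way to look up that value for a user-assigned managed identity, using placeholder names, is shown below.

```azurecli-interactive
# The principal (object) ID is the value used as the Redis UserName.
az identity show \
    --resource-group myResourceGroup \
    --name myManagedIdentity \
    --query principalId \
    --output tsv
```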
azure-cache-for-redis | Cache Configure Role Based Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure-role-based-access-control.md | + + Title: Configure role-based access control with Data Access Policy ++description: Learn how to configure role-based access control with Data Access Policy. +++++ Last updated : 05/12/2023+++++# Configure role-based access control with Data Access Policy ++Managing access to your Azure Cache for Redis instance is critical to ensure that the right users have access to the right set of data and commands. In Redis version 1, the [Access Control List](https://redis.io/docs/management/security/acl/) (ACL) was introduced. ACL limits which user can execute certain commands, and the keys that a user can be access. For example, you can prohibit specific users from deleting keys in the cache using [DEL](https://redis.io/commands/del/) command. ++Azure Cache for Redis now integrates this ACL functionality with Azure Active Directory (Azure AD) to allow you to configure your Data Access Policies for your application's service principal and managed identity. ++> [!IMPORTANT] +> The updates to Azure Cache for Redis that enable Azure Active Directory for role-based access control are available only in East US region. ++Azure Cache for Redis offers three built-in access policies: _Owner_, _Contributor_, and _Reader_. If the built-in access policies don't satisfy your data protection and isolation requirements, you can create and use your own custom data access policy as described in [Configure custom data access policy](#configure-a-custom-data-access-policy-for-your-application). ++## Scope of availability ++| **Tier** | Basic, Standard, Premium | Enterprise, Enterprise Flash | +|:--|::|:-:| +| **Availability** | Yes (preview) | No | ++## Prerequisites and limitations ++- Redis ACL and Data Access Policies aren't supported on Azure Cache for Redis instances that run Redis version 1. +- Redis ACL and Data Access Policies aren't supported on Azure Cache for Redis instances that depend on [Cloud Services](cache-faq.yml#caches-with-a-dependency-on-cloud-services--classic). +- Azure AD authentication and authorization are supported for SSL connections only. +- Some Redis commands are [blocked](cache-configure.md#redis-commands-not-supported-in-azure-cache-for-redis). ++## Permissions for your data access policy ++As documented on [Redis Access Control List](https://redis.io/docs/management/security/acl/), ACL in Redis version 1.1 allows configuring access permissions for two areas: ++### Command categories ++Redis has created groupings of commands such as administrative commands, dangerous commands, etc. to make setting permissions on a group of commands easier. ++- Use `+@commandcategory` to allow a command category +- Use `-@commandcategory` to disallow a command category ++These [commands](cache-configure.md#redis-commands-not-supported-in-azure-cache-for-redis) are still blocked. The following groups are useful command categories that Redis supports. For more information on command categories, see the full list under the heading [Command Categories](https://redis.io/docs/management/security/acl/). ++- `admin` + - Administrative commands. Normal applications never need to use these, including `MONITOR`, `SHUTDOWN`, and others. +- `dangerous` + - Potentially dangerous commands. 
Each should be considered with care for various reasons, including `FLUSHALL`, `RESTORE`, `SORT`, `KEYS`, `CLIENT`, `DEBUG`, `INFO`, `CONFIG`, and others. +- `keyspace` + - Writing or reading from keys, databases, or their metadata in a type agnostic way, including `DEL`, `RESTORE`, `DUMP`, `RENAME`, `EXISTS`, `DBSIZE`, `KEYS`, `EXPIRE`, `TTL`, `FLUSHALL`, and more. Commands that can modify the keyspace, key, or metadata also have the write category. Commands that only read the keyspace, key, or metadata have the read category. +- `pubsub` + - PubSub-related commands. +- `read` + - Reading from keys, values or metadata. Commands that don't interact with keys, don't have either read or write. +- `set` + - Data type: sets related. +- `sortedset` + - Data type: sorted sets related. +- `stream` + - Data type: streams related. +- `string` + - Data type: strings related. +- `write` + - Writing to keys (values or metadata). ++### Commands ++_Commands_ allow you to control which specific commands can be executed by a particular Redis user. ++- Use `+command` to allow a command. +- Use `-command` to disallow a command. ++### Keys ++Keys allow you to control access to specific keys or groups of keys stored in the cache. ++- Use `~<pattern>` to provide a pattern for keys. ++- Use either `~*` or `allkeys` to indicate that the command category permissions apply to all keys in the cache instance. ++### How to specify permissions ++To specify permissions, you need to create a string to save as your custom access policy, then assign the string to your Azure Cache for Redis user. ++The following list contains some examples of permission strings for various scenarios. ++- Allow application to execute all commands on all keys ++ Permissions string: `+@all allkeys` ++- Allow application to execute only _read_ commands ++ Permissions string: `+@read *` ++- Allow application to execute _read_ command category and set command on keys with prefix `Az`. ++ Permissions string: `+@read +set ~Az*` ++## Configure a custom data access policy for your application ++1. In the Azure portal, select the Azure Cache for Redis instance that you want to configure Azure AD token based authentication for. ++1. From the Resource menu, select **(PREVIEW) Data Access configuration**. ++ :::image type="content" source="media/cache-configure-role-based-access-control/cache-data-access-configuration.png" alt-text="Screenshot showing Data Access Configuration highlighted in the Resource menu."::: ++1. Select **Add** and choose **New Access Policy**. ++ :::image type="content" source="media/cache-configure-role-based-access-control/cache-add-custom-policy.png" alt-text="Screenshot showing a form to add custom access policy."::: ++1. Provide a name for your access policy. ++1. [Configure Permissions](#permissions-for-your-data-access-policy) as per your requirements. ++## Next steps ++- [Use Azure Active Directory for cache authentication](cache-azure-active-directory-for-authentication.md) |
azure-cache-for-redis | Cache How To Premium Clustering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-clustering.md | Clustering doesn't increase the number of connections available for a clustered In Azure, Redis cluster is offered as a primary/replica model where each shard has a primary/replica pair with replication, where the replication is managed by Azure Cache for Redis service.
-## Azure Cache for Redis now supports upto 30 shards (preview)
+## Azure Cache for Redis now supports up to 30 shards (preview)
 Azure Cache for Redis now supports up to 30 shards for clustered caches. Clustered caches configured with two replicas can support up to 20 shards, and clustered caches configured with three replicas can support up to 15 shards.
-**Limitations**
-* Shard limit for caches with Redis verion 4 is 10.
-* Shard limit for [caches affected by cloud service retirement](./cache-faq.yml#caches-with-a-dependency-on-cloud-services--classic) is 10.
-* Maintenance will take longer as each node take roughly 20 minutes to update. Other maintenance operations will be blocked while your cache is under maintenance.
+### Limitations
++- Shard limit for caches with Redis version 4 is 10.
+- Shard limit for [caches affected by cloud service retirement](./cache-faq.yml#caches-with-a-dependency-on-cloud-services--classic) is 10.
+- Maintenance will take longer as each node takes roughly 20 minutes to update. Other maintenance operations will be blocked while your cache is under maintenance.
 ## Set up clustering For sample code on working with clustering with the StackExchange.Redis client, To change the cluster size on a premium cache that you created earlier, and is already running with clustering enabled, select **Cluster size** from the Resource menu. To change the cluster size, use the slider or type a number between 1 and 10 in the **Shard count** text box. Then, select **OK** to save. If you're using StackExchange.Redis and receive `MOVE` exceptions when using clu ### Does scaling out using clustering help to increase the number of supported client connections?
-No,scaling out using clustering and increasing the number of shards doesn't help in increasing the number of supported client connections.
+No, scaling out using clustering and increasing the number of shards doesn't help in increasing the number of supported client connections.
 ## Next steps |
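As a companion to the clustering FAQ above, here is a hedged StackExchange.Redis sketch; the cache name and access key are placeholders, not values from the article. The point it illustrates is that the client connects to the same endpoint regardless of shard count, and the library handles key distribution and `MOVED` redirection, so application code doesn't change when you scale out.

```csharp
using System;
using StackExchange.Redis;

// Placeholder cache name and access key; substitute your own values.
const string configuration =
    "contoso.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False";

using var connection = ConnectionMultiplexer.Connect(configuration);
IDatabase db = connection.GetDatabase();

// Keys are distributed across shards for you; this code is identical
// whether the cache has 1 shard or 30.
for (int i = 0; i < 5; i++)
{
    string key = $"key:{i}";
    db.StringSet(key, $"value-{i}");
    Console.WriteLine($"{key} = {db.StringGet(key)}");
}
```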
azure-cache-for-redis | Cache Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md | Title: What's New in Azure Cache for Redis description: Recent updates for Azure Cache for Redis + Previously updated : 03/28/2023 Last updated : 05/11/2023+ Last updated 03/28/2023 ## May 2023 -### Support for upto 30 shards for clustered Azure Cache for Redis instances +### Azure Active Directory-based authentication and authorization (preview) ++Azure Active Directory (Azure AD) based authentication and authorization is now available for public preview with Azure Cache for Redis. With this Azure AD integration, users can connect to their cache instance without an access key and use role-based access control to connect to their cache instance. ++> [!IMPORTANT] +> The updates to Azure Cache for Redis that enable both Azure Active Directory for authentication and role-based access control are available only in East US region. -Azure Cache for Redis now supports clustered caches with upto 30 shards which means your applications can store more data and scale better with your workloads. +This feature is available for Azure Cache for Redis Basic, Standard, and Premium SKUs. With this update, customers can look forward to increased security and a simplified authentication process when using Azure Cache for Redis. -For more information, see [Configure clustering for Azure Cache for Redis instance](cache-how-to-premium-clustering.md#azure-cache-for-redis-now-supports-upto-30-shards-preview). +### Support for up to 30 shards for clustered Azure Cache for Redis instances ++Azure Cache for Redis now supports clustered caches with up to 30 shards. Now, your applications can store more data and scale better with your workloads. ++For more information, see [Configure clustering for Azure Cache for Redis instance](cache-how-to-premium-clustering.md#azure-cache-for-redis-now-supports-up-to-30-shards-preview). ## March 2023 ### In-place scale up and scale out for the Enterprise tiers (preview) -The Enterprise and Enterprise Flash tiers now support the ability to scale cache instances up and out without requiring downtime or data loss. Scale up and scale out actions can both occur in the same operation. +The Enterprise and Enterprise Flash tiers now support the ability to scale cache instances up and out without requiring downtime or data loss. Scale up and scale out actions can both occur in the same operation. -For more information, see [Scale an Azure Cache for Redis instance](cache-how-to-scale.md) +For more information, see [Scale an Azure Cache for Redis instance](cache-how-to-scale.md). ### Support for RedisJSON in active geo-replicated caches (preview) -Cache instances using active geo-replication now support the RedisJSON module. +Cache instances using active geo-replication now support the RedisJSON module. -For more information, see [Configure active geo-replication](cache-how-to-active-geo-replication.md). +For more information, see [Configure active geo-replication](cache-how-to-active-geo-replication.md). ### Flush operation for active geo-replicated caches (preview) -Caches using active geo-replication now include a built-in _flush_ operation that can be initiated at the control plane level. Use the _flush_ operation with your cache instead of the `FLUSH ALL` and `FLUSH DB` operations, which are blocked by design for active geo-replicated caches. 
+Caches using active geo-replication now include a built-in _flush_ operation that can be initiated at the control plane level. Use the _flush_ operation with your cache instead of the `FLUSH ALL` and `FLUSH DB` operations, which are blocked by design for active geo-replicated caches. -For more information, see [Flush operation](cache-how-to-active-geo-replication.md#flush-operation) +For more information, see [Flush operation](cache-how-to-active-geo-replication.md#flush-operation). ### Customer managed key (CMK) disk encryption (preview) Redis data that is saved on disk can now be encrypted using customer managed keys (CMK) in the Enterprise and Enterprise Flash tiers. Using CMK adds another layer of control to the default disk encryption. -For more information, see [Enable disk encryption](cache-how-to-encryption.md) +For more information, see [Enable disk encryption](cache-how-to-encryption.md). ### Connection event audit logs (preview) Enterprise and Enterprise Flash tier caches can now log all connection, disconnection, and authentication events through diagnostic settings. Logging this information helps in security audits. You can also monitor who has access to your cache resource. -For more information, see [Enabling connection audit logs](cache-monitor-diagnostic-settings.md) +For more information, see [Enabling connection audit logs](cache-monitor-diagnostic-settings.md). ## November 2022 Active geo-replication is a powerful tool that enables Azure Cache for Redis clu ## January 2022 -### Support for managed identity in Azure Cache for Redis +### Support for managed identity in Azure Cache for Redis in storage Azure Cache for Redis now supports authenticating storage account connections using managed identity. Identity is established through Azure Active Directory, and both system-assigned and user-assigned identities are supported. Support for managed identity further allows the service to establish trusted access to storage for uses including data persistence and importing/exporting cache data. |
azure-maps | Data Driven Style Expressions Web Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/data-driven-style-expressions-web-sdk.md | Last updated 4/4/2019 -+ # Data-driven Style Expressions (Web SDK) |
azure-monitor | Action Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md | The following sections provide information about the various actions and notific To check limits on Automation runbook payloads, see [Automation limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#automation-limits). -You might have a limited number of runbook actions per action group. +You are limited to 10 runbook actions per action group. ### Azure App Service push notifications To enable push notifications to the Azure mobile app, provide the email address that you use as your account ID when you configure the Azure mobile app. For more information about the Azure mobile app, see [Get the Azure mobile app](https://azure.microsoft.com/features/azure-portal/mobile-app/). -You might have a limited number of Azure app actions per action group. +You are limited to 10 Azure app actions per action group. ### Email An action that uses Functions calls an existing HTTP trigger endpoint in Functio When you define the function action, the function's HTTP trigger endpoint and access key are saved in the action definition, for example, `https://azfunctionurl.azurewebsites.net/api/httptrigger?code=<access_key>`. If you change the access key for the function, you must remove and re-create the function action in the action group. -You might have a limited number of function actions per action group. +You are limited to 10 function actions per action group. > [!NOTE] > You might have a limited number of function actions per action group. An ITSM action requires an ITSM connection. To learn how to create an ITSM connection, see [ITSM integration](./itsmc-overview.md). -You might have a limited number of ITSM actions per action group. +You are limited to 10 ITSM actions per action group. ### Logic Apps -You might have a limited number of Logic Apps actions per action group. +You are limited to 10 Logic Apps actions per action group. ### Secure webhook For information about rate limits, see [Rate limiting for voice, SMS, emails, Az For important information about using SMS notifications in action groups, see [SMS alert behavior in action groups](./alerts-sms-behavior.md). -You might have a limited number of SMS actions per action group. +You are limited to 10 SMS actions per action group. > [!NOTE] > You might have a limited number of SMS actions per action group. For important information about rate limits, see [Rate limiting for voice, SMS, emails, Azure App Service push notifications, and webhook posts](./alerts-rate-limiting.md). -You might have a limited number of voice actions per action group. +You are limited to 10 voice actions per action group. > [!NOTE] > |
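For the Functions action described above, the action group posts the alert payload to the function's HTTP trigger endpoint. The following in-process C# function is for illustration only: the function name, authorization level, and payload handling are assumptions, and the body shape depends on whether you enable the common alert schema.

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class AlertHttpTrigger
{
    // Hypothetical function; the action group calls its HTTP trigger endpoint with the alert payload.
    [FunctionName("httptrigger")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        ILogger log)
    {
        // Read the raw alert payload; parse it according to the schema you enabled.
        string body = await new StreamReader(req.Body).ReadToEndAsync();
        log.LogInformation("Alert received from action group: {Length} bytes", body.Length);

        return new OkResult();
    }
}
```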
azure-monitor | App Insights Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md | Title: Application Insights overview description: Learn how Application Insights in Azure Monitor provides performance management and usage tracking of your live web application. Previously updated : 04/24/2023 Last updated : 05/12/2023 # Application Insights overview |
azure-monitor | Codeless Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md | Title: Auto-instrumentation for Azure Monitor Application Insights -description: Overview of auto-instrumentation for Azure Monitor Application Insights codeless application performance management. + Title: Autoinstrumentation for Azure Monitor Application Insights +description: Overview of autoinstrumentation for Azure Monitor Application Insights codeless application performance management. Previously updated : 02/14/2023 Last updated : 05/12/2023 -# What is auto-instrumentation for Azure Monitor Application Insights? +# What is autoinstrumentation for Azure Monitor Application Insights? -Auto-instrumentation quickly and easily enables [Application Insights](app-insights-overview.md) to make [telemetry](data-model-complete.md) like metrics, requests, and dependencies available in your [Application Insights resource](create-workspace-resource.md). +Autoinstrumentation quickly and easily enables [Application Insights](app-insights-overview.md) to make [telemetry](data-model-complete.md) like metrics, requests, and dependencies available in your [Application Insights resource](create-workspace-resource.md). > [!div class="checklist"] > - No code changes are required. Auto-instrumentation quickly and easily enables [Application Insights](app-insig ## Supported environments, languages, and resource providers -The following table shows the current state of auto-instrumentation availability. +The following table shows the current state of autoinstrumentation availability. Links are provided to more information for each supported scenario. |Environment/Resource provider | .NET Framework | .NET Core / .NET | Java | Node.js | Python | |-|||-|-|--|-|Azure App Service on Windows - Publish as Code | [ :white_check_mark: :link: ](azure-web-apps-net.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-net-core.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-java.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md) <sup>[1](#OnBD)</sup> | :x: | +|Azure App Service on Windows - Publish as Code | [ :white_check_mark: :link: ](azure-web-apps-net.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-net-core.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-java.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md) <sup>[1](#OnBD)</sup> | :x: | |Azure App Service on Windows - Publish as Docker | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | :x: | :x: |-|Azure App Service on Linux - Publish as Code | :x: | [ :white_check_mark: :link: ](azure-web-apps-net-core.md?tabs=linux) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-java.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md?tabs=linux) | :x: | -|Azure App Service on Linux - Publish as Docker | :x: | :x: | [ :white_check_mark: :link: ](azure-web-apps-java.md) | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md?tabs=linux) | :x: | +|Azure App 
Service on Linux - Publish as Code | :x: | [ :white_check_mark: :link: ](azure-web-apps-net-core.md?tabs=linux) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-java.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md?tabs=linux) | :x: | +|Azure App Service on Linux - Publish as Docker | :x: | :x: | [ :white_check_mark: :link: ](azure-web-apps-java.md) <sup>[2](#Preview)</sup> | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md?tabs=linux) | :x: | |Azure Functions - basic | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | |Azure Functions - dependencies | :x: | :x: | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[2](#Preview)</sup> | :x: | [ :white_check_mark: :link: ](monitor-functions.md#distributed-tracing-for-python-function-apps) | |Azure Spring Cloud | :x: | :x: | [ :white_check_mark: :link: ](azure-web-apps-java.md) | :x: | :x: |-|Azure Kubernetes Service (AKS) | :x: | :x: | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: | -|Azure VMs Windows | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: | -|On-premises VMs Windows | [ :white_check_mark: :link: ](application-insights-asp-net-agent.md) <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](application-insights-asp-net-agent.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: | -|Standalone agent - any environment | :x: | :x: | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: | +|Azure Kubernetes Service (AKS) | :x: | :x: | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: | +|Azure VMs Windows | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: | +|On-premises VMs Windows | [ :white_check_mark: :link: ](application-insights-asp-net-agent.md) <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](application-insights-asp-net-agent.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: | +|Standalone agent - any environment | :x: | :x: | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: | **Footnotes** - <a name="OnBD">1</a>: Application Insights is on by default and enabled automatically. Links are provided to more information for each supported scenario. * [Application Insights overview](app-insights-overview.md) * [Application Insights overview dashboard](overview-dashboard.md)-* [Application map](app-map.md) +* [Application map](app-map.md) |
azure-monitor | Live Stream | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md | As described in the [Azure TLS 1.2 migration announcement](https://azure.microso ### "Data is temporarily inaccessible" status message
-When navigating to Live Metrics, you may see a banner with the status message: "Data is temporarily inaccessible. The updates on our status are posted here https://aka.ms/aistatus"
+When navigating to Live Metrics, you may see a banner with the status message: "Data is temporarily inaccessible. The updates on our status are posted here https://aka.ms/aistatus"
-Verify if any firewalls or browser extensions are blocking access to Live Metrics. For example, some popular ad-blocker extensions block connections to \*.monitor.azure.com. In order to use the full capabilities of Live Metrics, either disable the ad-blocker extension or add an exclusion rule for the domain \*.livediagnostics.monitor.azure.com to your ad-blocker, firewall, etc.
+Follow the link to the *Azure status* page and check if there's an active outage affecting Application Insights. If there's no outage, verify if any firewalls or browser extensions are blocking access to Live Metrics. For example, some popular ad-blocker extensions block connections to `*.monitor.azure.com`. In order to use the full capabilities of Live Metrics, either disable the ad-blocker extension or add an exclusion rule for the domain `*.livediagnostics.monitor.azure.com` to your ad-blocker, firewall, etc.
 ### Unexpected large number of requests to livediagnostics.monitor.azure.com
-Heavier traffic is expected while the LiveMetrics pane is open. Navigate away from the LiveMetrics pane to restore normal traffic flow of traffic.
-Application Insights SDKs poll QuickPulse endpoints with REST API calls once every five seconds to check if the LiveMetrics pane is being viewed.
+Heavier traffic is expected while the LiveMetrics pane is open. Navigate away from the LiveMetrics pane to restore the normal flow of traffic. Application Insights SDKs poll QuickPulse endpoints with REST API calls once every five seconds to check if the LiveMetrics pane is being viewed.
 The SDKs will send new metrics to QuickPulse every one second while the LiveMetrics pane is open. |
azure-monitor | Opentelemetry Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md | This article covers configuration settings for the Azure Monitor OpenTelemetry d A connection string in Application Insights defines the target location for sending telemetry data, ensuring it reaches the appropriate resource for monitoring and analysis. -### [.NET](#tab/net) +### [ASP.NET Core](#tab/aspnetcore) Use one of the following three ways to configure the connection string: Use one of the following three ways to configure the connection string: > 2. Environment Variable > 3. Configuration File +### [.NET](#tab/net) ++Use one of the following two ways to configure the connection string: ++- Add the Azure Monitor Exporter to each OpenTelemetry signal in application startup. + ```csharp + var tracerProvider = Sdk.CreateTracerProviderBuilder() + .AddAzureMonitorTraceExporter(options => + { + options.ConnectionString = "<Your Connection String>"; + }); ++ var metricsProvider = Sdk.CreateMeterProviderBuilder() + .AddAzureMonitorMetricExporter(options => + { + options.ConnectionString = "<Your Connection String>"; + }); ++ var loggerFactory = LoggerFactory.Create(builder => + { + builder.AddOpenTelemetry(options => + { + options.AddAzureMonitorLogExporter(options => + { + options.ConnectionString = "<Your Connection String>"; + }); + }); + }); + ``` +- Set an environment variable: + ```console + APPLICATIONINSIGHTS_CONNECTION_STRING=<Your Connection String> + ``` ++> [!NOTE] +> If you set the connection string in more than one place, we adhere to the following precedence: +> 1. Code +> 2. Environment Variable + ### [Java](#tab/java) For more information about Java, see the [Java supplemental documentation](java-standalone-config.md). configure_azure_monitor( You might want to update the [Cloud Role Name](app-map.md#understand-the-cloud-role-name-within-the-context-of-an-application-map) and the Cloud Role Instance from the default values to something that makes sense to your team. They appear on the Application Map as the name underneath a node. +### [ASP.NET Core](#tab/aspnetcore) ++Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/README.md). 
++```csharp +// Setting role name and role instance +var resourceAttributes = new Dictionary<string, object> { + { "service.name", "my-service" }, + { "service.namespace", "my-namespace" }, + { "service.instance.id", "my-instance" }}; ++var builder = WebApplication.CreateBuilder(args); ++builder.Services.AddOpenTelemetry().UseAzureMonitor(); +builder.Services.ConfigureOpenTelemetryTracerProvider((sp, builder) => + builder.ConfigureResource(resourceBuilder => + resourceBuilder.AddAttributes(resourceAttributes))); ++var app = builder.Build(); ++app.Run(); +``` + ### [.NET](#tab/net) Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/README.md). var resourceAttributes = new Dictionary<string, object> { { "service.namespace", "my-namespace" }, { "service.instance.id", "my-instance" }}; var resourceBuilder = ResourceBuilder.CreateDefault().AddAttributes(resourceAttributes);-// Done setting role name and role instance -// Set ResourceBuilder on the provider. var tracerProvider = Sdk.CreateTracerProviderBuilder()+ // Set ResourceBuilder on the TracerProvider. + .SetResourceBuilder(resourceBuilder) + .AddAzureMonitorTraceExporter(); ++var metricsProvider = Sdk.CreateMeterProviderBuilder() + // Set ResourceBuilder on the MeterProvider. .SetResourceBuilder(resourceBuilder)- .AddSource("OTel.AzureMonitor.Demo") - .AddAzureMonitorTraceExporter(o => + .AddAzureMonitorMetricExporter(); ++var loggerFactory = LoggerFactory.Create(builder => +{ + builder.AddOpenTelemetry(options => {- o.ConnectionString = "<Your Connection String>"; - }) - .Build(); + // Set ResourceBuilder on the Logging config. + options.SetResourceBuilder(resourceBuilder); + options.AddAzureMonitorLogExporter(); + }); +}); ``` ### [Java](#tab/java) You may want to enable sampling to reduce your data ingestion volume, which redu > [!NOTE] > Metrics are unaffected by sampling. +#### [ASP.NET Core](#tab/aspnetcore) ++The sampler expects a sample rate of between 0 and 1 inclusive. A rate of 0.1 means approximately 10% of your traces are sent. ++In this example, we utilize the `ApplicationInsightsSampler`, which is included with the Distro. ++```csharp +var builder = WebApplication.CreateBuilder(args); ++builder.Services.AddOpenTelemetry().UseAzureMonitor(); +builder.Services.Configure<ApplicationInsightsSamplerOptions>(options => { options.SamplingRatio = 0.1F; }); ++var app = builder.Build(); ++app.Run(); +``` + #### [.NET](#tab/net) The sampler expects a sample rate of between 0 and 1 inclusive. A rate of 0.1 means approximately 10% of your traces are sent. 
dotnet add package --prerelease OpenTelemetry.Extensions.AzureMonitor ```csharp var tracerProvider = Sdk.CreateTracerProviderBuilder()- .AddSource("OTel.AzureMonitor.Demo") - .SetSampler(new ApplicationInsightsSampler(0.1F)) - .AddAzureMonitorTraceExporter(o => - { - o.ConnectionString = "<Your Connection String>"; - }) - .Build(); + .SetSampler(new ApplicationInsightsSampler(new ApplicationInsightsSamplerOptions { SamplingRatio = 1.0F })) + .AddAzureMonitorTraceExporter(); ``` #### [Java](#tab/java) export OTEL_TRACES_SAMPLER_ARG=0.1 You might want to enable Azure Active Directory (Azure AD) Authentication for a more secure connection to Azure, which prevents unauthorized telemetry from being ingested into your subscription. -#### [.NET](#tab/net) +#### [ASP.NET Core](#tab/aspnetcore) We support the credential classes provided by [Azure Identity](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/identity/Azure.Identity#credential-classes). var app = builder.Build(); app.Run(); ```++#### [.NET](#tab/net) ++We support the credential classes provided by [Azure Identity](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/identity/Azure.Identity#credential-classes). ++- We recommend `DefaultAzureCredential` for local development. +- We recommend `ManagedIdentityCredential` for system-assigned and user-assigned managed identities. + - For system-assigned, use the default constructor without parameters. + - For user-assigned, provide the client ID to the constructor. +- We recommend `ClientSecretCredential` for service principals. + - Provide the tenant ID, client ID, and client secret to the constructor. ++```csharp +var credential = new DefaultAzureCredential(); ++var tracerProvider = Sdk.CreateTracerProviderBuilder() + .AddAzureMonitorTraceExporter(options => + { + options.Credential = credential; + }); ++var metricsProvider = Sdk.CreateMeterProviderBuilder() + .AddAzureMonitorMetricExporter(options => + { + options.Credential = credential; + }); ++var loggerFactory = LoggerFactory.Create(builder => +{ + builder.AddOpenTelemetry(options => + { + options.AddAzureMonitorLogExporter(options => + { + options.Credential = credential; + }); + }); +}); +``` #### [Java](#tab/java) configure_azure_monitor( To improve reliability and resiliency, Azure Monitor OpenTelemetry-based offerings write to offline/local storage by default when an application loses its connection with Application Insights. It saves the application telemetry to disk and periodically tries to send it again for up to 48 hours. In high-load applications, telemetry is occasionally dropped for two reasons. First, when the allowable time is exceeded, and second, when the maximum file size is exceeded or the SDK doesn't have an opportunity to clear out the file. If we need to choose, the product saves more recent events over old ones. [Learn More](data-retention-privacy.md#does-the-sdk-create-temporary-local-storage) +### [ASP.NET Core](#tab/aspnetcore) ++The Distro package includes the AzureMonitorExporter which by default uses one of the following locations for offline storage (listed in order of precedence): ++- Windows + - %LOCALAPPDATA%\Microsoft\AzureMonitor + - %TEMP%\Microsoft\AzureMonitor +- Non-Windows + - %TMPDIR%/Microsoft/AzureMonitor + - /var/tmp/Microsoft/AzureMonitor + - /tmp/Microsoft/AzureMonitor ++To override the default directory, you should set `AzureMonitorOptions.StorageDirectory`. 
++```csharp +var builder = WebApplication.CreateBuilder(args); ++builder.Services.AddOpenTelemetry().UseAzureMonitor(options => +{ + options.StorageDirectory = "C:\\SomeDirectory"; +}); ++var app = builder.Build(); ++app.Run(); +``` ++To disable this feature, you should set `AzureMonitorOptions.DisableOfflineStorage = true`. + ### [.NET](#tab/net) By default, the AzureMonitorExporter uses one of the following locations for offline storage (listed in order of precedence): By default, the AzureMonitorExporter uses one of the following locations for off To override the default directory, you should set `AzureMonitorExporterOptions.StorageDirectory`. -For example: ```csharp var tracerProvider = Sdk.CreateTracerProviderBuilder()- .AddAzureMonitorTraceExporter(o => { - o.ConnectionString = "<Your Connection String>"; - o.StorageDirectory = "C:\\SomeDirectory"; - }) - .Build(); + .AddAzureMonitorTraceExporter(options => + { + options.StorageDirectory = "C:\\SomeDirectory"; + }); ++var metricsProvider = Sdk.CreateMeterProviderBuilder() + .AddAzureMonitorMetricExporter(options => + { + options.StorageDirectory = "C:\\SomeDirectory"; + }); ++var loggerFactory = LoggerFactory.Create(builder => +{ + builder.AddOpenTelemetry(options => + { + options.AddAzureMonitorLogExporter(options => + { + options.StorageDirectory = "C:\\SomeDirectory"; + }); + }); +}); ``` To disable this feature, you should set `AzureMonitorExporterOptions.DisableOfflineStorage = true`. configure_azure_monitor( ## Enable the OTLP Exporter -You might want to enable the OpenTelemetry Protocol (OTLP) Exporter alongside your Azure Monitor Exporter to send your telemetry to two locations. +You might want to enable the OpenTelemetry Protocol (OTLP) Exporter alongside the Azure Monitor Exporter to send your telemetry to two locations. > [!NOTE] > The OTLP Exporter is shown for convenience only. We don't officially support the OTLP Exporter or any components or third-party experiences downstream of it. +#### [ASP.NET Core](#tab/aspnetcore) ++1. Install the [OpenTelemetry.Exporter.OpenTelemetryProtocol](https://www.nuget.org/packages/OpenTelemetry.Exporter.OpenTelemetryProtocol/) package in your project. ++ ```dotnetcli + dotnet add package --prerelease OpenTelemetry.Exporter.OpenTelemetryProtocol + ``` ++1. Add the following code snippet. This example assumes you have an OpenTelemetry Collector with an OTLP receiver running. For details, see the [example on GitHub](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/examples/Console/TestOtlpExporter.cs). ++ ```csharp + var builder = WebApplication.CreateBuilder(args); ++ builder.Services.AddOpenTelemetry().UseAzureMonitor(); + builder.Services.ConfigureOpenTelemetryTracerProvider((sp, builder) => builder.AddOtlpExporter()); + builder.Services.ConfigureOpenTelemetryMeterProvider((sp, builder) => builder.AddOtlpExporter()); ++ var app = builder.Build(); ++ app.Run(); + ``` + #### [.NET](#tab/net) -1. Install the [OpenTelemetry.Exporter.OpenTelemetryProtocol](https://www.nuget.org/packages/OpenTelemetry.Exporter.OpenTelemetryProtocol/) package along with [Azure.Monitor.OpenTelemetry.Exporter](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.Exporter) in your project. +1. Install the [OpenTelemetry.Exporter.OpenTelemetryProtocol](https://www.nuget.org/packages/OpenTelemetry.Exporter.OpenTelemetryProtocol/) package in your project. ++ ```dotnetcli + dotnet add package OpenTelemetry.Exporter.OpenTelemetryProtocol + ``` 1. Add the following code snippet. 
This example assumes you have an OpenTelemetry Collector with an OTLP receiver running. For details, see the [example on GitHub](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/examples/Console/TestOtlpExporter.cs). ```csharp- // Sends data to Application Insights as well as OTLP - using var tracerProvider = Sdk.CreateTracerProviderBuilder() - .AddSource("OTel.AzureMonitor.Demo") - .AddAzureMonitorTraceExporter(o => - { - o.ConnectionString = "<Your Connection String>" - }) - .AddOtlpExporter() - .Build(); + var tracerProvider = Sdk.CreateTracerProviderBuilder() + .AddAzureMonitorTraceExporter() + .AddOtlpExporter(); ++ var metricsProvider = Sdk.CreateMeterProviderBuilder() + .AddAzureMonitorMetricExporter() + .AddOtlpExporter(); ``` #### [Java](#tab/java) For more information about Java, see the [Java supplemental documentation](java- ## OpenTelemetry configurations The following OpenTelemetry configurations can be accessed through environment variables while using the Azure Monitor OpenTelemetry Distros.+### [ASP.NET Core](#tab/aspnetcore) ++| Environment variable | Description | +| -- | -- | +| `APPLICATIONINSIGHTS_CONNECTION_STRING` | Set this to the connection string for your Application Insights resource. | +| `APPLICATIONINSIGHTS_STATSBEAT_DISABLED` | Set this to `true` to opt-out of internal metrics collection. | +| `OTEL_RESOURCE_ATTRIBUTES` | Key-value pairs to be used as resource attributes. See the [Resource SDK specification](https://github.com/open-telemetry/opentelemetry-specification/blob/v1.5.0/specification/resource/sdk.md#specifying-resource-information-via-an-environment-variable) for more details. | +| `OTEL_SERVICE_NAME` | Sets the value of the `service.name` resource attribute. If `service.name` is also provided in `OTEL_RESOURCE_ATTRIBUTES`, then `OTEL_SERVICE_NAME` takes precedence. | + ### [.NET](#tab/net) |
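Because the ASP.NET Core options above (connection string, Azure AD credential, sampling ratio, and offline storage directory) are shown one at a time, here is a sketch that combines them in a single startup. It reuses only the option and type names that already appear in the snippets above; the values are placeholders, and the required packages and usings are the same as for those snippets.

```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddOpenTelemetry().UseAzureMonitor(options =>
{
    // Placeholder values. Per the precedence note above, a connection string set in code
    // takes precedence over the APPLICATIONINSIGHTS_CONNECTION_STRING environment variable.
    options.ConnectionString = "<Your Connection String>";
    options.Credential = new DefaultAzureCredential();
    options.StorageDirectory = "C:\\SomeDirectory";
});

// Send roughly 10% of traces, as in the sampling section above.
builder.Services.Configure<ApplicationInsightsSamplerOptions>(options => { options.SamplingRatio = 0.1F; });

var app = builder.Build();

app.Run();
```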
azure-monitor | Opentelemetry Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md | Follow the steps in this section to instrument your application with OpenTelemet - Application using an officially supported version of [.NET Core](https://dotnet.microsoft.com/download/dotnet) or [.NET Framework](https://dotnet.microsoft.com/download/dotnet-framework) that's at least .NET Framework 4.6.2 +### [.NET](#tab/net) ++- Application using an officially supported version of [.NET Core](https://dotnet.microsoft.com/download/dotnet) or [.NET Framework](https://dotnet.microsoft.com/download/dotnet-framework) that's at least .NET Framework 4.6.2 + ### [Java](#tab/java) - A Java application using Java 8+ Install the latest [Azure.Monitor.OpenTelemetry.AspNetCore](https://www.nuget.or dotnet add package --prerelease Azure.Monitor.OpenTelemetry.AspNetCore ``` +### [.NET](#tab/net) ++Install the latest [Azure.Monitor.OpenTelemetry.Exporter](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.Exporter) NuGet package: ++```dotnetcli +dotnet add package --prerelease Azure.Monitor.OpenTelemetry.Exporter +``` + #### [Java](#tab/java) Download the [applicationinsights-agent-3.4.12.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.12/applicationinsights-agent-3.4.12.jar) file. To enable Azure Monitor Application Insights, you will make a minor modification ##### [ASP.NET Core](#tab/aspnetcore) -Add `UseAzureMonitor()` to your application startup. Depending on your version of .NET Core, this will be in either your `startup.cs` or `program.cs` class. +Add `UseAzureMonitor()` to your application startup. Depending on your version of .NET, this will be in either your `startup.cs` or `program.cs` class. ```csharp var builder = WebApplication.CreateBuilder(args); var app = builder.Build(); app.Run(); ``` +##### [.NET](#tab/net) ++Add the Azure Monitor Exporter to each OpenTelemetry signal in application startup. Depending on your version of .NET, this will be in either your `startup.cs` or `program.cs` class. +```csharp +var tracerProvider = Sdk.CreateTracerProviderBuilder() + .AddAzureMonitorTraceExporter(); ++var metricsProvider = Sdk.CreateMeterProviderBuilder() + .AddAzureMonitorMetricExporter(); ++var loggerFactory = LoggerFactory.Create(builder => +{ + builder.AddOpenTelemetry(options => + { + options.AddAzureMonitorLogExporter(); + }); +}); +``` + ##### [Java](#tab/java) Java autoinstrumentation is enabled through configuration changes; no code changes are required. Logging For more information about ILogger, see [Logging in C# and .NET](/dotnet/core/extensions/logging) and [code examples](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/logs). +#### [.NET](#tab/net) ++The Azure Monitor Exporter does not include any instrumentation libraries. + #### [Java](#tab/java) Requests You can collect more data automatically when you include instrumentation librari To add a community library, use the `ConfigureOpenTelemetryMeterProvider` or `ConfigureOpenTelemetryTraceProvider` methods. -The following example demonstrates how the the [Runtime Instrumentation](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Runtime) can be added to collect additional metrics. +The following example demonstrates how the [Runtime Instrumentation](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Runtime) can be added to collect additional metrics. 
```csharp var builder = WebApplication.CreateBuilder(args); var app = builder.Build(); app.Run(); ``` +### [.NET](#tab/net) ++The following example demonstrates how the [Runtime Instrumentation](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Runtime) can be added to collect additional metrics. ++```csharp +var metricsProvider = Sdk.CreateMeterProviderBuilder() + .AddRuntimeInstrumentation() + .AddAzureMonitorMetricExporter(); +``` + ### [Java](#tab/java) You cannot extend the Java Distro with community instrumentation libraries. To request that we include another instrumentation library, please open an issue on our GitHub page. You can find a link to our GitHub page in [Next Steps](#next-steps). myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", " myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow")); ``` +#### [.NET](#tab/net) ++```csharp +public class Program +{ + private static readonly Meter meter = new("OTel.AzureMonitor.Demo"); ++ public static void Main() + { + using var meterProvider = Sdk.CreateMeterProviderBuilder() + .AddMeter("OTel.AzureMonitor.Demo") + .AddAzureMonitorMetricExporter(o => + { + o.ConnectionString = "<Your Connection String>"; + }) + .Build(); ++ Histogram<long> myFruitSalePrice = meter.CreateHistogram<long>("FruitSalePrice"); ++ var rand = new Random(); + myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red")); + myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow")); + myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow")); + myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "green")); + myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red")); + myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow")); ++ System.Console.WriteLine("Press Enter key to exit."); + System.Console.ReadLine(); + } +} +``` + #### [Java](#tab/java) ```java myFruitCounter.Add(5, new("name", "apple"), new("color", "red")); myFruitCounter.Add(4, new("name", "lemon"), new("color", "yellow")); ``` +#### [.NET](#tab/net) ++```csharp +public class Program +{ + private static readonly Meter meter = new("OTel.AzureMonitor.Demo"); ++ public static void Main() + { + using var meterProvider = Sdk.CreateMeterProviderBuilder() + .AddMeter("OTel.AzureMonitor.Demo") + .AddAzureMonitorMetricExporter(o => + { + o.ConnectionString = "<Your Connection String>"; + }) + .Build(); ++ Counter<long> myFruitCounter = meter.CreateCounter<long>("MyFruitCounter"); ++ myFruitCounter.Add(1, new("name", "apple"), new("color", "red")); + myFruitCounter.Add(2, new("name", "lemon"), new("color", "yellow")); + myFruitCounter.Add(1, new("name", "lemon"), new("color", "yellow")); + myFruitCounter.Add(2, new("name", "apple"), new("color", "green")); + myFruitCounter.Add(5, new("name", "apple"), new("color", "red")); + myFruitCounter.Add(4, new("name", "lemon"), new("color", "yellow")); ++ System.Console.WriteLine("Press Enter key to exit."); + System.Console.ReadLine(); + } +} +``` + #### [Java](#tab/java) ```Java private static IEnumerable<Measurement<int>> GetThreadState(Process process) } ``` +#### [.NET](#tab/net) ++```csharp +public class Program +{ + private static readonly Meter meter = new("OTel.AzureMonitor.Demo"); ++ public static void Main() + { + using var meterProvider = Sdk.CreateMeterProviderBuilder() + .AddMeter("OTel.AzureMonitor.Demo") + 
.AddAzureMonitorMetricExporter(o => + { + o.ConnectionString = "<Your Connection String>"; + }) + .Build(); ++ var process = Process.GetCurrentProcess(); + + ObservableGauge<int> myObservableGauge = meter.CreateObservableGauge("Thread.State", () => GetThreadState(process)); ++ System.Console.WriteLine("Press Enter key to exit."); + System.Console.ReadLine(); + } + + private static IEnumerable<Measurement<int>> GetThreadState(Process process) + { + foreach (ProcessThread thread in process.Threads) + { + yield return new((int)thread.ThreadState, new("ProcessId", process.Id), new("ThreadId", thread.Id)); + } + } +} +``` + #### [Java](#tab/java) ```Java using (var activity = activitySource.StartActivity("ExceptionExample")) } ``` +#### [.NET](#tab/net) ++```csharp +using (var activity = activitySource.StartActivity("ExceptionExample")) +{ + try + { + throw new Exception("Test exception"); + } + catch (Exception ex) + { + activity?.SetStatus(ActivityStatusCode.Error); + activity?.RecordException(ex); + } +} +``` + #### [Java](#tab/java) You can use `opentelemetry-api` to update the status of a span and record exceptions. By default, the activity ends up in the Application Insights `dependencies` tabl For code representing a background job not captured by an instrumentation library, we recommend setting `ActivityKind.Server` in the `StartActivity` method to ensure it appears in the Application Insights `requests` table. +#### [.NET](#tab/net) ++> [!NOTE] +> The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. You create `ActivitySource` directly by using its constructor instead of by using `TracerProvider`. Each [`ActivitySource`](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/trace/customizing-the-sdk#activity-source) class must be explicitly connected to `TracerProvider` by using `AddSource()`. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api). ++```csharp +using var tracerProvider = Sdk.CreateTracerProviderBuilder() + .AddSource("ActivitySourceName") + .AddAzureMonitorTraceExporter(o => o.ConnectionString = "<Your Connection String>") + .Build(); ++var activitySource = new ActivitySource("ActivitySourceName"); ++using (var activity = activitySource.StartActivity("CustomActivity")) +{ + // your code here +} +``` + #### [Java](#tab/java) ##### Use the OpenTelemetry annotation The OpenTelemetry Logs/Events API is still under development. In the meantime, y Currently unavailable. +#### [.NET](#tab/net) ++Currently unavailable. + #### [Java](#tab/java) You can use `opentelemetry-api` to create span events, which populate the `traces` table in Application Insights. The string passed in to `addEvent()` is saved to the `message` field within the trace. We recommend you use the OpenTelemetry APIs whenever possible, but there may be #### [ASP.NET Core](#tab/aspnetcore) -It isn't available in .NET. +This isn't available in .NET. ++#### [.NET](#tab/net) ++This isn't available in .NET. #### [Java](#tab/java) public class ActivityEnrichingProcessor : BaseProcessor<Activity> } ``` +#### [.NET](#tab/net) ++To add span attributes, use either of the following two ways: ++* Use options provided by instrumentation libraries. 
+* Add a custom span processor. ++> [!TIP] +> The advantage of using options provided by instrumentation libraries, when they're available, is that the entire context is available. As a result, users can select to add or filter more attributes. For example, the enrich option in the HttpClient instrumentation library gives users access to the httpRequestMessage itself. They can select anything from it and store it as an attribute. ++1. Many instrumentation libraries provide an enrich option. For guidance, see the readme files of individual instrumentation libraries: + - [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.8/src/OpenTelemetry.Instrumentation.AspNet/README.md#enrich) + - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#enrich) + - [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md#enrich) ++1. Use a custom processor: ++> [!TIP] +> Add the processor shown here *before* the Azure Monitor Exporter. ++```csharp +using var tracerProvider = Sdk.CreateTracerProviderBuilder() + .AddSource("OTel.AzureMonitor.Demo") + .AddProcessor(new ActivityEnrichingProcessor()) + .AddAzureMonitorTraceExporter(o => + { + o.ConnectionString = "<Your Connection String>" + }) + .Build(); +``` ++Add `ActivityEnrichingProcessor.cs` to your project with the following code: ++```csharp +public class ActivityEnrichingProcessor : BaseProcessor<Activity> +{ + public override void OnEnd(Activity activity) + { + // The updated activity will be available to all processors which are called after this processor. + activity.DisplayName = "Updated-" + activity.DisplayName; + activity.SetTag("CustomDimension1", "Value1"); + activity.SetTag("CustomDimension2", "Value2"); + } +} +``` + ##### [Java](#tab/java) You can use `opentelemetry-api` to add attributes to spans. Use the add [custom property example](#add-a-custom-property-to-a-span), but rep activity.SetTag("http.client_ip", "<IP Address>"); ``` +#### [.NET](#tab/net) ++Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code in `ActivityEnrichingProcessor.cs`: ++```C# +// only applicable in case of activity.Kind == Server +activity.SetTag("http.client_ip", "<IP Address>"); +``` + ##### [Java](#tab/java) Java automatically populates this field. Use the add [custom property example](#add-a-custom-property-to-a-span). activity?.SetTag("enduser.id", "<User Id>"); ``` +##### [.NET](#tab/net) ++Use the add [custom property example](#add-a-custom-property-to-a-span). ++```csharp +activity?.SetTag("enduser.id", "<User Id>"); +``` + ##### [Java](#tab/java) Populate the `user ID` field in the `requests`, `dependencies`, or `exceptions` table. span._attributes["enduser.id"] = "<User ID>" OpenTelemetry uses .NET's ILogger. Attaching custom dimensions to logs can be accomplished using a [message template](/dotnet/core/extensions/logging?tabs=command-line#log-message-template). +#### [.NET](#tab/net) ++OpenTelemetry uses .NET's ILogger. +Attaching custom dimensions to logs can be accomplished using a [message template](/dotnet/core/extensions/logging?tabs=command-line#log-message-template). + #### [Java](#tab/java) Logback, Log4j, and java.util.logging are [autoinstrumented](#logs). 
Attaching custom dimensions to your logs can be accomplished in these ways: You might use the following ways to filter out telemetry before it leaves your a 1. Many instrumentation libraries provide a filter option. For guidance, see the readme files of individual instrumentation libraries: - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#filter)- - [HttpClient and HttpWebRequest](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md#filter) + - [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md#filter) 1. Use a custom processor: You might use the following ways to filter out telemetry before it leaves your a 1. If a particular source isn't explicitly added by using `AddSource("ActivitySourceName")`, then none of the activities created by using that source are exported. +#### [.NET](#tab/net) ++1. Many instrumentation libraries provide a filter option. For guidance, see the readme files of individual instrumentation libraries: + - [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.8/src/OpenTelemetry.Instrumentation.AspNet/README.md#filter) + - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#filter) + - [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md#filter) ++1. Use a custom processor: + + ```csharp + using var tracerProvider = Sdk.CreateTracerProviderBuilder() + .AddSource("OTel.AzureMonitor.Demo") + .AddProcessor(new ActivityFilteringProcessor()) + .AddAzureMonitorTraceExporter(o => + { + o.ConnectionString = "<Your Connection String>" + }) + .Build(); + ``` + + Add `ActivityFilteringProcessor.cs` to your project with the following code: + + ```csharp + public class ActivityFilteringProcessor : BaseProcessor<Activity> + { + public override void OnStart(Activity activity) + { + // prevents all exporters from exporting internal activities + if (activity.Kind == ActivityKind.Internal) + { + activity.IsAllDataRequested = false; + } + } + } + ``` ++1. If a particular source isn't explicitly added by using `AddSource("ActivitySourceName")`, then none of the activities created by using that source will be exported. ++ #### [Java](#tab/java) See [sampling overrides](java-standalone-config.md#sampling-overrides-preview) and [telemetry processors](java-standalone-telemetry-processors.md). string traceId = activity?.TraceId.ToHexString(); string spanId = activity?.SpanId.ToHexString(); ``` +#### [.NET](#tab/net) ++> [!NOTE] +> The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api). 
++```csharp +Activity activity = Activity.Current; +string traceId = activity?.TraceId.ToHexString(); +string spanId = activity?.SpanId.ToHexString(); +``` + #### [Java](#tab/java) You can use `opentelemetry-api` to get the trace ID or span ID. Get the request trace ID and the span ID in your code: - For OpenTelemetry issues, contact the [OpenTelemetry .NET community](https://github.com/open-telemetry/opentelemetry-dotnet) directly. - For a list of open issues related to Azure Monitor Exporter, see the [GitHub Issues Page](https://github.com/Azure/azure-sdk-for-net/issues?q=is%3Aopen+is%3Aissue+label%3A%22Monitor+-+Exporter%22). +#### [.NET](#tab/net) ++- For OpenTelemetry issues, contact the [OpenTelemetry .NET community](https://github.com/open-telemetry/opentelemetry-dotnet) directly. +- For a list of open issues related to Azure Monitor Exporter, see the [GitHub Issues Page](https://github.com/Azure/azure-sdk-for-net/issues?q=is%3Aopen+is%3Aissue+label%3A%22Monitor+-+Exporter%22). + ### [Java](#tab/java) - For help with troubleshooting, review the [troubleshooting steps](java-standalone-troubleshoot.md). To provide feedback: - To further configure the OpenTelemetry distro, see [Azure Monitor OpenTelemetry configuration](opentelemetry-configuration.md) - To review the source code, see the [Azure Monitor AspNetCore GitHub repository](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.AspNetCore).-- To install the NuGet package, check for updates, or view release notes, see the [Azure Monitor AspNetCore NuGet Package](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.AspNetCore/) page.-- To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.AspNetCore/tests/Azure.Monitor.OpenTelemetry.AspNetCore.Demo).+- To install the NuGet package, check for updates, or view release notes, see the [Azure Monitor AspNetCore NuGet Package](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.AspNetCore) page. +- To become more familiar with Azure Monitor and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.AspNetCore/tests/Azure.Monitor.OpenTelemetry.AspNetCore.Demo). +- To learn more about OpenTelemetry and its community, see the [OpenTelemetry .NET GitHub repository](https://github.com/open-telemetry/opentelemetry-dotnet). +- To enable usage experiences, [enable web or browser user monitoring](javascript.md). ++#### [.NET](#tab/net) ++- To further configure the OpenTelemetry distro, see [Azure Monitor OpenTelemetry configuration](opentelemetry-configuration.md) +- To review the source code, see the [Azure Monitor Exporter GitHub repository](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.Exporter). +- To install the NuGet package, check for updates, or view release notes, see the [Azure Monitor Exporter NuGet Package](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.Exporter) page. +- To become more familiar with Azure Monitor and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.Exporter/tests/Azure.Monitor.OpenTelemetry.Exporter.Demo). 
- To learn more about OpenTelemetry and its community, see the [OpenTelemetry .NET GitHub repository](https://github.com/open-telemetry/opentelemetry-dotnet). - To enable usage experiences, [enable web or browser user monitoring](javascript.md). To provide feedback: -<!-- PR for Hector--> +<!-- PR for Hector--> |
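The .NET tabs above note that custom dimensions can be attached to logs through an ILogger message template but don't include a sample. Here is a minimal sketch; the class and placeholder names are arbitrary. Each named placeholder in the template surfaces as a custom dimension on the resulting log telemetry.

```csharp
using Microsoft.Extensions.Logging;

public class OrderService
{
    private readonly ILogger<OrderService> _logger;

    public OrderService(ILogger<OrderService> logger) => _logger = logger;

    public void Sell(string fruitName, double price)
    {
        // "FruitName" and "FruitPrice" become custom dimensions on the emitted log record.
        _logger.LogInformation("Sold {FruitName} for {FruitPrice}", fruitName, price);
    }
}
```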
azure-monitor | Container Insights Syslog | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-syslog.md | Select the minimum log level for each facility that you want to collect. ## Known limitations--- **Onboarding**. Syslog collection can only be enabled from command line during public preview. - **Container restart data loss**. Agent Container restarts can lead to syslog data loss during public preview. ## Next steps |
azure-monitor | Prometheus Metrics Scrape Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-scrape-configuration.md | The following table has a list of all the default targets that the Azure Monitor | Key | Type | Enabled | Pod | Description | |--||-|-|-| | kubelet | bool | `true` | Linux DaemonSet | Scrape kubelet in every node in the K8s cluster without any extra scrape config. |-| cadvisor | bool | `true` | Linux daemosnet | Scrape cadvisor in every node in the K8s cluster without any extra scrape config.<br>Linux only. | +| cadvisor | bool | `true` | Linux DaemonSet | Scrape cadvisor in every node in the K8s cluster without any extra scrape config.<br>Linux only. | | kubestate | bool | `true` | Linux replica | Scrape kube-state-metrics in the K8s cluster (installed as a part of the add-on) without any extra scrape config. | | nodeexporter | bool | `true` | Linux DaemonSet | Scrape node metrics without any extra scrape config.<br>Linux only. | | coredns | bool | `false` | Linux replica | Scrape coredns service in the K8s cluster without any extra scrape config. | |
azure-monitor | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md | Click on the picture to see a larger version of the data collection diagram in c |Collection method|Description | |||-|[Application instrumentation](app/app-insights-overview.md)| Application Insights is enabled through either [Auto-Instrumentation (agent)](app/codeless-overview.md#what-is-auto-instrumentation-for-azure-monitor-application-insights) or by adding the Application Insights SDK to your application code. For more information, reference [How do I instrument an application?](app/app-insights-overview.md#how-do-i-instrument-an-application).| +|[Application instrumentation](app/app-insights-overview.md)| Application Insights is enabled through either [Autoinstrumentation (agent)](app/codeless-overview.md#what-is-autoinstrumentation-for-azure-monitor-application-insights) or by adding the Application Insights SDK to your application code. For more information, reference [How do I instrument an application?](app/app-insights-overview.md#how-do-i-instrument-an-application).| |[Agents](agents/agents-overview.md)|Agents can collect monitoring data from the guest operating system of Azure and hybrid virtual machines.| |[Data collection rules](essentials/data-collection-rule-overview.md)|Use data collection rules to specify what data should be collected, how to transform it, and where to send it.| |Internal| Data is automatically sent to a destination without user configuration. | |
azure-monitor | Profiler Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-troubleshooting.md | Title: Troubleshoot Application Insights Profiler description: Walk through troubleshooting steps and information to enable and use Application Insights Profiler. Previously updated : 07/21/2022 Last updated : 05/11/2023 Profiler writes trace messages and custom events to your Application Insights re Search for trace messages and custom events sent by Profiler to your Application Insights resource. -1. In your Application Insights resource, select **Search**. +1. In your Application Insights resource, select **Search** from the top menu. :::image type="content" source="./media/profiler-troubleshooting/search-trace-messages.png" alt-text="Screenshot that shows selecting the Search button from the Application Insights resource."::: For Profiler to work properly, make sure: 1. Select **Go**. 1. On the top menu, select **Tools** > **WebJobs dashboard**. The **WebJobs** pane opens.++ If **ApplicationInsightsProfiler3** doesn't show up, restart your App Service application. :::image type="content" source="./media/profiler-troubleshooting/profiler-web-job.png" alt-text="Screenshot that shows the WebJobs pane, which displays the name, status, and last runtime of jobs."::: |
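The troubleshooting update above says to restart the App Service application when the **ApplicationInsightsProfiler3** WebJob doesn't appear. If you'd rather script that step than use the portal, a small Azure CLI sketch (not part of the article; placeholder names) looks like this:

```azurecli
# Restart the web app so the Profiler WebJob is relaunched, then confirm
# the app is running again.
az webapp restart --name <myAppService> --resource-group <myResourceGroup>
az webapp show --name <myAppService> --resource-group <myResourceGroup> --query state
```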
azure-netapp-files | Dynamic Change Volume Service Level | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/dynamic-change-volume-service-level.md | The capacity pool that you want to move the volume to must already exist. The ca * After the volume is moved to another capacity pool, you'll no longer have access to the previous volume activity logs and volume metrics. The volume will start with new activity logs and metrics under the new capacity pool. * If you move a volume to a capacity pool of a higher service level (for example, moving from *Standard* to *Premium* or *Ultra* service level), you must wait at least seven days before you can move that volume *again* to a capacity pool of a lower service level (for example, moving from *Ultra* to *Premium* or *Standard*). You can always change to higher service level without wait time.--* You cannot change the service level for volumes in a cross-region replication relationship. ## Move a volume to another capacity pool |
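The entry above covers moving a volume to a capacity pool with a different service level, where the destination pool must already exist. As an illustration only (not from the article), the move can be requested from the Azure CLI; all names are placeholders and the parameter names should be confirmed with `az netappfiles volume pool-change --help`.

```azurecli
# Hedged sketch: change a volume's service level by moving it to another
# capacity pool in the same NetApp account.
az netappfiles volume pool-change \
  --resource-group <myResourceGroup> \
  --account-name <myNetAppAccount> \
  --pool-name <sourcePool> \
  --name <myVolume> \
  --new-pool-resource-id "/subscriptions/<subscriptionId>/resourceGroups/<myResourceGroup>/providers/Microsoft.NetApp/netAppAccounts/<myNetAppAccount>/capacityPools/<destinationPool>"
```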
azure-resource-manager | Conditional Resource Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/conditional-resource-deployment.md | param location string = resourceGroup().location ]) param newOrExisting string = 'new' -resource sa 'Microsoft.Storage/storageAccounts@2019-06-01' = if (newOrExisting == 'new') { +resource saNew 'Microsoft.Storage/storageAccounts@2022-09-01' = if (newOrExisting == 'new') { name: storageAccountName location: location sku: { name: 'Standard_LRS'- tier: 'Standard' } kind: 'StorageV2'- properties: { - accessTier: 'Hot' - } }++resource saExisting 'Microsoft.Storage/storageAccounts@2022-09-01' existing = if (newOrExisting == 'existing') { + name: storageAccountName +} ++output storageAccountId string = ((newOrExisting == 'new') ? saNew.id : saExisting.id) ``` -When the parameter `newOrExisting` is set to **new**, the condition evaluates to true. The storage account is deployed. However, when `newOrExisting` is set to **existing**, the condition evaluates to false and the storage account isn't deployed. +When the parameter `newOrExisting` is set to **new**, the condition evaluates to true. The storage account is deployed. Otherwise, the existing storage account is used. ++> [!WARNING] +> If you reference a conditionally deployed resource that isn't deployed, you get an error that says the resource isn't defined in the template. ## Runtime functions param vmName string param location string param logAnalytics string = '' -resource vmName_omsOnboarding 'Microsoft.Compute/virtualMachines/extensions@2017-03-30' = if (!empty(logAnalytics)) { +resource vmName_omsOnboarding 'Microsoft.Compute/virtualMachines/extensions@2023-03-01' = if (!empty(logAnalytics)) { name: '${vmName}/omsOnboarding' location: location properties: { resource vmName_omsOnboarding 'Microsoft.Compute/virtualMachines/extensions@2017 typeHandlerVersion: '1.0' autoUpgradeMinorVersion: true settings: {- workspaceId: ((!empty(logAnalytics)) ? reference(logAnalytics, '2015-11-01-preview').customerId : null) + workspaceId: ((!empty(logAnalytics)) ? reference(logAnalytics, '2022-10-01').customerId : null) } protectedSettings: {- workspaceKey: ((!empty(logAnalytics)) ? listKeys(logAnalytics, '2015-11-01-preview').primarySharedKey : null) + workspaceKey: ((!empty(logAnalytics)) ? listKeys(logAnalytics, '2022-10-01').primarySharedKey : null) } } } |
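In the revised Bicep sample above, both the `saNew` and `saExisting` declarations are driven entirely by the `newOrExisting` parameter. As a small illustration that isn't part of the article (file and resource names are placeholders), the same template can be pointed at an existing account purely through the parameter value supplied at deployment time:

```azurecli
# Deploy the conditional template and reuse an existing storage account.
az deployment group create \
  --resource-group <myResourceGroup> \
  --template-file conditional-storage.bicep \
  --parameters newOrExisting=existing storageAccountName=<existingStorageAccount>
```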
azure-resource-manager | Deployment Script Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-script-bicep.md | Property value details: - `forceUpdateTag`: Changing this value between Bicep file deployments forces the deployment script to re-execute. If you use the `newGuid()` or the `utcNow()` functions, both functions can only be used in the default value for a parameter. To learn more, see [Run script more than once](#run-script-more-than-once). - `containerSettings`: Specify the settings to customize Azure Container Instance. Deployment script requires a new Azure Container Instance. You can't specify an existing Azure Container Instance. However, you can customize the container group name by using `containerGroupName`. If not specified, the group name is automatically generated. - `storageAccountSettings`: Specify the settings to use an existing storage account. If `storageAccountName` is not specified, a storage account is automatically created. See [Use an existing storage account](#use-existing-storage-account).-- `azPowerShellVersion`/`azCliVersion`: Specify the module version to be used. See a list of [supported Azure PowerShell versions](https://mcr.microsoft.com/v2/azuredeploymentscripts-powershell/tags/list). See a list of [supported Azure CLI versions](https://mcr.microsoft.com/v2/azure-cli/tags/list).+- `azPowerShellVersion`/`azCliVersion`: Specify the module version to be used. See a list of [supported Azure PowerShell versions](https://mcr.microsoft.com/v2/azuredeploymentscripts-powershell/tags/list). The version determines which container image to use: - >[!IMPORTANT] - > Deployment script uses the available CLI images from Microsoft Container Registry (MCR). It takes about one month to certify a CLI image for deployment script. Don't use the CLI versions that were released within 30 days. To find the release dates for the images, see [Azure CLI release notes](/cli/azure/release-notes-azure-cli). If an unsupported version is used, the error message lists the supported versions. + - **Az version greater than or equal to 9** uses Ubuntu 22.04. + - **Az version greater than or equal to 6 but less than 9** uses Ubuntu 20.04. + - **Az version less than 6** uses Ubuntu 18.04. ++ > [!IMPORTANT] + > It is advisable to upgrade to the latest version of Ubuntu, as Ubuntu 18.04 is nearing its end of life and will no longer receive security updates beyond [May 31st, 2023](https://ubuntu.com/18-04). ++ See a list of [supported Azure CLI versions](https://mcr.microsoft.com/v2/azure-cli/tags/list). ++ > [!IMPORTANT] + > Deployment script uses the available CLI images from Microsoft Container Registry (MCR). It typically takes approximately one month to certify a CLI image for deployment script. Don't use the CLI versions that were released within 30 days. To find the release dates for the images, see [Azure CLI release notes](/cli/azure/release-notes-azure-cli). If an unsupported version is used, the error message lists the supported versions. - `arguments`: Specify the parameter values. The values are separated by spaces. |
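The `azPowerShellVersion`/`azCliVersion` guidance above points to tag lists on Microsoft Container Registry. As a quick, hedged way to inspect those tags from a shell (the endpoints come from the links above; the JSON shape of the response isn't guaranteed):

```azurecli
# List the container image tags that back deployment script versions.
curl -sL https://mcr.microsoft.com/v2/azure-cli/tags/list
curl -sL https://mcr.microsoft.com/v2/azuredeploymentscripts-powershell/tags/list
```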
azure-resource-manager | Deploy Bicep Definition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/deploy-bicep-definition.md | + + Title: Use Bicep to deploy an Azure Managed Application definition +description: Describes how to use Bicep to deploy an Azure Managed Application definition from your service catalog. + Last updated : 05/12/2023+++# Quickstart: Use Bicep to deploy an Azure Managed Application definition ++This quickstart describes how to use Bicep to deploy an Azure Managed Application definition from your service catalog. The definitions in your service catalog are available to members of your organization. ++To deploy a managed application definition from your service catalog, do the following tasks: ++- Use Bicep to develop a template that deploys a managed application definition. +- Create a parameter file for the deployment. +- Deploy the managed application definition from your service catalog. ++## Prerequisites ++To complete the tasks in this article, you need the following items: ++- Complete the quickstart to [use Bicep to create and publish](publish-bicep-definition.md) a managed application definition in your service catalog. +- An Azure account with an active subscription. If you don't have an account, [create a free account](https://azure.microsoft.com/free/) before you begin. +- [Visual Studio Code](https://code.visualstudio.com/) with the latest [Azure Resource Manager Tools extension](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools). For Bicep files, install the [Bicep extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep). +- Install the latest version of [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli). ++## Get managed application definition ++# [PowerShell](#tab/azure-powershell) ++To get the managed application's definition with Azure PowerShell, run the following commands. ++In Visual Studio Code, open a new PowerShell terminal and sign in to your Azure subscription. ++```azurepowershell +Connect-AzAccount +``` ++The command opens your default browser and prompts you to sign in to Azure. For more information, go to [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps). ++From Azure PowerShell, get your managed application's definition. In this example, use the resource group name _bicepDefinitionRG_ that was created when you deployed the managed application definition. ++```azurepowershell +Get-AzManagedApplicationDefinition -ResourceGroupName bicepDefinitionRG +``` ++`Get-AzManagedApplicationDefinition` lists all the available definitions in the specified resource group, like _sampleBicepManagedApplication_. ++The following command parses the output to show only the definition name and resource group name. You use the names when you deploy the managed application. ++```azurepowershell +Get-AzManagedApplicationDefinition -ResourceGroupName bicepDefinitionRG | Select-Object -Property Name, ResourceGroupName +``` ++# [Azure CLI](#tab/azure-cli) ++To get the managed application's definition with Azure CLI, run the following commands. ++In Visual Studio Code, open a new Bash terminal session and sign in to your Azure subscription. For example, if you have Git installed, select Git Bash. ++```azurecli +az login +``` ++The command opens your default browser and prompts you to sign in to Azure. 
For more information, go to [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli). ++From Azure CLI, get your managed application's definition. In this example, use the resource group name _bicepDefinitionRG_ that was created when you deployed the managed application definition. ++```azurecli +az managedapp definition list --resource-group bicepDefinitionRG +``` ++The command lists all the available definitions in the specified resource group, like _sampleBicepManagedApplication_. ++The following command parses the output to show only the definition name and resource group name. You use the names when you deploy the managed application. ++```azurecli +az managedapp definition list --resource-group bicepDefinitionRG --query "[].{Name:name, ResourcGroup:resourceGroup}" +``` ++++## Create the Bicep file ++Open Visual Studio Code and create a file name _deployServiceCatalog.bicep_. Copy and paste the following code into the file and save it. ++```bicep +@description('Region where the resources are deployed.') +param location string = resourceGroup().location ++@description('Resource group name where the definition is stored.') +param definitionRG string ++@description('Name of the service catalog definition.') +param definitionName string ++// Parameters for the managed application's resource deployment +@description('Name of the managed application.') +param managedAppName string ++@description('Name for the managed resource group.') +param mrgName string ++@maxLength(40) +@description('Service plan name with maximum 40 alphanumeric characters and hyphens. Must be unique within a resource group in your subscription.') +param appServicePlanName string ++@maxLength(47) +@description('Globally unique across Azure. Maximum of 47 alphanumeric characters or hyphens.') +param appServiceNamePrefix string ++@maxLength(11) +@description('Use only lowercase letters and numbers and a maximum of 11 characters.') +param storageAccountNamePrefix string ++@allowed([ + 'Premium_LRS' + 'Standard_LRS' + 'Standard_GRS' +]) +@description('The options are Premium_LRS, Standard_LRS, or Standard_GRS') +param storageAccountType string ++@description('Resource ID for the managed application definition.') +var appResourceId = resourceId('${definitionRG}', 'Microsoft.Solutions/applicationdefinitions', '${definitionName}') ++@description('Creates the path for the managed resource group. The resource group is created during deployment.') +var mrgId = '${subscription().id}/resourceGroups/${mrgName}' ++resource bicepServiceCatalogApp 'Microsoft.Solutions/applications@2021-07-01' = { + name: managedAppName + kind: 'ServiceCatalog' + location: location + properties: { + applicationDefinitionId: appResourceId + managedResourceGroupId: mrgId + parameters: { + appServicePlanName: { + value: appServicePlanName + } + appServiceNamePrefix: { + value: appServiceNamePrefix + } + storageAccountNamePrefix: { + value: storageAccountNamePrefix + } + storageAccountType: { + value: storageAccountType + } + } + } +} +``` ++For more information about the resource type, go to [Microsoft.Solutions/applications](/azure/templates/microsoft.solutions/applications). ++## Create the parameter file ++Open Visual Studio Code and create a parameter file named _deployServiceCatalog.parameters.json_. Copy and paste the following code into the file and save it. 
++```json +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "definitionName": { + "value": "sampleBicepManagedApplication" + }, + "definitionRG": { + "value": "bicepDefinitionRG" + }, + "managedAppName": { + "value": "sampleBicepManagedApp" + }, + "mrgName": { + "value": "<placeholder for managed resource group name>" + }, + "appServicePlanName": { + "value": "demoAppServicePlan" + }, + "appServiceNamePrefix": { + "value": "demoApp" + }, + "storageAccountNamePrefix": { + "value": "demostg1234" + }, + "storageAccountType": { + "value": "Standard_LRS" + } + } +} +``` ++You need to provide several parameters to deploy the managed application: ++| Parameter | Value | +| - | - | +| `definitionName` | Name of the service catalog definition. This example uses _sampleBicepManagedApplication_. | +| `definitionRG` | Resource group name where the definition is stored. This example uses _bicepDefinitionRG_. +| `managedAppName` | Name for the deployed managed application. This example uses _sampleBicepManagedApp_. +| `mrgName` | Unique name for the managed resource group that contains the application's deployed resources. The resource group is created when you deploy the managed application. To create a managed resource group name, you can run the commands that follow this parameter list. | +| `appServicePlanName` | Create a plan name. Maximum of 40 alphanumeric characters and hyphens. For example, _demoAppServicePlan_. App Service plan names must be unique within a resource group in your subscription. | +| `appServiceNamePrefix` | Create a prefix for the plan name. Maximum of 47 alphanumeric characters or hyphens. For example, _demoApp_. During deployment, the prefix is concatenated with a unique string to create a name that's globally unique across Azure. | +| `storageAccountNamePrefix` | Use only lowercase letters and numbers and a maximum of 11 characters. For example, _demostg1234_. During deployment, the prefix is concatenated with a unique string to create a name globally unique across Azure. | +| `storageAccountType` | The options are Premium_LRS, Standard_LRS, and Standard_GRS. | ++You can run the following commands to create a name for the managed resource group. ++# [PowerShell](#tab/azure-powershell) ++```azurepowershell +$mrgprefix = 'mrg-sampleBicepManagedApplication-' +$mrgtimestamp = Get-Date -UFormat "%Y%m%d%H%M%S" +$mrgname = $mrgprefix + $mrgtimestamp +$mrgname +``` ++# [Azure CLI](#tab/azure-cli) ++```azurecli +mrgprefix='mrg-sampleBicepManagedApplication-' +mrgtimestamp=$(date +%Y%m%d%H%M%S) +mrgname="${mrgprefix}${mrgtimestamp}" +echo $mrgname +``` ++++The `$mrgprefix` and `$mrgtimestamp` variables are concatenated and stored in the `$mrgname` variable. The variable's value is in the format _mrg-sampleBicepManagedApplication-20230512103059_. You use the `$mrgname` variable's value when you deploy the managed application. ++## Deploy the managed application ++Use Azure PowerShell or Azure CLI to create a resource group and deploy the managed application. 
++# [PowerShell](#tab/azure-powershell) ++```azurepowershell +New-AzResourceGroup -Name bicepAppRG -Location westus3 ++New-AzResourceGroupDeployment ` + -ResourceGroupName bicepAppRG ` + -TemplateFile deployServiceCatalog.bicep ` + -TemplateParameterFile deployServiceCatalog.parameters.json +``` ++# [Azure CLI](#tab/azure-cli) ++```azurecli +az group create --name bicepAppRG --location westus3 ++az deployment group create \ + --resource-group bicepAppRG \ + --template-file deployServiceCatalog.bicep \ + --parameters @deployServiceCatalog.parameters.json +``` ++++Your deployment might display a [Bicep linter](../bicep/linter-rule-use-resource-id-functions.md) warning that the `managedResourceGroupId` property expects a resource ID. Because the managed resource group is created during the deployment, there isn't a resource ID available for the property. ++## View results ++After the service catalog managed application is deployed, you have two new resource groups. One resource group contains the managed application. The other resource group contains the managed resources that were deployed. In this example, an App Service, App Service plan, and storage account. ++### Managed application ++After the deployment is finished, you can check your managed application's status. ++# [PowerShell](#tab/azure-powershell) ++Run the following command to check the managed application's status. ++```azurepowershell +Get-AzManagedApplication -Name sampleBicepManagedApp -ResourceGroupName bicepAppRG +``` ++Expand the properties to make it easier to read the `Properties` information. ++```azurepowershell +Get-AzManagedApplication -Name sampleBicepManagedApp -ResourceGroupName bicepAppRG | Select-Object -ExpandProperty Properties +``` ++# [Azure CLI](#tab/azure-cli) ++Run the following command to check the managed application's status. ++```azurecli +az managedapp show --name sampleBicepManagedApp --resource-group bicepAppRG +``` ++The following command parses the data about the managed application to show only the application's name and provisioning state. ++```azurecli +az managedapp show --name sampleBicepManagedApp --resource-group bicepAppRG --query "{Name:name, provisioningState:provisioningState}" +``` ++++### Managed resources ++You can view the resources deployed to the managed resource group. ++# [PowerShell](#tab/azure-powershell) ++To display the managed resource group's resources, run the following command. You created the `$mrgname` variable when you created the parameters. ++```azurepowershell +Get-AzResource -ResourceGroupName $mrgname +``` ++To display all the role assignments for the managed resource group. ++```azurepowershell +Get-AzRoleAssignment -ResourceGroupName $mrgname +``` ++The managed application definition you created in the quickstart articles used a group with the Owner role assignment. You can view the group with the following command. ++```azurepowershell +Get-AzRoleAssignment -ResourceGroupName $mrgname -RoleDefinitionName Owner +``` ++You can also list the deny assignments for the managed resource group. ++```azurepowershell +Get-AzDenyAssignment -ResourceGroupName $mrgname +``` ++# [Azure CLI](#tab/azure-cli) ++To display the managed resource group's resources, run the following command. You created the `$mrgname` variable when you created the parameters. ++```azurecli +az resource list --resource-group $mrgname +``` ++Run the following command to list only the name, type, and provisioning state for the managed resources. 
++```azurecli +az resource list --resource-group $mrgname --query "[].{Name:name, Type:type, provisioningState:provisioningState}" +``` ++Run the following command to list the role assignment for the group that was used in the managed application's definition. ++```azurecli +az role assignment list --resource-group $mrgname +``` ++The following command parses the data for the group's role assignment. ++```azurecli +az role assignment list --resource-group $mrgname --role Owner --query "[].{ResourceGroup:resourceGroup, GroupName:principalName, RoleDefinition:roleDefinitionId, Role:roleDefinitionName}" +``` ++To review the managed resource group's deny assignments, use the Azure portal or Azure PowerShell commands. ++++## Clean up resources ++When you're finished with the managed application, you can delete the resource groups, which removes all the resources you created. For example, you created the resource group _bicepAppRG_ and a managed resource group with the prefix _mrg-sampleBicepManagedApplication_. ++When you delete the _bicepAppRG_ resource group, the managed application, managed resource group, and all the Azure resources are deleted. ++# [PowerShell](#tab/azure-powershell) ++The command prompts you to confirm that you want to remove the resource group. ++```azurepowershell +Remove-AzResourceGroup -Name bicepAppRG +``` ++# [Azure CLI](#tab/azure-cli) ++The command prompts for confirmation, and then returns you to the command prompt while resources are being deleted. ++```azurecli +az group delete --resource-group bicepAppRG --no-wait +``` ++++If you want to delete the managed application definition, delete the resource groups you created named _packageStorageRG_ and _bicepDefinitionRG_. ++## Next steps ++- To learn how to create and publish the definition files for a managed application using Azure PowerShell, Azure CLI, or portal, go to [Quickstart: Create and publish an Azure Managed Application definition](publish-service-catalog-app.md). +- To use your own storage to create and publish the definition files for a managed application, go to [Quickstart: Bring your own storage to create and publish an Azure Managed Application definition](publish-service-catalog-bring-your-own-storage.md). |
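One optional check that isn't part of the quickstart above: the `Microsoft.Solutions/applications` resource surfaces the template outputs (such as the deployed web app's host name), so you can read them without opening the managed resource group. The `outputs` query path is an assumption about how the CLI flattens the resource's properties; if it returns nothing, inspect the full `az managedapp show` output instead.

```azurecli
# Read the managed application's template outputs after deployment.
az managedapp show \
  --name sampleBicepManagedApp \
  --resource-group bicepAppRG \
  --query outputs
```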
azure-resource-manager | Publish Bicep Definition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-bicep-definition.md | + + Title: Use Bicep to create and publish an Azure Managed Application definition +description: Describes how to use Bicep to create and publish an Azure Managed Application definition in your service catalog. + Last updated : 05/12/2023+++# Quickstart: Use Bicep to create and publish an Azure Managed Application definition ++This quickstart describes how to use Bicep to create and publish an Azure Managed Application definition in your service catalog. The definitions in your service catalog are available to members of your organization. ++To create and publish a managed application definition to your service catalog, do the following tasks: ++- Use Bicep to develop your template and convert it to an Azure Resource Manager template (ARM template). The template defines the Azure resources deployed by the managed application. +- Convert Bicep to JSON with the Bicep `build` command. After the file is converted to JSON, it's recommended to verify the code for accuracy. +- Define the user interface elements for the portal when deploying the managed application. +- Create a _.zip_ package that contains the required JSON files. The _.zip_ package file has a 120-MB limit for a service catalog's managed application definition. +- Publish the managed application definition so it's available in your service catalog. ++If your managed application definition is more than 120 MB or if you want to use your own storage account for your organization's compliance reasons, go to [Quickstart: Bring your own storage to create and publish an Azure Managed Application definition](publish-service-catalog-bring-your-own-storage.md). ++You can also use Bicep to deploy an existing managed application definition. For more information, go to [Quickstart: Use Bicep to deploy an Azure Managed Application definition](deploy-bicep-definition.md). ++## Prerequisites ++To complete the tasks in this article, you need the following items: ++- An Azure account with an active subscription and permissions to Azure Active Directory resources like users, groups, or service principals. If you don't have an account, [create a free account](https://azure.microsoft.com/free/) before you begin. +- [Visual Studio Code](https://code.visualstudio.com/) with the latest [Azure Resource Manager Tools extension](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools). For Bicep files, install the [Bicep extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep). +- Install the latest version of [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli). ++## Create a Bicep file ++Every managed application definition includes a file named _mainTemplate.json_. The template defines the Azure resources to deploy and is no different from a regular ARM template. You can develop the template using Bicep and then convert the Bicep file to JSON. ++Open Visual Studio Code, create a file with the case-sensitive name _mainTemplate.bicep_ and save it. ++Add the following Bicep code and save the file. It defines the managed application's resources to deploy an App Service, App Service plan, and a storage account. 
++```bicep +param location string = resourceGroup().location ++@description('App Service plan name.') +@maxLength(40) +param appServicePlanName string ++@description('App Service name prefix.') +@maxLength(47) +param appServiceNamePrefix string ++@description('Storage account name prefix.') +@maxLength(11) +param storageAccountNamePrefix string ++@description('Storage account type allowed values') +@allowed([ + 'Premium_LRS' + 'Standard_LRS' + 'Standard_GRS' +]) +param storageAccountType string ++var appServicePlanSku = 'F1' +var appServicePlanCapacity = 1 +var appServiceName = '${appServiceNamePrefix}${uniqueString(resourceGroup().id)}' +var storageAccountName = '${storageAccountNamePrefix}${uniqueString(resourceGroup().id)}' +var appServiceStorageConnectionString = 'DefaultEndpointsProtocol=https;AccountName=${storageAccount.name};EndpointSuffix=${environment().suffixes.storage};Key=${storageAccount.listKeys().keys[0].value}' ++resource appServicePlan 'Microsoft.Web/serverfarms@2022-03-01' = { + name: appServicePlanName + location: location + sku: { + name: appServicePlanSku + capacity: appServicePlanCapacity + } +} ++resource appServiceApp 'Microsoft.Web/sites@2022-03-01' = { + name: appServiceName + location: location + properties: { + serverFarmId: appServicePlan.id + httpsOnly: true + siteConfig: { + appSettings: [ + { + name: 'AppServiceStorageConnectionString' + value: appServiceStorageConnectionString + } + ] + } + } +} ++resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = { + name: storageAccountName + location: location + sku: { + name: storageAccountType + } + kind: 'StorageV2' + properties: { + accessTier: 'Hot' + } +} ++output appServicePlan string = appServicePlan.name +output appServiceApp string = appServiceApp.properties.defaultHostName +output storageAccount string = storageAccount.properties.primaryEndpoints.blob +``` ++## Convert Bicep to JSON ++Use PowerShell or Azure CLI to build the _mainTemplate.json_ file. Go to the directory where you saved your Bicep file and run the `build` command. ++# [PowerShell](#tab/azure-powershell) ++```powershell +bicep build mainTemplate.bicep +``` ++# [Azure CLI](#tab/azure-cli) ++```azurecli +az bicep build --file mainTemplate.bicep +``` ++++To learn more, go to Bicep [build](../bicep/bicep-cli.md#build). ++After the Bicep file is converted to JSON, your _mainTemplate.json_ file should match the following example. You might have different values in the `metadata` properties for `version` and `templateHash`. ++```json +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "metadata": { + "_generator": { + "name": "bicep", + "version": "0.17.1.54307", + "templateHash": "1234567891234567890" + } + }, + "parameters": { + "location": { + "type": "string", + "defaultValue": "[resourceGroup().location]" + }, + "appServicePlanName": { + "type": "string", + "maxLength": 40, + "metadata": { + "description": "App Service plan name." + } + }, + "appServiceNamePrefix": { + "type": "string", + "maxLength": 47, + "metadata": { + "description": "App Service name prefix." + } + }, + "storageAccountNamePrefix": { + "type": "string", + "maxLength": 11, + "metadata": { + "description": "Storage account name prefix." 
+ } + }, + "storageAccountType": { + "type": "string", + "allowedValues": [ + "Premium_LRS", + "Standard_LRS", + "Standard_GRS" + ], + "metadata": { + "description": "Storage account type allowed values" + } + } + }, + "variables": { + "appServicePlanSku": "F1", + "appServicePlanCapacity": 1, + "appServiceName": "[format('{0}{1}', parameters('appServiceNamePrefix'), uniqueString(resourceGroup().id))]", + "storageAccountName": "[format('{0}{1}', parameters('storageAccountNamePrefix'), uniqueString(resourceGroup().id))]" + }, + "resources": [ + { + "type": "Microsoft.Web/serverfarms", + "apiVersion": "2022-03-01", + "name": "[parameters('appServicePlanName')]", + "location": "[parameters('location')]", + "sku": { + "name": "[variables('appServicePlanSku')]", + "capacity": "[variables('appServicePlanCapacity')]" + } + }, + { + "type": "Microsoft.Web/sites", + "apiVersion": "2022-03-01", + "name": "[variables('appServiceName')]", + "location": "[parameters('location')]", + "properties": { + "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('appServicePlanName'))]", + "httpsOnly": true, + "siteConfig": { + "appSettings": [ + { + "name": "AppServiceStorageConnectionString", + "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};EndpointSuffix={1};Key={2}', variables('storageAccountName'), environment().suffixes.storage, listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2022-09-01').keys[0].value)]" + } + ] + } + }, + "dependsOn": [ + "[resourceId('Microsoft.Web/serverfarms', parameters('appServicePlanName'))]", + "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]" + ] + }, + { + "type": "Microsoft.Storage/storageAccounts", + "apiVersion": "2022-09-01", + "name": "[variables('storageAccountName')]", + "location": "[parameters('location')]", + "sku": { + "name": "[parameters('storageAccountType')]" + }, + "kind": "StorageV2", + "properties": { + "accessTier": "Hot" + } + } + ], + "outputs": { + "appServicePlan": { + "type": "string", + "value": "[parameters('appServicePlanName')]" + }, + "appServiceApp": { + "type": "string", + "value": "[reference(resourceId('Microsoft.Web/sites', variables('appServiceName')), '2022-03-01').defaultHostName]" + }, + "storageAccount": { + "type": "string", + "value": "[reference(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2022-09-01').primaryEndpoints.blob]" + } + } +} +``` ++## Define your portal experience ++As a publisher, you define the portal experience to create the managed application. The _createUiDefinition.json_ file generates the portal's user interface. You define how users provide input for each parameter using [control elements](create-uidefinition-elements.md) like drop-downs and text boxes. ++In this example, the user interface prompts you to input the App Service name prefix, App Service plan's name, storage account prefix, and storage account type. During deployment, the variables in _mainTemplate.json_ use the `uniqueString` function to append a 13-character string to the name prefixes so the names are globally unique across Azure. ++Open Visual Studio Code, create a file with the case-sensitive name _createUiDefinition.json_ and save it. ++Add the following JSON code to the file and save it. 
++```json +{ + "$schema": "https://schema.management.azure.com/schemas/0.1.2-preview/CreateUIDefinition.MultiVm.json#", + "handler": "Microsoft.Azure.CreateUIDef", + "version": "0.1.2-preview", + "parameters": { + "basics": [ + {} + ], + "steps": [ + { + "name": "webAppSettings", + "label": "Web App settings", + "subLabel": { + "preValidation": "Configure the web app settings", + "postValidation": "Completed" + }, + "elements": [ + { + "name": "appServicePlanName", + "type": "Microsoft.Common.TextBox", + "label": "App Service plan name", + "placeholder": "App Service plan name", + "defaultValue": "", + "toolTip": "Use alphanumeric characters or hyphens with a maximum of 40 characters.", + "constraints": { + "required": true, + "regex": "^[a-z0-9A-Z-]{1,40}$", + "validationMessage": "Only alphanumeric characters or hyphens are allowed, with a maximum of 40 characters." + }, + "visible": true + }, + { + "name": "appServiceName", + "type": "Microsoft.Common.TextBox", + "label": "App Service name prefix", + "placeholder": "App Service name prefix", + "defaultValue": "", + "toolTip": "Use alphanumeric characters or hyphens with minimum of 2 characters and maximum of 47 characters.", + "constraints": { + "required": true, + "regex": "^[a-z0-9A-Z-]{2,47}$", + "validationMessage": "Only alphanumeric characters or hyphens are allowed, with a minimum of 2 characters and maximum of 47 characters." + }, + "visible": true + } + ] + }, + { + "name": "storageConfig", + "label": "Storage settings", + "subLabel": { + "preValidation": "Configure the storage settings", + "postValidation": "Completed" + }, + "elements": [ + { + "name": "storageAccounts", + "type": "Microsoft.Storage.MultiStorageAccountCombo", + "label": { + "prefix": "Storage account name prefix", + "type": "Storage account type" + }, + "toolTip": { + "prefix": "Enter maximum of 11 lowercase letters or numbers.", + "type": "Available choices are Standard_LRS, Standard_GRS, and Premium_LRS." + }, + "defaultValue": { + "type": "Standard_LRS" + }, + "constraints": { + "allowedTypes": [ + "Premium_LRS", + "Standard_LRS", + "Standard_GRS" + ] + }, + "visible": true + } + ] + } + ], + "outputs": { + "location": "[location()]", + "appServicePlanName": "[steps('webAppSettings').appServicePlanName]", + "appServiceNamePrefix": "[steps('webAppSettings').appServiceName]", + "storageAccountNamePrefix": "[steps('storageConfig').storageAccounts.prefix]", + "storageAccountType": "[steps('storageConfig').storageAccounts.type]" + } + } +} +``` ++To learn more, go to [Get started with CreateUiDefinition](create-uidefinition-overview.md). ++## Package the files ++Add the two files to a package file named _app.zip_. The two files must be at the root level of the _.zip_ file. If the files are in a folder, when you create the managed application definition, you receive an error that states the required files aren't present. ++Upload _app.zip_ to an Azure storage account so you can use it when you deploy the managed application's definition. The storage account name must be globally unique across Azure and the length must be 3-24 characters with only lowercase letters and numbers. In the command, replace the placeholder `<demostorageaccount>` including the angle brackets (`<>`), with your unique storage account name. ++# [PowerShell](#tab/azure-powershell) ++In Visual Studio Code, open a new PowerShell terminal and sign in to your Azure subscription. 
++```azurepowershell +Connect-AzAccount +``` ++The command opens your default browser and prompts you to sign in to Azure. For more information, go to [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps). ++After you connect, run the following commands. ++```azurepowershell +New-AzResourceGroup -Name packageStorageRG -Location westus3 ++$storageAccount = New-AzStorageAccount ` + -ResourceGroupName packageStorageRG ` + -Name "<demostorageaccount>" ` + -Location westus3 ` + -SkuName Standard_LRS ` + -Kind StorageV2 ` + -AllowBlobPublicAccess $true ++$ctx = $storageAccount.Context ++New-AzStorageContainer -Name appcontainer -Context $ctx -Permission blob ++Set-AzStorageBlobContent ` + -File "app.zip" ` + -Container appcontainer ` + -Blob "app.zip" ` + -Context $ctx +``` ++Use the following command to store the package file's URI in a variable named `packageuri`. You use the variable's value when you deploy the managed application definition. ++```azurepowershell +$packageuri=(Get-AzStorageBlob -Container appcontainer -Blob app.zip -Context $ctx).ICloudBlob.StorageUri.PrimaryUri.AbsoluteUri +``` ++# [Azure CLI](#tab/azure-cli) ++In Visual Studio Code, open a new Bash terminal session and sign in to your Azure subscription. For example, if you have Git installed, select Git Bash. ++```azurecli +az login +``` ++The command opens your default browser and prompts you to sign in to Azure. For more information, go to [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli). ++After you connect, run the following commands. ++```azurecli +az group create --name packageStorageRG --location westus3 ++az storage account create \ + --name <demostorageaccount> \ + --resource-group packageStorageRG \ + --location westus3 \ + --sku Standard_LRS \ + --kind StorageV2 \ + --allow-blob-public-access true +``` ++After you create the storage account, add the role assignment _Storage Blob Data Contributor_ to the storage account scope. Assign access to your Azure Active Directory user account. Depending on your access level in Azure, you might need other permissions assigned by your administrator. For more information, go to [Assign an Azure role for access to blob data](../../storage/blobs/assign-azure-role-data-access.md). ++After you add the role to the storage account, it takes a few minutes to become active in Azure. You can then use the parameter `--auth-mode login` in the commands to create the container and upload the file. ++```azurecli +az storage container create \ + --account-name <demostorageaccount> \ + --name appcontainer \ + --auth-mode login \ + --public-access blob ++az storage blob upload \ + --account-name <demostorageaccount> \ + --container-name appcontainer \ + --auth-mode login \ + --name "app.zip" \ + --file "app.zip" +``` ++For more information about storage authentication, go to [Choose how to authorize access to blob data with Azure CLI](../../storage/blobs/authorize-data-operations-cli.md). ++Use the following command to store the package file's URI in a variable named `packageuri`. You use the variable's value when you deploy the managed application definition. ++```azurecli +packageuri=$(az storage blob url \ + --account-name <demostorageaccount> \ + --container-name appcontainer \ + --auth-mode login \ + --name app.zip --output tsv) +``` ++++## Create the managed application definition ++In this section, you get identity information from Azure Active Directory, create a resource group, and deploy the managed application definition. 
++### Get group ID and role definition ID ++The next step is to select a user, security group, or application for managing the resources for the customer. This identity has permissions on the managed resource group according to the assigned role. The role can be any Azure built-in role like Owner or Contributor. ++This example uses a security group, and your Azure Active Directory account should be a member of the group. To get the group's object ID, replace the placeholder `<managedAppDemo>` including the angle brackets (`<>`), with your group's name. You use the variable's value when you deploy the managed application definition. ++To create a new Azure Active Directory group, go to [Manage Azure Active Directory groups and group membership](../../active-directory/fundamentals/how-to-manage-groups.md). ++# [PowerShell](#tab/azure-powershell) ++```azurepowershell +$principalid=(Get-AzADGroup -DisplayName <managedAppDemo>).Id +``` ++# [Azure CLI](#tab/azure-cli) ++```azurecli +principalid=$(az ad group show --group <managedAppDemo> --query id --output tsv) +``` ++++Next, get the role definition ID of the Azure built-in role you want to grant access to the user, group, or application. You use the variable's value when you deploy the managed application definition. ++# [PowerShell](#tab/azure-powershell) ++```azurepowershell +$roleid=(Get-AzRoleDefinition -Name Owner).Id +``` ++# [Azure CLI](#tab/azure-cli) ++```azurecli +roleid=$(az role definition list --name Owner --query [].name --output tsv) +``` ++++## Create the definition deployment template ++Use a Bicep file to deploy the managed application definition in your service catalog. ++Open Visual Studio Code, create a file with the name _deployDefinition.bicep_ and save it. ++Add the following Bicep code and save the file. ++```bicep +param location string = resourceGroup().location ++@description('Name of the managed application definition.') +param managedApplicationDefinitionName string ++@description('The URI of the .zip package file.') +param packageFileUri string ++@description('Publishers Principal ID that needs permissions to manage resources in the managed resource group.') +param principalId string ++@description('Role ID for permissions to the managed resource group.') +param roleId string ++var definitionLockLevel = 'ReadOnly' +var definitionDisplayName = 'Sample Bicep managed application' +var definitionDescription = 'Sample Bicep managed application that deploys web resources' ++resource managedApplicationDefinition 'Microsoft.Solutions/applicationDefinitions@2021-07-01' = { + name: managedApplicationDefinitionName + location: location + properties: { + lockLevel: definitionLockLevel + description: definitionDescription + displayName: definitionDisplayName + packageFileUri: packageFileUri + authorizations: [ + { + principalId: principalId + roleDefinitionId: roleId + } + ] + } +} +``` ++For more information about the template's properties, go to [Microsoft.Solutions/applicationDefinitions](/azure/templates/microsoft.solutions/applicationdefinitions). ++The `lockLevel` on the managed resource group prevents the customer from performing undesirable operations on this resource group. Currently, `ReadOnly` is the only supported lock level. `ReadOnly` specifies that the customer can only read the resources present in the managed resource group. The publisher identities that are granted access to the managed resource group are exempt from the lock level. 
++## Create the parameter file ++The managed application definition's deployment template needs input for several parameters. The deployment command prompts you for the values or you can create a parameter file for the values. In this example, we use a parameter file to pass the parameter values to the deployment command. ++In Visual Studio Code, create a new file named _deployDefinition.parameters.json_ and save it. ++Add the following to your parameter file and save it. Then, replace the `<placeholder values>` including the angle brackets (`<>`), with your values. ++```json +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "managedApplicationDefinitionName": { + "value": "sampleBicepManagedApplication" + }, + "packageFileUri": { + "value": "<placeholder for the packageFileUri>" + }, + "principalId": { + "value": "<placeholder for principalid value>" + }, + "roleId": { + "value": "<placeholder for roleid value>" + } + } +} +``` ++The following table describes the parameter values for the managed application definition. ++| Parameter | Value | +| - | - | +| `managedApplicationDefinitionName` | Name of the managed application definition. For this example, use _sampleBicepManagedApplication_.| +| `packageFileUri` | Enter the URI for your _.zip_ package file. Use your `packageuri` variable's value. The format is `https://yourStorageAccountName.blob.core.windows.net/appcontainer/app.zip`. | +| `principalId` | The publishers principal ID that needs permissions to manage resources in the managed resource group. Use your `principalid` variable's value. | +| `roleId` | Role ID for permissions to the managed resource group. For example Owner, Contributor, Reader. Use your `roleid` variable's value. | ++To get your variable values: +- Azure PowerShell: In PowerShell, type `$variableName` to display a variable's value. +- Azure CLI: In Bash, type `echo $variableName` to display a variable's value. ++## Deploy the definition ++When you deploy the managed application's definition, it becomes available in your service catalog. This process doesn't deploy the managed application's resources. ++Create a resource group named _bicepDefinitionRG_ and deploy the managed application definition. ++# [PowerShell](#tab/azure-powershell) ++```azurepowershell +New-AzResourceGroup -Name bicepDefinitionRG -Location westus3 ++New-AzResourceGroupDeployment ` + -ResourceGroupName bicepDefinitionRG ` + -TemplateFile deployDefinition.bicep ` + -TemplateParameterFile deployDefinition.parameters.json +``` ++# [Azure CLI](#tab/azure-cli) ++```azurecli +az group create --name bicepDefinitionRG --location westus3 ++az deployment group create \ + --resource-group bicepDefinitionRG \ + --template-file deployDefinition.bicep \ + --parameters @deployDefinition.parameters.json +``` ++++## Verify the results ++Run the following command to verify the definition is published in your service catalog. ++# [PowerShell](#tab/azure-powershell) ++```azurepowershell +Get-AzManagedApplicationDefinition -ResourceGroupName bicepDefinitionRG +``` ++`Get-AzManagedApplicationDefinition` lists all the available definitions in the specified resource group, like _sampleBicepManagedApplication_. ++# [Azure CLI](#tab/azure-cli) ++```azurecli +az managedapp definition list --resource-group bicepDefinitionRG +``` ++The command lists all the available definitions in the specified resource group, like _sampleBicepManagedApplication_. 
++++## Make sure users can access your definition ++You have access to the managed application definition, but you want to make sure other users in your organization can access it. Grant them at least the Reader role on the definition. They may have inherited this level of access from the subscription or resource group. To check who has access to the definition and add users or groups, go to [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). ++## Clean up resources ++If you're going to deploy the definition, continue with the **Next steps** section that links to the article to deploy the definition with Bicep. ++If you're finished with the managed application definition, you can delete the resource groups you created named _packageStorageRG_ and _bicepDefinitionRG_. ++# [PowerShell](#tab/azure-powershell) ++The command prompts you to confirm that you want to remove the resource group. ++```azurepowershell +Remove-AzResourceGroup -Name packageStorageRG ++Remove-AzResourceGroup -Name bicepDefinitionRG +``` ++# [Azure CLI](#tab/azure-cli) ++The command prompts for confirmation, and then returns you to command prompt while resources are being deleted. ++```azurecli +az group delete --resource-group packageStorageRG --no-wait ++az group delete --resource-group bicepDefinitionRG --no-wait +``` ++++## Next steps ++You've published the managed application definition. The next step is to learn how to deploy an instance of that definition. ++> [!div class="nextstepaction"] +> [Quickstart: Use Bicep to deploy an Azure Managed Application definition](deploy-bicep-definition.md). |
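For the "Make sure users can access your definition" step above, the Reader role can also be granted from the command line. This sketch isn't part of the article; the object ID is a placeholder, and the scope is the resource ID of the definition created in the quickstart.

```azurecli
# Grant a user or group read access to the managed application definition.
az role assignment create \
  --assignee <user-or-group-object-id> \
  --role Reader \
  --scope "/subscriptions/<subscriptionId>/resourceGroups/bicepDefinitionRG/providers/Microsoft.Solutions/applicationDefinitions/sampleBicepManagedApplication"
```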
azure-resource-manager | Manage Resources Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resources-rest.md | + + Title: Manage resources - REST +description: Use REST operations with Azure Resource Manager to manage your resources. Shows how to read, deploy, and delete resources. ++ Last updated : 04/26/2023++# Manage Azure resources by using the REST API ++Learn how to use the REST API for [Azure Resource Manager](overview.md) to manage your Azure resources. For a comprehensive reference of how to structure Azure REST calls, see [Getting Started with REST](/rest/api/azure/). View the [Resource Management REST API reference](/rest/api/resources/) for more details on the available operations. ++## Obtain an access token +To make a REST API call to Azure, you first need to obtain an access token. Include this access token in the headers of your Azure REST API calls by setting the "Authorization" header to "Bearer {access-token}". ++If you need to programmatically retrieve new tokens as part of your application, you can obtain an access token by [Registering your client application with Azure AD](/rest/api/azure/#register-your-client-application-with-azure-ad). ++If you are getting started and want to test Azure REST APIs using your individual token, you can retrieve your current access token quickly with either Azure PowerShell or Azure CLI. ++### [Azure CLI](#tab/azure-cli) +```azurecli-interactive +token=$(az account get-access-token --query accessToken --output tsv) +``` ++### [Azure PowerShell](#tab/azure-powershell) +```azurepowershell-interactive +$token = (Get-AzAccessToken).Token +``` ++++## Operation scope +You can call many Azure Resource Manager operations at different scopes: ++| Type | Scope | +| | | +| Management group | `providers/Microsoft.Management/managementGroups/{managementGroupId}` | +| Subscription | `subscriptions/{subscriptionId}` | +| Resource group | `subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}` | +| Resource | `subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderName}/{resourceType}/{resourceName}` | ++## List resources +The following REST operation returns the resources within a provided resource group. 
++```http +GET /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/resources?api-version=2021-04-01 HTTP/1.1 +Authorization: Bearer <bearer-token> +Host: management.azure.com +``` ++Here is an example cURL command that you can use to list all resources in a resource group using the Azure Resource Manager API: +```curl +curl -H "Authorization: Bearer $token" -H 'Content-Type: application/json' -X GET 'https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/resources?api-version=2021-04-01' +``` +++With the authentication step, this example looks like: +### [Azure CLI](#tab/azure-cli) +```azurecli-interactive +token=$(az account get-access-token --query accessToken --output tsv) +curl -H "Authorization: Bearer $token" -H 'Content-Type: application/json' -X GET 'https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/resources?api-version=2021-04-01' +``` ++### [Azure PowerShell](#tab/azure-powershell) +```azurepowershell-interactive +$token = (Get-AzAccessToken).Token +$headers = @{Authorization="Bearer $token"} +Invoke-WebRequest -Method GET -Headers $headers -Uri 'https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/resources?api-version=2021-04-01' +``` ++++## Deploy resources to an existing resource group ++You can deploy Azure resources directly by using the REST API, or deploy a Resource Manager template to create Azure resources. ++### Deploy a resource ++The following REST operation creates a storage account. To see this example in more detail, see [Create an Azure Storage account with the REST API](/rest/api/storagerp/storage-sample-create-account). Complete reference documentation and samples for the Storage Resource Provider are available in the [Storage Resource Provider REST API Reference](/rest/api/storagerp/). ++```http +PUT /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}?api-version=2018-02-01 HTTP/1.1 +Authorization: Bearer <bearer-token> +Content-Type: application/json +Host: management.azure.com ++{ + "sku": { + "name": "Standard_GRS" + }, + "kind": "StorageV2", + "location": "eastus2" +} +``` ++### Deploy a template ++The following operations deploy a Quickstart template to create a storage account. For more information, see [Quickstart: Create Azure Resource Manager templates by using Visual Studio Code](../templates/quickstart-create-templates-use-visual-studio-code.md). For the API reference of this call, see [Deployments - Create Or Update](/rest/api/resources/deployments/create-or-update). +++```http +PUT /subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.Resources/deployments/my-deployment?api-version=2021-04-01 HTTP/1.1 +Authorization: Bearer <bearer-token> +Content-Type: application/json +Host: management.azure.com ++{ + "properties": { + "templateLink": { + "uri": "https://example.com/azuretemplates/azuredeploy.json" + }, + "parametersLink": { + "uri": "https://example.com/azuretemplates/azuredeploy.parameters.json" + }, + "mode": "Incremental" + } +} +``` +For the REST APIs, the value of `uri` can't be a local file or a file that is only available on your local network. Azure Resource Manager must be able to access the template. Provide a URI value that's downloadable over HTTP or HTTPS. +For more information, see [Deploy resources with Resource Manager templates and Azure PowerShell](../templates/deploy-powershell.md). 
++## Deploy a resource group and resources ++You can create a resource group and deploy resources to the group by using a template. For more information, see [Create resource group and deploy resources](../templates/deploy-to-subscription.md#resource-groups). ++## Deploy resources to multiple subscriptions or resource groups ++Typically, you deploy all the resources in your template to a single resource group. However, there are scenarios where you want to deploy a set of resources together but place them in different resource groups or subscriptions. For more information, see [Deploy Azure resources to multiple subscriptions or resource groups](../templates/deploy-to-resource-group.md). ++## Delete resources ++The following operation shows how to delete a storage account. ++```http +DELETE /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}?api-version=2022-09-01 HTTP/1.1 +Authorization: Bearer <bearer-token> +Host: management.azure.com +``` ++For more information about how Azure Resource Manager orders the deletion of resources, see [Azure Resource Manager resource group deletion](delete-resource-group.md). ++## Manage access to resources ++[Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) is the way that you manage access to resources in Azure. For more information, see [Add or remove Azure role assignments using REST](../../role-based-access-control/role-assignments-rest.md). ++## Next steps ++- To learn Azure Resource Manager, see [Azure Resource Manager overview](overview.md). +- To learn more about Azure Resource Manager's supported REST operations, see [Azure Resource Manager REST reference](/rest/api/resources/). +- To learn the Resource Manager template syntax, see [Understand the structure and syntax of Azure Resource Manager templates](../templates/syntax.md). +- To learn how to develop templates, see the [step-by-step tutorials](../index.yml). +- To view the Azure Resource Manager template schemas, see [template reference](/azure/templates/). |
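As a companion to the delete operation described above, here's a minimal Python sketch of the same call with `requests`. The subscription ID, resource group, and storage account name are placeholders, and `token` is assumed to hold a bearer token obtained as shown earlier in this article.

```python
import requests

# Illustration only: placeholder names; `token` holds a bearer token obtained earlier.
subscription_id = "00000000-0000-0000-0000-000000000000"
resource_group = "myResourceGroup"
account_name = "mystorageaccount"
token = "<bearer-token>"

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.Storage"
    f"/storageAccounts/{account_name}?api-version=2022-09-01"
)

response = requests.delete(url, headers={"Authorization": f"Bearer {token}"})
# A 2xx status code indicates the delete request succeeded or was accepted.
print(response.status_code)
```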
azure-resource-manager | Move Support Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md | Before starting your move operation, review the [checklist](./move-resource-grou > [!IMPORTANT]-> See [Cloud Services (extended support) deployment move guidance](./move-limitations/classic-model-move-limitations.md). Cloud Services (extended support) deployment resources can be moved across subscriptions with an operation specific to that scenario. +> See [Cloud Services (extended support) deployment move guidance](./move-limitations/cloud-services-extended-support.md). Cloud Services (extended support) deployment resources can be moved across subscriptions with an operation specific to that scenario. > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | |
azure-resource-manager | Conditional Resource Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/conditional-resource-deployment.md | Title: Conditional deployment with templates description: Describes how to conditionally deploy a resource in an Azure Resource Manager template (ARM template). Previously updated : 01/19/2022 Last updated : 05/12/2023 # Conditional deployment in ARM templates You can use conditional deployment to create a new resource or use an existing o ] } },- "functions": [], - "resources": [ - { + "resources": { + "saNew": { "condition": "[equals(parameters('newOrExisting'), 'new')]", "type": "Microsoft.Storage/storageAccounts",- "apiVersion": "2019-06-01", + "apiVersion": "2022-09-01", "name": "[parameters('storageAccountName')]", "location": "[parameters('location')]", "sku": {- "name": "Standard_LRS", - "tier": "Standard" + "name": "Standard_LRS" },- "kind": "StorageV2", - "properties": { - "accessTier": "Hot" - } + "kind": "StorageV2" + }, + "saExisting": { + "condition": "[equals(parameters('newOrExisting'), 'existing')]", + "existing": true, + "type": "Microsoft.Storage/storageAccounts", + "apiVersion": "2022-09-01", + "name": "[parameters('storageAccountName')]" }- ] + }, + "outputs": { + "storageAccountId": { + "type": "string", + "value": "[if(equals(parameters('newOrExisting'), 'new'), resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')))]" + } + } } ``` -When the parameter `newOrExisting` is set to **new**, the condition evaluates to true. The storage account is deployed. However, when `newOrExisting` is set to **existing**, the condition evaluates to false and the storage account isn't deployed. +When the parameter `newOrExisting` is set to **new**, the condition evaluates to true. The storage account is deployed. Otherwise the existing storage account is used. For a complete example template that uses the `condition` element, see [VM with a new or existing Virtual Network, Storage, and Public IP](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-new-or-existing-conditions). |
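One way to exercise the `newOrExisting` switch in the template above is to pass it as a deployment parameter when submitting the deployment through the Azure Resource Manager REST API. The following Python sketch is an illustration only: it assumes the conditional template is hosted at a hypothetical URI, uses placeholder names, and relies on a bearer token obtained separately.

```python
import requests

# Illustration only: the conditional template above is assumed to be hosted at a
# hypothetical URI, and the names below are placeholders.
subscription_id = "00000000-0000-0000-0000-000000000000"
resource_group = "myResourceGroup"
template_uri = "https://example.com/azuretemplates/conditional-storage.json"
token = "<bearer-token>"

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourcegroups/{resource_group}/providers/Microsoft.Resources"
    f"/deployments/conditional-demo?api-version=2021-04-01"
)

body = {
    "properties": {
        "templateLink": {"uri": template_uri},
        # Switch between 'new' and 'existing' to see the condition take effect.
        "parameters": {
            "newOrExisting": {"value": "existing"},
            "storageAccountName": {"value": "mystorageaccount"},
        },
        "mode": "Incremental",
    }
}

response = requests.put(url, headers={"Authorization": f"Bearer {token}"}, json=body)
print(response.status_code)
```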
azure-resource-manager | Deployment Script Template Configure Dev | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-script-template-configure-dev.md | Title: Configure development environment for deployment scripts in templates | Microsoft Docs description: Configure development environment for deployment scripts in Azure Resource Manager templates (ARM templates).--- Last updated 12/14/2020- ms.devlang: azurecli The following Azure Resource Manager template (ARM template) creates a container }, "containerImage": { "type": "string",- "defaultValue": "mcr.microsoft.com/azuredeploymentscripts-powershell:az5.2", + "defaultValue": "mcr.microsoft.com/azuredeploymentscripts-powershell:az9.7", "metadata": { "description": "Specify the container image." } The following Azure Resource Manager template (ARM template) creates a container The default value for the mount path is `/mnt/azscripts/azscriptinput`. This is the path in the container instance where it's mounted to the file share. -The default container image specified in the template is **mcr.microsoft.com/azuredeploymentscripts-powershell:az5.2**. See a list of all [supported Azure PowerShell versions](https://mcr.microsoft.com/v2/azuredeploymentscripts-powershell/tags/list). +The default container image specified in the template is **mcr.microsoft.com/azuredeploymentscripts-powershell:az9.7**. See a list of all [supported Azure PowerShell versions](https://mcr.microsoft.com/v2/azuredeploymentscripts-powershell/tags/list). The template suspends the container instance after 1,800 seconds. You have 30 minutes before the container instance goes into a terminated state and the session ends. |
azure-resource-manager | Deployment Script Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-script-template.md | Property value details: - `forceUpdateTag`: Changing this value between template deployments forces the deployment script to re-execute. If you use the `newGuid()` or the `utcNow()` functions, both functions can only be used in the default value for a parameter. To learn more, see [Run script more than once](#run-script-more-than-once). - `containerSettings`: Specify the settings to customize Azure Container Instance. Deployment script requires a new Azure Container Instance. You can't specify an existing Azure Container Instance. However, you can customize the container group name by using `containerGroupName`. If not specified, the group name is automatically generated. - `storageAccountSettings`: Specify the settings to use an existing storage account. If `storageAccountName` isn't specified, a storage account is automatically created. See [Use an existing storage account](#use-existing-storage-account).-- `azPowerShellVersion`/`azCliVersion`: Specify the module version to be used. See a list of [supported Azure PowerShell versions](https://mcr.microsoft.com/v2/azuredeploymentscripts-powershell/tags/list). See a list of [supported Azure CLI versions](https://mcr.microsoft.com/v2/azure-cli/tags/list).+- `azPowerShellVersion`/`azCliVersion`: Specify the module version to be used. See a list of [supported Azure PowerShell versions](https://mcr.microsoft.com/v2/azuredeploymentscripts-powershell/tags/list). The version determines which container image to use: - >[!IMPORTANT] - > Deployment script uses the available CLI images from Microsoft Container Registry (MCR). It takes about one month to certify a CLI image for deployment script. Don't use the CLI versions that were released within 30 days. To find the release dates for the images, see [Azure CLI release notes](/cli/azure/release-notes-azure-cli). If an unsupported version is used, the error message lists the supported versions. + - **Az version greater than or equal to 9** uses Ubuntu 22.04. + - **Az version greater than or equal to 6 but less than 9** uses Ubuntu 20.04. + - **Az version less than 6** uses Ubuntu 18.04. ++ > [!IMPORTANT] + > It is advisable to upgrade to the latest version of Ubuntu, as Ubuntu 18.04 is nearing its end of life and will no longer receive security updates beyond [May 31st, 2023](https://ubuntu.com/18-04). ++ See a list of [supported Azure CLI versions](https://mcr.microsoft.com/v2/azure-cli/tags/list). ++ > [!IMPORTANT] + > Deployment script uses the available CLI images from Microsoft Container Registry (MCR). It typically takes approximately one month to certify a CLI image for deployment script. Don't use the CLI versions that were released within 30 days. To find the release dates for the images, see [Azure CLI release notes](/cli/azure/release-notes-azure-cli). If an unsupported version is used, the error message lists the supported versions. - `arguments`: Specify the parameter values. The values are separated by spaces. |
azure-sql-edge | Tutorial Renewable Energy Demo | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/tutorial-renewable-energy-demo.md | Title: Deploying Azure SQL Edge on turbines in a Contoso wind farm -description: In this tutorial, you'll use Azure SQL Edge for wake-detection on the turbines in a Contoso wind farm. +description: This tutorial shows you how to use Azure SQL Edge for wake-detection on the turbines in a Contoso wind farm. Previously updated : 12/18/2020 Last updated : 05/11/2023 - -# Using Azure SQL Edge to build smarter renewable resources +# Use Azure SQL Edge to build smarter renewable resources -This Azure SQL Edge demo is based on a Contoso Renewable Energy, a wind turbine farm that uses SQL DB edge for data processing onboard the generator. +The [Wind Turbine Demo](https://github.com/microsoft/sql-server-samples/tree/master/samples/demos/azure-sql-edge-demos/Wind%20Turbine%20Demo) for Azure SQL Edge is based on Contoso Renewable Energy, a wind turbine farm that uses SQL Edge for data processing onboard the generator. -This demo will walk you through resolving an alert being raised because of wind turbulence being detected at the device. You will train a model and deploy it to SQL DB Edge that will correct the detected wind wake and ultimately optimize power output. +This demo walks you through resolving an alert being raised because of wind turbulence being detected at the device. You'll train a model and deploy it to SQL Edge, which corrects the detected wind wake and ultimately optimizes power output. Azure SQL Edge - renewable Energy demo video on Channel 9:++<br /> + > [!VIDEO https://learn.microsoft.com/shows/Data-Exposed/Azure-SQL-Edge-Demo-Renewable-Energy/player] -## Setting up the demo on your local computer -Git will be used to copy all files from the demo to your local computer. +## Set up the demo on your local computer ++Git is used to copy all files from the demo to your local computer. -1. Install git from [here](https://git-scm.com/download). -2. Open a command prompt and navigate to a folder where the repo should be downloaded. -3. Issue the command https://github.com/microsoft/sql-server-samples.git. -4. Navigate to **'sql-server-samples\samples\demos\azure-sql-edge-demos\Wind Turbine Demo'** in the location where the repository is cloned. -5. Follow the instructions in README.md to set up the demo environment and execute the demo. +1. Install [Git](https://git-scm.com/download). +1. Open a command prompt and navigate to a folder where the repo should be downloaded. +1. Issue the command `git clone https://github.com/microsoft/sql-server-samples.git`. +1. Navigate to `sql-server-samples\samples\demos\azure-sql-edge-demos\Wind Turbine Demo` in the location where the repository is cloned. +1. Follow the instructions in `README.md` to set up the demo environment and execute the demo. |
azure-video-indexer | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md | To stay up-to-date with the most recent Azure Video Indexer developments, this a * Bug fixes * Deprecated functionality +## May 2023 ++### Topics insight improvements ++We now support all five levels of IPTC ontology. + ## April 2023 ### Resource Health support |
azure-vmware | Azure Vmware Solution Platform Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md | Last updated 4/24/2023 Microsoft will regularly apply important updates to the Azure VMware Solution for new features and software lifecycle management. You'll receive a notification through Azure Service Health that includes the timeline of the maintenance. For more information, see [Host maintenance and lifecycle management](concepts-private-clouds-clusters.md#host-maintenance-and-lifecycle-management). +## May 2023 ++**Azure VMware Solution in Azure Gov** + +Azure VMware Solution will become generally available on May 17, 2023, to US Federal and State and Local Government (US) customers and their partners, in the regions of Arizona and Virginia. With this release, we are combining world-class Azure infrastructure with VMware technologies by offering Azure VMware Solution on Azure Government, which is designed, built, and supported by Microsoft. ++ +**New Azure VMware Solution Region: Qatar** ++We are excited to announce that the Azure VMware Solution has gone live in Qatar Central and is now available to customers. ++With the introduction of AV36P in Qatar, customers will receive access to 36 cores, 2.6 GHz clock speed, 768 GB of RAM, and 19.2 TB of SSD storage. ++To learn more about available regions of Azure products, see [Azure Products by Region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=azure-vmware&regions=all). + ## April 2023 **VMware HCX Run Commands** Documented workarounds for the vSphere stack, as per [VMSA-2021-0002](https://ww ## Post update Once complete, newer versions of VMware solution components will appear. If you notice any issues or have any questions, contact our support team by opening a support ticket. +++ |
azure-vmware | Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md | -Azure VMware Solution provides you with private clouds that contain VMware vSphere clusters built from dedicated bare-metal Azure infrastructure. Azure VMware Solution is available in Azure Commercial and in Public Preview in Azure Government.The minimum initial deployment is three hosts, but more hosts can be added, up to a maximum of 16 hosts per cluster. All provisioned private clouds have VMware vCenter Server, VMware vSAN, VMware vSphere, and VMware NSX-T Data Center. As a result, you can migrate workloads from your on-premises environments, deploy new virtual machines (VMs), and consume Azure services from your private clouds. For information about the SLA, see the [Azure service-level agreements](https://azure.microsoft.com/support/legal/sla/azure-vmware/v1_1/) page. +Azure VMware Solution provides you with private clouds that contain VMware vSphere clusters built from dedicated bare-metal Azure infrastructure. Azure VMware Solution is available in Azure Commercial and in Azure Government. The minimum initial deployment is three hosts, but more hosts can be added, up to a maximum of 16 hosts per cluster. All provisioned private clouds have VMware vCenter Server, VMware vSAN, VMware vSphere, and VMware NSX-T Data Center. As a result, you can migrate workloads from your on-premises environments, deploy new virtual machines (VMs), and consume Azure services from your private clouds. For information about the SLA, see the [Azure service-level agreements](https://azure.microsoft.com/support/legal/sla/azure-vmware/v1_1/) page. Azure VMware Solution is a VMware validated solution with ongoing validation and testing of enhancements and upgrades. Microsoft manages and maintains the private cloud infrastructure and software. It allows you to focus on developing and running workloads in your private clouds to deliver business value. The next step is to learn key [private cloud and cluster concepts](concepts-priv <!-- LINKS - external --> -<!-- LINKS - internal --> [concepts-private-clouds-clusters]: ./concepts-private-clouds-clusters.md++ |
azure-vmware | Upgrade Hcx Azure Vmware Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/upgrade-hcx-azure-vmware-solutions.md | You can update HCX Connector and HCX Cloud systems during separate maintenance w - For systems requirements, compatibility, and upgrade prerequisites, see the [VMware HCX release notes](https://docs.vmware.com/en/VMware-HCX/https://docsupdatetracker.net/index.html). - For more information about the upgrade path, see the [Product Interoperability Matrix](https://interopmatrix.vmware.com/Upgrade?productId=660). +- For information regarding VMware product compatibility by version, see the [Compatibility Matrix](https://interopmatrix.vmware.com/Interoperability?col=660,&row=0,). +- Review VMware Software Versioning, Skew and Legacy Support Policies [here](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-skew-policy/GUID-787FB2A1-52AF-483C-B595-CF382E728674.html). - Ensure HCX manager and site pair configurations are healthy. See [Rolling Back an Upgrade Using Snapshots](https://docs.vmware.com/en/VMware- ## Next steps [Software Versioning, Skew and Legacy Support Policies](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-skew-policy/GUID-787FB2A1-52AF-483C-B595-CF382E728674.html) -[Updating VMware HCX](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-508A94B2-19F6-47C7-9C0D-2C89A00316B9.html) +[Updating VMware HCX](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-508A94B2-19F6-47C7-9C0D-2C89A00316B9.html) |
backup | Sap Hana Backup Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-backup-support-matrix.md | Title: SAP HANA Backup support matrix description: In this article, learn about the supported scenarios and limitations when you use Azure Backup to back up SAP HANA databases on Azure VMs. Previously updated : 11/14/2022 Last updated : 05/12/2023 Azure Backup supports the backup of SAP HANA databases to Azure. This article su | **Scenario** | **Supported configurations** | **Unsupported configurations** | | -- | | | | **Topology** | SAP HANA running in Azure Linux VMs only | HANA Large Instances (HLI) |-| **Regions** | **Americas** ΓÇô Central US, East US 2, East US, North Central US, South Central US, West US 2, West US 3, West Central US, West US, Canada Central, Canada East, Brazil South <br> **Asia Pacific** ΓÇô Australia Central, Australia Central 2, Australia East, Australia Southeast, Japan East, Japan West, Korea Central, Korea South, East Asia, Southeast Asia, Central India, South India, West India, China East, China East 2, China East 3, China North, China North 2, China North 3 <br> **Europe** ΓÇô West Europe, North Europe, France Central, UK South, UK West, Germany North, Germany West Central, Switzerland North, Switzerland West, Central Switzerland North, Norway East, Norway West <br> **Africa / ME** - South Africa North, South Africa West, UAE North, UAE Central <BR> **Azure Government regions** | France South, Germany Central, Germany Northeast, US Gov IOWA | +| **Regions** | **Americas** ΓÇô Central US, East US 2, East US, North Central US, South Central US, West US 2, West US 3, West Central US, West US, Canada Central, Canada East, Brazil South <br> **Asia Pacific** ΓÇô Australia Central, Australia Central 2, Australia East, Australia Southeast, Japan East, Japan West, Korea Central, Korea South, East Asia, Southeast Asia, Central India, South India, West India, China East, China East 2, China East 3, China North, China North 2, China North 3 <br> **Europe** ΓÇô West Europe, North Europe, France Central, UK South, UK West, Germany North, Germany West Central, Switzerland North, Switzerland West, Central Switzerland North, Norway East, Norway West, Sweden Central <br> **Africa / ME** - South Africa North, South Africa West, UAE North, UAE Central <BR> **Azure Government regions** | France South, Germany Central, Germany Northeast, US Gov IOWA | | **OS versions** | SLES 12 with SP2, SP3, SP4 and SP5; SLES 15 with SP0, SP1, SP2, SP3, and SP4 <br><br> RHEL 7.4, 7.6, 7.7, 7.9, 8.1, 8.2, 8.4, and 8.6 | | | **HANA versions** | SDC on HANA 1.x, MDC on HANA 2.x SPS04, SPS05 Rev <= 59, SPS 06 (validated for encryption enabled scenarios as well) | | | **Encryption** | SSLEnforce, HANA data encryption | | |
cdn | Cdn Custom Ssl | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-custom-ssl.md | Grant Azure CDN permission to access the certificates (secrets) in your Azure Ke > [!NOTE] > * Azure CDN only supports PFX certificates.- > * In order for the certificate to be automatically rotated to the latest version when a newer version of the certificate is available in your Key Vault, please set the certificate/secret version to 'Latest'. If a specific version is selected, you have to re-select the new version manually for certificate rotation. It takes up to 72 hours for the new version of the certificate/secret to be deployed. + > * In order for the certificate to be automatically rotated to the latest version when a newer version of the certificate is available in your Key Vault, please set the certificate/secret version to 'Latest'. If a specific version is selected, you have to re-select the new version manually for certificate rotation. It takes up to 72 hours for the new version of the certificate/secret to be deployed. Only Standard Microsoft SKU supports certificate auto rotation. 5. Select **On** to enable HTTPS. |
chaos-studio | Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/troubleshooting.md | You can upgrade your Virtual Machine Scale Sets instances with Azure CLI: az vmss update-instances --resource-group myResourceGroup --name myScaleSet --instance-ids {instanceIds} ``` -For more information, see [How to bring VMs up-to-date with the latest scale set model](../virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-scale-set.md#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model) +For more information, see [How to bring VMs up-to-date with the latest scale set model](../virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-policy.md) ### AKS Chaos Mesh faults fail AKS Chaos Mesh faults may fail for various reasons related to missing prerequisites: |
cloud-services-extended-support | In Place Migration Common Errors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-common-errors.md | Title: Common errors and known issues when migration to Azure Cloud Services (extended support) + Title: Common errors and known issues when migrating to Azure Cloud Services (extended support) description: Overview of common errors when migrating from Cloud Services (classic) to Cloud Service (extended support) Last updated 2/08/2021 -# Common errors and known issues when migration to Azure Cloud Services (extended support) +# Common errors and known issues when migrating to Azure Cloud Services (extended support) This article covers known issues and common errors you might encounter when migration from Cloud Services (classic) to Cloud Services (extended support). Following issues are known and being addressed. | Name of the lock on Cloud Services (extended support) lock is incorrect. | Non-impacting. Solution not yet available. | | IP address name is incorrect on Cloud Services (extended support) portal blade. | Non-impacting. Solution not yet available. | | Invalid DNS name shown for virtual IP address after on update operation on a migrated cloud service. | Non-impacting. Solution not yet available. | -| After successful prepare, linking a new Cloud Services (extended support) deployment as swappable is not allowed. | Do not link a new cloud service as swappable to a prepared cloud service. | +| After successful prepare, linking a new Cloud Services (extended support) deployment as swappable isn't allowed. | Do not link a new cloud service as swappable to a prepared cloud service. | | Error messages need to be updated. | Non-impacting. | ## Common migration errors Common migration errors and mitigation steps. | The resource type could not be found in the namespace `Microsoft.Compute` for api version '2020-10-01-preview'. | [Register the subscription](in-place-migration-overview.md#setup-access-for-migration) for CloudServices feature flag to access public preview. | | The server encountered an internal error. Retry the request. | Retry the operation, use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or contact support. | | The server encountered an unexpected error while trying to allocate network resources for the cloud service. Retry the request. | Retry the operation, use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or contact support. | -| Deployment deployment-name in cloud service cloud-service-name must be within a virtual network to be migrated. | Deployment is not located in a virtual network. Refer [this](in-place-migration-technical-details.md#migration-of-deployments-not-in-a-virtual-network) document for more details. | -| Migration of deployment deployment-name in cloud service cloud-service-name is not supported because it is in region region-name. Allowed regions: [list of available regions]. | Region is not yet supported for migration. | +| Deployment deployment-name in cloud service cloud-service-name must be within a virtual network to be migrated. | Deployment isn't located in a virtual network. Refer to [this](in-place-migration-technical-details.md#migration-of-deployments-not-in-a-virtual-network) document for more details. | +| Migration of deployment deployment-name in cloud service cloud-service-name isn't supported because it is in region region-name. Allowed regions: [list of available regions]. 
| Region isn't yet supported for migration. | | The Deployment deployment-name in cloud service cloud-service-name cannot be migrated because there are no subnets associated with the role(s) role-name. Associate all roles with a subnet, then retry the migration of the cloud service. | Update the cloud service (classic) deployment by placing it in a subnet before migration. | | The deployment deployment-name in cloud service cloud-service-name cannot be migrated because the deployment requires at least one feature that not registered on the subscription in Azure Resource Manager. Register all required features to migrate this deployment. Missing feature(s): [list of missing features]. | Contact support to get the feature flags registered. | | The deployment cannot be migrated because the deployment's cloud service has two occupied slots. Migration of cloud services is only supported for deployments that are the only deployment in their cloud service. Delete the other deployment in the cloud service to proceed with the migration of this deployment. | Refer to the [unsupported scenario](in-place-migration-technical-details.md#unsupported-configurations--migration-scenarios) list for more details. | Common migration errors and mitigation steps. | Migration of Deployment {0} in HostedService {1} is in the process of being committed and cannot be changed until it completes successfully. | Wait or retry operation. | | Migration of Deployment {0} in HostedService {1} is in the process of being aborted and cannot be changed until it completes successfully. | Wait or retry operation. | | One or more VMs in Deployment {0} in HostedService {1} is undergoing an update operation. It can't be migrated until the previous operation completes successfully. Retry after sometime. | Wait for operation to complete. | -| Migration is not supported for Deployment {0} in HostedService {1} because it uses following features not yet supported for migration: Non-vnet deployment.| Deployment is not located in a virtual network. Refer [this](in-place-migration-technical-details.md#migration-of-deployments-not-in-a-virtual-network) document for more details. | +| Migration isn't supported for Deployment {0} in HostedService {1} because it uses following features not yet supported for migration: Non-vnet deployment.| Deployment isn't located in a virtual network. Refer to [this](in-place-migration-technical-details.md#migration-of-deployments-not-in-a-virtual-network) document for more details. | | The virtual network name cannot be null or empty. | Provide virtual network name in the REST request body | | The Subnet Name cannot be null or empty. | Provide subnet name in the REST request body. | | DestinationVirtualNetwork must be set to one of the following values: Default, New, or Existing. | Provide DestinationVirtualNetwork property in the REST request body. | -| Default VNet destination option not implemented. | ΓÇ£DefaultΓÇ¥ value is not supported for DestinationVirtualNetwork property in the REST request body. | -| The deployment {0} cannot be migrated because the CSPKG is not available. | Upgrade the deployment and try again. | +| Default VNet destination option not implemented. | ΓÇ£DefaultΓÇ¥ value isn't supported for DestinationVirtualNetwork property in the REST request body. | +| The deployment {0} cannot be migrated because the CSPKG isn't available. | Upgrade the deployment and try again. | | The subnet with ID '{0}' is in a different location than deployment '{1}' in hosted service '{2}'. 
The location for the subnet is '{3}' and the location for the hosted service is '{4}'. Specify a subnet in the same location as the deployment. | Update the cloud service to have both subnet and cloud service in the same location before migration. | | Migration of Deployment {0} in HostedService {1} is in the process of being aborted and cannot be changed until it completes successfully. | Wait for abort to complete or retry abort. Use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or Contact support otherwise. | | Deployment {0} in HostedService {1} has not been prepared for Migration. | Run prepare on the cloud service before running the commit operation. | -| UnknownExceptionInEndExecute: Contract.Assert failed: rgName is null or empty: Exception received in EndExecute that is not an RdfeException. | Use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or Contact support. | -| UnknownExceptionInEndExecute: A task was canceled: Exception received in EndExecute that is not an RdfeException. | Use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or Contact support. | +| UnknownExceptionInEndExecute: Contract.Assert failed: rgName is null or empty: Exception received in EndExecute that isn't an RdfeException. | Use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or Contact support. | +| UnknownExceptionInEndExecute: A task was canceled: Exception received in EndExecute that isn't an RdfeException. | Use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or Contact support. | | XrpVirtualNetworkMigrationError: Virtual network migration failure. | Use [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html) or Contact support. | | Deployment {0} in HostedService {1} belongs to Virtual Network {2}. Migrate Virtual Network {2} to migrate this HostedService {1}. | Refer to [Virtual Network migration](in-place-migration-technical-details.md#virtual-network-migration). | | The current quota for Resource name in Azure Resource Manager is insufficient to complete migration. Current quota is {0}, additional needed is {1}. File a support request to raise the quota and retry migration once the quota has been raised. | Follow appropriate channels to request quota increase: <br>[Quota increase for networking resources](../azure-portal/supportability/networking-quota-requests.md) <br>[Quota increase for compute resources](../azure-portal/supportability/per-vm-quota-requests.md) | -|XrpPaaSMigrationCscfgCsdefValidationMismatch: Migration could not be completed on deployment deployment-name in hosted service service-name because the deployment's metadata is stale. Please abort the migration and upgrade the deployment before retrying migration. Validation Message: The service name 'service-name'in the service defintion file does not match the name 'service-name-in-config-file' in the service configuration file|match the service names in both .csdef and .cscfg file| -|NetworkingInternalOperationError when deploying Cloud Service (extended support) resource| The issue may occur if the Service name is same as role name. The reccomended remidiation is to use diferent names for service and roles| +|XrpPaaSMigrationCscfgCsdefValidationMismatch: Migration could not be completed on deployment deployment-name in hosted service service-name because the deployment's metadata is stale. Please abort the migration and upgrade the deployment before retrying migration. 
Validation Message: The service name 'service-name' in the service definition file does not match the name 'service-name-in-config-file' in the service configuration file. | Match the service names in both the .csdef and .cscfg files. | +|NetworkingInternalOperationError when deploying Cloud Service (extended support) resource| The issue may occur if the service name is the same as the role name. The recommended remediation is to use different names for the service and roles. | ## Next steps For more information on the requirements of migration, see [Technical details of migrating to Azure Cloud Services (extended support)](in-place-migration-technical-details.md) |
cognitive-services | Background Removal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/background-removal.md | -This guide assumes you've already [created a Computer Vision resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision) and obtained a key and endpoint URL. +## Prerequisites ++This guide assumes you have successfully followed the steps mentioned in the [quickstart](../quickstarts-sdk/image-analysis-client-library-40.md) page. This means: ++* You have <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Computer Vision resource" target="_blank">created a Computer Vision resource </a> and obtained a key and endpoint URL. +* If you're using the client SDK, you have the appropriate SDK package installed and you have a running quickstart application. You modify this quickstart application based on code examples here. +* If you're using 4.0 REST API calls directly, you have successfully made a `curl.exe` call to the service (or used an alternative tool). You modify the `curl.exe` call based on the examples here. ++The quickstart shows you how to extract visual features from an image, however, the concepts are similar to background removal. Therefore you benefit from starting from the quickstart and making modifications. > [!IMPORTANT]-> These APIs are only available in the following geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US. +> Background removal is only available in the following Azure regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US. ++## Authenticate against the service ++To authenticate against the Image Analysis service, you need a Computer Vision key and endpoint URL. ++> [!TIP] +> Don't include the key directly in your code, and never post it publicly. See the Cognitive Services [security](/azure/cognitive-services/security-features) article for more authentication options like [Azure Key Vault](/azure/cognitive-services/use-key-vault). ++The SDK example assumes that you defined the environment variables `VISION_KEY` and `VISION_ENDPOINT` with your key and endpoint. ++#### [C#](#tab/csharp) ++Start by creating a [VisionServiceOptions](/dotnet/api/azure.ai.vision.core.options.visionserviceoptions) object using one of the constructors. For example: ++[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=vision_service_options)] ++#### [Python](#tab/python) ++Start by creating a [VisionServiceOptions](/python/api/azure-ai-vision/azure.ai.vision.visionserviceoptions) object using one of the constructors. For example: ++[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=vision_service_options)] ++#### [C++](#tab/cpp) ++At the start of your code, use one of the static constructor methods [VisionServiceOptions::FromEndpoint](/cpp/cognitive-services/vision/service-visionserviceoptions#fromendpoint-1) to create a *VisionServiceOptions* object. 
For example: ++[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=vision_service_options)] ++Where we used this helper function to read the value of an environment variable: ++[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=get_env_var)] ++#### [REST API](#tab/rest) ++Authentication is done by adding the HTTP request header **Ocp-Apim-Subscription-Key** and setting it to your vision key. The call is made to the URL `https://<endpoint>/computervision/imageanalysis:segment&api-version=2023-02-01-preview`, where `<endpoint>` is your unique computer vision endpoint URL. See [Select a mode ](./background-removal.md#select-a-mode) section for another query string you add to this URL. ++++## Select the image to analyze ++The code in this guide uses remote images referenced by URL. You may want to try different images on your own to see the full capability of the Image Analysis features. ++#### [C#](#tab/csharp) ++Create a new **VisionSource** object from the URL of the image you want to analyze, using the static constructor [VisionSource.FromUrl](/dotnet/api/azure.ai.vision.core.input.visionsource.fromurl). ++**VisionSource** implements **IDisposable**, therefore create the object with a **using** statement or explicitly call **Dispose** method after analysis completes. ++[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=vision_source)] ++> [!TIP] +> You can also analyze a local image by passing in the full-path image file name. See [VisionSource.FromFile](/dotnet/api/azure.ai.vision.core.input.visionsource.fromfile). ++#### [Python](#tab/python) ++In your script, create a new [VisionSource](/python/api/azure-ai-vision/azure.ai.vision.visionsource) object from the URL of the image you want to analyze. ++[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=vision_source)] ++> [!TIP] +> You can also analyze a local image by passing in the full-path image file name to the **VisionSource** constructor instead of the image URL. ++#### [C++](#tab/cpp) ++Create a new **VisionSource** object from the URL of the image you want to analyze, using the static constructor [VisionSource::FromUrl](/cpp/cognitive-services/vision/input-visionsource#fromurl). -## Submit data to the service +[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=vision_source)] -When calling the **[Segment](https://centraluseuap.dev.cognitive.microsoft.com/docs/services/unified-vision-apis-public-preview-2023-02-01-preview/operations/63e6b6d9217d201194bbecbd)** API, you specify the image's URL by formatting the request body like this: `{"url":"https://docs.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg"}`. +> [!TIP] +> You can also analyze a local image by passing in the full-path image file name. See [VisionSource::FromFile](/cpp/cognitive-services/vision/input-visionsource#fromfile). -To analyze a local image, you'd put the binary image data in the HTTP request body. +#### [REST API](#tab/rest) -## Determine how to process the data +When analyzing a remote image, you specify the image's URL by formatting the request body like this: `{"url":"https://learn.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg"}`. The **Content-Type** should be `application/json`. 
-### Select a mode +To analyze a local image, you'd put the binary image data in the HTTP request body. The **Content-Type** should be `application/octet-stream` or `multipart/form-data`. +++## Select a mode ++### [C#](#tab/csharp) ++<!-- TODO: After C# ref-docs get published, add link to SegmentationMode (/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.segmentationmode) & ImageSegmentationMode (/dotnet/api/azure.ai.vision.imageanalysis.imagesegmentationmode) --> ++Create a new [ImageAnalysisOptions](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions) object and set the property `SegmentationMode`. This property must be set if you want to do segmentation. See `ImageSegmentationMode` for supported values. ++[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/segmentation/Program.cs?name=segmentation_mode)] ++### [Python](#tab/python) ++<!-- TODO: Where Python ref-docs get published, add link to SegmentationMode (/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-segmentation-mode) & ImageSegmentationMode (/python/api/azure-ai-vision/azure.ai.vision.enums.imagesegmentationmode)> --> ++Create a new [ImageAnalysisOptions](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions) object and set the property `segmentation_mode`. This property must be set if you want to do segmentation. See `ImageSegmentationMode` for supported values. -|URL parameter |Value |Description | -|||| -|`mode` | `backgroundRemoval` | Outputs an image of the detected foreground object with a transparent background. | -|`mode` | `foregroundMatting` | Outputs a grayscale alpha matte image showing the opacity of the detected foreground object. | +[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/segmentation/main.py?name=segmentation_mode)] +### [C++](#tab/cpp) -A populated URL for backgroundRemoval would look like this: `https://{endpoint}/computervision/imageanalysis:segment?api-version=2023-02-01-preview&mode=backgroundRemoval` +Create a new [ImageAnalysisOptions](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions) object and call the [SetSegmentationMode](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setsegmentationmode) method. You must call this method if you want to do segmentation. See [ImageSegmentationMode](/cpp/cognitive-services/vision/azure-ai-vision-imageanalysis-namespace#enum-imagesegmentationmode) for supported values. ++[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/segmentation/segmentation.cpp?name=segmentation_mode)] ++### [REST](#tab/rest) ++Set the query string *mode** to one of these two values. This query string is mandatory if you want to do image segmentation. ++|URL parameter | Value |Description | +|--||-| +| `mode` | `backgroundRemoval` | Outputs an image of the detected foreground object with a transparent background. | +| `mode` | `foregroundMatting` | Outputs a gray-scale alpha matte image showing the opacity of the detected foreground object. | ++A populated URL for backgroundRemoval would look like this: `https://<endpoint>/computervision/imageanalysis:segment?api-version=2023-02-01-preview&mode=backgroundRemoval` ++ ## Get results from the service -This section shows you how to parse the results of the API call. +This section shows you how to make the API call and parse the results. 
++#### [C#](#tab/csharp) ++The following code calls the Image Analysis API and saves the resulting segmented image to a file named **output.png**. It also displays some metadata about the segmented image. -The service returns a `200` HTTP response, and the body contains the returned image in the form of a binary stream. The following is an example of the 4-channel PNG image response for the `backgroundRemoval` mode: +**SegmentationResult** implements **IDisposable**, therefore create the object with a **using** statement or explicitly call **Dispose** method after analysis completes. +[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/segmentation/Program.cs?name=segment)] -The following is an example of the 1-channel PNG image response for the `foregroundMatting` mode: +#### [Python](#tab/python) +The following code calls the Image Analysis API and saves the resulting segmented image to a file named **output.png**. It also displays some metadata about the segmented image. ++[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/segmentation/main.py?name=segment)] ++#### [C++](#tab/cpp) ++The following code calls the Image Analysis API and saves the resulting segmented image to a file named **output.png**. It also displays some metadata about the segmented image. ++[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/segmentation/segmentation.cpp?name=segment)] ++#### [REST](#tab/rest) ++The service returns a `200` HTTP response on success with `Content-Type: image/png`, and the body contains the returned PNG image in the form of a binary stream. ++++As an example, assume background removal is run on the following image: +++On a successful background removal call, The following four-channel PNG image is the response for the `backgroundRemoval` mode: +++The following one-channel PNG image is the response for the `foregroundMatting` mode: +++The API returns an image the same size as the original for the `foregroundMatting` mode, but at most 16 megapixels (preserving image aspect ratio) for the `backgroundRemoval` mode. -The API will return an image the same size as the original for the `foregroundMatting` mode, but at most 16 megapixels (preserving image aspect ratio) for the `backgroundRemoval` mode. ## Error codes -See the following list of possible errors and their causes: --- `400 - InvalidRequest`- - `Value for mode is invalid.` Ensure you have selected exactly one of the valid options for the `mode` parameter. - - `This operation is not enabled in this region.` Ensure that your resource is in one of the geographic regions where the API is supported. - - `The image size is not allowed to be zero or larger than {number} bytes.` Ensure your image is within the specified size limits. - - `The image dimension is not allowed to be smaller than {min number of pixels} and larger than {max number of pixels}`. Ensure both dimensions of the image are within the specified dimension limits. - - `Image format is not valid.` Ensure the input data is a valid JPEG, GIF, TIFF, BMP, or PNG image. -- `500`- - `InternalServerError.` The processing resulted in an internal error. -- `503`- - `ServiceUnavailable.` The service is unavailable. ++ ## Next steps |
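Putting the REST pieces above together, the following minimal Python sketch is an illustration only: it assumes the `VISION_KEY` and `VISION_ENDPOINT` environment variables described earlier and the sample kitchen image URL used in this article. It posts the image URL to the `backgroundRemoval` mode and writes the returned PNG to `output.png`.

```python
import os

import requests

# Illustration only: assumes the VISION_KEY and VISION_ENDPOINT environment
# variables described earlier, and the sample kitchen image from this article.
endpoint = os.environ["VISION_ENDPOINT"].rstrip("/")
key = os.environ["VISION_KEY"]

url = (
    f"{endpoint}/computervision/imageanalysis:segment"
    "?api-version=2023-02-01-preview&mode=backgroundRemoval"
)
body = {
    "url": "https://learn.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg"
}

response = requests.post(
    url,
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json=body,
)
response.raise_for_status()

# On success, the response body is the PNG image itself (Content-Type: image/png).
with open("output.png", "wb") as output_file:
    output_file.write(response.content)
print(f"Saved {len(response.content)} bytes to output.png")
```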
cognitive-services | Call Analyze Image 40 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/call-analyze-image-40.md | -This guide assumes you've already <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Computer Vision resource" target="_blank">created a Computer Vision resource </a> and obtained a key and endpoint URL. If you're using a client SDK, you'll also need to authenticate a client object. If you haven't done these steps, follow the [quickstart](../quickstarts-sdk/image-analysis-client-library-40.md) to get started. - -## Submit data to the service +## Prerequisites -The code in this guide uses remote images referenced by URL. You may want to try different images on your own to see the full capability of the Image Analysis features. +This guide assumes you have successfully followed the steps mentioned in the [quickstart](../quickstarts-sdk/image-analysis-client-library-40.md) page. This means: -#### [C#](#tab/csharp) +* You have <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Computer Vision resource" target="_blank">created a Computer Vision resource </a> and obtained a key and endpoint URL. +* If you're using the client SDK, you have the appropriate SDK package installed and you have a running quickstart application. You modify this quickstart application based on code examples here. +* If you're using 4.0 REST API calls directly, you have successfully made a `curl.exe` call to the service (or used an alternative tool). You modify the `curl.exe` call based on the examples here. -In your main class, save a reference to the URL of the image you want to analyze. +## Authenticate against the service -```csharp -var imageSource = VisionSource.FromUrl(new Uri("https://learn.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg")); -``` +To authenticate against the Image Analysis service, you need a Computer Vision key and endpoint URL. > [!TIP]-> You can also analyze a local image. See the [reference documentation](/dotnet/api/azure.ai.vision.imageanalysis) for alternative **Analyze** methods. Or, see the sample code on [GitHub](https://github.com/Azure-Samples/azure-ai-vision-sdk) for scenarios involving local images. +> Don't include the key directly in your code, and never post it publicly. See the Cognitive Services [security](/azure/cognitive-services/security-features) article for more authentication options like [Azure Key Vault](/azure/cognitive-services/use-key-vault). +The SDK example assumes that you defined the environment variables `VISION_KEY` and `VISION_ENDPOINT` with your key and endpoint. -#### [Python](#tab/python) +#### [C#](#tab/csharp) -Save a reference to the URL of the image you want to analyze. +Start by creating a [VisionServiceOptions](/dotnet/api/azure.ai.vision.core.options.visionserviceoptions) object using one of the constructors. For example: -```python -image_url = 'https://learn.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg' -vision_source = visionsdk.VisionSource(url=image_url) -``` +[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=vision_service_options)] -> [!TIP] -> You can also analyze a local image. See the [reference documentation](/python/api/azure-ai-vision) for alternative **Analyze** methods. 
Or, see the sample code on [GitHub](https://github.com/Azure-Samples/azure-ai-vision-sdk) for scenarios involving local images. +#### [Python](#tab/python) ++Start by creating a [VisionServiceOptions](/python/api/azure-ai-vision/azure.ai.vision.visionserviceoptions) object using one of the constructors. For example: ++[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=vision_service_options)] #### [C++](#tab/cpp) -Save a reference to the URL of the image you want to analyze. +At the start of your code, use one of the static constructor methods [VisionServiceOptions::FromEndpoint](/cpp/cognitive-services/vision/service-visionserviceoptions#fromendpoint-1) to create a *VisionServiceOptions* object. For example: -```cpp -auto imageSource = VisionSource::FromUrl("https://learn.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg"); -``` +[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=vision_service_options)] -> [!TIP] -> You can also analyze a local image. See the [reference documentation]() for alternative **Analyze** methods. Or, see the sample code on [GitHub](https://github.com/Azure-Samples/azure-ai-vision-sdk) for scenarios involving local images. +Where we used this helper function to read the value of an environment variable: -#### [REST](#tab/rest) +[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=get_env_var)] -When analyzing a remote image, you specify the image's URL by formatting the request body like this: `{"url":"https://learn.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg"}`. +#### [REST API](#tab/rest) -To analyze a local image, you'd put the binary image data in the HTTP request body. +Authentication is done by adding the HTTP request header **Ocp-Apim-Subscription-Key** and setting it to your vision key. The call is made to the URL `https://<endpoint>/computervision/imageanalysis:analyze&api-version=2023-02-01-preview`, where `<endpoint>` is your unique computer vision endpoint URL. You add query strings based on your analysis options. +## Select the image to analyze -## Determine how to process the data --### Select visual features +The code in this guide uses remote images referenced by URL. You may want to try different images on your own to see the full capability of the Image Analysis features. -The Analysis 4.0 API gives you access to all of the service's image analysis features. Choose which operations to do based on your own use case. See the [overview](../overview.md) for a description of each feature. The examples in the sections below add all of the available visual features, but for practical usage you'll likely only need one or two. +#### [C#](#tab/csharp) +Create a new **VisionSource** object from the URL of the image you want to analyze, using the static constructor [VisionSource.FromUrl](/dotnet/api/azure.ai.vision.core.input.visionsource.fromurl). -#### [C#](#tab/csharp) +**VisionSource** implements **IDisposable**, therefore create the object with a **using** statement or explicitly call **Dispose** method after analysis completes. -Define an **ImageAnalysisOptions** object, which specifies visual features you'd like to extract in your analysis. +[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=vision_source)] -```csharp -var analysisOptions = new ImageAnalysisOptions() -{ - // Mandatory. 
You must set one or more features to analyze. Here we use the full set of features. - // Note that 'Caption' is only supported in Azure GPU regions (East US, France Central, Korea Central, - // North Europe, Southeast Asia, West Europe, West US and East Asia) - Features = - ImageAnalysisFeature.CropSuggestions - | ImageAnalysisFeature.Caption - | ImageAnalysisFeature.Objects - | ImageAnalysisFeature.People - | ImageAnalysisFeature.Text - | ImageAnalysisFeature.Tags -}; -``` +> [!TIP] +> You can also analyze a local image by passing in the full-path image file name. See [VisionSource.FromFile](/dotnet/api/azure.ai.vision.core.input.visionsource.fromfile). #### [Python](#tab/python) -Specify which visual features you'd like to extract in your analysis. --```python -# Set the language and one or more visual features as analysis options -image_analysis_options = visionsdk.ImageAnalysisOptions() --# Mandatory. You must set one or more features to analyze. Here we use the full set of features. -# Note that 'Caption' is only supported in Azure GPU regions (East US, France Central, Korea Central, -# North Europe, Southeast Asia, West Europe, West US) -image_analysis_options.features = ( - visionsdk.ImageAnalysisFeature.CROP_SUGGESTIONS | - visionsdk.ImageAnalysisFeature.CAPTION | - visionsdk.ImageAnalysisFeature.OBJECTS | - visionsdk.ImageAnalysisFeature.PEOPLE | - visionsdk.ImageAnalysisFeature.TEXT | - visionsdk.ImageAnalysisFeature.TAGS -) -``` -#### [C++](#tab/cpp) +In your script, create a new [VisionSource](/python/api/azure-ai-vision/azure.ai.vision.visionsource) object from the URL of the image you want to analyze. -Define an **ImageAnalysisOptions** object, which specifies visual features you'd like to extract in your analysis. +[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=vision_source)] -```cpp -auto analysisOptions = ImageAnalysisOptions::Create(); +> [!TIP] +> You can also analyze a local image by passing in the full-path image file name to the **VisionSource** constructor instead of the image URL. -analysisOptions->SetFeatures( - { - ImageAnalysisFeature::CropSuggestions, - ImageAnalysisFeature::Caption, - ImageAnalysisFeature::Objects, - ImageAnalysisFeature::People, - ImageAnalysisFeature::Text, - ImageAnalysisFeature::Tags - }); -``` +#### [C++](#tab/cpp) -#### [REST](#tab/rest) +Create a new **VisionSource** object from the URL of the image you want to analyze, using the static constructor [VisionSource::FromUrl](/cpp/cognitive-services/vision/input-visionsource#fromurl). -You can specify which features you want to use by setting the URL query parameters of the [Analysis 4.0 API](https://aka.ms/vision-4-0-ref). A parameter can have multiple values, separated by commas. Each feature you specify will require more computation time, so only specify what you need. +[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=vision_source)] -|URL parameter | Value | Description| -|||--| -|`features`|`Read` | reads the visible text in the image and outputs it as structured JSON data.| -|`features`|`Caption` | describes the image content with a complete sentence in supported languages.| -|`features`|`DenseCaption` | generates detailed captions for up to 10 prominent image regions. 
| -|`features`|`SmartCrops` | finds the rectangle coordinates that would crop the image to a desired aspect ratio while preserving the area of interest.| -|`features`|`Objects` | detects various objects within an image, including the approximate location. The Objects argument is only available in English.| -|`features`|`Tags` | tags the image with a detailed list of words related to the image content.| +> [!TIP] +> You can also analyze a local image by passing in the full-path image file name. See [VisionSource::FromFile](/cpp/cognitive-services/vision/input-visionsource#fromfile). -A populated URL might look like this: +#### [REST API](#tab/rest) -`https://{endpoint}/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=tags,read,caption,denseCaption,smartCrops,objects,people` +When analyzing a remote image, you specify the image's URL by formatting the request body like this: `{"url":"https://learn.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg"}`. The **Content-Type** should be `application/json`. +To analyze a local image, you'd put the binary image data in the HTTP request body. The **Content-Type** should be `application/octet-stream` or `multipart/form-data`. -### Use a custom model +## Select analysis options (using standard model) -You can also do image analysis with a custom trained model. To create and train a model, see [Create a custom Image Analysis model](./model-customization.md). Once your model is trained, all you need is the model's name value. +### Select visual features -#### [C#](#tab/csharp) +The Analysis 4.0 API gives you access to all of the service's image analysis features. Choose which operations to do based on your own use case. See the [overview](../overview.md) for a description of each feature. The example in this section adds all of the available visual features, but for practical usage you likely need fewer. -To use a custom model, create the ImageAnalysisOptions with no features, and set the name of your model. +Visual features 'Captions' and 'DenseCaptions' are only supported in the following Azure regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US. -```csharp -var analysisOptions = new ImageAnalysisOptions() -{ - ModelName = "MyCustomModelName" -}; -``` +> [!NOTE] +> The REST API uses the terms **Smart Crops** and **Smart Crops Aspect Ratios**. The SDK uses the terms **Crop Suggestions** and **Cropping Aspect Ratios**. They both refer to the same service operation. Similarly, the REST API users the term **Read** for detecting text in the image, whereas the SDK uses the term **Text** for the same operation. -#### [Python](#tab/python) +#### [C#](#tab/csharp) -To use a custom model, create an **ImageAnalysisOptions** object with no features set, and set the name of your model. +Create a new [ImageAnalysisOptions](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions) object and specify the visual features you'd like to extract, by setting the [Features](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.features#azure-ai-vision-imageanalysis-imageanalysisoptions-features) property. [ImageAnalysisFeature](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisfeature) enum defines the supported values. 
+[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=visual_features)] -```python -analysis_options = sdk.ImageAnalysisOptions() +#### [Python](#tab/python) -analysis_options.model_name = "MyCustomModelName" -``` +Create a new [ImageAnalysisOptions](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions) object and specify the visual features you'd like to extract, by setting the [features](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-features) property. [ImageAnalysisFeature](/python/api/azure-ai-vision/azure.ai.vision.enums.imageanalysisfeature) enum defines the supported values. ++[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=visual_features)] #### [C++](#tab/cpp) -To use a custom model, create an **ImageAnalysisOptions** object with no features set, and set the name of your model. +Create a new [ImageAnalysisOptions](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions) object. Then specify an `std::vector` of visual features you'd like to extract, by calling the [SetFeatures](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setfeatures) method. [ImageAnalysisFeature](/cpp/cognitive-services/vision/azure-ai-vision-imageanalysis-namespace#enum-imageanalysisfeature) enum defines the supported values. -```cpp -auto analysisOptions = ImageAnalysisOptions::Create(); +[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=visual_features)] -analysisOptions->SetModelName("MyCustomModelName"); -``` +#### [REST API](#tab/rest) +You can specify which features you want to use by setting the URL query parameters of the [Analysis 4.0 API](https://aka.ms/vision-4-0-ref). A parameter can have multiple values, separated by commas. -#### [REST](#tab/rest) +|URL parameter | Value | Description| +|||--| +|`features`|`read` | Reads the visible text in the image and outputs it as structured JSON data.| +|`features`|`caption` | Describes the image content with a complete sentence in supported languages.| +|`features`|`denseCaption` | Generates detailed captions for up to 10 prominent image regions. | +|`features`|`smartCrops` | Finds the rectangle coordinates that would crop the image to a desired aspect ratio while preserving the area of interest.| +|`features`|`objects` | Detects various objects within an image, including the approximate location. The Objects argument is only available in English.| +|`features`|`tags` | Tags the image with a detailed list of words related to the image content.| +|`features`|`people` | Detects people appearing in images, including the approximate locations. | -To use a custom model, do not use the features query parameter. Set the model-name parameter to the name of your model. +A populated URL might look like this: -`https://{endpoint}/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&model-name=MyCustomModelName` +`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=tags,read,caption,denseCaption,smartCrops,objects,people` ### Specify languages -You can also specify the language of the returned data. This is optional, and the default language is English. See [Language support](https://aka.ms/cv-languages) for a list of supported language codes and which visual features are supported for each language. -+You can specify the language of the returned data. 
The language is optional, with the default being English. See [Language support](https://aka.ms/cv-languages) for a list of supported language codes and which visual features are supported for each language. #### [C#](#tab/csharp) -Use the *language* property of your **ImageAnalysisOptions** object to specify a language. --```csharp -var analysisOptions = new ImageAnalysisOptions() -{ -- // Optional. Default is "en" for English. See https://aka.ms/cv-languages for a list of supported - // language codes and which visual features are supported for each language. - Language = "en", -}; -``` +Use the [Language](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.language) property of your **ImageAnalysisOptions** object to specify a language. +[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=language)] #### [Python](#tab/python) -Use the *language* property of your **ImageAnalysisOptions** object to specify a language. +Use the [language](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-language) property of your **ImageAnalysisOptions** object to specify a language. -```python -# Optional. Default is "en" for English. See https://aka.ms/cv-languages for a list of supported -# language codes and which visual features are supported for each language. -image_analysis_options.language = 'en' -``` +[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=language)] #### [C++](#tab/cpp) -Use the *language* property of your **ImageAnalysisOptions** object to specify a language. +Call the [SetLanguage](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setlanguage) method on your **ImageAnalysisOptions** object to specify a language. -```cpp -// Optional. Default is "en" for English. See https://aka.ms/cv-languages for a list of supported -// language codes and which visual features are supported for each language. -analysisOptions->SetLanguage("en"); -``` +[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=language)] -#### [REST](#tab/rest) +#### [REST API](#tab/rest) The following URL query parameter specifies the language. The default value is `en`. The following URL query parameter specifies the language. The default value is ` A populated URL might look like this: -`https://{endpoint}/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=tags,read,caption,denseCaption,smartCrops,objects,people&language=en` -+`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=caption&language=en` +### Select gender neutral captions -## Get results from the service --This section shows you how to parse the results of the API call. It includes the API call itself. -+If you're extracting captions or dense captions, you can ask for gender neutral captions. Gender neutral captions is optional, with the default being gendered captions. For example, in English, when you select gender neutral captions, terms like **woman** or **man** are replaced with **person**, and **boy** or **girl** are replaced with **child**. #### [C#](#tab/csharp) -### With visual features +Set the [GenderNeutralCaption](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.genderneutralcaption) property of your **ImageAnalysisOptions** object to true to enable gender neutral captions. 
-The following code calls the Image Analysis API and prints the results for all standard visual features. +[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=gender_neutral_caption)] -```csharp -using var analyzer = new ImageAnalyzer(serviceOptions, imageSource, analysisOptions); +#### [Python](#tab/python) -var result = analyzer.Analyze(); +Set the [gender_neutral_caption](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-gender-neutral-caption) property of your **ImageAnalysisOptions** object to true to enable gender neutral captions. -if (result.Reason == ImageAnalysisResultReason.Analyzed) -{ - Console.WriteLine($" Image height = {result.ImageHeight}"); - Console.WriteLine($" Image width = {result.ImageWidth}"); - Console.WriteLine($" Model version = {result.ModelVersion}"); +[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=gender_neutral_caption)] - if (result.Caption != null) - { - Console.WriteLine(" Caption:"); - Console.WriteLine($" \"{result.Caption.Content}\", Confidence {result.Caption.Confidence:0.0000}"); - } +#### [C++](#tab/cpp) - if (result.Objects != null) - { - Console.WriteLine(" Objects:"); - foreach (var detectedObject in result.Objects) - { - Console.WriteLine($" \"{detectedObject.Name}\", Bounding box {detectedObject.BoundingBox}, Confidence {detectedObject.Confidence:0.0000}"); - } - } +Call the [SetGenderNeutralCaption](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setgenderneutralcaption) method of your **ImageAnalysisOptions** object with **true** as the argument, to enable gender neutral captions. - if (result.Tags != null) - { - Console.WriteLine($" Tags:"); - foreach (var tag in result.Tags) - { - Console.WriteLine($" \"{tag.Name}\", Confidence {tag.Confidence:0.0000}"); - } - } +[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=gender_neutral_caption)] - if (result.People != null) - { - Console.WriteLine($" People:"); - foreach (var person in result.People) - { - Console.WriteLine($" Bounding box {person.BoundingBox}, Confidence {person.Confidence:0.0000}"); - } - } +#### [REST API](#tab/rest) - if (result.CropSuggestions != null) - { - Console.WriteLine($" Crop Suggestions:"); - foreach (var cropSuggestion in result.CropSuggestions) - { - Console.WriteLine($" Aspect ratio {cropSuggestion.AspectRatio}: " - + $"Crop suggestion {cropSuggestion.BoundingBox}"); - }; - } +Add the optional query string `gender-neutral-caption` with values `true` or `false` (the default). 
- if (result.Text != null) - { - Console.WriteLine($" Text:"); - foreach (var line in result.Text.Lines) - { - string pointsToString = "{" + string.Join(',', line.BoundingPolygon.Select(pointsToString => pointsToString.ToString())) + "}"; - Console.WriteLine($" Line: '{line.Content}', Bounding polygon {pointsToString}"); -- foreach (var word in line.Words) - { - pointsToString = "{" + string.Join(',', word.BoundingPolygon.Select(pointsToString => pointsToString.ToString())) + "}"; - Console.WriteLine($" Word: '{word.Content}', Bounding polygon {pointsToString}, Confidence {word.Confidence:0.0000}"); - } - } - } +A populated URL might look like this: - var resultDetails = ImageAnalysisResultDetails.FromResult(result); - Console.WriteLine($" Result details:"); - Console.WriteLine($" Image ID = {resultDetails.ImageId}"); - Console.WriteLine($" Result ID = {resultDetails.ResultId}"); - Console.WriteLine($" Connection URL = {resultDetails.ConnectionUrl}"); - Console.WriteLine($" JSON result = {resultDetails.JsonResult}"); -} -else if (result.Reason == ImageAnalysisResultReason.Error) -{ - var errorDetails = ImageAnalysisErrorDetails.FromResult(result); - Console.WriteLine(" Analysis failed."); - Console.WriteLine($" Error reason : {errorDetails.Reason}"); - Console.WriteLine($" Error code : {errorDetails.ErrorCode}"); - Console.WriteLine($" Error message: {errorDetails.Message}"); -} -``` +`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=caption&gender-neutral-caption=true` -### With custom model + -The following code calls the Image Analysis API and prints the results for custom model analysis. +### Select smart cropping aspect ratios -```csharp -using var analyzer = new ImageAnalyzer(serviceOptions, imageSource, analysisOptions); +An aspect ratio is calculated by dividing the target crop width by the height. Supported values are from 0.75 to 1.8 (inclusive). Setting this property is only relevant when the **smartCrop** option (REST API) or **CropSuggestions** (SDK) was selected as part the visual feature list. If you select smartCrop/CropSuggestions but don't specify aspect ratios, the service returns one crop suggestion with an aspect ratio it sees fit. In this case, the aspect ratio is between 0.5 and 2.0 (inclusive). -var result = analyzer.Analyze(); +#### [C#](#tab/csharp) -if (result.Reason == ImageAnalysisResultReason.Analyzed) -{ - if (result.CustomObjects != null) - { - Console.WriteLine(" Custom Objects:"); - foreach (var detectedObject in result.CustomObjects) - { - Console.WriteLine($" \"{detectedObject.Name}\", Bounding box {detectedObject.BoundingBox}, Confidence {detectedObject.Confidence:0.0000}"); - } - } +Set the [CroppingAspectRatios](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.croppingaspectratios) property of your **ImageAnalysisOptions** to a list of aspect ratios. 
For example, to set aspect ratios of 0.9 and 1.33: - if (result.CustomTags != null) - { - Console.WriteLine($" Custom Tags:"); - foreach (var tag in result.CustomTags) - { - Console.WriteLine($" \"{tag.Name}\", Confidence {tag.Confidence:0.0000}"); - } - } -} -else if (result.Reason == ImageAnalysisResultReason.Error) -{ - var errorDetails = ImageAnalysisErrorDetails.FromResult(result); - Console.WriteLine(" Analysis failed."); - Console.WriteLine($" Error reason : {errorDetails.Reason}"); - Console.WriteLine($" Error code : {errorDetails.ErrorCode}"); - Console.WriteLine($" Error message: {errorDetails.Message}"); -} -``` +[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=cropping_aspect_rations)] #### [Python](#tab/python) -### With visual features --The following code calls the Image Analysis API and prints the results for all standard visual features. --```python -image_analyzer = sdk.ImageAnalyzer(service_options, vision_source, analysis_options) --result = image_analyzer.analyze() --if result.reason == sdk.ImageAnalysisResultReason.ANALYZED: -- print(" Image height: {}".format(result.image_height)) - print(" Image width: {}".format(result.image_width)) - print(" Model version: {}".format(result.model_version)) -- if result.caption is not None: - print(" Caption:") - print(" '{}', Confidence {:.4f}".format(result.caption.content, result.caption.confidence)) -- if result.objects is not None: - print(" Objects:") - for object in result.objects: - print(" '{}', {} Confidence: {:.4f}".format(object.name, object.bounding_box, object.confidence)) -- if result.tags is not None: - print(" Tags:") - for tag in result.tags: - print(" '{}', Confidence {:.4f}".format(tag.name, tag.confidence)) -- if result.people is not None: - print(" People:") - for person in result.people: - print(" {}, Confidence {:.4f}".format(person.bounding_box, person.confidence)) -- if result.crop_suggestions is not None: - print(" Crop Suggestions:") - for crop_suggestion in result.crop_suggestions: - print(" Aspect ratio {}: Crop suggestion {}" - .format(crop_suggestion.aspect_ratio, crop_suggestion.bounding_box)) -- if result.text is not None: - print(" Text:") - for line in result.text.lines: - points_string = "{" + ", ".join([str(int(point)) for point in line.bounding_polygon]) + "}" - print(" Line: '{}', Bounding polygon {}".format(line.content, points_string)) - for word in line.words: - points_string = "{" + ", ".join([str(int(point)) for point in word.bounding_polygon]) + "}" - print(" Word: '{}', Bounding polygon {}, Confidence {:.4f}" - .format(word.content, points_string, word.confidence)) -- result_details = sdk.ImageAnalysisResultDetails.from_result(result) - print(" Result details:") - print(" Image ID: {}".format(result_details.image_id)) - print(" Result ID: {}".format(result_details.result_id)) - print(" Connection URL: {}".format(result_details.connection_url)) - print(" JSON result: {}".format(result_details.json_result)) --elif result.reason == sdk.ImageAnalysisResultReason.ERROR: -- error_details = sdk.ImageAnalysisErrorDetails.from_result(result) - print(" Analysis failed.") - print(" Error reason: {}".format(error_details.reason)) - print(" Error code: {}".format(error_details.error_code)) - print(" Error message: {}".format(error_details.message)) -``` +Set the [cropping_aspect_ratios](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-cropping-aspect-ratios) property of your 
**ImageAnalysisOptions** to a list of aspect ratios. For example, to set aspect ration of 0.9 and 1.33: -### With custom model +[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=cropping_aspect_rations)] -The following code calls the Image Analysis API and prints the results for custom model analysis. +#### [C++](#tab/cpp) -```python -image_analyzer = sdk.ImageAnalyzer(service_options, vision_source, analysis_options) +Call the [SetCroppingAspectRatios](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setcroppingaspectratios) method of your **ImageAnalysisOptions** with an `std::vector` of aspect ratios. For example, to set aspect ratios of 0.9 and 1.33: -result = image_analyzer.analyze() +[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=cropping_aspect_rations)] -if result.reason == sdk.ImageAnalysisResultReason.ANALYZED: +#### [REST API](#tab/rest) - if result.custom_objects is not None: - print(" Custom Objects:") - for object in result.custom_objects: - print(" '{}', {} Confidence: {:.4f}".format(object.name, object.bounding_box, object.confidence)) +Add the optional query string `smartcrops-aspect-ratios`, with one or more aspect ratios separated by a comma. - if result.custom_tags is not None: - print(" Custom Tags:") - for tag in result.custom_tags: - print(" '{}', Confidence {:.4f}".format(tag.name, tag.confidence)) +A populated URL might look like this: -elif result.reason == sdk.ImageAnalysisResultReason.ERROR: +`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=smartCrops&smartcrops-aspect-ratios=0.8,1.2` - error_details = sdk.ImageAnalysisErrorDetails.from_result(result) - print(" Analysis failed.") - print(" Error reason: {}".format(error_details.reason)) - print(" Error code: {}".format(error_details.error_code)) - print(" Error message: {}".format(error_details.message)) -``` -#### [C++](#tab/cpp) + -### With visual features +## Get results from the service (standard model) -The following code calls the Image Analysis API and prints the results for all standard visual features. +This section shows you how to make an analysis call to the service using the standard model, and get the results. -```cpp -auto analyzer = ImageAnalyzer::Create(serviceOptions, imageSource, analysisOptions); +#### [C#](#tab/csharp) -auto result = analyzer->Analyze(); +1. Using the **VisionServiceOptions**, **VisionSource** and **ImageAnalysisOptions** objects, construct a new [ImageAnalyzer](/dotnet/api/azure.ai.vision.imageanalysis.imageanalyzer) object. **ImageAnalyzer** implements **IDisposable**, therefore create the object with a **using** statement, or explicitly call **Dispose** method after analysis completes. -if (result->GetReason() == ImageAnalysisResultReason::Analyzed) -{ - std::cout << " Image height = " << result->GetImageHeight().Value() << std::endl; - std::cout << " Image width = " << result->GetImageWidth().Value() << std::endl; - std::cout << " Model version = " << result->GetModelVersion().Value() << std::endl; +1. Call the **Analyze** method on the **ImageAnalyzer** object, as shown here. This is a blocking (synchronous) call until the service returns the results or an error occurred. Alternatively, you can call the nonblocking **AnalyzeAsync** method. 
- const auto caption = result->GetCaption(); - if (caption.HasValue()) - { - std::cout << " Caption:" << std::endl; - std::cout << " \"" << caption.Value().Content << "\", Confidence " << caption.Value().Confidence << std::endl; - } +1. Check the **Reason** property on the [ImageAnalysisResult](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisresult) object, to determine if analysis succeeded or failed. - const auto objects = result->GetObjects(); - if (objects.HasValue()) - { - std::cout << " Objects:" << std::endl; - for (const auto object : objects.Value()) - { - std::cout << " \"" << object.Name << "\", "; - std::cout << "Bounding box " << object.BoundingBox.ToString(); - std::cout << ", Confidence " << object.Confidence << std::endl; - } - } +1. If succeeded, proceed to access the relevant result properties based on your selected visual features, as shown here. Additional information (not commonly needed) can be obtained by constructing the [ImageAnalysisResultDetails](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisresultdetails) object. - const auto tags = result->GetTags(); - if (tags.HasValue()) - { - std::cout << " Tags:" << std::endl; - for (const auto tag : tags.Value()) - { - std::cout << " \"" << tag.Name << "\""; - std::cout << ", Confidence " << tag.Confidence << std::endl; - } - } +1. If failed, you can construct the [ImageAnalysisErrorDetails](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisresultdetails) object to get information on the failure. - const auto people = result->GetPeople(); - if (people.HasValue()) - { - std::cout << " People:" << std::endl; - for (const auto person : people.Value()) - { - std::cout << " Bounding box " << person.BoundingBox.ToString(); - std::cout << ", Confidence " << person.Confidence << std::endl; - } - } +[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/1/Program.cs?name=analyze)] - const auto cropSuggestions = result->GetCropSuggestions(); - if (cropSuggestions.HasValue()) - { - std::cout << " Crop Suggestions:" << std::endl; - for (const auto cropSuggestion : cropSuggestions.Value()) - { - std::cout << " Aspect ratio " << cropSuggestion.AspectRatio; - std::cout << ": Crop suggestion " << cropSuggestion.BoundingBox.ToString() << std::endl; - } - } +#### [Python](#tab/python) - const auto detectedText = result->GetText(); - if (detectedText.HasValue()) - { - std::cout << " Text:\n"; - for (const auto line : detectedText.Value().Lines) - { - std::cout << " Line: \"" << line.Content << "\""; - std::cout << ", Bounding polygon " << PolygonToString(line.BoundingPolygon) << std::endl; -- for (const auto word : line.Words) - { - std::cout << " Word: \"" << word.Content << "\""; - std::cout << ", Bounding polygon " << PolygonToString(word.BoundingPolygon); - std::cout << ", Confidence " << word.Confidence << std::endl; - } - } - } +1. Using the **VisionServiceOptions**, **VisionSource** and **ImageAnalysisOptions** objects, construct a new [ImageAnalyzer](/python/api/azure-ai-vision/azure.ai.vision.imageanalyzer) object. 
- auto resultDetails = ImageAnalysisResultDetails::FromResult(result); - std::cout << " Result details:\n";; - std::cout << " Image ID = " << resultDetails->GetImageId() << std::endl; - std::cout << " Result ID = " << resultDetails->GetResultId() << std::endl; - std::cout << " Connection URL = " << resultDetails->GetConnectionUrl() << std::endl; - std::cout << " JSON result = " << resultDetails->GetJsonResult() << std::endl; -} -else if (result->GetReason() == ImageAnalysisResultReason::Error) -{ - auto errorDetails = ImageAnalysisErrorDetails::FromResult(result); - std::cout << " Analysis failed." << std::endl; - std::cout << " Error reason = " << (int)errorDetails->GetReason() << std::endl; - std::cout << " Error code = " << errorDetails->GetErrorCode() << std::endl; - std::cout << " Error message = " << errorDetails->GetMessage() << std::endl; -} -``` +1. Call the **analyze** method on the **ImageAnalyzer** object, as shown here. This is a blocking (synchronous) call until the service returns the results or an error occurred. Alternatively, you can call the nonblocking **analyze_async** method. -Use the following helper method to display rectangle coordinates: +1. Check the **reason** property on the [ImageAnalysisResult](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisresult) object, to determine if analysis succeeded or failed. -```cpp -std::string PolygonToString(std::vector<int32_t> boundingPolygon) -{ - std::string out = "{"; - for (int i = 0; i < boundingPolygon.size(); i += 2) - { - out += ((i == 0) ? "{" : ",{") + - std::to_string(boundingPolygon[i]) + "," + - std::to_string(boundingPolygon[i + 1]) + "}"; - } - out += "}"; - return out; -} -``` +1. If succeeded, proceed to access the relevant result properties based on your selected visual features, as shown here. Additional information (not commonly needed) can be obtained by constructing the [ImageAnalysisResultDetails](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisresultdetails) object. -### With custom model +1. If failed, you can construct the [ImageAnalysisErrorDetails](/python/api/azure-ai-vision/azure.ai.vision.imageanalysiserrordetails) object to get information on the failure. -The following code calls the Image Analysis API and prints the results for custom model analysis. +[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/1/main.py?name=analyze)] -```cpp -auto analyzer = ImageAnalyzer::Create(serviceOptions, imageSource, analysisOptions); +#### [C++](#tab/cpp) -auto result = analyzer->Analyze(); +1. Using the **VisionServiceOptions**, **VisionSource** and **ImageAnalysisOptions** objects, construct a new [ImageAnalyzer](/cpp/cognitive-services/vision/imageanalysis-imageanalyzer) object. -if (result->GetReason() == ImageAnalysisResultReason::Analyzed) -{ - const auto objects = result->GetCustomObjects(); - if (objects.HasValue()) - { - std::cout << " Custom objects:" << std::endl; - for (const auto object : objects.Value()) - { - std::cout << " \"" << object.Name << "\", "; - std::cout << "Bounding box " << object.BoundingBox.ToString(); - std::cout << ", Confidence " << object.Confidence << std::endl; - } - } +1. Call the **Analyze** method on the **ImageAnalyzer** object, as shown here. This is a blocking (synchronous) call until the service returns the results or an error occurred. Alternatively, you can call the nonblocking **AnalyzeAsync** method. 
- const auto tags = result->GetCustomTags(); - if (tags.HasValue()) - { - std::cout << " Custom tags:" << std::endl; - for (const auto tag : tags.Value()) - { - std::cout << " \"" << tag.Name << "\""; - std::cout << ", Confidence " << tag.Confidence << std::endl; - } - } -} -else if (result->GetReason() == ImageAnalysisResultReason::Error) -{ - auto errorDetails = ImageAnalysisErrorDetails::FromResult(result); - std::cout << " Analysis failed." << std::endl; - std::cout << " Error reason = " << (int)errorDetails->GetReason() << std::endl; - std::cout << " Error code = " << errorDetails->GetErrorCode() << std::endl; - std::cout << " Error message = " << errorDetails->GetMessage() << std::endl; -} -``` +1. Call **GetReason** method on the [ImageAnalysisResult](/cpp/cognitive-services/vision/imageanalysis-imageanalysisresult) object, to determine if analysis succeeded or failed. ++1. If succeeded, proceed to call the relevant **Get** methods on the result based on your selected visual features, as shown here. Additional information (not commonly needed) can be obtained by constructing the [ImageAnalysisResultDetails](/cpp/cognitive-services/vision/imageanalysis-imageanalysisresultdetails) object. -#### [REST](#tab/rest) +1. If failed, you can construct the [ImageAnalysisErrorDetails](/cpp/cognitive-services/vision/imageanalysis-imageanalysiserrordetails) object to get information on the failure. ++[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=analyze)] ++The code uses the following helper method to display the coordinates of a bounding polygon: ++[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/1/1.cpp?name=polygon_to_string)] ++#### [REST API](#tab/rest) The service returns a `200` HTTP response, and the body contains the returned data in the form of a JSON string. The following text is an example of a JSON response. The service returns a `200` HTTP response, and the body contains the returned da } ``` -### Error codes +++## Select analysis options (using custom model) ++You can also do image analysis with a custom trained model. To create and train a model, see [Create a custom Image Analysis model](./model-customization.md). Once your model is trained, all you need is the model's name. ++### [C#](#tab/csharp) ++To use a custom model, create the [ImageAnalysisOptions](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions) object and set the [ModelName](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.modelname#azure-ai-vision-imageanalysis-imageanalysisoptions-modelname) property. You don't need to set any other properties on **ImageAnalysisOptions**. There's no need to set the [Features](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.features#azure-ai-vision-imageanalysis-imageanalysisoptions-features) property, as you do with the standard model, since your custom model already implies the visual features the service extracts. ++[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/3/Program.cs?name=model_name)] ++### [Python](#tab/python) ++To use a custom model, create the [ImageAnalysisOptions](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions) object and set the [model_name](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-model-name) property. You don't need to set any other properties on **ImageAnalysisOptions**. 
There's no need to set the [features](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-features) property, as you do with the standard model, since your custom model already implies the visual features the service extracts. ++[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/3/main.py?name=model_name)] ++### [C++](#tab/cpp) ++To use a custom model, create the [ImageAnalysisOptions](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions) object and call the [SetModelName](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setmodelname) method. You don't need to call any other methods on **ImageAnalysisOptions**. There's no need to call [SetFeatures](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setfeatures) as you do with standard model, since your custom model already implies the visual features the service extracts. -See the following list of possible errors and their causes: +[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/3/3.cpp?name=model_name)] -* 400 - * `InvalidImageUrl` - Image URL is badly formatted or not accessible. - * `InvalidImageFormat` - Input data is not a valid image. - * `InvalidImageSize` - Input image is too large. - * `NotSupportedVisualFeature` - Specified feature type isn't valid. - * `NotSupportedImage` - Unsupported image, for example child pornography. - * `InvalidDetails` - Unsupported `detail` parameter value. - * `NotSupportedLanguage` - The requested operation isn't supported in the language specified. - * `BadArgument` - More details are provided in the error message. -* 415 - Unsupported media type error. The Content-Type isn't in the allowed types: - * For an image URL, Content-Type should be `application/json` - * For a binary image data, Content-Type should be `application/octet-stream` or `multipart/form-data` -* 500 - * `FailedToProcess` - * `Timeout` - Image processing timed out. - * `InternalServerError` +### [REST API](#tab/rest) +To use a custom model, don't use the features query parameter. Instead, set the `model-name` parameter to the name of your model as shown here. Replace `MyCustomModelName` with your custom model name. ++`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&model-name=MyCustomModelName` -> [!TIP] -> While working with Computer Vision, you might encounter transient failures caused by [rate limits](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) enforced by the service, or other transient problems like network outages. For information about handling these types of failures, see [Retry pattern](/azure/architecture/patterns/retry) in the Cloud Design Patterns guide, and the related [Circuit Breaker pattern](/azure/architecture/patterns/circuit-breaker). +## Get results from the service (using custom model) ++This section shows you how to make an analysis call to the service, when using a custom model. ++#### [C#](#tab/csharp) ++The code is similar to the standard model case. The only difference is that results from the custom model are available on the **CustomTags** and/or **CustomObjects** properties of the [ImageAnalysisResult](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisresult) object. ++[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/3/Program.cs?name=analyze)] ++#### [Python](#tab/python) ++The code is similar to the standard model case. 
The only difference is that results from the custom model are available on the **custom_tags** and/or **custom_objects** properties of the [ImageAnalysisResult](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisresult) object. ++[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/3/main.py?name=analyze)] ++#### [C++](#tab/cpp) ++The code is similar to the standard model case. The only difference is that results from the custom model are available by calling the **GetCustomTags** and/or **GetCustomObjects** methods of the [ImageAnalysisResult](/cpp/cognitive-services/vision/imageanalysis-imageanalysisresult) object. ++[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/3/3.cpp?name=analyze)] ++#### [REST API](#tab/rest) ++++## Error codes ## Next steps * Explore the [concept articles](../concept-describe-images-40.md) to learn more about each feature.-* Explore the [code samples on GitHub](https://github.com/Azure-Samples/azure-ai-vision-sdk/blob/main/samples/). -* See the [API reference](https://aka.ms/vision-4-0-ref) to learn more about the API functionality. +* Explore the [SDK code samples on GitHub](https://github.com/Azure-Samples/azure-ai-vision-sdk). +* See the [REST API reference](https://aka.ms/vision-4-0-ref) to learn more about the API functionality. |
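The Image Analysis 4.0 walkthrough above replaces the inline samples with `[!code-...]` includes. As a quick orientation, here is a minimal Python sketch that stitches the same steps together (service options, image source, analysis options, analyze, check the result), assembled from the inline samples the diff removes. The exact `VisionServiceOptions` and `VisionSource` constructor arguments and the environment variable names are assumptions; check the linked reference pages before relying on them.

```python
# Minimal sketch of the Image Analysis 4.0 flow described above (azure-ai-vision preview SDK).
# The VisionServiceOptions / VisionSource constructor arguments and the VISION_ENDPOINT /
# VISION_KEY environment variable names are assumptions, not confirmed by this article.
import os
import azure.ai.vision as sdk

# Authenticate with your Computer Vision endpoint and key (assumed constructor).
service_options = sdk.VisionServiceOptions(
    os.environ["VISION_ENDPOINT"], os.environ["VISION_KEY"])

# Select the image to analyze (a remote image referenced by URL).
vision_source = sdk.VisionSource(
    url="https://learn.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg")

# Select analysis options for the standard model.
analysis_options = sdk.ImageAnalysisOptions()
analysis_options.features = (
    sdk.ImageAnalysisFeature.CAPTION |
    sdk.ImageAnalysisFeature.TAGS |
    sdk.ImageAnalysisFeature.CROP_SUGGESTIONS
)
analysis_options.language = "en"                     # optional; default is English
analysis_options.gender_neutral_caption = True       # optional; default is gendered captions
analysis_options.cropping_aspect_ratios = [0.9, 1.33]

# Analyze and check the result reason before reading properties.
image_analyzer = sdk.ImageAnalyzer(service_options, vision_source, analysis_options)
result = image_analyzer.analyze()

if result.reason == sdk.ImageAnalysisResultReason.ANALYZED:
    if result.caption is not None:
        print("Caption: '{}' (confidence {:.4f})".format(
            result.caption.content, result.caption.confidence))
    if result.tags is not None:
        for tag in result.tags:
            print("Tag: '{}' (confidence {:.4f})".format(tag.name, tag.confidence))
else:
    error_details = sdk.ImageAnalysisErrorDetails.from_result(result)
    print("Analysis failed:", error_details.reason,
          error_details.error_code, error_details.message)
```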
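For the custom-model path described in the same article, the only changes are setting `model_name` on the options (no features) and reading `custom_tags` / `custom_objects` from the result. A hedged sketch under the same assumptions:

```python
# Sketch of the custom-model variant: set only model_name, then read the custom results.
# Constructor arguments and environment variable names are assumptions as above.
import os
import azure.ai.vision as sdk

service_options = sdk.VisionServiceOptions(
    os.environ["VISION_ENDPOINT"], os.environ["VISION_KEY"])
vision_source = sdk.VisionSource(
    url="https://learn.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg")

analysis_options = sdk.ImageAnalysisOptions()
analysis_options.model_name = "MyCustomModelName"   # replace with your trained model's name

image_analyzer = sdk.ImageAnalyzer(service_options, vision_source, analysis_options)
result = image_analyzer.analyze()

if result.reason == sdk.ImageAnalysisResultReason.ANALYZED:
    if result.custom_tags is not None:
        for tag in result.custom_tags:
            print("Custom tag: '{}' (confidence {:.4f})".format(tag.name, tag.confidence))
    if result.custom_objects is not None:
        for obj in result.custom_objects:
            print("Custom object: '{}' at {} (confidence {:.4f})".format(
                obj.name, obj.bounding_box, obj.confidence))
else:
    error_details = sdk.ImageAnalysisErrorDetails.from_result(result)
    print("Analysis failed:", error_details.message)
```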
cognitive-services | Call Analyze Image | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/call-analyze-image.md | You can specify which features you want to use by setting the URL query paramete A populated URL might look like this: -`https://{endpoint}/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=Tags` +`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=Tags` #### [C#](#tab/csharp) The following URL query parameter specifies the language. The default value is ` A populated URL might look like this: -`https://{endpoint}/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=Tags&language=en` +`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=Tags&language=en` #### [C#](#tab/csharp) This section shows you how to parse the results of the API call. It includes the > [!NOTE] > **Scoped API calls** >-> Some of the features in Image Analysis can be called directly as well as through the Analyze API call. For example, you can do a scoped analysis of only image tags by making a request to `https://{endpoint}/vision/v3.2/tag` (or to the corresponding method in the SDK). See the [reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) for other features that can be called separately. +> Some of the features in Image Analysis can be called directly as well as through the Analyze API call. For example, you can do a scoped analysis of only image tags by making a request to `https://<endpoint>/vision/v3.2/tag` (or to the corresponding method in the SDK). See the [reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) for other features that can be called separately. #### [REST](#tab/rest) |
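The REST rows above describe the call entirely in terms of URL query parameters, the `Ocp-Apim-Subscription-Key` header, and a JSON body containing the image URL. A minimal sketch of that request with Python's `requests`, using placeholder endpoint and key values:

```python
# Sketch of the REST analyze call described above. Endpoint, key, and feature list
# are placeholders; swap in your own resource values and the features you need.
import requests

endpoint = "https://<endpoint>"   # your Computer Vision resource endpoint
key = "<your-vision-key>"         # your Computer Vision resource key

url = f"{endpoint}/computervision/imageanalysis:analyze"
params = {
    "api-version": "2023-02-01-preview",
    "features": "tags,caption",
    "language": "en",
}
headers = {
    "Ocp-Apim-Subscription-Key": key,
    "Content-Type": "application/json",  # use application/octet-stream for binary image data
}
body = {"url": "https://learn.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg"}

response = requests.post(url, params=params, headers=headers, json=body)
response.raise_for_status()
print(response.json())  # JSON result with the selected features
```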
cognitive-services | Model Customization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/model-customization.md |
-This guide shows you how to create and train a custom image classification model. The few differences between this and object detection models are noted.
+This guide shows you how to create and train a custom image classification model. The few differences between training an image classification model and an object detection model are noted.
 ## Prerequisites The API call returns an **ImageAnalysisResult** JSON object, which contains all In this guide, you created and trained a custom image classification model using Image Analysis. Next, learn more about the Analyze Image 4.0 API, so you can call your custom model from an application using REST or library SDKs.
-* [Call the Analyze Image API](./call-analyze-image-40.md#use-a-custom-model)
+* [Call the Analyze Image API](./call-analyze-image-40.md#select-analysis-options-using-custom-model)
* See the [Model customization concepts](../concept-model-customization.md) guide for a broad overview of this feature and a list of frequently asked questions. |
cognitive-services | Image Analysis Client Library 40 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/quickstarts-sdk/image-analysis-client-library-40.md | keywords: computer vision, computer vision service # Quickstart: Image Analysis 4.0 -Get started with the Image Analysis 4.0 REST API or client libraries to set up a basic image analysis application. The Image Analysis service provides you with AI algorithms for processing images and returning information on their visual features. Follow these steps to install a package to your application and try out the sample code. +Get started with the Image Analysis 4.0 REST API or client library to set up a basic image analysis application. The Image Analysis service provides you with AI algorithms for processing images and returning information on their visual features. Follow these steps to install a package to your application and try out the sample code. ::: zone pivot="programming-language-csharp" |
cognitive-services | Data Collection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/data-collection.md | Select utterances for your training set based on the following criteria:
* **Input diversity**: Consider your data input path. If you are collecting data from one person, department or input device (microphone) you are likely missing diversity that will be important for your app to learn about all input paths.
* **Punctuation diversity**: Consider that people use varying levels of punctuation in text applications and make sure you have a diversity of how punctuation is used. If you're using data that comes from speech, it won't have any punctuation, so your data shouldn't either.
* **Data distribution**: Make sure the data spread across intents represents the same spread of data your client application receives. If your LUIS app will classify utterances that are requests to schedule a leave (50%), but it will also see utterances about inquiring about leave days left (20%), approving leaves (20%) and some out of scope and chit chat (10%) then your data set should have the same percentages of each type of utterance.
-* **Use all data forms**: If your LUIS app will take data in multiple forms, make sure to include those forms in your training utterances. For example, if your client application takes both speech and typed text input, you need to have speech-to-text generated utterances as well as typed utterances. You will see different variations in how people speak from how they type as well as different errors in speech recognition and typos. All of this variation should be represented in your training data.
+* **Use all data forms**: If your LUIS app will take data in multiple forms, make sure to include those forms in your training utterances. For example, if your client application takes both speech and typed text input, you need to have speech to text generated utterances as well as typed utterances. You will see different variations in how people speak from how they type as well as different errors in speech recognition and typos. All of this variation should be represented in your training data.
* **Positive and negative examples**: To teach a LUIS app, it must learn about what the intent is (positive) and what it is not (negative). In LUIS, utterances can only be positive for a single intent. When an utterance is added to an intent, LUIS automatically makes that same example utterance a negative example for all the other intents.
* **Data outside of application scope**: If your application will see utterances that fall outside of your defined intents, make sure to provide those. The examples that aren't assigned to a particular defined intent will be labeled with the **None** intent. It's important to have realistic examples for the **None** intent to properly predict utterances that are outside the scope of the defined intents. |
cognitive-services | Batch Synthesis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-synthesis.md | The response body will resemble the following JSON example: ## Next steps - [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md)-- [Text-to-speech quickstart](get-started-text-to-speech.md)+- [Text to speech quickstart](get-started-text-to-speech.md) - [Migrate to batch synthesis](migrate-to-batch-synthesis.md) |
cognitive-services | Batch Transcription Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-create.md | With batch transcriptions, you submit the [audio data](batch-transcription-audio ::: zone pivot="rest-api" -To create a transcription, use the [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation of the [Speech-to-text REST API](rest-speech-to-text.md#transcriptions). Construct the request body according to the following instructions: +To create a transcription, use the [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation of the [Speech to text REST API](rest-speech-to-text.md#transcriptions). Construct the request body according to the following instructions: - You must set either the `contentContainerUrl` or `contentUrls` property. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md). - Set the required `locale` property. This should match the expected locale of the audio data to transcribe. The locale can't be changed later. Here are some property options that you can use to configure a transcription whe |`contentContainerUrl`| You can submit individual audio files, or a whole storage container.<br/><br/>You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property won't be returned in the response.| |`contentUrls`| You can submit individual audio files, or a whole storage container.<br/><br/>You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property won't be returned in the response.| |`destinationContainerUrl`|The result can be stored in an Azure container. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. When the transcription job is deleted, the transcription result data is also deleted. For more information, see [Destination container URL](#destination-container-url).|-|`diarization`|Indicates that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains multiple voices. Specify the minimum and maximum number of people who might be speaking. You must also set the `diarizationEnabled` property to `true`. The [transcription file](batch-transcription-get.md#transcription-result-file) will contain a `speaker` entry for each transcribed phrase.<br/><br/>You need to use this property when you expect three or more speakers. For two speakers setting `diarizationEnabled` property to `true` is enough. See an example of the property usage in [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation description.<br/><br/>Diarization is the process of separating speakers in audio data. The batch pipeline can recognize and separate multiple speakers on mono channel recordings. 
The feature isn't available with stereo recordings.<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.<br/><br/>**Note**: This property is only available with Speech-to-text REST API version 3.1.| -|`diarizationEnabled`|Specifies that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains two voices. The default value is `false`.<br/><br/>For three or more voices you also need to use property `diarization` (only with Speech-to-text REST API version 3.1).<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.| +|`diarization`|Indicates that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains multiple voices. Specify the minimum and maximum number of people who might be speaking. You must also set the `diarizationEnabled` property to `true`. The [transcription file](batch-transcription-get.md#transcription-result-file) will contain a `speaker` entry for each transcribed phrase.<br/><br/>You need to use this property when you expect three or more speakers. For two speakers setting `diarizationEnabled` property to `true` is enough. See an example of the property usage in [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation description.<br/><br/>Diarization is the process of separating speakers in audio data. The batch pipeline can recognize and separate multiple speakers on mono channel recordings. The feature isn't available with stereo recordings.<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1.| +|`diarizationEnabled`|Specifies that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains two voices. The default value is `false`.<br/><br/>For three or more voices you also need to use property `diarization` (only with Speech to text REST API version 3.1).<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.| |`displayName`|The name of the batch transcription. Choose a name that you can refer to later. The display name doesn't have to be unique.<br/><br/>This property is required.| |`languageIdentification`|Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification).<br/><br/>If you set the `languageIdentification` property, then you must also set its enclosed `candidateLocales` property.| |`languageIdentification.candidateLocales`|The candidate locales for language identification such as `"properties": { "languageIdentification": { "candidateLocales": ["en-US", "de-DE", "es-ES"]}}`. A minimum of 2 and a maximum of 10 candidate locales, including the main locale for the transcription, is supported.| |
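The Transcriptions_Create properties listed above map directly onto a JSON request body. The following sketch shows one plausible request with Python's `requests`; the `speechtotext/v3.1/transcriptions` path and the `Ocp-Apim-Subscription-Key` header are assumptions based on the linked operation, and the storage URL is a placeholder.

```python
# Sketch of a Transcriptions_Create request using the properties described above.
# The v3.1 endpoint path and key header are assumptions; verify them against the
# Speech to text REST API reference for your region.
import requests

region = "<YourServiceRegion>"
key = "<YourSubscriptionKey>"

url = f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions"
body = {
    "displayName": "My batch transcription",   # required; doesn't have to be unique
    "locale": "en-US",                         # required; must match the audio
    "contentUrls": [
        "https://<storage-account>.blob.core.windows.net/<container>/audio1.wav?<SAS>"
    ],
    "properties": {
        "diarizationEnabled": True             # two-voice diarization on mono audio
        # other options include "languageIdentification": {"candidateLocales": [...]}
    },
}

response = requests.post(
    url,
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json=body,
)
response.raise_for_status()
transcription = response.json()
print("Created:", transcription)  # the returned object can be polled later for status
```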
cognitive-services | Batch Transcription Get | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-get.md | To get transcription results, first check the [status](#get-transcription-status ::: zone pivot="rest-api" -To get the status of the transcription job, call the [Transcriptions_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Get) operation of the [Speech-to-text REST API](rest-speech-to-text.md). +To get the status of the transcription job, call the [Transcriptions_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Get) operation of the [Speech to text REST API](rest-speech-to-text.md). Make an HTTP GET request using the URI as shown in the following example. Replace `YourTranscriptionId` with your transcription ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region. Depending in part on the request parameters set when you created the transcripti |`combinedRecognizedPhrases`|The concatenated results of all phrases for the channel.| |`confidence`|The confidence value for the recognition.| |`display`|The display form of the recognized text. Added punctuation and capitalization are included.|-|`displayPhraseElements`|A list of results with display text for each word of the phrase. The `displayFormWordLevelTimestampsEnabled` request property must be set to `true`, otherwise this property is not present.<br/><br/>**Note**: This property is only available with speech-to-text REST API version 3.1.| +|`displayPhraseElements`|A list of results with display text for each word of the phrase. The `displayFormWordLevelTimestampsEnabled` request property must be set to `true`, otherwise this property is not present.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1.| |`duration`|The audio duration. The value is an ISO 8601 encoded duration.| |`durationInTicks`|The audio duration in ticks (1 tick is 100 nanoseconds).| |`itn`|The inverse text normalized (ITN) form of the recognized text. Abbreviations such as "Doctor Smith" to "Dr Smith", phone numbers, and other transformations are applied.| |`lexical`|The actual words recognized.|-|`locale`|The locale identified from the input the audio. The `languageIdentification` request property must be set, otherwise this property is not present.<br/><br/>**Note**: This property is only available with speech-to-text REST API version 3.1.| +|`locale`|The locale identified from the input the audio. The `languageIdentification` request property must be set, otherwise this property is not present.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1.| |`maskedITN`|The ITN form with profanity masking applied.| |`nBest`|A list of possible transcriptions for the current phrase with confidences.| |`offset`|The offset in audio of this phrase. The value is an ISO 8601 encoded duration.| |
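The result-file properties in the table above (such as `nBest`, `display`, `confidence`, `offset`, and the `speaker` entry added by diarization) can be read with a few lines of Python once the transcription file is downloaded. A sketch, assuming the result file's top-level `recognizedPhrases` array:

```python
# Sketch that reads a downloaded transcription result file and prints the top
# recognition for each phrase. The top-level "recognizedPhrases" array name is an
# assumption about the result file layout; the per-phrase fields match the table above.
import json

with open("transcription_result.json", encoding="utf-8") as f:
    result = json.load(f)

for phrase in result.get("recognizedPhrases", []):
    best = phrase["nBest"][0]          # highest-confidence alternative
    speaker = phrase.get("speaker")    # present when diarization is enabled
    print(f"[{phrase['offset']}] speaker={speaker} "
          f"confidence={best['confidence']:.2f}: {best['display']}")
```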
cognitive-services | Batch Transcription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription.md | -Batch transcription is used to transcribe a large amount of audio data in storage. Both the [Speech-to-text REST API](rest-speech-to-text.md#transcriptions) and [Speech CLI](spx-basics.md) support batch transcription. +Batch transcription is used to transcribe a large amount of audio data in storage. Both the [Speech to text REST API](rest-speech-to-text.md#transcriptions) and [Speech CLI](spx-basics.md) support batch transcription. You should provide multiple files per request or point to an Azure Blob Storage container with the audio files to transcribe. The batch transcription service can handle a large number of submitted transcriptions. The service transcribes the files concurrently, which reduces the turnaround time. You should provide multiple files per request or point to an Azure Blob Storage With batch transcriptions, you submit the audio data, and then retrieve transcription results asynchronously. The service transcribes the audio data and stores the results in a storage container. You can then retrieve the results from the storage container. > [!TIP]-> For a low or no-code solution, you can use the [Batch Speech-to-text Connector](/connectors/cognitiveservicesspe/) in Power Platform applications such as Power Automate, Power Apps, and Logic Apps. See the [Power automate batch transcription](power-automate-batch-transcription.md) guide to get started. +> For a low or no-code solution, you can use the [Batch Speech to text Connector](/connectors/cognitiveservicesspe/) in Power Platform applications such as Power Automate, Power Apps, and Logic Apps. See the [Power automate batch transcription](power-automate-batch-transcription.md) guide to get started. To use the batch transcription REST API: |
cognitive-services | Call Center Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/call-center-overview.md | Once you've transcribed your audio with the Speech service, you can use the Lang The Speech service offers the following features that can be used for call center use cases: -- [Real-time speech-to-text](./how-to-recognize-speech.md): Recognize and transcribe audio in real-time from multiple inputs. For example, with virtual agents or agent-assist, you can continuously recognize audio input and control how to process results based on multiple events.-- [Batch speech-to-text](./batch-transcription.md): Transcribe large amounts of audio files asynchronously including speaker diarization and is typically used in post-call analytics scenarios. Diarization is the process of recognizing and separating speakers in mono channel audio data.-- [Text-to-speech](./text-to-speech.md): Text-to-speech enables your applications, tools, or devices to convert text into humanlike synthesized speech.+- [Real-time speech to text](./how-to-recognize-speech.md): Recognize and transcribe audio in real-time from multiple inputs. For example, with virtual agents or agent-assist, you can continuously recognize audio input and control how to process results based on multiple events. +- [Batch speech to text](./batch-transcription.md): Transcribe large amounts of audio files asynchronously including speaker diarization and is typically used in post-call analytics scenarios. Diarization is the process of recognizing and separating speakers in mono channel audio data. +- [Text to speech](./text-to-speech.md): Text to speech enables your applications, tools, or devices to convert text into humanlike synthesized speech. - [Speaker identification](./speaker-recognition-overview.md): Helps you determine an unknown speaker's identity within a group of enrolled speakers and is typically used for call center customer verification scenarios or fraud detection. - [Language Identification](./language-identification.md): Identify languages spoken in audio and can be used in real-time and post-call analysis for insights or to control the environment (such as output language of a virtual agent). The Speech service works well with prebuilt models. However, you might want to f | Speech customization | Description | | -- | -- |-| [Custom Speech](./custom-speech-overview.md) | A speech-to-text feature used to evaluate and improve the speech recognition accuracy of use-case specific entities (such as alpha-numeric customer, case, and contract IDs, license plates, and names). You can also train a custom model with your own product names and industry terminology. | -| [Custom Neural Voice](./custom-neural-voice.md) | A text-to-speech feature that lets you create a one-of-a-kind, customized, synthetic voice for your applications. | +| [Custom Speech](./custom-speech-overview.md) | A speech to text feature used to evaluate and improve the speech recognition accuracy of use-case specific entities (such as alpha-numeric customer, case, and contract IDs, license plates, and names). You can also train a custom model with your own product names and industry terminology. | +| [Custom Neural Voice](./custom-neural-voice.md) | A text to speech feature that lets you create a one-of-a-kind, customized, synthetic voice for your applications. | ### Language service |
cognitive-services | Call Center Telephony Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/call-center-telephony-integration.md | Title: Call Center Telephony Integration - Speech service -description: A common scenario for speech-to-text is transcribing large volumes of telephony data that come from various systems, such as interactive voice response (IVR) in real-time. This requires an integration with the Telephony System used. +description: A common scenario for speech to text is transcribing large volumes of telephony data that come from various systems, such as interactive voice response (IVR) in real-time. This requires an integration with the Telephony System used. To build this integration we recommend using the [Speech SDK](./speech-sdk.md). > [!TIP]-> For guidance on reducing Text to Speech latency check out the **[How to lower speech synthesis latency](./how-to-lower-speech-synthesis-latency.md?pivots=programming-language-csharp)** guide. +> For guidance on reducing Text to speech latency check out the **[How to lower speech synthesis latency](./how-to-lower-speech-synthesis-latency.md?pivots=programming-language-csharp)** guide. > -> In addition, consider implementing a Text to Speech cache to store all synthesized audio and playback from the cache in case a string has previously been synthesized. +> In addition, consider implementing a Text to speech cache to store all synthesized audio and playback from the cache in case a string has previously been synthesized. ## Next steps |
cognitive-services | Conversation Transcription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/conversation-transcription.md | -Conversation transcription is a [speech-to-text](speech-to-text.md) solution that provides real-time or asynchronous transcription of any conversation. This feature, which is currently in preview, combines speech recognition, speaker identification, and sentence attribution to determine who said what, and when, in a conversation. +Conversation transcription is a [speech to text](speech-to-text.md) solution that provides real-time or asynchronous transcription of any conversation. This feature, which is currently in preview, combines speech recognition, speaker identification, and sentence attribution to determine who said what, and when, in a conversation. > [!NOTE] > Multi-device conversation access is a preview feature. Audio data is processed live to return the speaker identifier and transcript, an ## Language support -Currently, conversation transcription supports [all speech-to-text languages](language-support.md?tabs=stt) in the following regions: `centralus`, `eastasia`, `eastus`, `westeurope`. +Currently, conversation transcription supports [all speech to text languages](language-support.md?tabs=stt) in the following regions: `centralus`, `eastasia`, `eastus`, `westeurope`. ## Next steps |
cognitive-services | Custom Commands | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-commands.md | -Applications such as [Voice assistants](voice-assistants.md) listen to users and take an action in response, often speaking back. They use [speech-to-text](speech-to-text.md) to transcribe the user's speech, then take action on the natural language understanding of the text. This action frequently includes spoken output from the assistant generated with [text-to-speech](text-to-speech.md). Devices connect to assistants with the Speech SDK's `DialogServiceConnector` object. +Applications such as [Voice assistants](voice-assistants.md) listen to users and take an action in response, often speaking back. They use [speech to text](speech-to-text.md) to transcribe the user's speech, then take action on the natural language understanding of the text. This action frequently includes spoken output from the assistant generated with [text to speech](text-to-speech.md). Devices connect to assistants with the Speech SDK's `DialogServiceConnector` object. Custom Commands makes it easy to build rich voice commanding apps optimized for voice-first interaction experiences. It provides a unified authoring experience, an automatic hosting model, and relatively lower complexity, helping you focus on building the best solution for your voice commanding scenarios. |
cognitive-services | Custom Neural Voice Lite | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-neural-voice-lite.md | The training process takes approximately one compute hour. You can check the pro To review the CNV Lite model and listen to your own synthetic voice, follow these steps: 1. Select **Custom Voice** > Your project name > **Review model**. Here you can review the voice model name, model language, sample data size, and training progress. The voice name is composed of the word "Neural" appended to your project name.-1. Select the voice model name to review the model details and listen to the sample text-to-speech results. +1. Select the voice model name to review the model details and listen to the sample text to speech results. 1. Select the play icon to hear your voice speak each script. :::image type="content" source="media/custom-voice/lite/lite-review-model.png" alt-text="Screenshot of the review sample output dashboard."::: From here, you can use the CNV Lite voice model similarly as you would use a CNV ## Next steps * [Create a CNV Pro project](how-to-custom-voice.md) -* [Try the text-to-speech quickstart](get-started-text-to-speech.md) +* [Try the text to speech quickstart](get-started-text-to-speech.md) * [Learn more about speech synthesis](how-to-speech-synthesis.md) |
cognitive-services | Custom Neural Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-neural-voice.md | Title: Custom Neural Voice overview - Speech service -description: Custom Neural Voice is a text-to-speech feature that allows you to create a one-of-a-kind, customized, synthetic voice for your applications. You provide your own audio data as a sample. +description: Custom Neural Voice is a text to speech feature that allows you to create a one-of-a-kind, customized, synthetic voice for your applications. You provide your own audio data as a sample. -Custom Neural Voice (CNV) is a text-to-speech feature that lets you create a one-of-a-kind, customized, synthetic voice for your applications. With Custom Neural Voice, you can build a highly natural-sounding voice for your brand or characters by providing human speech samples as training data. +Custom Neural Voice (CNV) is a text to speech feature that lets you create a one-of-a-kind, customized, synthetic voice for your applications. With Custom Neural Voice, you can build a highly natural-sounding voice for your brand or characters by providing human speech samples as training data. > [!IMPORTANT] > Custom Neural Voice access is [limited](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) based on eligibility and usage criteria. Request access on the [intake form](https://aka.ms/customneural). > > Access to [Custom Neural Voice (CNV) Lite](custom-neural-voice-lite.md) is available for anyone to demo and evaluate CNV before investing in professional recordings to create a higher-quality voice. -Out of the box, [text-to-speech](text-to-speech.md) can be used with prebuilt neural voices for each [supported language](language-support.md?tabs=tts). The prebuilt neural voices work very well in most text-to-speech scenarios if a unique voice isn't required. +Out of the box, [text to speech](text-to-speech.md) can be used with prebuilt neural voices for each [supported language](language-support.md?tabs=tts). The prebuilt neural voices work very well in most text to speech scenarios if a unique voice isn't required. -Custom Neural Voice is based on the neural text-to-speech technology and the multilingual, multi-speaker, universal model. You can create synthetic voices that are rich in speaking styles, or adaptable cross languages. The realistic and natural sounding voice of Custom Neural Voice can represent brands, personify machines, and allow users to interact with applications conversationally. See the [supported languages](language-support.md?tabs=tts) for Custom Neural Voice. +Custom Neural Voice is based on the neural text to speech technology and the multilingual, multi-speaker, universal model. You can create synthetic voices that are rich in speaking styles, or adaptable cross languages. The realistic and natural sounding voice of Custom Neural Voice can represent brands, personify machines, and allow users to interact with applications conversationally. See the [supported languages](language-support.md?tabs=tts) for Custom Neural Voice. ## How does it work? Here's an overview of the steps to create a custom neural voice in Speech Studio You can tune, adjust, and use your custom voice, similarly as you would use a prebuilt neural voice. Convert text into speech in real-time, or generate audio content offline with text input. 
You can do this by using the [REST API](./rest-text-to-speech.md), the [Speech SDK](./get-started-text-to-speech.md), or the [Speech Studio](https://speech.microsoft.com/audiocontentcreation). -The style and the characteristics of the trained voice model depend on the style and the quality of the recordings from the voice talent used for training. However, you can make several adjustments by using [SSML (Speech Synthesis Markup Language)](./speech-synthesis-markup.md?tabs=csharp) when you make the API calls to your voice model to generate synthetic speech. SSML is the markup language used to communicate with the text-to-speech service to convert text into audio. The adjustments you can make include change of pitch, rate, intonation, and pronunciation correction. If the voice model is built with multiple styles, you can also use SSML to switch the styles. +The style and the characteristics of the trained voice model depend on the style and the quality of the recordings from the voice talent used for training. However, you can make several adjustments by using [SSML (Speech Synthesis Markup Language)](./speech-synthesis-markup.md?tabs=csharp) when you make the API calls to your voice model to generate synthetic speech. SSML is the markup language used to communicate with the text to speech service to convert text into audio. The adjustments you can make include change of pitch, rate, intonation, and pronunciation correction. If the voice model is built with multiple styles, you can also use SSML to switch the styles. ## Components sequence Next, the phoneme sequence goes into the neural acoustic model to predict acoust  -Neural text-to-speech voice models are trained by using deep neural networks based on +Neural text to speech voice models are trained by using deep neural networks based on the recording samples of human voices. For more information, see [this Microsoft blog post](https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-extends-support-to-15-more-languages-with/ba-p/1505911). To learn more about how a neural vocoder is trained, see [this Microsoft blog post](https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860). ## Migrate to Custom Neural Voice To learn how to use Custom Neural Voice responsibly, check the following article * [Disclosure for voice talent](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/cognitive-services/speech-service/context/context) * [Disclosure design guidelines](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-guidelines?context=/azure/cognitive-services/speech-service/context/context) * [Disclosure design patterns](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-patterns?context=/azure/cognitive-services/speech-service/context/context) -* [Code of Conduct for Text-to-Speech integrations](/legal/cognitive-services/speech-service/tts-code-of-conduct?context=/azure/cognitive-services/speech-service/context/context) +* [Code of Conduct for Text to speech integrations](/legal/cognitive-services/speech-service/tts-code-of-conduct?context=/azure/cognitive-services/speech-service/context/context) * [Data, privacy, and security for Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context) ## Next steps |
cognitive-services | Custom Speech Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-speech-overview.md | Title: Custom Speech overview - Speech service -description: Custom Speech is a set of online tools that allows you to evaluate and improve the Microsoft speech-to-text accuracy for your applications, tools, and products. +description: Custom Speech is a set of online tools that allows you to evaluate and improve the Microsoft speech to text accuracy for your applications, tools, and products. -With Custom Speech, you can evaluate and improve the accuracy of speech recognition for your applications and products. A custom speech model can be used for [real-time speech-to-text](speech-to-text.md), [speech translation](speech-translation.md), and [batch transcription](batch-transcription.md). +With Custom Speech, you can evaluate and improve the accuracy of speech recognition for your applications and products. A custom speech model can be used for [real-time speech to text](speech-to-text.md), [speech translation](speech-translation.md), and [batch transcription](batch-transcription.md). Out of the box, speech recognition utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pre-trained with dialects and phonetics representing a variety of common domains. When you make a speech recognition request, the most recent base model for each [supported language](language-support.md?tabs=stt) is used by default. The base model works very well in most speech recognition scenarios. With Custom Speech, you can upload your own data, test and train a custom model, Here's more information about the sequence of steps shown in the previous diagram: 1. [Create a project](how-to-custom-speech-create-project.md) and choose a model. Use a <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech resource</a> that you create in the Azure portal. If you will train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. See footnotes in the [regions](regions.md#speech-service) table for more information.-1. [Upload test data](./how-to-custom-speech-upload-data.md). Upload test data to evaluate the Microsoft speech-to-text offering for your applications, tools, and products. +1. [Upload test data](./how-to-custom-speech-upload-data.md). Upload test data to evaluate the Microsoft speech to text offering for your applications, tools, and products. 1. [Test recognition quality](how-to-custom-speech-inspect-data.md). Use the [Speech Studio](https://aka.ms/speechstudio/customspeech) to play back uploaded audio and inspect the speech recognition quality of your test data. -1. [Test model quantitatively](how-to-custom-speech-evaluate-data.md). Evaluate and improve the accuracy of the speech-to-text model. The Speech service provides a quantitative word error rate (WER), which you can use to determine if additional training is required. +1. [Test model quantitatively](how-to-custom-speech-evaluate-data.md). Evaluate and improve the accuracy of the speech to text model. The Speech service provides a quantitative word error rate (WER), which you can use to determine if additional training is required. 1. [Train a model](how-to-custom-speech-train-model.md). 
Provide written transcripts and related text, along with the corresponding audio data. Testing a model before and after training is optional but recommended. > [!NOTE] > You pay for Custom Speech model usage and endpoint hosting, but you are not charged for training a model. |
cognitive-services | Customize Pronunciation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/customize-pronunciation.md | Title: Structured text phonetic pronunciation data -description: Use phonemes to customize pronunciation of words in Speech-to-Text. +description: Use phonemes to customize pronunciation of words in Speech to text. |
cognitive-services | Direct Line Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/direct-line-speech.md | -[Voice assistants](voice-assistants.md) listen to users and take an action in response, often speaking back. They use [speech-to-text](speech-to-text.md) to transcribe the user's speech, then take action on the natural language understanding of the text. This action frequently includes spoken output from the assistant generated with [text-to-speech](text-to-speech.md). +[Voice assistants](voice-assistants.md) listen to users and take an action in response, often speaking back. They use [speech to text](speech-to-text.md) to transcribe the user's speech, then take action on the natural language understanding of the text. This action frequently includes spoken output from the assistant generated with [text to speech](text-to-speech.md). Direct Line Speech offers the highest levels of customization and sophistication for voice assistants. It's designed for conversational scenarios that are open-ended, natural, or hybrids of the two with task completion or command-and-control use. This high degree of flexibility comes with a greater complexity, and scenarios that are scoped to well-defined tasks using natural language input may want to consider [Custom Commands](custom-commands.md) for a streamlined solution experience. Sample code for creating a voice assistant is available on GitHub. These samples ## Customization -Voice assistants built using Speech service can use the full range of customization options available for [speech-to-text](speech-to-text.md), [text-to-speech](text-to-speech.md), and [custom keyword selection](./custom-keyword-basics.md). +Voice assistants built using Speech service can use the full range of customization options available for [speech to text](speech-to-text.md), [text to speech](text-to-speech.md), and [custom keyword selection](./custom-keyword-basics.md). > [!NOTE] > Customization options vary by language/locale (see [Supported languages](./language-support.md?tabs=stt)). |
cognitive-services | Display Text Format | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/display-text-format.md | zone_pivot_groups: programming-languages-speech-sdk-cli # Display text formatting with speech to text -Speech-to-text offers an array of formatting features to ensure that the transcribed text is clear and legible. Below is an overview of these features and how each one is used to improve the overall clarity of the final text output. +Speech to text offers an array of formatting features to ensure that the transcribed text is clear and legible. Below is an overview of these features and how each one is used to improve the overall clarity of the final text output. ## ITN -Inverse Text Normalization (ITN) is a process that converts spoken words into their written form. For example, the spoken word "four" is converted to the written form "4". This process is performed by the speech-to-text service and isn't configurable. Some of the supported text formats include dates, times, decimals, currencies, addresses, emails, and phone numbers. You can speak naturally, and the service formats text as expected. The following table shows the ITN rules that are applied to the text output. +Inverse Text Normalization (ITN) is a process that converts spoken words into their written form. For example, the spoken word "four" is converted to the written form "4". This process is performed by the speech to text service and isn't configurable. Some of the supported text formats include dates, times, decimals, currencies, addresses, emails, and phone numbers. You can speak naturally, and the service formats text as expected. The following table shows the ITN rules that are applied to the text output. |Recognized speech|Display text| ||| Inverse Text Normalization (ITN) is a process that converts spoken words into th ## Capitalization -Speech-to-text models recognize words that should be capitalized to improve readability, accuracy, and grammar. For example, the Speech service will automatically capitalize proper nouns and words at the beginning of a sentence. Some examples are shown in this table. +Speech to text models recognize words that should be capitalized to improve readability, accuracy, and grammar. For example, the Speech service will automatically capitalize proper nouns and words at the beginning of a sentence. Some examples are shown in this table. |Recognized speech|Display text| ||| Speech-to-text models recognize words that should be capitalized to improve read ## Disfluency removal -When speaking, it's common for someone to stutter, duplicate words, and say filler words like "uhm" or "uh". Speech-to-text can recognize such disfluencies and remove them from the display text. Disfluency removal is great for transcribing live unscripted speeches to read them back later. Some examples are shown in this table. +When speaking, it's common for someone to stutter, duplicate words, and say filler words like "uhm" or "uh". Speech to text can recognize such disfluencies and remove them from the display text. Disfluency removal is great for transcribing live unscripted speeches to read them back later. Some examples are shown in this table. |Recognized speech|Display text| ||| When speaking, it's common for someone to stutter, duplicate words, and say fill ## Punctuation -Speech-to-text automatically punctuates your text to improve clarity. Punctuation is helpful for reading back call or conversation transcriptions. 
Some examples are shown in this table. +Speech to text automatically punctuates your text to improve clarity. Punctuation is helpful for reading back call or conversation transcriptions. Some examples are shown in this table. |Recognized speech|Display text| ||| |`how are you`|`How are you?`| |`we can go to the mall park or beach`|`We can go to the mall, park, or beach.`| -When you're using speech-to-text with continuous recognition, you can configure the Speech service to recognize explicit punctuation marks. Then you can speak punctuation aloud in order to make your text more legible. This is especially useful in a situation where you want to use complex punctuation without having to merge it later. Some examples are shown in this table. +When you're using speech to text with continuous recognition, you can configure the Speech service to recognize explicit punctuation marks. Then you can speak punctuation aloud in order to make your text more legible. This is especially useful in a situation where you want to use complex punctuation without having to merge it later. Some examples are shown in this table. |Recognized speech|Display text| ||| When you're using speech-to-text with continuous recognition, you can configure |`the options are apple forward slash banana forward slash orange period`|`The options are apple/banana/orange.`| |`are you sure question mark`|`Are you sure?`| -Use the Speech SDK to enable dictation mode when you're using speech-to-text with continuous recognition. This mode will cause the speech configuration instance to interpret word descriptions of sentence structures such as punctuation. +Use the Speech SDK to enable dictation mode when you're using speech to text with continuous recognition. This mode will cause the speech configuration instance to interpret word descriptions of sentence structures such as punctuation. ::: zone pivot="programming-language-csharp" ```csharp Profanity filter is applied to the result `Text` and `MaskedNormalizedForm` prop ## Next steps -* [Speech-to-text quickstart](get-started-speech-to-text.md) +* [Speech to text quickstart](get-started-speech-to-text.md) * [Get speech recognition results](get-speech-recognition-results.md) |
cognitive-services | Embedded Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/embedded-speech.md | zone_pivot_groups: programming-languages-set-thirteen # Embedded Speech (preview) -Embedded Speech is designed for on-device [speech-to-text](speech-to-text.md) and [text-to-speech](text-to-speech.md) scenarios where cloud connectivity is intermittent or unavailable. For example, you can use embedded speech in industrial equipment, a voice enabled air conditioning unit, or a car that might travel out of range. You can also develop hybrid cloud and offline solutions. For scenarios where your devices must be in a secure environment like a bank or government entity, you should first consider [disconnected containers](../containers/disconnected-containers.md). +Embedded Speech is designed for on-device [speech to text](speech-to-text.md) and [text to speech](text-to-speech.md) scenarios where cloud connectivity is intermittent or unavailable. For example, you can use embedded speech in industrial equipment, a voice enabled air conditioning unit, or a car that might travel out of range. You can also develop hybrid cloud and offline solutions. For scenarios where your devices must be in a secure environment like a bank or government entity, you should first consider [disconnected containers](../containers/disconnected-containers.md). > [!IMPORTANT] > Microsoft limits access to embedded speech. You can apply for access through the Azure Cognitive Services [embedded speech limited access review](https://aka.ms/csgate-embedded-speech). For more information, see [Limited access for embedded speech](/legal/cognitive-services/speech-service/embedded-speech/limited-access-embedded-speech?context=/azure/cognitive-services/speech-service/context/context). dependencies { ## Models and voices -For embedded speech, you'll need to download the speech recognition models for [speech-to-text](speech-to-text.md) and voices for [text-to-speech](text-to-speech.md). Instructions will be provided upon successful completion of the [limited access review](https://aka.ms/csgate-embedded-speech) process. +For embedded speech, you'll need to download the speech recognition models for [speech to text](speech-to-text.md) and voices for [text to speech](text-to-speech.md). Instructions will be provided upon successful completion of the [limited access review](https://aka.ms/csgate-embedded-speech) process. -The following [speech-to-text](speech-to-text.md) models are available: de-DE, en-AU, en-CA, en-GB, en-IE, en-IN, en-NZ, en-US, es-ES, es-MX, fr-CA, fr-FR, hi-IN, it-IT, ja-JP, ko-KR, nl-NL, pt-BR, ru-RU, sv-SE, tr-TR, zh-CN, zh-HK, and zh-TW. +The following [speech to text](speech-to-text.md) models are available: de-DE, en-AU, en-CA, en-GB, en-IE, en-IN, en-NZ, en-US, es-ES, es-MX, fr-CA, fr-FR, hi-IN, it-IT, ja-JP, ko-KR, nl-NL, pt-BR, ru-RU, sv-SE, tr-TR, zh-CN, zh-HK, and zh-TW. 
-The following [text-to-speech](text-to-speech.md) locales and voices are available: +The following [text to speech](text-to-speech.md) locales and voices are available: -| Locale (BCP-47) | Language | Text-to-speech voices | +| Locale (BCP-47) | Language | Text to speech voices | | -- | -- | -- | | `de-DE` | German (Germany) | `de-DE-KatjaNeural` (Female)<br/>`de-DE-ConradNeural` (Male)| | `en-AU` | English (Australia) | `en-AU-AnnetteNeural` (Female)<br/>`en-AU-WilliamNeural` (Male)| The following [text-to-speech](text-to-speech.md) locales and voices are availab For cloud connected applications, as shown in most Speech SDK samples, you use the `SpeechConfig` object with a Speech resource key and region. For embedded speech, you don't use a Speech resource. Instead of a cloud resource, you use the [models and voices](#models-and-voices) that you downloaded to your local device. -Use the `EmbeddedSpeechConfig` object to set the location of the models or voices. If your application is used for both speech-to-text and text-to-speech, you can use the same `EmbeddedSpeechConfig` object to set the location of the models and voices. +Use the `EmbeddedSpeechConfig` object to set the location of the models or voices. If your application is used for both speech to text and text to speech, you can use the same `EmbeddedSpeechConfig` object to set the location of the models and voices. ::: zone pivot="programming-language-csharp" paths.Add("C:\\dev\\embedded-speech\\stt-models"); paths.Add("C:\\dev\\embedded-speech\\tts-voices"); var embeddedSpeechConfig = EmbeddedSpeechConfig.FromPaths(paths.ToArray()); -// For speech-to-text +// For speech to text embeddedSpeechConfig.SetSpeechRecognitionModel( "Microsoft Speech Recognizer en-US FP Model V8", Environment.GetEnvironmentVariable("MODEL_KEY")); -// For text-to-speech +// For text to speech embeddedSpeechConfig.SetSpeechSynthesisVoice( "Microsoft Server Speech Text to Speech Voice (en-US, JennyNeural)", Environment.GetEnvironmentVariable("VOICE_KEY")); embeddedSpeechConfig.SetSpeechSynthesisOutputFormat(SpeechSynthesisOutputFormat. ::: zone pivot="programming-language-cpp" > [!TIP]-> The `GetEnvironmentVariable` function is defined in the [speech-to-text quickstart](get-started-speech-to-text.md) and [text-to-speech quickstart](get-started-text-to-speech.md). +> The `GetEnvironmentVariable` function is defined in the [speech to text quickstart](get-started-speech-to-text.md) and [text to speech quickstart](get-started-text-to-speech.md). ```cpp // Provide the location of the models and voices. 
paths.push_back("C:\\dev\\embedded-speech\\stt-models"); paths.push_back("C:\\dev\\embedded-speech\\tts-voices"); auto embeddedSpeechConfig = EmbeddedSpeechConfig::FromPaths(paths); -// For speech-to-text +// For speech to text embeddedSpeechConfig->SetSpeechRecognitionModel(( "Microsoft Speech Recognizer en-US FP Model V8", GetEnvironmentVariable("MODEL_KEY")); -// For text-to-speech +// For text to speech embeddedSpeechConfig->SetSpeechSynthesisVoice( "Microsoft Server Speech Text to Speech Voice (en-US, JennyNeural)", GetEnvironmentVariable("VOICE_KEY")); paths.add("C:\\dev\\embedded-speech\\stt-models"); paths.add("C:\\dev\\embedded-speech\\tts-voices"); var embeddedSpeechConfig = EmbeddedSpeechConfig.fromPaths(paths); -// For speech-to-text +// For speech to text embeddedSpeechConfig.setSpeechRecognitionModel( "Microsoft Speech Recognizer en-US FP Model V8", System.getenv("MODEL_KEY")); -// For text-to-speech +// For text to speech embeddedSpeechConfig.setSpeechSynthesisVoice( "Microsoft Server Speech Text to Speech Voice (en-US, JennyNeural)", System.getenv("VOICE_KEY")); You can find ready to use embedded speech samples at [GitHub](https://aka.ms/emb Hybrid speech with the `HybridSpeechConfig` object uses the cloud speech service by default and embedded speech as a fallback in case cloud connectivity is limited or slow. -With hybrid speech configuration for [speech-to-text](speech-to-text.md) (recognition models), embedded speech is used when connection to the cloud service fails after repeated attempts. Recognition may continue using the cloud service again if the connection is later resumed. +With hybrid speech configuration for [speech to text](speech-to-text.md) (recognition models), embedded speech is used when connection to the cloud service fails after repeated attempts. Recognition may continue using the cloud service again if the connection is later resumed. -With hybrid speech configuration for [text-to-speech](text-to-speech.md) (voices), embedded and cloud synthesis are run in parallel and the result is selected based on which one gives a faster response. The best result is evaluated on each synthesis request. +With hybrid speech configuration for [text to speech](text-to-speech.md) (voices), embedded and cloud synthesis are run in parallel and the result is selected based on which one gives a faster response. The best result is evaluated on each synthesis request. ## Cloud speech -For cloud speech, you use the `SpeechConfig` object, as shown in the [speech-to-text quickstart](get-started-speech-to-text.md) and [text-to-speech quickstart](get-started-text-to-speech.md). To run the quickstarts for embedded speech, you can replace `SpeechConfig` with `EmbeddedSpeechConfig` or `HybridSpeechConfig`. Most of the other speech recognition and synthesis code are the same, whether using cloud, embedded, or hybrid configuration. +For cloud speech, you use the `SpeechConfig` object, as shown in the [speech to text quickstart](get-started-speech-to-text.md) and [text to speech quickstart](get-started-text-to-speech.md). To run the quickstarts for embedded speech, you can replace `SpeechConfig` with `EmbeddedSpeechConfig` or `HybridSpeechConfig`. Most of the other speech recognition and synthesis code are the same, whether using cloud, embedded, or hybrid configuration. ## Next steps |
cognitive-services | Gaming Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/gaming-concepts.md | Here are a few Speech features to consider for flexible and interactive game exp - Custom neural voice for creating a voice that stays on-brand with consistent quality and speaking style. You can add emotions, accents, nuances, laughter, and other para linguistic sounds and expressions. - Use game dialogue prototyping to shorten the amount of time and money spent in product to get the game to market sooner. You can rapidly swap lines of dialog and listen to variations in real-time to iterate the game content. -You can use the [Speech SDK](speech-sdk.md) or [Speech CLI](spx-overview.md) for real-time low latency speech-to-text, text-to-speech, language identification, and speech translation. You can also use the [Batch transcription API](batch-transcription.md) to transcribe pre-recorded speech to text. To synthesize a large volume of text input (long and short) to speech, use the [Batch synthesis API](batch-synthesis.md). +You can use the [Speech SDK](speech-sdk.md) or [Speech CLI](spx-overview.md) for real-time low latency speech to text, text to speech, language identification, and speech translation. You can also use the [Batch transcription API](batch-transcription.md) to transcribe pre-recorded speech to text. To synthesize a large volume of text input (long and short) to speech, use the [Batch synthesis API](batch-synthesis.md). For information about locale and regional availability, see [Language and voice support](language-support.md) and [Region support](regions.md). -## Text-to-speech +## Text to speech -Help bring everyone into the conversation by converting text messages to audio using [Text-to-Speech](text-to-speech.md) for scenarios, such as game dialogue prototyping, greater accessibility, or non-playable character (NPC) voices. Text-to-Speech includes [prebuilt neural voice](language-support.md?tabs=tts#prebuilt-neural-voices) and [custom neural voice](language-support.md?tabs=tts#custom-neural-voice) features. Prebuilt neural voice can provide highly natural out-of-box voices with leading voice variety in terms of a large portfolio of languages and voices. Custom neural voice is an easy-to-use self-service for creating a highly natural custom voice. +Help bring everyone into the conversation by converting text messages to audio using [Text to speech](text-to-speech.md) for scenarios, such as game dialogue prototyping, greater accessibility, or non-playable character (NPC) voices. Text to speech includes [prebuilt neural voice](language-support.md?tabs=tts#prebuilt-neural-voices) and [custom neural voice](language-support.md?tabs=tts#custom-neural-voice) features. Prebuilt neural voice can provide highly natural out-of-box voices with leading voice variety in terms of a large portfolio of languages and voices. Custom neural voice is an easy-to-use self-service for creating a highly natural custom voice. When enabling this functionality in your game, keep in mind the following benefits: -- Voices and languages supported - A large portfolio of [locales and voices](language-support.md?tabs=tts#supported-languages) are supported. You can also [specify multiple languages](speech-synthesis-markup-voice.md#adjust-speaking-languages) for Text-to-Speech output. 
For [custom neural voice](custom-neural-voice.md), you can [choose to create](how-to-custom-voice-create-voice.md?tabs=neural#choose-a-training-method) different languages from single language training data.+- Voices and languages supported - A large portfolio of [locales and voices](language-support.md?tabs=tts#supported-languages) are supported. You can also [specify multiple languages](speech-synthesis-markup-voice.md#adjust-speaking-languages) for Text to speech output. For [custom neural voice](custom-neural-voice.md), you can [choose to create](how-to-custom-voice-create-voice.md?tabs=neural#choose-a-training-method) different languages from single language training data. - Emotional styles supported - [Emotional tones](language-support.md?tabs=tts#voice-styles-and-roles), such as cheerful, angry, sad, excited, hopeful, friendly, unfriendly, terrified, shouting, and whispering. You can [adjust the speaking style](speech-synthesis-markup-voice.md#speaking-styles-and-roles), style degree, and role at the sentence level. - Visemes supported - You can use visemes during real-time synthesizing to control the movement of 2D and 3D avatar models, so that the mouth movements are perfectly matched to synthetic speech. For more information, see [Get facial position with viseme](how-to-speech-synthesis-viseme.md).-- Fine-tuning Text-to-Speech output with Speech Synthesis Markup Language (SSML) - With SSML, you can customize Text-to-Speech outputs, with richer voice tuning supports. For more information, see [Speech Synthesis Markup Language (SSML) overview](speech-synthesis-markup.md).+- Fine-tuning Text to speech output with Speech Synthesis Markup Language (SSML) - With SSML, you can customize Text to speech outputs, with richer voice tuning supports. For more information, see [Speech Synthesis Markup Language (SSML) overview](speech-synthesis-markup.md). - Audio outputs - Each prebuilt neural voice model is available at 24 kHz and high-fidelity 48 kHz. If you select 48-kHz output format, the high-fidelity voice model with 48 kHz will be invoked accordingly. The sample rates other than 24 kHz and 48 kHz can be obtained through upsampling or downsampling when synthesizing. For example, 44.1 kHz is downsampled from 48 kHz. Each audio format incorporates a bitrate and encoding type. For more information, see the [supported audio formats](rest-text-to-speech.md?tabs=streaming#audio-outputs). For more information on 48-kHz high-quality voices, see [this introduction blog](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/azure-neural-tts-voices-upgraded-to-48khz-with-hifinet2-vocoder/ba-p/3665252). -For an example, see the [Text-to-speech quickstart](get-started-text-to-speech.md). +For an example, see the [Text to speech quickstart](get-started-text-to-speech.md). -## Speech-to-text +## Speech to text -You can use [speech-to-text](speech-to-text.md) to display text from the spoken audio in your game. For an example, see the [Speech-to-text quickstart](get-started-speech-to-text.md). +You can use [speech to text](speech-to-text.md) to display text from the spoken audio in your game. For an example, see the [Speech to text quickstart](get-started-speech-to-text.md). 
## Language identification For an example, see the [Speech translation quickstart](get-started-speech-trans ## Next steps * [Azure gaming documentation](/gaming/azure/)-* [Text-to-speech quickstart](get-started-text-to-speech.md) -* [Speech-to-text quickstart](get-started-speech-to-text.md) +* [Text to speech quickstart](get-started-text-to-speech.md) +* [Speech to text quickstart](get-started-speech-to-text.md) * [Speech translation quickstart](get-started-speech-translation.md) |
cognitive-services | Get Started Speech To Text | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-speech-to-text.md | Title: "Speech-to-text quickstart - Speech service" + Title: "Speech to text quickstart - Speech service" description: In this quickstart, you convert speech to text with recognition from a microphone. |
cognitive-services | Get Started Text To Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-text-to-speech.md | Title: "Text-to-speech quickstart - Speech service" + Title: "Text to speech quickstart - Speech service" description: In this quickstart, you convert text to speech. Learn about object construction and design patterns, supported audio output formats, and custom configuration options for speech synthesis. |
cognitive-services | How To Audio Content Creation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-audio-content-creation.md | It takes a few moments to deploy your new Speech resource. After the deployment The following diagram displays the process for fine-tuning the Text to speech outputs. Each step in the preceding diagram is described here: 1. Choose the Speech resource you want to work with. 1. [Create an audio tuning file](#create-an-audio-tuning-file) by using plain text or SSML scripts. Enter or upload your content into Audio Content Creation.-1. Choose the voice and the language for your script content. Audio Content Creation includes all of the [prebuilt text-to-speech voices](language-support.md?tabs=tts). You can use prebuilt neural voices or a custom neural voice. +1. Choose the voice and the language for your script content. Audio Content Creation includes all of the [prebuilt text to speech voices](language-support.md?tabs=tts). You can use prebuilt neural voices or a custom neural voice. > [!NOTE] > Gated access is available for Custom Neural Voice, which allows you to create high-definition voices that are similar to natural-sounding speech. For more information, see [Gating process](./text-to-speech.md). Each step in the preceding diagram is described here: Improve the output by adjusting pronunciation, break, pitch, rate, intonation, voice style, and more. For a complete list of options, see [Speech Synthesis Markup Language](speech-synthesis-markup.md). - For more information about fine-tuning speech output, view the [How to convert Text to Speech using Microsoft Azure AI voices](https://youtu.be/ygApYuOOG6w) video. + For more information about fine-tuning speech output, view the [How to convert Text to speech using Microsoft Azure AI voices](https://youtu.be/ygApYuOOG6w) video. 1. Save and [export your tuned audio](#export-tuned-audio). |
cognitive-services | How To Custom Commands Setup Speech Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-setup-speech-sdk.md | Add the code-behind source so that the application works as expected. The code-b - A simple implementation to ensure microphone access, wired to a button handler - Basic UI helpers to present messages and errors in the application - A landing point for the initialization code path that will be populated later-- A helper to play back text-to-speech (without streaming support)+- A helper to play back text to speech (without streaming support) - An empty button handler to start listening that will be populated later Add the code-behind source as follows: Add the code-behind source as follows: // once audio capture is completed connector.Recognized += (sender, recognitionEventArgs) => {- NotifyUser($"Final speech-to-text result: '{recognitionEventArgs.Result.Text}'"); + NotifyUser($"Final speech to text result: '{recognitionEventArgs.Result.Text}'"); }; // SessionStarted will notify when audio begins flowing to the service for a turn |
cognitive-services | How To Custom Speech Create Project | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-create-project.md | spx help csr project ::: zone pivot="rest-api" -To create a project, use the [Projects_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Create) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions: +To create a project, use the [Projects_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_Create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions: - Set the required `locale` property. This should be the locale of the contained datasets. The locale can't be changed later. - Set the required `displayName` property. This is the project name that will be displayed in the Speech Studio. |
cognitive-services | How To Custom Speech Deploy Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-deploy-model.md | spx help csr endpoint ::: zone pivot="rest-api" -To create an endpoint and deploy a model, use the [Endpoints_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Create) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions: +To create an endpoint and deploy a model, use the [Endpoints_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions: - Set the `project` property to the URI of an existing project. This is recommended so that you can also view and manage the endpoint in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects. - Set the required `model` property to the URI of the model that you want deployed to the endpoint. spx help csr endpoint ::: zone pivot="rest-api" -To redeploy the custom endpoint with a new model, use the [Endpoints_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Update) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions: +To redeploy the custom endpoint with a new model, use the [Endpoints_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Update) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions: - Set the `model` property to the URI of the model that you want deployed to the endpoint. The location of each log file with more details are returned in the response bod ::: zone pivot="rest-api" -To get logs for an endpoint, start by using the [Endpoints_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Get) operation of the [Speech-to-text REST API](rest-speech-to-text.md). +To get logs for an endpoint, start by using the [Endpoints_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Get) operation of the [Speech to text REST API](rest-speech-to-text.md). Make an HTTP GET request using the URI as shown in the following example. Replace `YourEndpointId` with your endpoint ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region. |
cognitive-services | How To Custom Speech Evaluate Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-evaluate-data.md | Title: Test accuracy of a Custom Speech model - Speech service -description: In this article, you learn how to quantitatively measure and improve the quality of our speech-to-text model or your custom model. +description: In this article, you learn how to quantitatively measure and improve the quality of our speech to text model or your custom model. no-loc: [$$, '\times', '\over'] # Test accuracy of a Custom Speech model -In this article, you learn how to quantitatively measure and improve the accuracy of the Microsoft speech-to-text model or your own custom models. [Audio + human-labeled transcript](how-to-custom-speech-test-and-train.md#audio--human-labeled-transcript-data-for-training-or-testing) data is required to test accuracy. You should provide from 30 minutes to 5 hours of representative audio. +In this article, you learn how to quantitatively measure and improve the accuracy of the Microsoft speech to text model or your own custom models. [Audio + human-labeled transcript](how-to-custom-speech-test-and-train.md#audio--human-labeled-transcript-data-for-training-or-testing) data is required to test accuracy. You should provide from 30 minutes to 5 hours of representative audio. [!INCLUDE [service-pricing-advisory](includes/service-pricing-advisory.md)] ## Create a test -You can test the accuracy of your custom model by creating a test. A test requires a collection of audio files and their corresponding transcriptions. You can compare a custom model's accuracy with a Microsoft speech-to-text base model or another custom model. After you [get](#get-test-results) the test results, [evaluate](#evaluate-word-error-rate) the word error rate (WER) compared to speech recognition results. +You can test the accuracy of your custom model by creating a test. A test requires a collection of audio files and their corresponding transcriptions. You can compare a custom model's accuracy with a Microsoft speech to text base model or another custom model. After you [get](#get-test-results) the test results, [evaluate](#evaluate-word-error-rate) the word error rate (WER) compared to speech recognition results. ::: zone pivot="speech-studio" spx help csr evaluation ::: zone pivot="rest-api" -To create a test, use the [Evaluations_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Create) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions: +To create a test, use the [Evaluations_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions: - Set the `project` property to the URI of an existing project. This is recommended so that you can also view the test in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects. - Set the `testingKind` property to `Evaluation` within `customProperties`. If you don't specify `Evaluation`, the test is treated as a quality inspection test. 
Whether the `testingKind` property is set to `Evaluation` or `Inspection`, or not set, you can access the accuracy scores via the API, but not in the Speech Studio. spx help csr evaluation ::: zone pivot="rest-api" -To get test results, start by using the [Evaluations_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Get) operation of the [Speech-to-text REST API](rest-speech-to-text.md). +To get test results, start by using the [Evaluations_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Get) operation of the [Speech to text REST API](rest-speech-to-text.md). Make an HTTP GET request using the URI as shown in the following example. Replace `YourEvaluationId` with your evaluation ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region. |
cognitive-services | How To Custom Speech Inspect Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-inspect-data.md | spx help csr evaluation ::: zone pivot="rest-api" -To create a test, use the [Evaluations_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Create) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions: +To create a test, use the [Evaluations_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions: - Set the `project` property to the URI of an existing project. This is recommended so that you can also view the test in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects. - Set the required `model1` property to the URI of a model that you want to test. spx help csr evaluation ::: zone pivot="rest-api" -To get test results, start by using the [Evaluations_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Get) operation of the [Speech-to-text REST API](rest-speech-to-text.md). +To get test results, start by using the [Evaluations_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Evaluations_Get) operation of the [Speech to text REST API](rest-speech-to-text.md). Make an HTTP GET request using the URI as shown in the following example. Replace `YourEvaluationId` with your evaluation ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region. |
cognitive-services | How To Custom Speech Model And Endpoint Lifecycle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-model-and-endpoint-lifecycle.md | spx help csr model ::: zone pivot="rest-api" -To get the training and transcription expiration dates for a base model, use the [Models_GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetBaseModel) operation of the [Speech-to-text REST API](rest-speech-to-text.md). You can make a [Models_ListBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListBaseModels) request to get available base models for all locales. +To get the training and transcription expiration dates for a base model, use the [Models_GetBaseModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetBaseModel) operation of the [Speech to text REST API](rest-speech-to-text.md). You can make a [Models_ListBaseModels](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_ListBaseModels) request to get available base models for all locales. Make an HTTP GET request using the model URI as shown in the following example. Replace `BaseModelId` with your model ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region. spx help csr model ::: zone pivot="rest-api" -To get the transcription expiration date for your custom model, use the [Models_GetCustomModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetCustomModel) operation of the [Speech-to-text REST API](rest-speech-to-text.md). +To get the transcription expiration date for your custom model, use the [Models_GetCustomModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_GetCustomModel) operation of the [Speech to text REST API](rest-speech-to-text.md). Make an HTTP GET request using the model URI as shown in the following example. Replace `YourModelId` with your model ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region. |
cognitive-services | How To Custom Speech Test And Train | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md | To help determine which dataset to use to address your problems, refer to the fo You can use audio + human-labeled transcript data for both [training](how-to-custom-speech-train-model.md) and [testing](how-to-custom-speech-evaluate-data.md) purposes. You must provide human-labeled transcriptions (word by word) for comparison: - To improve the acoustic aspects like slight accents, speaking styles, and background noises.-- To measure the accuracy of Microsoft's speech-to-text accuracy when it's processing your audio files. +- To measure the accuracy of Microsoft's speech to text accuracy when it's processing your audio files. For a list of base models that support training with audio data, see [Language support](language-support.md?tabs=stt). Even if a base model does support training with audio data, the service might use only part of the audio. And it will still use all the transcripts. Refer to the following table to ensure that your pronunciation dataset files are ### Audio data for training or testing -Audio data is optimal for testing the accuracy of Microsoft's baseline speech-to-text model or a custom model. Keep in mind that audio data is used to inspect the accuracy of speech with regard to a specific model's performance. If you want to quantify the accuracy of a model, use [audio + human-labeled transcripts](#audio--human-labeled-transcript-data-for-training-or-testing). +Audio data is optimal for testing the accuracy of Microsoft's baseline speech to text model or a custom model. Keep in mind that audio data is used to inspect the accuracy of speech with regard to a specific model's performance. If you want to quantify the accuracy of a model, use [audio + human-labeled transcripts](#audio--human-labeled-transcript-data-for-training-or-testing). > [!NOTE] > Audio only data for training is available in preview for the `en-US` locale. For other locales, to train with audio data you must also provide [human-labeled transcripts](#audio--human-labeled-transcript-data-for-training-or-testing). |
cognitive-services | How To Custom Speech Train Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-train-model.md | Title: Train a Custom Speech model - Speech service -description: Learn how to train Custom Speech models. Training a speech-to-text model can improve recognition accuracy for the Microsoft base model or a custom model. +description: Learn how to train Custom Speech models. Training a speech to text model can improve recognition accuracy for the Microsoft base model or a custom model. spx help csr model ::: zone pivot="rest-api" -To create a model with datasets for training, use the [Models_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Create) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions: +To create a model with datasets for training, use the [Models_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions: - Set the `project` property to the URI of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects. - Set the required `datasets` property to the URI of the datasets that you want used for training. After the model is successfully copied, you'll be notified and can view it in th ::: zone pivot="speech-cli" -Copying a model directly to a project in another region is not supported with the Speech CLI. You can copy a model to a project in another region using the [Speech Studio](https://aka.ms/speechstudio/customspeech) or [Speech-to-text REST API](rest-speech-to-text.md). +Copying a model directly to a project in another region is not supported with the Speech CLI. You can copy a model to a project in another region using the [Speech Studio](https://aka.ms/speechstudio/customspeech) or [Speech to text REST API](rest-speech-to-text.md). ::: zone-end ::: zone pivot="rest-api" -To copy a model to another Speech resource, use the [Models_CopyTo](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_CopyTo) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions: +To copy a model to another Speech resource, use the [Models_CopyTo](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_CopyTo) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions: - Set the required `targetSubscriptionKey` property to the key of the destination Speech resource. spx help csr model ::: zone pivot="rest-api" -To connect a new model to a project of the Speech resource where the model was copied, use the [Models_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Update) operation of the [Speech-to-text REST API](rest-speech-to-text.md). 
Construct the request body according to the following instructions: +To connect a new model to a project of the Speech resource where the model was copied, use the [Models_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Update) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions: - Set the required `project` property to the URI of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects. |
cognitive-services | How To Custom Speech Upload Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-upload-data.md | spx help csr dataset [!INCLUDE [Map CLI and API kind to Speech Studio options](includes/how-to/custom-speech/cli-api-kind.md)] -To create a dataset and connect it to an existing project, use the [Datasets_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Create) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions: +To create a dataset and connect it to an existing project, use the [Datasets_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions: - Set the `project` property to the URI of an existing project. This is recommended so that you can also view and manage the dataset in Speech Studio. You can make a [Projects_List](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Projects_List) request to get available projects. - Set the required `kind` property. The possible set of values for dataset kind are: Language, Acoustic, Pronunciation, and AudioFiles. |
cognitive-services | How To Custom Voice Create Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md | Each paragraph of the utterance results in a separate audio. If you want to comb ### Update engine version for your voice model -Azure Text-to-Speech engines are updated from time to time to capture the latest language model that defines the pronunciation of the language. After you've trained your voice, you can apply your voice to the new language model by updating to the latest engine version. +Azure Text to speech engines are updated from time to time to capture the latest language model that defines the pronunciation of the language. After you've trained your voice, you can apply your voice to the new language model by updating to the latest engine version. When a new engine is available, you're prompted to update your neural voice model. Navigate to the project where you copied the model to [deploy the model copy](ho - [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md) - [How to record voice samples](record-custom-voice-samples.md)-- [Text-to-Speech API reference](rest-text-to-speech.md)+- [Text to speech API reference](rest-text-to-speech.md) |
cognitive-services | How To Custom Voice Prepare Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-prepare-data.md | -When you're ready to create a custom Text-to-Speech voice for your application, the first step is to gather audio recordings and associated scripts to start training the voice model. For details on recording voice samples, see [the tutorial](record-custom-voice-samples.md). The Speech service uses this data to create a unique voice tuned to match the voice in the recordings. After you've trained the voice, you can start synthesizing speech in your applications. +When you're ready to create a custom Text to speech voice for your application, the first step is to gather audio recordings and associated scripts to start training the voice model. For details on recording voice samples, see [the tutorial](record-custom-voice-samples.md). The Speech service uses this data to create a unique voice tuned to match the voice in the recordings. After you've trained the voice, you can start synthesizing speech in your applications. All data you upload must meet the requirements for the data type that you choose. It's important to correctly format your data before it's uploaded, which ensures the data will be accurately processed by the Speech service. To confirm that your data is correctly formatted, see [Training data types](how-to-custom-voice-training-data.md). |
cognitive-services | How To Custom Voice Training Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-training-data.md | -When you're ready to create a custom Text-to-Speech voice for your application, the first step is to gather audio recordings and associated scripts to start training the voice model. The Speech service uses this data to create a unique voice tuned to match the voice in the recordings. After you've trained the voice, you can start synthesizing speech in your applications. +When you're ready to create a custom Text to speech voice for your application, the first step is to gather audio recordings and associated scripts to start training the voice model. The Speech service uses this data to create a unique voice tuned to match the voice in the recordings. After you've trained the voice, you can start synthesizing speech in your applications. > [!TIP] > To create a voice for production use, we recommend you use a professional recording studio and voice talent. For more information, see [record voice samples to create a custom neural voice](record-custom-voice-samples.md). A voice training dataset includes audio recordings, and a text file with the ass In some cases, you may not have the right dataset ready and will want to test the custom neural voice training with available audio files, short or long, with or without transcripts. -This table lists data types and how each is used to create a custom Text-to-Speech voice model. +This table lists data types and how each is used to create a custom Text to speech voice model. | Data type | Description | When to use | Extra processing required | | | -- | -- | | For data format examples, refer to the sample training set on [GitHub](https://g ### Audio data for Individual utterances + matching transcript -Each audio file should contain a single utterance (a single sentence or a single turn of a dialog system), less than 15 seconds long. All files must be in the same spoken language. Multi-language custom Text-to-Speech voices aren't supported, except for the Chinese-English bi-lingual. Each audio file must have a unique filename with the filename extension .wav. +Each audio file should contain a single utterance (a single sentence or a single turn of a dialog system), less than 15 seconds long. All files must be in the same spoken language. Multi-language custom Text to speech voices aren't supported, except for the Chinese-English bi-lingual. Each audio file must have a unique filename with the filename extension .wav. Follow these guidelines when preparing audio. It's important that the transcripts are 100% accurate transcriptions of the co > [!NOTE] > For **Long audio + transcript (Preview)**, only these languages are supported: Chinese (Mandarin, Simplified), English (India), English (United Kingdom), English (United States), French (France), German (Germany), Italian (Italy), Japanese (Japan), Portuguese (Brazil), and Spanish (Mexico). -In some cases, you may not have segmented audio available. The Speech Studio can help you segment long audio files and create transcriptions.
The long-audio segmentation service will use the [Batch Transcription API](batch-transcription.md) feature of speech to text. During the processing of the segmentation, your audio files and the transcripts will also be sent to the Custom Speech service to refine the recognition model so the accuracy can be improved for your data. No data will be retained during this process. After the segmentation is done, only the utterances segmented and their mapping transcripts will be stored for your downloading and training. > [!NOTE]-> This service will be charged toward your speech-to-text subscription usage. The long-audio segmentation service is only supported with standard (S0) Speech resources. +> This service will be charged toward your speech to text subscription usage. The long-audio segmentation service is only supported with standard (S0) Speech resources. ### Audio data for Long audio + transcript After your dataset is successfully uploaded, we'll help you segment the audio fi > [!NOTE] > For **Audio only (Preview)**, only these languages are supported: Chinese (Mandarin, Simplified), English (India), English (United Kingdom), English (United States), French (France), German (Germany), Italian (Italy), Japanese (Japan), Portuguese (Brazil), and Spanish (Mexico). -If you don't have transcriptions for your audio recordings, use the **Audio only** option to upload your data. Our system can help you segment and transcribe your audio files. Keep in mind, this service will be charged toward your speech-to-text subscription usage. +If you don't have transcriptions for your audio recordings, use the **Audio only** option to upload your data. Our system can help you segment and transcribe your audio files. Keep in mind, this service will be charged toward your speech to text subscription usage. Follow these guidelines when preparing audio. > [!NOTE]-> The long-audio segmentation service will leverage the batch transcription feature of speech-to-text, which only supports standard subscription (S0) users. +> The long-audio segmentation service will leverage the batch transcription feature of speech to text, which only supports standard subscription (S0) users. | Property | Value | | -- | -- | |
cognitive-services | How To Deploy And Use Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-deploy-and-use-endpoint.md | zone_pivot_groups: programming-languages-set-nineteen After you've successfully created and [trained](how-to-custom-voice-create-voice.md) your voice model, you deploy it to a custom neural voice endpoint. -Use the Speech Studio to [add a deployment endpoint](#add-a-deployment-endpoint) for your custom neural voice. You can use either the Speech Studio or text-to-speech REST API to [suspend or resume](#suspend-and-resume-an-endpoint) a custom neural voice endpoint. +Use the Speech Studio to [add a deployment endpoint](#add-a-deployment-endpoint) for your custom neural voice. You can use either the Speech Studio or text to speech REST API to [suspend or resume](#suspend-and-resume-an-endpoint) a custom neural voice endpoint. > [!NOTE] > You can create up to 50 endpoints with a standard (S0) Speech resource, each with its own custom neural voice. -To use your custom neural voice, you must specify the voice model name, use the custom URI directly in an HTTP request, and use the same Speech resource to pass through the authentication of the text-to-speech service. +To use your custom neural voice, you must specify the voice model name, use the custom URI directly in an HTTP request, and use the same Speech resource to pass through the authentication of the text to speech service. ## Add a deployment endpoint The application settings that you use as REST API [request parameters](#request- ## Use your custom voice -The custom endpoint is functionally identical to the standard endpoint that's used for text-to-speech requests. +The custom endpoint is functionally identical to the standard endpoint that's used for text to speech requests. -One difference is that the `EndpointId` must be specified to use the custom voice via the Speech SDK. You can start with the [text-to-speech quickstart](get-started-text-to-speech.md) and then update the code with the `EndpointId` and `SpeechSynthesisVoiceName`. +One difference is that the `EndpointId` must be specified to use the custom voice via the Speech SDK. You can start with the [text to speech quickstart](get-started-text-to-speech.md) and then update the code with the `EndpointId` and `SpeechSynthesisVoiceName`. ::: zone pivot="programming-language-csharp" ```csharp The HTTP status code for each response indicates success or common errors. ## Next steps - [How to record voice samples](record-custom-voice-samples.md)-- [Text-to-Speech API reference](rest-text-to-speech.md)+- [Text to speech API reference](rest-text-to-speech.md) - [Batch synthesis](batch-synthesis.md) |
cognitive-services | How To Get Speech Session Id | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-get-speech-session-id.md | Title: How to get Speech-to-text Session ID and Transcription ID + Title: How to get Speech to text Session ID and Transcription ID -description: Learn how to get Speech service Speech-to-text Session ID and Transcription ID +description: Learn how to get Speech service Speech to text Session ID and Transcription ID Last updated 11/29/2022 -# How to get Speech-to-text Session ID and Transcription ID +# How to get Speech to text Session ID and Transcription ID -If you use [Speech-to-text](speech-to-text.md) and need to open a support case, you are often asked to provide a *Session ID* or *Transcription ID* of the problematic transcriptions to debug the issue. This article explains how to get these IDs. +If you use [Speech to text](speech-to-text.md) and need to open a support case, you are often asked to provide a *Session ID* or *Transcription ID* of the problematic transcriptions to debug the issue. This article explains how to get these IDs. > [!NOTE] > * *Session ID* is used in [real-time speech to text](get-started-speech-to-text.md) and [speech translation](speech-translation.md). If you use Speech SDK for JavaScript, get the Session ID as described in [this s If you use [Speech CLI](spx-overview.md), you can also get the Session ID interactively. See details in [this section](#get-session-id-using-speech-cli). -In case of [Speech-to-text REST API for short audio](rest-speech-to-text-short.md) you need to "inject" the session information in the requests. See details in [this section](#provide-session-id-using-rest-api-for-short-audio). +In case of [Speech to text REST API for short audio](rest-speech-to-text-short.md) you need to "inject" the session information in the requests. See details in [this section](#provide-session-id-using-rest-api-for-short-audio). ### Enable logging in the Speech SDK spx help translate log ### Provide Session ID using REST API for short audio -Unlike Speech SDK, [Speech-to-text REST API for short audio](rest-speech-to-text-short.md) does not automatically generate a Session ID. You need to generate it yourself and provide it within the REST request. +Unlike Speech SDK, [Speech to text REST API for short audio](rest-speech-to-text-short.md) does not automatically generate a Session ID. You need to generate it yourself and provide it within the REST request. Generate a GUID inside your code or using any standard tool. Use the GUID value *without dashes or other dividers*. As an example we will use `9f4ffa5113a846eba289aa98b28e766f`. https://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cogn ## Getting Transcription ID for Batch transcription -[Batch transcription API](batch-transcription.md) is a subset of the [Speech-to-text REST API](rest-speech-to-text.md). +[Batch transcription API](batch-transcription.md) is a subset of the [Speech to text REST API](rest-speech-to-text.md). The required Transcription ID is the GUID value contained in the main `self` element of the Response body returned by requests, like [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create). |
cognitive-services | How To Migrate To Custom Neural Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-migrate-to-custom-neural-voice.md | -The custom neural voice lets you build higher-quality voice models while requiring less data. You can develop more realistic, natural, and conversational voices. Your customers and end users will benefit from the latest Text-to-Speech technology, in a responsible way. +The custom neural voice lets you build higher-quality voice models while requiring less data. You can develop more realistic, natural, and conversational voices. Your customers and end users will benefit from the latest Text to speech technology, in a responsible way. |Custom voice |Custom neural voice | |--|--| |
cognitive-services | How To Pronunciation Assessment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-pronunciation-assessment.md | zone_pivot_groups: programming-languages-speech-sdk # Use pronunciation assessment -In this article, you'll learn how to evaluate pronunciation with the Speech-to-Text capability through the Speech SDK. To [get pronunciation assessment results](#get-pronunciation-assessment-results), you'll apply the `PronunciationAssessmentConfig` settings to a `SpeechRecognizer` object. +In this article, you'll learn how to evaluate pronunciation with the Speech to text capability through the Speech SDK. To [get pronunciation assessment results](#get-pronunciation-assessment-results), you'll apply the `PronunciationAssessmentConfig` settings to a `SpeechRecognizer` object. ::: zone pivot="programming-language-go" > [!NOTE] You can get pronunciation assessment scores for: > [!NOTE] > The syllable group, phoneme name, and spoken phoneme of pronunciation assessment are currently only available for the en-US locale. > -> Usage of pronunciation assessment costs the same as standard Speech to Text pay-as-you-go [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services). Pronunciation assessment doesn't yet support commitment tier pricing. +> Usage of pronunciation assessment costs the same as standard Speech to text pay-as-you-go [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services). Pronunciation assessment doesn't yet support commitment tier pricing. > > For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service). |
cognitive-services | How To Speech Synthesis Viseme | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-speech-synthesis-viseme.md | For more information about visemes, view this [introductory video](https://youtu ## Overall workflow of producing viseme with speech -Neural Text-to-Speech (Neural TTS) turns input text or SSML (Speech Synthesis Markup Language) into lifelike synthesized speech. Speech audio output can be accompanied by viseme ID, Scalable Vector Graphics (SVG), or blend shapes. Using a 2D or 3D rendering engine, you can use these viseme events to animate your avatar. +Neural Text to speech (Neural TTS) turns input text or SSML (Speech Synthesis Markup Language) into lifelike synthesized speech. Speech audio output can be accompanied by viseme ID, Scalable Vector Graphics (SVG), or blend shapes. Using a 2D or 3D rendering engine, you can use these viseme events to animate your avatar. The overall workflow of viseme is depicted in the following flowchart: |
cognitive-services | How To Speech Synthesis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-speech-synthesis.md | keywords: text to speech ## Next steps -* [Try the text-to-speech quickstart](get-started-text-to-speech.md) +* [Try the text to speech quickstart](get-started-text-to-speech.md) * [Get started with Custom Neural Voice](how-to-custom-voice.md) * [Improve synthesis with SSML](speech-synthesis-markup.md) |
cognitive-services | How To Track Speech Sdk Memory Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-track-speech-sdk-memory-usage.md | Title: How to track Speech SDK memory usage - Speech service -description: The Speech Service SDK supports numerous programming languages for speech-to-text and text-to-speech conversion, along with speech translation. This article discusses memory management tooling built into the SDK. +description: The Speech Service SDK supports numerous programming languages for speech to text and text to speech conversion, along with speech translation. This article discusses memory management tooling built into the SDK. |
cognitive-services | How To Windows Voice Assistants Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-windows-voice-assistants-get-started.md | For a complete voice assistant experience, the application will need a dialog se These are the requirements to create a basic dialog service using Direct Line Speech. -- **Speech resource:** A resource for Cognitive Speech Services for speech-to-text and text-to-speech conversions. Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource).+- **Speech resource:** A resource for Cognitive Speech Services for speech to text and text to speech conversions. Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource). - **Bot Framework bot:** A bot created using Bot Framework version 4.2 or above that's subscribed to [Direct Line Speech](./direct-line-speech.md) to enable voice input and output. [This guide](./tutorial-voice-enable-your-bot-speech-sdk.md) contains step-by-step instructions to make an "echo bot" and subscribe it to Direct Line Speech. You can also go [here](https://blog.botframework.com/2018/05/07/build-a-microsoft-bot-framework-bot-with-the-bot-builder-sdk-v4/) for steps on how to create a customized bot, then follow the same steps [here](./tutorial-voice-enable-your-bot-speech-sdk.md) to subscribe it to Direct Line Speech, but with your new bot rather than the "echo bot". ## Try out the sample app |
cognitive-services | Improve Accuracy Phrase List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/improve-accuracy-phrase-list.md | Now try Speech Studio to see how phrase list can improve recognition accuracy. > [!NOTE] > You may be prompted to select your Azure subscription and Speech resource, and then acknowledge billing for your region. -1. Go to **Real-time Speech-to-text** in [Speech Studio](https://aka.ms/speechstudio/speechtotexttool). +1. Go to **Real-time Speech to text** in [Speech Studio](https://aka.ms/speechstudio/speechtotexttool). 1. You test speech recognition by uploading an audio file or recording audio with a microphone. For example, select **record audio with a microphone** and then say "Hi Rehaan, this is Jessie from Contoso bank. " Then select the red button to stop recording. 1. You should see the transcription result in the **Test results** text box. If "Rehaan", "Jessie", or "Contoso" were recognized incorrectly, you can add the terms to a phrase list in the next step. 1. Select **Show advanced options** and turn on **Phrase list**. |
cognitive-services | Ingestion Client | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/ingestion-client.md | Internally, the tool uses Speech and Language services, and follows best practic The following Speech service feature is used by the Ingestion Client: -- [Batch speech-to-text](./batch-transcription.md): Transcribe large amounts of audio files asynchronously including speaker diarization and is typically used in post-call analytics scenarios. Diarization is the process of recognizing and separating speakers in mono channel audio data.+- [Batch speech to text](./batch-transcription.md): Transcribe large amounts of audio files asynchronously including speaker diarization and is typically used in post-call analytics scenarios. Diarization is the process of recognizing and separating speakers in mono channel audio data. Here are some Language service features that are used by the Ingestion Client: |
cognitive-services | Intent Recognition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/intent-recognition.md | Conversational language understanding (CLU) enables users to build custom natura Both a Speech resource and Language resource are required to use CLU with the Speech SDK. The Speech resource is used to transcribe the user's speech into text, and the Language resource is used to recognize the intent of the utterance. To get started, see the [quickstart](get-started-intent-recognition-clu.md). > [!IMPORTANT]-> When you use conversational language understanding with the Speech SDK, you are charged both for the Speech-to-text recognition request and the Language service request for CLU. For more information about pricing for conversational language understanding, see [Language service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/). +> When you use conversational language understanding with the Speech SDK, you are charged both for the Speech to text recognition request and the Language service request for CLU. For more information about pricing for conversational language understanding, see [Language service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/). For information about how to use conversational language understanding without the Speech SDK and without speech recognition, see the [Language service documentation](../language-service/conversational-language-understanding/overview.md). |
cognitive-services | Keyword Recognition Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/keyword-recognition-overview.md | With the [Custom Keyword portal on Speech Studio](https://speech.microsoft.com/c ### Pricing -There's no cost to use custom keyword to generate models, including both Basic and Advanced models. There's also no cost to run models on-device with the Speech SDK when used in conjunction with other Speech service features such as speech-to-text. +There's no cost to use custom keyword to generate models, including both Basic and Advanced models. There's also no cost to run models on-device with the Speech SDK when used in conjunction with other Speech service features such as speech to text. ### Types of models Keyword verification is a cloud service that reduces the impact of false accepts ### Pricing -Keyword verification is always used in combination with speech-to-text. There's no cost to use keyword verification beyond the cost of speech-to-text. +Keyword verification is always used in combination with speech to text. There's no cost to use keyword verification beyond the cost of speech to text. -### Keyword verification and speech-to-text +### Keyword verification and speech to text -When keyword verification is used, it's always in combination with speech-to-text. Both services run in parallel, which means audio is sent to both services for simultaneous processing. +When keyword verification is used, it's always in combination with speech to text. Both services run in parallel, which means audio is sent to both services for simultaneous processing. - + -Running keyword verification and speech-to-text in parallel yields the following benefits: +Running keyword verification and speech to text in parallel yields the following benefits: -* **No other latency on speech-to-text results**: Parallel execution means that keyword verification adds no latency. The client receives speech-to-text results as quickly. If keyword verification determines the keyword wasn't present in the audio, speech-to-text processing is terminated. This action protects against unnecessary speech-to-text processing. Network and cloud model processing increases the user-perceived latency of voice activation. For more information, see [Recommendations and guidelines](keyword-recognition-guidelines.md). -* **Forced keyword prefix in speech-to-text results**: Speech-to-text processing ensures that the results sent to the client are prefixed with the keyword. This behavior allows for increased accuracy in the speech-to-text results for speech that follows the keyword. -* **Increased speech-to-text timeout**: Because of the expected presence of the keyword at the beginning of audio, speech-to-text allows for a longer pause of up to five seconds after the keyword before it determines the end of speech and terminates speech-to-text processing. This behavior ensures that the user experience is correctly handled for staged commands (*\<keyword> \<pause> \<command>*) and chained commands (*\<keyword> \<command>*). +* **No other latency on speech to text results**: Parallel execution means that keyword verification adds no latency. The client receives speech to text results as quickly. If keyword verification determines the keyword wasn't present in the audio, speech to text processing is terminated. This action protects against unnecessary speech to text processing. 
Network and cloud model processing increases the user-perceived latency of voice activation. For more information, see [Recommendations and guidelines](keyword-recognition-guidelines.md). +* **Forced keyword prefix in speech to text results**: Speech to text processing ensures that the results sent to the client are prefixed with the keyword. This behavior allows for increased accuracy in the speech to text results for speech that follows the keyword. +* **Increased speech to text timeout**: Because of the expected presence of the keyword at the beginning of audio, speech to text allows for a longer pause of up to five seconds after the keyword before it determines the end of speech and terminates speech to text processing. This behavior ensures that the user experience is correctly handled for staged commands (*\<keyword> \<pause> \<command>*) and chained commands (*\<keyword> \<command>*). ### Keyword verification responses and latency considerations Rejected cases often yield higher latencies as the service processes more audio ### Use keyword verification with on-device models from custom keyword -The Speech SDK enables seamless use of on-device models generated by using custom keyword with keyword verification and speech-to-text. It transparently handles: +The Speech SDK enables seamless use of on-device models generated by using custom keyword with keyword verification and speech to text. It transparently handles: * Audio gating to keyword verification and speech recognition based on the outcome of an on-device model. * Communicating the keyword to keyword verification. The Speech SDK enables easy use of personalized on-device keyword recognition mo | Scenario | Description | Samples | | -- | -- | - |-| End-to-end keyword recognition with speech-to-text | Best suited for products that will use a customized on-device keyword model from custom keyword with Azure Speech keyword verification and speech-to-text. This scenario is the most common. | <ul><li>[Voice assistant sample code](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant)</li><li>[Tutorial: Voice enable your assistant built using Azure Bot Service with the C# Speech SDK](./tutorial-voice-enable-your-bot-speech-sdk.md)</li><li>[Tutorial: Create a custom commands application with simple voice commands](./how-to-develop-custom-commands-application.md)</li></ul> | +| End-to-end keyword recognition with speech to text | Best suited for products that will use a customized on-device keyword model from custom keyword with Azure Speech keyword verification and speech to text. This scenario is the most common. | <ul><li>[Voice assistant sample code](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant)</li><li>[Tutorial: Voice enable your assistant built using Azure Bot Service with the C# Speech SDK](./tutorial-voice-enable-your-bot-speech-sdk.md)</li><li>[Tutorial: Create a custom commands application with simple voice commands](./how-to-develop-custom-commands-application.md)</li></ul> | | Offline keyword recognition | Best suited for products without network connectivity that will use a customized on-device keyword model from custom keyword. | <ul><li>[C# on Windows UWP sample](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/csharp/uwp/keyword-recognizer)</li><li>[Java on Android sample](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/java/android/keyword-recognizer)</li></ul> ## Next steps |
cognitive-services | Language Identification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-identification.md | Language identification is used to identify languages spoken in audio when compa Language identification (LID) use cases include: -* [Speech-to-text recognition](#speech-to-text) when you need to identify the language in an audio source and then transcribe it to text. +* [Speech to text recognition](#speech-to-text) when you need to identify the language in an audio source and then transcribe it to text. * [Speech translation](#speech-translation) when you need to identify the language in an audio source and then translate it to another language. For speech recognition, the initial latency is higher with language identification. You should only include this optional feature as needed. For speech recognition, the initial latency is higher with language identificati `SpeechServiceConnection_SingleLanguageIdPriority` and `SpeechServiceConnection_ContinuousLanguageIdPriority` properties have been removed and replaced by a single property `SpeechServiceConnection_LanguageIdMode`. Prioritizing between low latency and high accuracy is no longer necessary following recent model improvements. Now, you only need to select whether to run at-start or continuous Language Identification when doing continuous speech recognition or translation. -Whether you use language identification with [speech-to-text](#speech-to-text) or with [speech translation](#speech-translation), there are some common concepts and configuration options. +Whether you use language identification with [speech to text](#speech-to-text) or with [speech translation](#speech-translation), there are some common concepts and configuration options. - Define a list of [candidate languages](#candidate-languages) that you expect in the audio. - Decide whether to use [at-start or continuous](#at-start-and-continuous-language-identification) language identification. recognizer.stop_continuous_recognition() ::: zone-end -## Speech-to-text +## Speech to text -You use Speech-to-text recognition when you need to identify the language in an audio source and then transcribe it to text. For more information, see [Speech-to-text overview](speech-to-text.md). +You use Speech to text recognition when you need to identify the language in an audio source and then transcribe it to text. For more information, see [Speech to text overview](speech-to-text.md). > [!NOTE]-> Speech-to-text recognition with at-start language identification is supported with Speech SDKs in C#, C++, Python, Java, JavaScript, and Objective-C. Speech-to-text recognition with continuous language identification is only supported with Speech SDKs in C#, C++, Java, JavaScript, and Python. +> Speech to text recognition with at-start language identification is supported with Speech SDKs in C#, C++, Python, Java, JavaScript, and Objective-C. Speech to text recognition with continuous language identification is only supported with Speech SDKs in C#, C++, Java, JavaScript, and Python. > -> Currently for speech-to-text recognition with continuous language identification, you must create a SpeechConfig from the `wss://{region}.stt.speech.microsoft.com/speech/universal/v2` endpoint string, as shown in code examples. In a future SDK release you won't need to set it. 
+> Currently for speech to text recognition with continuous language identification, you must create a SpeechConfig from the `wss://{region}.stt.speech.microsoft.com/speech/universal/v2` endpoint string, as shown in code examples. In a future SDK release you won't need to set it. ::: zone pivot="programming-language-csharp" -See more examples of speech-to-text recognition with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_with_language_id_samples.cs). +See more examples of speech to text recognition with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_with_language_id_samples.cs). ### [Recognize once](#tab/once) using (var audioInput = AudioConfig.FromWavFileInput(@"en-us_zh-cn.wav")) ::: zone pivot="programming-language-cpp" -See more examples of speech-to-text recognition with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp). +See more examples of speech to text recognition with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp). ### [Recognize once](#tab/once) auto detectedLanguage = autoDetectSourceLanguageResult->Language; ::: zone pivot="programming-language-java" -See more examples of speech-to-text recognition with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/java/jre/console/src/com/microsoft/cognitiveservices/speech/samples/console/SpeechRecognitionSamples.java). +See more examples of speech to text recognition with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/java/jre/console/src/com/microsoft/cognitiveservices/speech/samples/console/SpeechRecognitionSamples.java). ### [Recognize once](#tab/once) result.close(); ::: zone pivot="programming-language-python" -See more examples of speech-to-text recognition with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py). +See more examples of speech to text recognition with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py). ### [Recognize once](#tab/once) speechRecognizer.recognizeOnceAsync((result: SpeechSDK.SpeechRecognitionResult) ::: zone-end -### Speech-to-text custom models +### Speech to text custom models > [!NOTE] > Language detection with custom models can only be used with real-time speech to text and speech translation. Batch transcription only supports language detection for base models. When you run language ID in a container, use the `SourceLanguageRecognizer` obje For more information about containers, see the [language identification speech containers](speech-container-lid.md#use-the-container) how-to guide. 
-## Speech-to-text batch transcription +## Speech to text batch transcription To identify languages with [Batch transcription REST API](batch-transcription.md), you need to use `languageIdentification` property in the body of your [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) request. |
cognitive-services | Language Learning Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-learning-overview.md | The Pronunciation Assessment feature offers several benefits for educators, serv - For service providers, it offers high real-time capabilities, worldwide Speech cognitive service and supports growing global business. - For students and learners, it provides a convenient way to practice and receive feedback, authoritative scoring to compare with native pronunciation and helps to follow the exact text order for long sentences or full documents. -## Speech-to-Text +## Speech to text -Azure [Speech-to-Text](speech-to-text.md) supports real-time language identification for multilingual language learning scenarios, help human-human interaction with better understanding and readable context. +Azure [Speech to text](speech-to-text.md) supports real-time language identification for multilingual language learning scenarios, help human-human interaction with better understanding and readable context. -## Text-to-Speech +## Text to speech -[Text-to-Speech](text-to-speech.md) prebuilt neural voices can read out learning materials natively and empower self-served learning. A broad portfolio of [languages and voices](language-support.md?tabs=tts) are supported for AI teacher, content read aloud capabilities, and more. Microsoft is continuously working on bringing new languages to the world. +[Text to speech](text-to-speech.md) prebuilt neural voices can read out learning materials natively and empower self-served learning. A broad portfolio of [languages and voices](language-support.md?tabs=tts) are supported for AI teacher, content read aloud capabilities, and more. Microsoft is continuously working on bringing new languages to the world. [Custom Neural Voice](custom-neural-voice.md) is available for you to create a customized synthetic voice for your applications. Education companies are using this technology to personalize language learning, by creating unique characters with distinct voices that match the culture and background of their target audience. Azure [Speech-to-Text](speech-to-text.md) supports real-time language identifica ## Next steps * [How to use pronunciation assessment](how-to-pronunciation-assessment.md)-* [What is Speech-to-Text](speech-to-text.md) -* [What is Text-to-Speech](text-to-speech.md) +* [What is Speech to text](speech-to-text.md) +* [What is Text to speech](text-to-speech.md) * [What is Custom Neural Voice](custom-neural-voice.md) |
cognitive-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md | Title: Language support - Speech service -description: The Speech service supports numerous languages for speech-to-text and text-to-speech conversion, along with speech translation. This article provides a comprehensive list of language support by service feature. +description: The Speech service supports numerous languages for speech to text and text to speech conversion, along with speech translation. This article provides a comprehensive list of language support by service feature. -The following tables summarize language support for [speech-to-text](speech-to-text.md), [text-to-speech](text-to-speech.md), [pronunciation assessment](how-to-pronunciation-assessment.md), [speech translation](speech-translation.md), [speaker recognition](speaker-recognition-overview.md), and additional service features. +The following tables summarize language support for [speech to text](speech-to-text.md), [text to speech](text-to-speech.md), [pronunciation assessment](how-to-pronunciation-assessment.md), [speech translation](speech-translation.md), [speaker recognition](speaker-recognition-overview.md), and additional service features. -You can also get a list of locales and voices supported for each specific region or endpoint through the [Speech SDK](speech-sdk.md), [Speech-to-text REST API](rest-speech-to-text.md), [Speech-to-text REST API for short audio](rest-speech-to-text-short.md) and [Text-to-speech REST API](rest-text-to-speech.md#get-a-list-of-voices). +You can also get a list of locales and voices supported for each specific region or endpoint through the [Speech SDK](speech-sdk.md), [Speech to text REST API](rest-speech-to-text.md), [Speech to text REST API for short audio](rest-speech-to-text-short.md) and [Text to speech REST API](rest-text-to-speech.md#get-a-list-of-voices). ## Supported languages Language support varies by Speech service functionality. **Choose a Speech feature** -# [Speech-to-text](#tab/stt) +# [Speech to text](#tab/stt) -The table in this section summarizes the locales and voices supported for Speech-to-text. Please see the table footnotes for more details. +The table in this section summarizes the locales and voices supported for Speech to text. Please see the table footnotes for more details. -Additional remarks for Speech-to-text locales are included in the [Custom Speech](#custom-speech) section below. +Additional remarks for Speech to text locales are included in the [Custom Speech](#custom-speech) section below. > [!TIP]-> Try out the [Real-time Speech-to-text tool](https://speech.microsoft.com/portal/speechtotexttool) without having to use any code. +> Try out the [Real-time Speech to text tool](https://speech.microsoft.com/portal/speechtotexttool) without having to use any code. [!INCLUDE [Language support include](includes/language-support/stt.md)] ### Custom Speech -To improve Speech-to-text recognition accuracy, customization is available for some languages and base models. Depending on the locale, you can upload audio + human-labeled transcripts, plain text, structured text, and pronunciation data. By default, plain text customization is supported for all available base models. To learn more about customization, see [Custom Speech](./custom-speech-overview.md). +To improve Speech to text recognition accuracy, customization is available for some languages and base models. 
Depending on the locale, you can upload audio + human-labeled transcripts, plain text, structured text, and pronunciation data. By default, plain text customization is supported for all available base models. To learn more about customization, see [Custom Speech](./custom-speech-overview.md). -# [Text-to-speech](#tab/tts) +# [Text to speech](#tab/tts) -The tables in this section summarizes the locales and voices supported for Text-to-speech. Please see the table footnotes for more details. +The tables in this section summarizes the locales and voices supported for Text to speech. Please see the table footnotes for more details. -Additional remarks for Text-to-speech locales are included in the [Voice styles and roles](#voice-styles-and-roles), [Prebuilt neural voices](#prebuilt-neural-voices), and [Custom Neural Voice](#custom-neural-voice) sections below. +Additional remarks for Text to speech locales are included in the [Voice styles and roles](#voice-styles-and-roles), [Prebuilt neural voices](#prebuilt-neural-voices), and [Custom Neural Voice](#custom-neural-voice) sections below. > [!TIP] > Check the the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery) and determine the right voice for your business needs. With the cross-lingual feature (preview), you can transfer your custom neural vo # [Pronunciation assessment](#tab/pronunciation-assessment) -The table in this section summarizes the locales supported for Pronunciation assessment, and each language is available on all [Speech-to-text regions](regions.md#speech-service). +The table in this section summarizes the locales supported for Pronunciation assessment, and each language is available on all [Speech to text regions](regions.md#speech-service). [!INCLUDE [Language support include](includes/language-support/pronunciation-assessment.md)] # [Speech translation](#tab/speech-translation) -The table in this section summarizes the locales supported for Speech translation. Speech translation supports different languages for speech-to-speech and speech-to-text translation. The available target languages depend on whether the translation target is speech or text. +The table in this section summarizes the locales supported for Speech translation. Speech translation supports different languages for speech-to-speech and speech to text translation. The available target languages depend on whether the translation target is speech or text. #### Translate from language -To set the input speech recognition language, specify the full locale with a dash (`-`) separator. See the [speech-to-text language table](?tabs=stt#supported-languages). The default language is `en-US` if you don't specify a language. +To set the input speech recognition language, specify the full locale with a dash (`-`) separator. See the [speech to text language table](?tabs=stt#supported-languages). The default language is `en-US` if you don't specify a language. #### Translate to text language |
cognitive-services | Logging Audio Transcription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/logging-audio-transcription.md | Title: How to log audio and transcriptions for speech recognition -description: Learn how to use audio and transcription logging for speech-to-text and speech translation. +description: Learn how to use audio and transcription logging for speech to text and speech translation. zone_pivot_groups: programming-languages-speech-services-nomore-variant # How to log audio and transcriptions for speech recognition -You can enable logging for both audio input and recognized speech when using [speech-to-text](get-started-speech-to-text.md) or [speech translation](get-started-speech-to-text.md). For speech translation, only the audio and transcription of the original audio are logged. The translations aren't logged. This article describes how to enable, access and delete the audio and transcription logs. +You can enable logging for both audio input and recognized speech when using [speech to text](get-started-speech-to-text.md) or [speech translation](get-started-speech-to-text.md). For speech translation, only the audio and transcription of the original audio are logged. The translations aren't logged. This article describes how to enable, access and delete the audio and transcription logs. Audio and transcription logs can be used as input for [Custom Speech](custom-speech-overview.md) model training. You might have other use cases. You can enable logging for a single recognition session, whether using the defau > [!WARNING] > For custom model endpoints, the logging setting of your deployed endpoint is prioritized over your session-level setting (SDK or REST API). If logging is enabled for the custom model endpoint, the session-level setting (whether it's set to true or false) is ignored. If logging isn't enabled for the custom model endpoint, the session-level setting determines whether logging is active. -#### Enable logging for speech-to-text with the Speech SDK +#### Enable logging for speech to text with the Speech SDK ::: zone pivot="programming-language-csharp" Each [TranslationRecognizer](/objectivec/cognitive-services/speech/spxtranslatio ::: zone-end -#### Enable logging for speech-to-text REST API for short audio +#### Enable logging for Speech to text REST API for short audio -If you use [Speech-to-text REST API for short audio](rest-speech-to-text-short.md) and want to enable audio and transcription logging, you need to use the query parameter and value `storeAudio=true` as a part of your REST request. A sample request looks like this: +If you use [Speech to text REST API for short audio](rest-speech-to-text-short.md) and want to enable audio and transcription logging, you need to use the query parameter and value `storeAudio=true` as a part of your REST request. A sample request looks like this: ```http https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US&storeAudio=true Logging can be enabled or disabled in the persistent custom model endpoint setti You can enable audio and transcription logging for a custom model endpoint: - When you create the endpoint using the Speech Studio, REST API, or Speech CLI. 
For details about how to enable logging for a Custom Speech endpoint, see [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md#add-a-deployment-endpoint).-- When you update the endpoint ([Endpoints_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Update)) using the [Speech-to-text REST API](rest-speech-to-text.md). For an example of how to update the logging setting for an endpoint, see [Turn off logging for a custom model endpoint](#turn-off-logging-for-a-custom-model-endpoint). But instead of setting the `contentLoggingEnabled` property to `false`, set it to `true` to enable logging for the endpoint.+- When you update the endpoint ([Endpoints_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Update)) using the [Speech to text REST API](rest-speech-to-text.md). For an example of how to update the logging setting for an endpoint, see [Turn off logging for a custom model endpoint](#turn-off-logging-for-a-custom-model-endpoint). But instead of setting the `contentLoggingEnabled` property to `false`, set it to `true` to enable logging for the endpoint. ## Turn off logging for a custom model endpoint -To disable audio and transcription logging for a custom model endpoint, you must update the persistent endpoint logging setting using the [Speech-to-text REST API](rest-speech-to-text.md). There isn't a way to disable logging for an existing custom model endpoint using the Speech Studio. +To disable audio and transcription logging for a custom model endpoint, you must update the persistent endpoint logging setting using the [Speech to text REST API](rest-speech-to-text.md). There isn't a way to disable logging for an existing custom model endpoint using the Speech Studio. -To turn off logging for a custom endpoint, use the [Endpoints_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Update) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions: +To turn off logging for a custom endpoint, use the [Endpoints_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Update) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions: - Set the `contentLoggingEnabled` property within `properties`. Set this property to `true` to enable logging of the endpoint's traffic. Set this property to `false` to disable logging of the endpoint's traffic. The response body should reflect the new setting. The name of the logging proper ## Get audio and transcription logs -You can access audio and transcription logs using [Speech-to-text REST API](#get-audio-and-transcription-logs-with-speech-to-text-rest-api). For [custom model](how-to-custom-speech-deploy-model.md) endpoints, you can also use [Speech Studio](#get-audio-and-transcription-logs-with-speech-studio). See details in the following sections. +You can access audio and transcription logs using [Speech to text REST API](#get-audio-and-transcription-logs-with-speech-to-text-rest-api). For [custom model](how-to-custom-speech-deploy-model.md) endpoints, you can also use [Speech Studio](#get-audio-and-transcription-logs-with-speech-studio). See details in the following sections. > [!NOTE] > Logging data is kept for 30 days. 
After this period the logs are automatically deleted. However you can [delete](#delete-audio-and-transcription-logs) specific logs or a range of available logs at any time. To download the endpoint logs: With this approach, you can download all available log sets at once. There's no way to download selected log sets in Speech Studio. -### Get audio and transcription logs with Speech-to-text REST API +### Get audio and transcription logs with Speech to text REST API You can download all or a subset of available log sets. This method is applicable for base and [custom model](how-to-custom-speech-deploy-model.md) endpoints. To list and download audio and transcription logs:-- Base models: Use the [Endpoints_ListBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListBaseModelLogs) operation of the [Speech-to-text REST API](rest-speech-to-text.md). This operation gets the list of audio and transcription logs that have been stored when using the default base model of a given language.-- Custom model endpoints: Use the [Endpoints_ListLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListLogs) operation of the [Speech-to-text REST API](rest-speech-to-text.md). This operation gets the list of audio and transcription logs that have been stored for a given endpoint.+- Base models: Use the [Endpoints_ListBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListBaseModelLogs) operation of the [Speech to text REST API](rest-speech-to-text.md). This operation gets the list of audio and transcription logs that have been stored when using the default base model of a given language. +- Custom model endpoints: Use the [Endpoints_ListLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListLogs) operation of the [Speech to text REST API](rest-speech-to-text.md). This operation gets the list of audio and transcription logs that have been stored for a given endpoint. -### Get log IDs with Speech-to-text REST API +### Get log IDs with Speech to text REST API In some scenarios, you may need to get IDs of the available logs. For example, you may want to delete a specific log as described [later in this article](#delete-specific-log). To get IDs of the available logs:-- Base models: Use the [Endpoints_ListBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListBaseModelLogs) operation of the [Speech-to-text REST API](rest-speech-to-text.md). This operation gets the list of audio and transcription logs that have been stored when using the default base model of a given language.-- Custom model endpoints: Use the [Endpoints_ListLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListLogs) operation of the [Speech-to-text REST API](rest-speech-to-text.md). This operation gets the list of audio and transcription logs that have been stored for a given endpoint.+- Base models: Use the [Endpoints_ListBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListBaseModelLogs) operation of the [Speech to text REST API](rest-speech-to-text.md). This operation gets the list of audio and transcription logs that have been stored when using the default base model of a given language. 
+- Custom model endpoints: Use the [Endpoints_ListLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListLogs) operation of the [Speech to text REST API](rest-speech-to-text.md). This operation gets the list of audio and transcription logs that have been stored for a given endpoint. Here's a sample output of [Endpoints_ListLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListLogs). For simplicity, only one log set is shown: Logging data is kept for 30 days. After this period, the logs are automatically For any base or [custom model](how-to-custom-speech-deploy-model.md) endpoint you can delete all available logs, logs for a given time frame, or a particular log based on its Log ID. The deletion process is done asynchronously and can take minutes, hours, one day, or longer depending on the number of log files. -To delete audio and transcription logs you must use the [Speech-to-text REST API](rest-speech-to-text.md). There isn't a way to delete logs using the Speech Studio. +To delete audio and transcription logs you must use the [Speech to text REST API](rest-speech-to-text.md). There isn't a way to delete logs using the Speech Studio. ### Delete all logs or logs for a given time frame To delete all logs or logs for a given time frame: -- Base models: Use the [Endpoints_DeleteBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteBaseModelLogs) operation of the [Speech-to-text REST API](rest-speech-to-text.md). -- Custom model endpoints: Use the [Endpoints_DeleteLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteLogs) operation of the [Speech-to-text REST API](rest-speech-to-text.md).+- Base models: Use the [Endpoints_DeleteBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteBaseModelLogs) operation of the [Speech to text REST API](rest-speech-to-text.md). +- Custom model endpoints: Use the [Endpoints_DeleteLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteLogs) operation of the [Speech to text REST API](rest-speech-to-text.md). Optionally, set the `endDate` of the audio logs deletion (specific day, UTC). Expected format: "yyyy-mm-dd". For instance, "2023-03-15" results in deleting all logs on March 15, 2023 and before. Optionally, set the `endDate` of the audio logs deletion (specific day, UTC). Ex To delete a specific log by ID: -- Base models: Use the [Endpoints_DeleteBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteBaseModelLog) operation of the [Speech-to-text REST API](rest-speech-to-text.md).-- Custom model endpoints: Use the [Endpoints_DeleteLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteLog) operation of the [Speech-to-text REST API](rest-speech-to-text.md).+- Base models: Use the [Endpoints_DeleteBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteBaseModelLog) operation of the [Speech to text REST API](rest-speech-to-text.md). 
+- Custom model endpoints: Use the [Endpoints_DeleteLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteLog) operation of the [Speech to text REST API](rest-speech-to-text.md). -For details about how to get Log IDs, see a previous section [Get log IDs with Speech-to-text REST API](#get-log-ids-with-speech-to-text-rest-api). +For details about how to get Log IDs, see a previous section [Get log IDs with Speech to text REST API](#get-log-ids-with-speech-to-text-rest-api). Since audio and transcription logs have separate IDs (such as IDs `2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_v2_json` and `2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_wav` from a [previous example in this article](#get-log-ids-with-speech-to-text-rest-api)), when you want to delete both audio and transcription logs you execute separate [delete by ID](#delete-specific-log) requests. ## Next steps -* [Speech-to-text quickstart](get-started-speech-to-text.md) +* [Speech to text quickstart](get-started-speech-to-text.md) * [Speech translation quickstart](./get-started-speech-translation.md) * [Create and train custom speech models](custom-speech-overview.md) |
cognitive-services | Migrate To Batch Synthesis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/migrate-to-batch-synthesis.md | The Long Audio API is limited to the following regions: ## Voices list -Batch synthesis API supports all [text-to-speech voices and styles](language-support.md?tabs=tts). +Batch synthesis API supports all [text to speech voices and styles](language-support.md?tabs=tts). The Long Audio API is limited to the set of voices returned by a GET request to `https://<endpoint>/api/texttospeech/v3.0/longaudiosynthesis/voices`. With Batch synthesis API, you can use any of the [supported SSML elements](speec ## Audio output formats -Batch synthesis API supports all [text-to-speech audio output formats](rest-text-to-speech.md#audio-outputs). +Batch synthesis API supports all [text to speech audio output formats](rest-text-to-speech.md#audio-outputs). The Long Audio API is limited to the following set of audio output formats. The sample rate for long audio voices is 24kHz, not 48kHz. Other sample rates can be obtained through upsampling or downsampling when synthesizing. The Long Audio API is limited to 20,000 requests for each Azure subscription acc - [Batch synthesis API](batch-synthesis.md) - [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md)-- [Text-to-speech quickstart](get-started-text-to-speech.md)+- [Text to speech quickstart](get-started-text-to-speech.md) |
cognitive-services | Migrate V2 To V3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/migrate-v2-to-v3.md | -Compared to v2, the v3 version of the Speech services REST API for speech-to-text is more reliable, easier to use, and more consistent with APIs for similar services. Most teams can migrate from v2 to v3 in a day or two. +Compared to v2, the v3 version of the Speech services REST API for speech to text is more reliable, easier to use, and more consistent with APIs for similar services. Most teams can migrate from v2 to v3 in a day or two. > [!IMPORTANT]-> The Speech-to-text REST API v2.0 is deprecated and will be retired by February 29, 2024. Please migrate your applications to the Speech-to-text REST API v3.1. Complete the steps in this article and then see the [Migrate code from v3.0 to v3.1 of the REST API](migrate-v3-0-to-v3-1.md) guide for additional requirements. +> The Speech to text REST API v2.0 is deprecated and will be retired by February 29, 2024. Please migrate your applications to the Speech to text REST API v3.1. Complete the steps in this article and then see the [Migrate code from v3.0 to v3.1 of the REST API](migrate-v3-0-to-v3-1.md) guide for additional requirements. ## Forward compatibility General changes: ### Host name changes -Endpoint host names have changed from `{region}.cris.ai` to `{region}.api.cognitive.microsoft.com`. Paths to the new endpoints no longer contain `api/` because it's part of the hostname. The [Speech-to-text REST API v3.0](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) reference documentation lists valid regions and paths. +Endpoint host names have changed from `{region}.cris.ai` to `{region}.api.cognitive.microsoft.com`. Paths to the new endpoints no longer contain `api/` because it's part of the hostname. The [Speech to text REST API v3.0](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) reference documentation lists valid regions and paths. >[!IMPORTANT] >Change the hostname from `{region}.cris.ai` to `{region}.api.cognitive.microsoft.com` where region is the region of your speech subscription. Also remove `api/` from any path in your client code. Accuracy tests have been renamed to evaluations because the new name describes b ## Next steps -* [Speech-to-text REST API](rest-speech-to-text.md) -* [Speech-to-text REST API v3.0 reference](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) +* [Speech to text REST API](rest-speech-to-text.md) +* [Speech to text REST API v3.0 reference](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) |
cognitive-services | Migrate V3 0 To V3 1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/migrate-v3-0-to-v3-1.md | -The Speech-to-text REST API is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md). Changes from version 3.0 to 3.1 are described in the sections below. +The Speech to text REST API is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md). Changes from version 3.0 to 3.1 are described in the sections below. > [!IMPORTANT]-> Speech-to-text REST API v3.1 is generally available. Version 3.0 of the [Speech to Text REST API](rest-speech-to-text.md) will be retired. +> Speech to text REST API v3.1 is generally available. Version 3.0 of the [Speech to text REST API](rest-speech-to-text.md) will be retired. ## Base path For more details, see [Operation IDs](#operation-ids) later in this guide. ## Batch transcription > [!NOTE]-> Don't use Speech-to-text REST API v3.0 to retrieve a transcription created via Speech-to-text REST API v3.1. You'll see an error message such as the following: "The API version cannot be used to access this transcription. Please use API version v3.1 or higher." +> Don't use Speech to text REST API v3.0 to retrieve a transcription created via Speech to text REST API v3.1. You'll see an error message such as the following: "The API version cannot be used to access this transcription. Please use API version v3.1 or higher." In the [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation the following three properties are added: - The `displayFormWordLevelTimestampsEnabled` property can be used to enable the reporting of word-level timestamps on the display form of the transcription results. The results are returned in the `displayPhraseElements` property of the transcription file. The name of each `operationId` in version 3.1 is prefixed with the object name. ## Next steps -* [Speech-to-text REST API](rest-speech-to-text.md) -* [Speech-to-text REST API v3.1 reference](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1) -* [Speech-to-text REST API v3.0 reference](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) +* [Speech to text REST API](rest-speech-to-text.md) +* [Speech to text REST API v3.1 reference](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1) +* [Speech to text REST API v3.0 reference](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) |
cognitive-services | Migration Overview Neural Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/migration-overview-neural-voice.md | -We're retiring two features from [Text-to-Speech](index-text-to-speech.yml) capabilities as detailed below. +We're retiring two features from [Text to speech](index-text-to-speech.yml) capabilities as detailed below. ## Custom voice (non-neural training) |
cognitive-services | Multi Device Conversation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/multi-device-conversation.md | A participant is a user who joins a conversation. When creating or joining a conversation, each user must choose a primary language: the language that they will speak and send instant messages in, and also the language in which they will see other users' messages. -There are two kinds of languages: speech-to-text and text-only: -- If the user chooses a speech-to-text language as their primary language, then they will be able to use both speech and text input in the conversation.+There are two kinds of languages: speech to text and text-only: +- If the user chooses a speech to text language as their primary language, then they will be able to use both speech and text input in the conversation. -- If the user chooses a text-only language, then they will only be able to use text input and send instant messages in the conversation. Text-only languages are the languages that are supported for text translation, but not speech-to-text. You can see available languages on the [language support](./language-support.md) page.+- If the user chooses a text-only language, then they will only be able to use text input and send instant messages in the conversation. Text-only languages are the languages that are supported for text translation, but not speech to text. You can see available languages on the [language support](./language-support.md) page. Apart from their primary language, each participant can also specify additional languages for translating the conversation. Below is a summary of what the user will be able to do in a multi-device conversation, based on their chosen primary language. -| What the user can do in the conversation | Speech-to-text | Text-only | +| What the user can do in the conversation | Speech to text | Text-only | |--|-|| | Use speech input | ✔️ | ❌ | | Send instant messages | ✔️ | ✔️ | | Translate the conversation | ✔️ | ✔️ | > [!NOTE]-> For lists of available speech-to-text and text translation languages, see [supported languages](./language-support.md). +> For lists of available speech to text and text translation languages, see [supported languages](./language-support.md). |
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/overview.md | Title: What is the Speech service? -description: The Speech service provides speech-to-text, text-to-speech, and speech translation capabilities with an Azure resource. Add speech to your applications, tools, and devices with the Speech SDK, Speech Studio, or REST APIs. +description: The Speech service provides speech to text, text to speech, and speech translation capabilities with an Azure resource. Add speech to your applications, tools, and devices with the Speech SDK, Speech Studio, or REST APIs. -The Speech service provides speech-to-text and text-to-speech capabilities with an [Azure Speech resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource). You can transcribe speech to text with high accuracy, produce natural-sounding text-to-speech voices, translate spoken audio, and use speaker recognition during conversations. +The Speech service provides speech to text and text to speech capabilities with an [Azure Speech resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource). You can transcribe speech to text with high accuracy, produce natural-sounding text to speech voices, translate spoken audio, and use speaker recognition during conversations. :::image type="content" border="false" source="media/overview/speech-features-highlight.png" alt-text="Image of tiles that highlight some Speech service features."::: Microsoft uses Speech for many scenarios, such as captioning in Teams, dictation Speech feature summaries are provided below with links for more information. -### Speech-to-text +### Speech to text -Use [speech-to-text](speech-to-text.md) to transcribe audio into text, either in [real-time](#real-time-speech-to-text) or asynchronously with [batch transcription](#batch-transcription). +Use [speech to text](speech-to-text.md) to transcribe audio into text, either in [real-time](#real-time-speech-to-text) or asynchronously with [batch transcription](#batch-transcription). > [!TIP]-> You can try real-time speech-to-text in [Speech Studio](https://aka.ms/speechstudio/speechtotexttool) without signing up or writing any code. +> You can try real-time speech to text in [Speech Studio](https://aka.ms/speechstudio/speechtotexttool) without signing up or writing any code. Convert audio to text from a range of sources, including microphones, audio files, and blob storage. Use speaker diarization to determine who said what and when. Get readable transcripts with automatic formatting and punctuation. The base model may not be sufficient if the audio contains ambient noise or includes a lot of industry and domain-specific jargon. In these cases, you can create and train [custom speech models](custom-speech-overview.md) with acoustic, language, and pronunciation data. Custom speech models are private and can offer a competitive advantage. -### Real-time speech-to-text +### Real-time speech to text -With [real-time speech-to-text](get-started-speech-to-text.md), the audio is transcribed as speech is recognized from a microphone or file. Use real-time speech-to-text for applications that need to transcribe audio in real-time such as: +With [real-time speech to text](get-started-speech-to-text.md), the audio is transcribed as speech is recognized from a microphone or file. 
Use real-time speech to text for applications that need to transcribe audio in real-time such as: - Transcriptions, captions, or subtitles for live meetings - Contact center agent assist - Dictation With [real-time speech-to-text](get-started-speech-to-text.md), the audio is tra - Contact center post-call analytics - Diarization -### Text-to-speech +### Text to speech With [text to speech](text-to-speech.md), you can convert input text into humanlike synthesized speech. Use neural voices, which are humanlike voices powered by deep neural networks. Use the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) to fine-tune the pitch, pronunciation, speaking rate, volume, and more. With [text to speech](text-to-speech.md), you can convert input text into humanl ### Speech translation -[Speech translation](speech-translation.md) enables real-time, multilingual translation of speech to your applications, tools, and devices. Use this feature for speech-to-speech and speech-to-text translation. +[Speech translation](speech-translation.md) enables real-time, multilingual translation of speech to your applications, tools, and devices. Use this feature for speech-to-speech and speech to text translation. ### Language identification -[Language identification](language-identification.md) is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md). Use language identification by itself, with speech-to-text recognition, or with speech translation. +[Language identification](language-identification.md) is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md). Use language identification by itself, with speech to text recognition, or with speech translation. ### Speaker recognition [Speaker recognition](speaker-recognition-overview.md) provides algorithms that verify and identify speakers by their unique voice characteristics. Speaker recognition is used to answer the question, "Who is speaking?". With [text to speech](text-to-speech.md), you can convert input text into humanl ### Intent recognition -[Intent recognition](./intent-recognition.md): Use speech-to-text with conversational language understanding to derive user intents from transcribed speech and act on voice commands. +[Intent recognition](./intent-recognition.md): Use speech to text with conversational language understanding to derive user intents from transcribed speech and act on voice commands. ## Delivery and presence In some cases, you can't or shouldn't use the [Speech SDK](speech-sdk.md). In th We offer quickstarts in many popular programming languages. Each quickstart is designed to teach you basic design patterns and have you running code in less than 10 minutes. See the following list for the quickstart for each feature: -* [Speech-to-text quickstart](get-started-speech-to-text.md) -* [Text-to-speech quickstart](get-started-text-to-speech.md) +* [Speech to text quickstart](get-started-speech-to-text.md) +* [Text to speech quickstart](get-started-text-to-speech.md) * [Speech translation quickstart](./get-started-speech-translation.md) ## Code samples Sample code for the Speech service is available on GitHub. These samples cover common scenarios like reading audio from a file or stream, continuous and single-shot recognition, and working with custom models. 
Use these links to view SDK and REST samples: -- [Speech-to-text, text-to-speech, and speech translation samples (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk)+- [Speech to text, text to speech, and speech translation samples (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk) - [Batch transcription samples (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch)-- [Text-to-speech samples (REST)](https://github.com/Azure-Samples/Cognitive-Speech-TTS)+- [Text to speech samples (REST)](https://github.com/Azure-Samples/Cognitive-Speech-TTS) - [Voice assistant samples (SDK)](https://aka.ms/csspeech/samples) ## Next steps -* [Get started with speech-to-text](get-started-speech-to-text.md) -* [Get started with text-to-speech](get-started-text-to-speech.md) +* [Get started with speech to text](get-started-speech-to-text.md) +* [Get started with text to speech](get-started-text-to-speech.md) |
cognitive-services | Power Automate Batch Transcription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/power-automate-batch-transcription.md | Last updated 03/09/2023 # Power automate batch transcription -This article describes how to use [Power Automate](/power-automate/getting-started) and the [Azure Cognitive Services for Batch Speech-to-text connector](/connectors/cognitiveservicesspe/) to transcribe audio files from an Azure Storage container. The connector uses the [Batch Transcription REST API](batch-transcription.md), but you don't need to write any code to use it. If the connector doesn't meet your requirements, you can still use the [REST API](rest-speech-to-text.md#transcriptions) directly. +This article describes how to use [Power Automate](/power-automate/getting-started) and the [Azure Cognitive Services for Batch Speech to text connector](/connectors/cognitiveservicesspe/) to transcribe audio files from an Azure Storage container. The connector uses the [Batch Transcription REST API](batch-transcription.md), but you don't need to write any code to use it. If the connector doesn't meet your requirements, you can still use the [REST API](rest-speech-to-text.md#transcriptions) directly. -In addition to [Power Automate](/power-automate/getting-started), you can use the [Azure Cognitive Services for Batch Speech-to-text connector](/connectors/cognitiveservicesspe/) with [Power Apps](/power-apps) and [Logic Apps](../../logic-apps/index.yml). +In addition to [Power Automate](/power-automate/getting-started), you can use the [Azure Cognitive Services for Batch Speech to text connector](/connectors/cognitiveservicesspe/) with [Power Apps](/power-apps) and [Logic Apps](../../logic-apps/index.yml). > [!TIP] > Try more Speech features in [Speech Studio](https://aka.ms/speechstudio/speechtotexttool) without signing up or writing any code. By now, you should have a flow that looks like this: ### Create transcription -1. Select **+ New step** to begin adding a new operation for the [Azure Cognitive Services for Batch Speech-to-text connector](/connectors/cognitiveservicesspe/). -1. Enter "batch speech-to-text" in the search connectors and actions box to narrow the results. -1. Select the **Azure Cognitive Services for Batch Speech-to-text** connector. +1. Select **+ New step** to begin adding a new operation for the [Azure Cognitive Services for Batch Speech to text connector](/connectors/cognitiveservicesspe/). +1. Enter "batch speech to text" in the search connectors and actions box to narrow the results. +1. Select the **Azure Cognitive Services for Batch Speech to text** connector. 1. Select the **Create transcription** action. 1. Create a new connection to the Speech resource that you [created previously](#prerequisites). The connection will be available throughout the Power Automate environment. For more information, see [Manage connections in Power Automate](/power-automate/add-manage-connections). 1. Enter a name for the connection such as "speech-resource-key". You can choose any name that you like. By now, you should have a flow that looks like this: 1. Select `Web Url` as dynamic content for the **contentUrls Item - 1** field. This is the SAS URI output from the [Create SAS URI by path](#create-sas-uri-by-path) action. > [!TIP]- > For more information about create transcription parameters, see the [Azure Cognitive Services for Batch Speech-to-text](/connectors/cognitiveservicesspe/#create-transcription) documentation. 
+ > For more information about create transcription parameters, see the [Azure Cognitive Services for Batch Speech to text](/connectors/cognitiveservicesspe/#create-transcription) documentation. 1. From the top navigation menu, select **Save**. You can select and expand the **Create transcription** to see detailed input and ## Next steps -- [Azure Cognitive Services for Batch Speech-to-text connector](/connectors/cognitiveservicesspe/)+- [Azure Cognitive Services for Batch Speech to text connector](/connectors/cognitiveservicesspe/) - [Azure Blob Storage connector](/connectors/azureblob/) - [Power Platform](/power-platform/) |
cognitive-services | Pronunciation Assessment Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/pronunciation-assessment-tool.md | -Pronunciation assessment uses the Speech-to-Text capability to provide subjective and objective feedback for language learners. Practicing pronunciation and getting timely feedback are essential for improving language skills. Assessments driven by experienced teachers can take a lot of time and effort and makes a high-quality assessment expensive for learners. Pronunciation assessment can help make the language assessment more engaging and accessible to learners of all backgrounds. +Pronunciation assessment uses the Speech to text capability to provide subjective and objective feedback for language learners. Practicing pronunciation and getting timely feedback are essential for improving language skills. Assessments driven by experienced teachers can take a lot of time and effort and make a high-quality assessment expensive for learners. Pronunciation assessment can help make the language assessment more engaging and accessible to learners of all backgrounds. Pronunciation assessment provides various assessment results in different granularities, from individual phonemes to the entire text input. - At the full-text level, pronunciation assessment offers additional Fluency and Completeness scores: Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words, and Completeness indicates how many words are pronounced in the speech compared to the reference text input. An overall score aggregated from Accuracy, Fluency and Completeness is then given to indicate the overall pronunciation quality of the given speech. Pronunciation assessment provides various assessment results in different granul This article describes how to use the pronunciation assessment tool through the [Speech Studio](https://speech.microsoft.com). You can get immediate feedback on the accuracy and fluency of your speech without writing any code. For information about how to integrate pronunciation assessment in your speech applications, see [How to use pronunciation assessment](how-to-pronunciation-assessment.md). > [!NOTE]-> Usage of pronunciation assessment costs the same as standard Speech to Text pay-as-you-go [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services). Pronunciation assessment doesn't yet support commitment tier pricing. +> Usage of pronunciation assessment costs the same as standard Speech to text pay-as-you-go [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services). Pronunciation assessment doesn't yet support commitment tier pricing. > > For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service). |
cognitive-services | Setup Platform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/quickstarts/setup-platform.md | zone_pivot_groups: programming-languages-speech-sdk ## Next steps -* [Speech-to-text quickstart](../get-started-speech-to-text.md) -* [Text-to-speech quickstart](../get-started-text-to-speech.md) +* [Speech to text quickstart](../get-started-speech-to-text.md) +* [Text to speech quickstart](../get-started-text-to-speech.md) * [Speech translation quickstart](../get-started-speech-translation.md) |
cognitive-services | Record Custom Voice Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/record-custom-voice-samples.md | Below are some general guidelines that you can follow to create a good corpus (r Best practices include: - Balanced coverage for Parts of Speech, like verbs, nouns, adjectives, and so on. - - Balanced coverage for pronunciations. Include all letters from A to Z so the Text-to-Speech engine learns how to pronounce each letter in your style. + - Balanced coverage for pronunciations. Include all letters from A to Z so the Text to speech engine learns how to pronounce each letter in your style. - Readable, understandable, common-sense scripts for the speaker to read. - Avoid too many similar patterns for words/phrases, like "easy" and "easier". - Include different formats of numbers: address, unit, phone, quantity, date, and so on, in all sentence types. |
cognitive-services | Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/regions.md | Title: Regions - Speech service -description: A list of available regions and endpoints for the Speech service, including speech-to-text, text-to-speech, and speech translation. +description: A list of available regions and endpoints for the Speech service, including speech to text, text to speech, and speech translation. Keep in mind the following points: ## Speech service -The following regions are supported for Speech service features such as speech-to-text, text-to-speech, pronunciation assessment, and translation. The geographies are listed in alphabetical order. +The following regions are supported for Speech service features such as speech to text, text to speech, pronunciation assessment, and translation. The geographies are listed in alphabetical order. | Geography | Region | Region identifier | | -- | -- | -- | |
cognitive-services | Releasenotes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/releasenotes.md | Azure Cognitive Service for Speech is updated on an ongoing basis. To stay up-to ## Recent highlights * Speech SDK 1.28.0 was released in May 2023.-* Speech-to-text and text-to-speech container versions were updated in March 2023. +* Speech to text and text to speech container versions were updated in March 2023. * Some Speech Studio [scenarios](speech-studio-overview.md#speech-studio-scenarios) are available to try without an Azure subscription.-* Custom Speech-to-Text container disconnected mode was released in January 2023. -* Text-to-speech Batch synthesis API is available in public preview. -* Speech-to-text REST API version 3.1 is generally available. +* Custom Speech to text container disconnected mode was released in January 2023. +* Text to speech Batch synthesis API is available in public preview. +* Speech to text REST API version 3.1 is generally available. ## Release notes Azure Cognitive Service for Speech is updated on an ongoing basis. To stay up-to [!INCLUDE [speech-cli](./includes/release-notes/release-notes-cli.md)] -# [Text-to-speech service](#tab/text-to-speech) +# [Text to speech service](#tab/text-to-speech) -# [Speech-to-text service](#tab/speech-to-text) +# [Speech to text service](#tab/speech-to-text) # [Containers](#tab/containers) |
cognitive-services | Resiliency And Recovery Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/resiliency-and-recovery-plan.md | Follow these steps to configure your client to monitor for errors: 5. Configure your code to monitor for connectivity errors (typically connection timeouts and service unavailability errors). Here's sample code in C#: [GitHub: Adding Sample for showing a possible candidate for switching regions](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/fa6428a0837779cbeae172688e0286625e340942/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#L965). 1. Since networks experience transient errors, for single connectivity issue occurrences, the suggestion is to retry.- 2. For persistence redirect traffic to the new STS token service and Speech service endpoint. (For Text-to-Speech, reference sample code: [GitHub: TTS public voice switching region](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_samples.cs#L880). + 2. For persistent errors, redirect traffic to the new STS token service and Speech service endpoint. (For Text to speech, reference sample code: [GitHub: TTS public voice switching region](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_samples.cs#L880). The recovery from regional failures for this usage type can be instantaneous and at a low cost. All that is required is the development of this functionality on the client side. The data loss incurred, assuming no backup of the audio stream, will be minimal. |
cognitive-services | Rest Speech To Text Short | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-speech-to-text-short.md | Title: Speech-to-text REST API for short audio - Speech service + Title: Speech to text REST API for short audio - Speech service -description: Learn how to use Speech-to-text REST API for short audio to convert speech to text. +description: Learn how to use Speech to text REST API for short audio to convert speech to text. ms.devlang: csharp -# Speech-to-text REST API for short audio +# Speech to text REST API for short audio -Use cases for the speech-to-text REST API for short audio are limited. Use it only in cases where you can't use the [Speech SDK](speech-sdk.md). +Use cases for the Speech to text REST API for short audio are limited. Use it only in cases where you can't use the [Speech SDK](speech-sdk.md). -Before you use the speech-to-text REST API for short audio, consider the following limitations: +Before you use the Speech to text REST API for short audio, consider the following limitations: * Requests that use the REST API for short audio and transmit audio directly can contain no more than 60 seconds of audio. The input [audio formats](#audio-formats) are more limited compared to the [Speech SDK](speech-sdk.md). * The REST API for short audio returns only final results. It doesn't provide partial results. * [Speech translation](speech-translation.md) is not supported via REST API for short audio. You need to use [Speech SDK](speech-sdk.md).-* [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md) are not supported via REST API for short audio. You should always use the [Speech to Text REST API](rest-speech-to-text.md) for batch transcription and Custom Speech. +* [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md) are not supported via REST API for short audio. You should always use the [Speech to text REST API](rest-speech-to-text.md) for batch transcription and Custom Speech. -Before you use the speech-to-text REST API for short audio, understand that you need to complete a token exchange as part of authentication to access the service. For more information, see [Authentication](#authentication). +Before you use the Speech to text REST API for short audio, understand that you need to complete a token exchange as part of authentication to access the service. For more information, see [Authentication](#authentication). ## Regions and endpoints Audio is sent in the body of the HTTP `POST` request. It must be in one of the f ## Request headers -This table lists required and optional headers for speech-to-text requests: +This table lists required and optional headers for speech to text requests: |Header| Description | Required or optional | ||-|| |
cognitive-services | Rest Speech To Text | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-speech-to-text.md | Title: Speech-to-text REST API - Speech service + Title: Speech to text REST API - Speech service -description: Get reference documentation for Speech-to-text REST API. +description: Get reference documentation for Speech to text REST API. ms.devlang: csharp -# Speech-to-text REST API +# Speech to text REST API -Speech-to-text REST API is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md). +Speech to text REST API is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md). > [!IMPORTANT]-> Speech-to-text REST API v3.1 is generally available. Version 3.0 of the [Speech to Text REST API](rest-speech-to-text.md) will be retired. For more information, see the [Migrate code from v3.0 to v3.1 of the REST API](migrate-v3-0-to-v3-1.md) guide. +> Speech to text REST API v3.1 is generally available. Version 3.0 of the [Speech to text REST API](rest-speech-to-text.md) will be retired. For more information, see the [Migrate code from v3.0 to v3.1 of the REST API](migrate-v3-0-to-v3-1.md) guide. > [!div class="nextstepaction"]-> [See the Speech to Text API v3.1 reference documentation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/) +> [See the Speech to text REST API v3.1 reference documentation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/) > [!div class="nextstepaction"]-> [See the Speech to Text API v3.0 reference documentation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/) +> [See the Speech to text REST API v3.0 reference documentation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/) -Use Speech-to-text REST API to: +Use Speech to text REST API to: - [Custom Speech](custom-speech-overview.md): With Custom Speech, you can upload your own data, test and train a custom model, compare accuracy between models, and deploy a model to a custom endpoint. Copy models to other subscriptions if you want colleagues to have access to a model that you built, or if you want to deploy a model to more than one region. - [Batch transcription](batch-transcription.md): Transcribe audio files as a batch from multiple URLs or an Azure container. -Speech-to-text REST API includes such features as: +Speech to text REST API includes such features as: - Get logs for each endpoint if logs have been requested for that endpoint. - Request the manifest of the models that you create, to set up on-premises containers. See [Create a transcription](batch-transcription-create.md?pivots=rest-api) for Web hooks are applicable for [Custom Speech](custom-speech-overview.md) and [Batch Transcription](batch-transcription.md). In particular, web hooks apply to [datasets](#datasets), [endpoints](#endpoints), [evaluations](#evaluations), [models](#models), and [transcriptions](#transcriptions). Web hooks can be used to receive notifications about creation, processing, completion, and deletion events. -This table includes all the web hook operations that are available with the speech-to-text REST API. +This table includes all the web hook operations that are available with the Speech to text REST API. |Path|Method|Version 3.1|Version 3.0| ||||| |
cognitive-services | Rest Text To Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-text-to-speech.md | Title: Text-to-speech API reference (REST) - Speech service + Title: Text to speech API reference (REST) - Speech service description: Learn how to use the REST API to convert text into synthesized speech. -# Text-to-speech REST API +# Text to speech REST API The Speech service allows you to [convert text into synthesized speech](#convert-text-to-speech) and [get a list of supported voices](#get-a-list-of-voices) for a region by using a REST API. In this article, you'll learn about authorization options, query options, how to structure a request, and how to interpret a response. > [!TIP]-> Use cases for the text-to-speech REST API are limited. Use it only in cases where you can't use the [Speech SDK](speech-sdk.md). For example, with the Speech SDK you can [subscribe to events](how-to-speech-synthesis.md#subscribe-to-synthesizer-events) for more insights about the text-to-speech processing and results. +> Use cases for the text to speech REST API are limited. Use it only in cases where you can't use the [Speech SDK](speech-sdk.md). For example, with the Speech SDK you can [subscribe to events](how-to-speech-synthesis.md#subscribe-to-synthesizer-events) for more insights about the text to speech processing and results. -The text-to-speech REST API supports neural text-to-speech voices, which support specific languages and dialects that are identified by locale. Each available endpoint is associated with a region. A Speech resource key for the endpoint or region that you plan to use is required. Here are links to more information: +The text to speech REST API supports neural text to speech voices, which support specific languages and dialects that are identified by locale. Each available endpoint is associated with a region. A Speech resource key for the endpoint or region that you plan to use is required. Here are links to more information: - For a complete list of voices, see [Language and voice support for the Speech service](language-support.md?tabs=tts). - For information about regional availability, see [Speech service supported regions](regions.md#speech-service). The text-to-speech REST API supports neural text-to-speech voices, which support > [!IMPORTANT] > Costs vary for prebuilt neural voices (called *Neural* on the pricing page) and custom neural voices (called *Custom Neural* on the pricing page). For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). -Before you use the text-to-speech REST API, understand that you need to complete a token exchange as part of authentication to access the service. For more information, see [Authentication](#authentication). +Before you use the text to speech REST API, understand that you need to complete a token exchange as part of authentication to access the service. For more information, see [Authentication](#authentication). 
## Get a list of voices You can use the `tts.speech.microsoft.com/cognitiveservices/voices/list` endpoin ### Request headers -This table lists required and optional headers for text-to-speech requests: +This table lists required and optional headers for text to speech requests: | Header | Description | Required or optional | |--|-|| The `cognitiveservices/v1` endpoint allows you to convert text to speech by usin ### Regions and endpoints -These regions are supported for text-to-speech through the REST API. Be sure to select the endpoint that matches your Speech resource region. +These regions are supported for text to speech through the REST API. Be sure to select the endpoint that matches your Speech resource region. [!INCLUDE [](includes/cognitive-services-speech-service-endpoints-text-to-speech.md)] ### Request headers -This table lists required and optional headers for text-to-speech requests: +This table lists required and optional headers for text to speech requests: | Header | Description | Required or optional | |--|-|| This table lists required and optional headers for text-to-speech requests: ### Request body -If you're using a custom neural voice, the body of a request can be sent as plain text (ASCII or UTF-8). Otherwise, the body of each `POST` request is sent as [SSML](speech-synthesis-markup.md). SSML allows you to choose the voice and language of the synthesized speech that the text-to-speech feature returns. For a complete list of supported voices, see [Language and voice support for the Speech service](language-support.md?tabs=tts). +If you're using a custom neural voice, the body of a request can be sent as plain text (ASCII or UTF-8). Otherwise, the body of each `POST` request is sent as [SSML](speech-synthesis-markup.md). SSML allows you to choose the voice and language of the synthesized speech that the text to speech feature returns. For a complete list of supported voices, see [Language and voice support for the Speech service](language-support.md?tabs=tts). ### Sample request User-Agent: <Your application name> <speak version='1.0' xml:lang='en-US'><voice xml:lang='en-US' xml:gender='Male' name='en-US-ChristopherNeural'>- Microsoft Speech Service Text-to-Speech API + Microsoft Speech Service Text to speech API </voice></speak> ``` <sup>*</sup> For the Content-Length, you should use your own content length. In most cases, this value is calculated automatically. |
cognitive-services | Sovereign Clouds | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/sovereign-clouds.md | Available to US government entities and their partners only. See more informatio - **Available pricing tiers:** - Free (F0) and Standard (S0). See more details [here](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) - **Supported features:**- - Speech-to-text + - Speech to text - Custom speech (Acoustic Model (AM) and Language Model (LM) adaptation) - [Speech Studio](https://speech.azure.us/)- - Text-to-speech + - Text to speech - Standard voice - Neural voice - Speech translation Available to US government entities and their partners only. See more informatio ### Endpoint information -This section contains Speech Services endpoint information for the usage with [Speech SDK](speech-sdk.md), [Speech-to-text REST API](rest-speech-to-text.md), and [Text-to-speech REST API](rest-text-to-speech.md). +This section contains Speech Services endpoint information for the usage with [Speech SDK](speech-sdk.md), [Speech to text REST API](rest-speech-to-text.md), and [Text to speech REST API](rest-text-to-speech.md). #### Speech Services REST API Speech Services REST API endpoints in Azure Government have the following format | REST API type / operation | Endpoint format | |--|--| | Access token | `https://<REGION_IDENTIFIER>.api.cognitive.microsoft.us/sts/v1.0/issueToken`-| [Speech-to-text REST API](rest-speech-to-text.md) | `https://<REGION_IDENTIFIER>.api.cognitive.microsoft.us/<URL_PATH>` | -| [Speech-to-text REST API for short audio](rest-speech-to-text-short.md) | `https://<REGION_IDENTIFIER>.stt.speech.azure.us/<URL_PATH>` | -| [Text-to-speech REST API](rest-text-to-speech.md) | `https://<REGION_IDENTIFIER>.tts.speech.azure.us/<URL_PATH>` | +| [Speech to text REST API](rest-speech-to-text.md) | `https://<REGION_IDENTIFIER>.api.cognitive.microsoft.us/<URL_PATH>` | +| [Speech to text REST API for short audio](rest-speech-to-text-short.md) | `https://<REGION_IDENTIFIER>.stt.speech.azure.us/<URL_PATH>` | +| [Text to speech REST API](rest-text-to-speech.md) | `https://<REGION_IDENTIFIER>.tts.speech.azure.us/<URL_PATH>` | Replace `<REGION_IDENTIFIER>` with the identifier matching the region of your subscription from this table: Replace `subscriptionKey` with your Speech resource key. Replace `usGovHost` wit | Region / Service offering | Host expression | |--|--| | **US Gov Arizona** | |-| Speech-to-text | `wss://usgovarizona.stt.speech.azure.us` | -| Text-to-Speech | `https://usgovarizona.tts.speech.azure.us` | +| Speech to text | `wss://usgovarizona.stt.speech.azure.us` | +| Text to speech | `https://usgovarizona.tts.speech.azure.us` | | **US Gov Virginia** | |-| Speech-to-text | `wss://usgovvirginia.stt.speech.azure.us` | -| Text-to-Speech | `https://usgovvirginia.tts.speech.azure.us` | +| Speech to text | `wss://usgovvirginia.stt.speech.azure.us` | +| Text to speech | `https://usgovvirginia.tts.speech.azure.us` | ## Azure China Available to organizations with a business presence in China. See more informati - **Available pricing tiers:** - Free (F0) and Standard (S0). 
See more details [here](https://www.azure.cn/pricing/details/cognitive-services/index.html)
 - **Supported features:**- - Speech-to-text
 + - Speech to text
 - Custom speech (Acoustic Model (AM) and Language Model (LM) adaptation)
 - [Speech Studio](https://speech.azure.cn/)
 - [Pronunciation assessment](how-to-pronunciation-assessment.md)- - Text-to-speech
 + - Text to speech
 - Standard voice
 - Neural voice
 - Speech translator
 Available to organizations with a business presence in China. See more informati ### Endpoint information
-This section contains Speech Services endpoint information for the usage with [Speech SDK](speech-sdk.md), [Speech-to-text REST API](rest-speech-to-text.md), and [Text-to-speech REST API](rest-text-to-speech.md).
+This section contains Speech Services endpoint information for the usage with [Speech SDK](speech-sdk.md), [Speech to text REST API](rest-speech-to-text.md), and [Text to speech REST API](rest-text-to-speech.md).
 #### Speech Services REST API Speech Services REST API endpoints in Azure China have the following format:
 | REST API type / operation | Endpoint format |
 |--|--|
 | Access token | `https://<REGION_IDENTIFIER>.api.cognitive.azure.cn/sts/v1.0/issueToken`-| [Speech-to-text REST API](rest-speech-to-text.md) | `https://<REGION_IDENTIFIER>.api.cognitive.azure.cn/<URL_PATH>` |
-| [Speech-to-text REST API for short audio](rest-speech-to-text-short.md) | `https://<REGION_IDENTIFIER>.stt.speech.azure.cn/<URL_PATH>` |
-| [Text-to-speech REST API](rest-text-to-speech.md) | `https://<REGION_IDENTIFIER>.tts.speech.azure.cn/<URL_PATH>` |
+| [Speech to text REST API](rest-speech-to-text.md) | `https://<REGION_IDENTIFIER>.api.cognitive.azure.cn/<URL_PATH>` |
+| [Speech to text REST API for short audio](rest-speech-to-text-short.md) | `https://<REGION_IDENTIFIER>.stt.speech.azure.cn/<URL_PATH>` |
+| [Text to speech REST API](rest-text-to-speech.md) | `https://<REGION_IDENTIFIER>.tts.speech.azure.cn/<URL_PATH>` |
 Replace `<REGION_IDENTIFIER>` with the identifier matching the region of your subscription from this table: Replace `subscriptionKey` with your Speech resource key. Replace `azCnHost` with | Region / Service offering | Host expression |
 |--|--|
 | **China East 2** | |-| Speech-to-text | `wss://chinaeast2.stt.speech.azure.cn` |
-| Text-to-Speech | `https://chinaeast2.tts.speech.azure.cn` |
+| Speech to text | `wss://chinaeast2.stt.speech.azure.cn` |
+| Text to speech | `https://chinaeast2.tts.speech.azure.cn` |
 | **China North 2** | |-| Speech-to-text | `wss://chinanorth2.stt.speech.azure.cn` |
-| Text-to-Speech | `https://chinanorth2.tts.speech.azure.cn` |
+| Speech to text | `wss://chinanorth2.stt.speech.azure.cn` |
+| Text to speech | `https://chinanorth2.tts.speech.azure.cn` | |
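As a rough illustration of how the Azure China endpoint formats in the sovereign-clouds row above fit together, the following Python sketch exchanges a resource key for a token and then queries the voices list. The region identifier and key are placeholders, and the `/cognitiveservices/voices/list` path is assumed to mirror the public-cloud Text to speech REST API.

```python
import requests

# Assumptions: an Azure China Speech resource key and its region identifier
# (for example "chinaeast2"), as listed in the region tables above.
SPEECH_KEY = "<your-speech-resource-key>"
REGION_IDENTIFIER = "chinaeast2"

# Access token endpoint format for Azure China: <REGION_IDENTIFIER>.api.cognitive.azure.cn
token_url = f"https://{REGION_IDENTIFIER}.api.cognitive.azure.cn/sts/v1.0/issueToken"
token = requests.post(
    token_url, headers={"Ocp-Apim-Subscription-Key": SPEECH_KEY}
).text

# Text to speech host format for Azure China: <REGION_IDENTIFIER>.tts.speech.azure.cn
voices_url = f"https://{REGION_IDENTIFIER}.tts.speech.azure.cn/cognitiveservices/voices/list"
voices = requests.get(voices_url, headers={"Authorization": f"Bearer {token}"}).json()
print(f"{len(voices)} voices available in {REGION_IDENTIFIER}")
```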
cognitive-services | Speech Container Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-configuration.md | The exact syntax of the host mount location varies depending on the host operati The custom speech containers use [volume mounts](https://docs.docker.com/storage/volumes/) to persist custom models. You can specify a volume mount by adding the `-v` (or `--volume`) option to the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command. > [!NOTE]-> The volume mount settings are only applicable for [Custom Speech-to-text](speech-container-cstt.md) containers. +> The volume mount settings are only applicable for [Custom Speech to text](speech-container-cstt.md) containers. Custom models are downloaded the first time that a new model is ingested as part of the custom speech container docker run command. Sequential runs of the same `ModelId` for a custom speech container will use the previously downloaded model. If the volume mount is not provided, custom models cannot be persisted. |
cognitive-services | Speech Container Cstt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-cstt.md | Title: Custom speech-to-text containers - Speech service + Title: Custom speech to text containers - Speech service -description: Install and run custom speech-to-text containers with Docker to perform speech recognition, transcription, generation, and more on-premises. +description: Install and run custom speech to text containers with Docker to perform speech recognition, transcription, generation, and more on-premises. zone_pivot_groups: programming-languages-speech-sdk-cli keywords: on-premises, Docker, container -# Custom speech-to-text containers with Docker +# Custom speech to text containers with Docker -The Custom speech-to-text container transcribes real-time speech or batch audio recordings with intermediate results. You can use a custom model that you created in the [Custom Speech portal](https://speech.microsoft.com/customspeech). In this article, you'll learn how to download, install, and run a Custom speech-to-text container. +The Custom speech to text container transcribes real-time speech or batch audio recordings with intermediate results. You can use a custom model that you created in the [Custom Speech portal](https://speech.microsoft.com/customspeech). In this article, you'll learn how to download, install, and run a Custom speech to text container. > [!NOTE] > You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container. For more information about prerequisites, validating that a container is running ## Container images -The Custom speech-to-text container image for all supported versions and locales can be found on the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `custom-speech-to-text`. +The Custom speech to text container image for all supported versions and locales can be found on the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `custom-speech-to-text`. :::image type="content" source="./media/containers/mcr-tags-custom-speech-to-text.png" alt-text="A screenshot of the search connectors and triggers dialog." lightbox="./media/containers/mcr-tags-custom-speech-to-text.png"::: All tags, except for `latest`, are in the following format and are case sensitiv ``` > [!NOTE]-> The `locale` and `voice` for custom speech-to-text containers is determined by the custom model ingested by the container. +> The `locale` and `voice` for custom speech to text containers is determined by the custom model ingested by the container. The tags are also available [in JSON format](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/custom-speech-to-text/tags/list) for your convenience. The body includes the container path and list of tags. 
The tags aren't sorted by version, but `"latest"` is always included at the end of the list as shown in this snippet: Checking available base model for en-us ## Display model download -Before you [run](#run-the-container-with-docker-run) the container, you can optionally get the available display models information and choose to download those models into your speech-to-text container to get highly improved final display output. Display model download is available with custom-speech-to-text container version 3.1.0 and later. +Before you [run](#run-the-container-with-docker-run) the container, you can optionally get the available display models information and choose to download those models into your speech to text container to get highly improved final display output. Display model download is available with custom-speech-to-text container version 3.1.0 and later. > [!NOTE] > Although you use the `docker run` command, the container isn't started for service. To run disconnected containers (not connected to the internet), you must submit If you have been approved to run the container disconnected from the internet, the following example shows the formatting of the `docker run` command to use, with placeholder values. Replace these placeholder values with your own values. -In order to prepare and configure a disconnected custom speech-to-text container you will need two separate speech resources: +In order to prepare and configure a disconnected custom speech to text container you will need two separate speech resources: - A regular Azure Speech Service resource which is either configured to use a "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan. This is used to train, download, and configure your custom speech models for use in your container. - An Azure Speech Service resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan. This is used to download your disconnected container license file required to run the container in disconnected mode. Mounts:License={CONTAINER_LICENSE_DIRECTORY} Mounts:Output={CONTAINER_OUTPUT_DIRECTORY} ``` -The Custom Speech-to-Text container provides a default directory for writing the license file and billing log at runtime. The default directories are /license and /output respectively. +The Custom Speech to text container provides a default directory for writing the license file and billing log at runtime. The default directories are /license and /output respectively. When you're mounting these directories to the container with the `docker run -v` command, make sure the local machine directory is set ownership to `user:group nonroot:nonroot` before running the container. sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PA [!INCLUDE [Speech container authentication](includes/containers-speech-config-ws.md)] -[Try the speech-to-text quickstart](get-started-speech-to-text.md) using host authentication instead of key and region. +[Try the speech to text quickstart](get-started-speech-to-text.md) using host authentication instead of key and region. ## Next steps |
cognitive-services | Speech Container Howto On Premises | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto-on-premises.md | Title: Use Speech service containers with Kubernetes and Helm -description: Using Kubernetes and Helm to define the speech-to-text and text-to-speech container images, we'll create a Kubernetes package. This package will be deployed to a Kubernetes cluster on-premises. +description: Using Kubernetes and Helm to define the speech to text and text to speech container images, we'll create a Kubernetes package. This package will be deployed to a Kubernetes cluster on-premises. -One option to manage your Speech containers on-premises is to use Kubernetes and Helm. Using Kubernetes and Helm to define the speech-to-text and text-to-speech container images, we'll create a Kubernetes package. This package will be deployed to a Kubernetes cluster on-premises. Finally, we'll explore how to test the deployed services and various configuration options. For more information about running Docker containers without Kubernetes orchestration, see [install and run Speech service containers](speech-container-howto.md). +One option to manage your Speech containers on-premises is to use Kubernetes and Helm. Using Kubernetes and Helm to define the speech to text and text to speech container images, we'll create a Kubernetes package. This package will be deployed to a Kubernetes cluster on-premises. Finally, we'll explore how to test the deployed services and various configuration options. For more information about running Docker containers without Kubernetes orchestration, see [install and run Speech service containers](speech-container-howto.md). ## Prerequisites Refer to the [Speech service container host computer][speech-container-host-comp | Service | CPU / Container | Memory / Container | |--|--|--|-| **Speech-to-Text** | one decoder requires a minimum of 1,150 millicores. If the `optimizedForAudioFile` is enabled, then 1,950 millicores are required. (default: two decoders) | Required: 2 GB<br>Limited: 4 GB | -| **Text-to-Speech** | one concurrent request requires a minimum of 500 millicores. If the `optimizeForTurboMode` is enabled, then 1,000 millicores are required. (default: two concurrent requests) | Required: 1 GB<br> Limited: 2 GB | +| **speech to text** | one decoder requires a minimum of 1,150 millicores. If the `optimizedForAudioFile` is enabled, then 1,950 millicores are required. (default: two decoders) | Required: 2 GB<br>Limited: 4 GB | +| **text to speech** | one concurrent request requires a minimum of 500 millicores. If the `optimizeForTurboMode` is enabled, then 1,000 millicores are required. (default: two concurrent requests) | Required: 1 GB<br> Limited: 2 GB | ## Connect to the Kubernetes cluster Next, we'll configure our Helm chart values. Copy and paste the following YAML i ```yaml # These settings are deployment specific and users can provide customizations-# speech-to-text configurations +# speech to text configurations speechToText: enabled: true numberOfConcurrentRequest: 3 speechToText: billing: # {ENDPOINT_URI} apikey: # {API_KEY} -# text-to-speech configurations +# text to speech configurations textToSpeech: enabled: true numberOfConcurrentRequest: 3 The *Helm chart* contains the configuration of which docker image(s) to pull fro > A [Helm chart][helm-charts] is a collection of files that describe a related set of Kubernetes resources. 
A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
-The provided *Helm charts* pull the docker images of the Speech service, both text-to-speech and the speech-to-text services from the `mcr.microsoft.com` container registry.
+The provided *Helm charts* pull the docker images of the Speech service, both text to speech and the speech to text services from the `mcr.microsoft.com` container registry.
 ## Install the Helm chart on the Kubernetes cluster horizontalpodautoscaler.autoscaling/text-to-speech-autoscaler Deployment/text- ### Verify Helm deployment with Helm tests
-The installed Helm charts define *Helm tests*, which serve as a convenience for verification. These tests validate service readiness. To verify both **speech-to-text** and **text-to-speech** services, we'll execute the [Helm test][helm-test] command.
+The installed Helm charts define *Helm tests*, which serve as a convenience for verification. These tests validate service readiness. To verify both **speech to text** and **text to speech** services, we'll execute the [Helm test][helm-test] command.
 ```console
 helm test onprem-speech
 helm test onprem-speech
 These tests will output various status results:
 ```console
-RUNNING: speech-to-text-readiness-test
-PASSED: speech-to-text-readiness-test
-RUNNING: text-to-speech-readiness-test
-PASSED: text-to-speech-readiness-test
+RUNNING: speech to text-readiness-test
+PASSED: speech to text-readiness-test
+RUNNING: text to speech-readiness-test
+PASSED: text to speech-readiness-test
 ```
 As an alternative to executing the *helm tests*, you could collect the *External IP* addresses and corresponding ports from the `kubectl get all` command. Using the IP and port, open a web browser and navigate to `http://<external-ip>:<port>:/swagger/index.html` to view the API swagger page(s). Helm charts are hierarchical. Being hierarchical allows for chart inheritance, i [!INCLUDE [Speech umbrella-helm-chart-config](includes/speech-umbrella-helm-chart-config.md)] ## Next steps |
cognitive-services | Speech Container Howto | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md | The following table describes the minimum and recommended allocation of resource | Container | Minimum | Recommended |Speech Model| |--||-| -- |-| Speech-to-text | 4 core, 4-GB memory | 8 core, 8-GB memory |+4 to 8 GB memory| -| Custom speech-to-text | 4 core, 4-GB memory | 8 core, 8-GB memory |+4 to 8 GB memory| +| Speech to text | 4 core, 4-GB memory | 8 core, 8-GB memory |+4 to 8 GB memory| +| Custom speech to text | 4 core, 4-GB memory | 8 core, 8-GB memory |+4 to 8 GB memory| | Speech language identification | 1 core, 1-GB memory | 1 core, 1-GB memory |n/a|-| Neural text-to-speech | 6 core, 12-GB memory | 8 core, 16-GB memory |n/a| +| Neural text to speech | 6 core, 12-GB memory | 8 core, 16-GB memory |n/a| Each core must be at least 2.6 gigahertz (GHz) or faster. Core and memory correspond to the `--cpus` and `--memory` settings, which are us > [!NOTE] > The minimum and recommended allocations are based on Docker limits, *not* the host machine resources.-> For example, speech-to-text containers memory map portions of a large language model. We recommend that the entire file should fit in memory. You need to add an additional 4 to 8 GB to load the speech models (see above table). +> For example, speech to text containers memory map portions of a large language model. We recommend that the entire file should fit in memory. You need to add an additional 4 to 8 GB to load the speech models (see above table). > Also, the first run of either container might take longer because models are being paged into memory. ## Host computer requirements and recommendations You can have this container and a different Cognitive Services container running | Protocol | Host URL | Containers | |--|--|--|-| WS | `ws://localhost:5000` | [Speech-to-text](speech-container-stt.md#use-the-container)<br/><br/>[Custom speech-to-text](speech-container-cstt.md#use-the-container) | -| HTTP | `http://localhost:5000` | [Neural text-to-speech](speech-container-ntts.md#use-the-container)<br/><br/>[Speech language identification](speech-container-lid.md#use-the-container) | +| WS | `ws://localhost:5000` | [Speech to text](speech-container-stt.md#use-the-container)<br/><br/>[Custom speech to text](speech-container-cstt.md#use-the-container) | +| HTTP | `http://localhost:5000` | [Neural text to speech](speech-container-ntts.md#use-the-container)<br/><br/>[Speech language identification](speech-container-lid.md#use-the-container) | For more information on using WSS and HTTPS protocols, see [Container security](../cognitive-services-container-support.md#azure-cognitive-services-container-security) in the Azure Cognitive Services documentation. |
cognitive-services | Speech Container Lid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-lid.md | The Speech language identification container detects the language spoken in audi For more information about prerequisites, validating that a container is running, running multiple containers on the same host, and running disconnected containers, see [Install and run Speech containers with Docker](speech-container-howto.md). > [!TIP]-> To get the most useful results, use the Speech language identification container with the [speech-to-text](speech-container-stt.md) or [custom speech-to-text](speech-container-cstt.md) containers. +> To get the most useful results, use the Speech language identification container with the [speech to text](speech-container-stt.md) or [custom speech to text](speech-container-cstt.md) containers. ## Container images This command: For more information about `docker run` with Speech containers, see [Install and run Speech containers with Docker](speech-container-howto.md#run-the-container). -## Run with the speech-to-text container +## Run with the speech to text container -If you want to run the language identification container with the [speech-to-text](speech-container-stt.md) container, you can use this [docker image](https://hub.docker.com/r/antsu/on-prem-client). After both containers have been started, use this `docker run` command to execute `speech-to-text-with-languagedetection-client`: +If you want to run the language identification container with the [speech to text](speech-container-stt.md) container, you can use this [docker image](https://hub.docker.com/r/antsu/on-prem-client). After both containers have been started, use this `docker run` command to execute `speech-to-text-with-languagedetection-client`: ```bash docker run --rm -v ${HOME}:/root -ti antsu/on-prem-client:latest ./speech-to-text-with-languagedetection-client ./audio/LanguageDetection_en-us.wav --host localhost --lport 5003 --sport 5000 |
cognitive-services | Speech Container Ntts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-ntts.md | Title: Neural text-to-speech containers - Speech service + Title: Neural text to speech containers - Speech service -description: Install and run neural text-to-speech containers with Docker to perform speech synthesis and more on-premises. +description: Install and run neural text to speech containers with Docker to perform speech synthesis and more on-premises. zone_pivot_groups: programming-languages-speech-sdk-cli keywords: on-premises, Docker, container -# Text-to-speech containers with Docker +# Text to speech containers with Docker -The neural text-to-speech container converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech.. In this article, you'll learn how to download, install, and run a Text-to-speech container. +The neural text to speech container converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech.. In this article, you'll learn how to download, install, and run a Text to speech container. > [!NOTE] > You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container. For more information about prerequisites, validating that a container is running ## Container images -The neural text-to-speech container image for all supported versions and locales can be found on the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `neural-text-to-speech`. +The neural text to speech container image for all supported versions and locales can be found on the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `neural-text-to-speech`. :::image type="content" source="./media/containers/mcr-tags-neural-text-to-speech.png" alt-text="A screenshot of the search connectors and triggers dialog." lightbox="./media/containers/mcr-tags-neural-text-to-speech.png"::: The tags are also available [in JSON format](https://mcr.microsoft.com/v2/azure- ``` > [!IMPORTANT]-> We retired the standard speech synthesis voices and standard [text-to-speech](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/text-to-speech/tags) container on August 31, 2021. You should use neural voices with the [neural-text-to-speech](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) container instead. For more information on updating your application, see [Migrate from standard voice to prebuilt neural voice](./how-to-migrate-to-prebuilt-neural-voice.md). +> We retired the standard speech synthesis voices and standard [text to speech](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/text-to-speech/tags) container on August 31, 2021. You should use neural voices with the [neural-text to speech](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) container instead. 
For more information on updating your application, see [Migrate from standard voice to prebuilt neural voice](./how-to-migrate-to-prebuilt-neural-voice.md). ## Get the container image with docker pull docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/neural-tex ``` > [!IMPORTANT]-> The `latest` tag pulls the `en-US` locale and `en-us-arianeural` voice. For additional locales and voices, see [text-to-speech container images](#container-images). +> The `latest` tag pulls the `en-US` locale and `en-us-arianeural` voice. For additional locales and voices, see [text to speech container images](#container-images). ## Run the container with docker run The following table represents the various `docker run` parameters and their cor | `{ENDPOINT_URI}` | The endpoint is required for metering and billing. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). | | `{API_KEY}` | The API key is required. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). | -When you run the text-to-speech container, configure the port, memory, and CPU according to the text-to-speech container [requirements and recommendations](speech-container-howto.md#container-requirements-and-recommendations). +When you run the text to speech container, configure the port, memory, and CPU according to the text to speech container [requirements and recommendations](speech-container-howto.md#container-requirements-and-recommendations). Here's an example `docker run` command with placeholder values. You must specify the `ENDPOINT_URI` and `API_KEY` values: ApiKey={API_KEY} This command: -* Runs a neural text-to-speech container from the container image. +* Runs a neural text to speech container from the container image. * Allocates 6 CPU cores and 12 GB of memory. * Exposes TCP port 5000 and allocates a pseudo-TTY for the container. * Automatically removes the container after it exits. The container image is still available on the host computer. For more information about `docker run` with Speech containers, see [Install and [!INCLUDE [Speech container authentication](includes/containers-speech-config-http.md)] -[Try the text-to-speech quickstart](get-started-text-to-speech.md) using host authentication instead of key and region. +[Try the text to speech quickstart](get-started-text-to-speech.md) using host authentication instead of key and region. ### SSML voice element -When you construct a neural text-to-speech HTTP POST, the [SSML](speech-synthesis-markup.md) message requires a `voice` element with a `name` attribute. The [locale of the voice](language-support.md?tabs=tts) must correspond to the locale of the container model. +When you construct a neural text to speech HTTP POST, the [SSML](speech-synthesis-markup.md) message requires a `voice` element with a `name` attribute. The [locale of the voice](language-support.md?tabs=tts) must correspond to the locale of the container model. For example, a model that was downloaded via the `latest` tag (defaults to "en-US") would have a voice name of `en-US-AriaNeural`. |
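The speech-container-ntts row above describes pointing client code at the container host instead of an Azure region. A minimal Python Speech SDK sketch of that pattern might look like the following; the host, port, and voice name are assumptions that must match how you started the container and which locale image you pulled.

```python
import azure.cognitiveservices.speech as speechsdk

# Assumption: a neural text to speech container is listening on this host and port;
# adjust the host to wherever you exposed port 5000.
speech_config = speechsdk.SpeechConfig(host="http://localhost:5000")

# The voice must match the locale of the model in the container image;
# for example, the en-US image exposes en-US-AriaNeural.
speech_config.speech_synthesis_voice_name = "en-US-AriaNeural"

# audio_config=None keeps the synthesized audio in memory instead of playing it.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=None)
result = synthesizer.speak_text_async("Hello from the neural text to speech container.").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print(f"Synthesized {len(result.audio_data)} bytes of audio.")
else:
    print(f"Synthesis did not complete: {result.reason}")
```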
cognitive-services | Speech Container Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-overview.md | The following table lists the Speech containers available in the Microsoft Conta | Container | Features | Supported versions and locales | |--|--|--|-| [Speech-to-text](speech-container-stt.md) | Transcribes continuous real-time speech or batch audio recordings with intermediate results. | Latest: 3.12.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list).| -| [Custom speech-to-text](speech-container-cstt.md) | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | Latest: 3.12.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list). | +| [Speech to text](speech-container-stt.md) | Transcribes continuous real-time speech or batch audio recordings with intermediate results. | Latest: 3.12.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list).| +| [Custom speech to text](speech-container-cstt.md) | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | Latest: 3.12.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list). | | [Speech language identification](speech-container-lid.md)<sup>1, 2</sup> | Detects the language spoken in audio files. | Latest: 1.11.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/language-detection/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/language-detection/tags/list). |-| [Neural text-to-speech](speech-container-ntts.md) | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | Latest: 2.11.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list). 
| +| [Neural text to speech](speech-container-ntts.md) | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | Latest: 2.11.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list). | <sup>1</sup> The container is available in public preview. Containers in preview are still under development and don't meet Microsoft's stability and support requirements. <sup>2</sup> Not available as a disconnected container. |
cognitive-services | Speech Container Stt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-stt.md | Title: Speech-to-text containers - Speech service + Title: Speech to text containers - Speech service -description: Install and run speech-to-text containers with Docker to perform speech recognition, transcription, generation, and more on-premises. +description: Install and run speech to text containers with Docker to perform speech recognition, transcription, generation, and more on-premises. zone_pivot_groups: programming-languages-speech-sdk-cli keywords: on-premises, Docker, container -# Speech-to-text containers with Docker +# Speech to text containers with Docker -The Speech-to-text container transcribes real-time speech or batch audio recordings with intermediate results. In this article, you'll learn how to download, install, and run a Speech-to-text container. +The Speech to text container transcribes real-time speech or batch audio recordings with intermediate results. In this article, you'll learn how to download, install, and run a Speech to text container. > [!NOTE] > You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container. For more information about prerequisites, validating that a container is running ## Container images -The Speech-to-text container image for all supported versions and locales can be found on the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `speech-to-text`. +The Speech to text container image for all supported versions and locales can be found on the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `speech-to-text`. :::image type="content" source="./media/containers/mcr-tags-speech-to-text.png" alt-text="A screenshot of the search connectors and triggers dialog." lightbox="./media/containers/mcr-tags-speech-to-text.png"::: docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to- ``` > [!IMPORTANT]-> The `latest` tag pulls the latest image for the `en-US` locale. For additional versions and locales, see [speech-to-text container images](#container-images). +> The `latest` tag pulls the latest image for the `en-US` locale. For additional versions and locales, see [speech to text container images](#container-images). ## Run the container with docker run The following table represents the various `docker run` parameters and their cor | `{ENDPOINT_URI}` | The endpoint is required for metering and billing. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). | | `{API_KEY}` | The API key is required. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). | -When you run the speech-to-text container, configure the port, memory, and CPU according to the speech-to-text container [requirements and recommendations](speech-container-howto.md#container-requirements-and-recommendations). 
+When you run the speech to text container, configure the port, memory, and CPU according to the speech to text container [requirements and recommendations](speech-container-howto.md#container-requirements-and-recommendations). Here's an example `docker run` command with placeholder values. You must specify the `ENDPOINT_URI` and `API_KEY` values: For more information about `docker run` with Speech containers, see [Install and [!INCLUDE [Speech container authentication](includes/containers-speech-config-ws.md)] -[Try the speech-to-text quickstart](get-started-speech-to-text.md) using host authentication instead of key and region. +[Try the speech to text quickstart](get-started-speech-to-text.md) using host authentication instead of key and region. ## Next steps |
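For the speech-to-text container row above, the analogous host-based configuration with the Python Speech SDK could look like this sketch; the `ws://localhost:5000` host and the WAV file name are placeholders for your own deployment and audio.

```python
import azure.cognitiveservices.speech as speechsdk

# Assumption: a speech to text container is listening on ws://localhost:5000
# and a 16 kHz mono WAV file named sample.wav exists locally.
speech_config = speechsdk.SpeechConfig(host="ws://localhost:5000")
audio_config = speechsdk.audio.AudioConfig(filename="sample.wav")

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
result = recognizer.recognize_once()

if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print(f"Recognized: {result.text}")
else:
    print(f"Recognition did not complete: {result.reason}")
```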
cognitive-services | Speech Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-sdk.md | -In some cases, you can't or shouldn't use the [Speech SDK](speech-sdk.md). In those cases, you can use REST APIs to access the Speech service. For example, use the [Speech-to-text REST API](rest-speech-to-text.md) for [batch transcription](batch-transcription.md) and [custom speech](custom-speech-overview.md). +In some cases, you can't or shouldn't use the [Speech SDK](speech-sdk.md). In those cases, you can use REST APIs to access the Speech service. For example, use the [Speech to text REST API](rest-speech-to-text.md) for [batch transcription](batch-transcription.md) and [custom speech](custom-speech-overview.md). ## Supported languages The Speech SDK supports the following languages and platforms: ## Speech SDK demo -The following video shows how to install the [Speech SDK for C#](quickstarts/setup-platform.md) and write a simple .NET console application for speech-to-text. +The following video shows how to install the [Speech SDK for C#](quickstarts/setup-platform.md) and write a simple .NET console application for speech to text. > [!VIDEO c20d3b0c-e96a-4154-9299-155e27db7117] |
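The speech-sdk row above also points to the Speech to text REST API for batch transcription. As a hedged sketch only, a batch transcription job could be created along these lines; the region, key, and audio URL are placeholders, and the `v3.1` path reflects the API version referenced elsewhere in these changes.

```python
import requests

# Assumptions: a Speech resource key and region, plus a publicly readable audio URL.
SPEECH_KEY = "<your-speech-resource-key>"
REGION = "westus"

url = f"https://{REGION}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions"
body = {
    "displayName": "sample batch transcription",
    "locale": "en-US",
    "contentUrls": ["https://example.com/audio/sample.wav"],  # placeholder audio URL
}

response = requests.post(
    url,
    headers={"Ocp-Apim-Subscription-Key": SPEECH_KEY, "Content-Type": "application/json"},
    json=body,
)
response.raise_for_status()
# The created transcription resource includes a self link you can poll for status.
print("Created transcription:", response.json()["self"])
```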
cognitive-services | Speech Service Vnet Service Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-service-vnet-service-endpoint.md | This scenario is equivalent to [using a Speech resource that has a custom domain * [Azure Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) * [Azure Private Link](../../private-link/private-link-overview.md) * [Speech SDK](speech-sdk.md)-* [Speech-to-text REST API](rest-speech-to-text.md) -* [Text-to-speech REST API](rest-text-to-speech.md) +* [Speech to text REST API](rest-speech-to-text.md) +* [Text to speech REST API](rest-text-to-speech.md) |
cognitive-services | Speech Services Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-private-link.md | A Speech resource with a custom domain name and a private endpoint turned on use We'll use `my-private-link-speech.cognitiveservices.azure.com` as a sample Speech resource DNS name (custom domain) for this section. -Speech service has REST APIs for [Speech-to-text](rest-speech-to-text.md) and [Text-to-speech](rest-text-to-speech.md). Consider the following information for the private-endpoint-enabled scenario. +Speech service has REST APIs for [Speech to text](rest-speech-to-text.md) and [Text to speech](rest-text-to-speech.md). Consider the following information for the private-endpoint-enabled scenario. -Speech-to-text has two REST APIs. Each API serves a different purpose, uses different endpoints, and requires a different approach when you're using it in the private-endpoint-enabled scenario. +Speech to text has two REST APIs. Each API serves a different purpose, uses different endpoints, and requires a different approach when you're using it in the private-endpoint-enabled scenario. -The Speech-to-text REST APIs are: -- [Speech-to-text REST API](rest-speech-to-text.md), which is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md). -- [Speech-to-text REST API for short audio](rest-speech-to-text-short.md), which is used for real-time speech to text.+The Speech to text REST APIs are: +- [Speech to text REST API](rest-speech-to-text.md), which is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md). +- [Speech to text REST API for short audio](rest-speech-to-text-short.md), which is used for real-time speech to text. -Usage of the Speech-to-text REST API for short audio and the Text-to-speech REST API in the private endpoint scenario is the same. It's equivalent to the [Speech SDK case](#speech-resource-with-a-custom-domain-name-and-a-private-endpoint-usage-with-the-speech-sdk) described later in this article. +Usage of the Speech to text REST API for short audio and the Text to speech REST API in the private endpoint scenario is the same. It's equivalent to the [Speech SDK case](#speech-resource-with-a-custom-domain-name-and-a-private-endpoint-usage-with-the-speech-sdk) described later in this article. -Speech-to-text REST API uses a different set of endpoints, so it requires a different approach for the private-endpoint-enabled scenario. +Speech to text REST API uses a different set of endpoints, so it requires a different approach for the private-endpoint-enabled scenario. The next subsections describe both cases. -#### Speech-to-text REST API +#### Speech to text REST API -Usually, Speech resources use [Cognitive Services regional endpoints](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints) for communicating with the [Speech-to-text REST API](rest-speech-to-text.md). These resources have the following naming format: <p/>`{region}.api.cognitive.microsoft.com`. +Usually, Speech resources use [Cognitive Services regional endpoints](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints) for communicating with the [Speech to text REST API](rest-speech-to-text.md). These resources have the following naming format: <p/>`{region}.api.cognitive.microsoft.com`. 
This is a sample request URL: After you turn on a custom domain name for a Speech resource, you typically repl > > A custom domain for a Speech resource contains *no* information about the region where the resource is deployed. So the application logic described earlier will *not* work and needs to be altered. -#### Speech-to-text REST API for short audio and Text-to-speech REST API +#### Speech to text REST API for short audio and Text to speech REST API -The [Speech-to-text REST API for short audio](rest-speech-to-text-short.md) and the [Text-to-speech REST API](rest-text-to-speech.md) use two types of endpoints: +The [Speech to text REST API for short audio](rest-speech-to-text-short.md) and the [Text to speech REST API](rest-text-to-speech.md) use two types of endpoints: - [Cognitive Services regional endpoints](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints) for communicating with the Cognitive Services REST API to obtain an authorization token - Special endpoints for all other operations > [!NOTE] > See [this article](sovereign-clouds.md) for Azure Government and Azure China endpoints. -The detailed description of the special endpoints and how their URL should be transformed for a private-endpoint-enabled Speech resource is provided in [this subsection](#construct-endpoint-url) about usage with the Speech SDK. The same principle described for the SDK applies for the Speech-to-text REST API for short audio and the Text-to-speech REST API. +The detailed description of the special endpoints and how their URL should be transformed for a private-endpoint-enabled Speech resource is provided in [this subsection](#construct-endpoint-url) about usage with the Speech SDK. The same principle described for the SDK applies for the Speech to text REST API for short audio and the Text to speech REST API. -Get familiar with the material in the subsection mentioned in the previous paragraph and see the following example. The example describes the Text-to-speech REST API. Usage of the Speech-to-text REST API for short audio is fully equivalent. +Get familiar with the material in the subsection mentioned in the previous paragraph and see the following example. The example describes the Text to speech REST API. Usage of the Speech to text REST API for short audio is fully equivalent. > [!NOTE]-> When you're using the Speech-to-text REST API for short audio and Text-to-speech REST API in private endpoint scenarios, use a resource key passed through the `Ocp-Apim-Subscription-Key` header. (See details for [Speech-to-text REST API for short audio](rest-speech-to-text-short.md#request-headers) and [Text-to-speech REST API](rest-text-to-speech.md#request-headers)) +> When you're using the Speech to text REST API for short audio and Text to speech REST API in private endpoint scenarios, use a resource key passed through the `Ocp-Apim-Subscription-Key` header. (See details for [Speech to text REST API for short audio](rest-speech-to-text-short.md#request-headers) and [Text to speech REST API](rest-text-to-speech.md#request-headers)) > > Using an authorization token and passing it to the special endpoint via the `Authorization` header will work *only* if you've turned on the **All networks** access option in the **Networking** section of your Speech resource. In other cases you will get either `Forbidden` or `BadRequest` error when trying to obtain an authorization token. 
-**Text-to-speech REST API usage example** +**Text to speech REST API usage example** We'll use West Europe as a sample Azure region and `my-private-link-speech.cognitiveservices.azure.com` as a sample Speech resource DNS name (custom domain). The custom domain name `my-private-link-speech.cognitiveservices.azure.com` in our example belongs to the Speech resource created in the West Europe region. To get the list of the voices supported in the region, perform the following req ```http https://westeurope.tts.speech.microsoft.com/cognitiveservices/voices/list ```-See more details in the [Text-to-speech REST API documentation](rest-text-to-speech.md). +See more details in the [Text to speech REST API documentation](rest-text-to-speech.md). For the private-endpoint-enabled Speech resource, the endpoint URL for the same operation needs to be modified. The same request will look like this: We'll use `my-private-link-speech.cognitiveservices.azure.com` as a sample Speec #### Construct endpoint URL -Usually in SDK scenarios (as well as in the Speech-to-text REST API for short audio and Text-to-speech REST API scenarios), Speech resources use the dedicated regional endpoints for different service offerings. The DNS name format for these endpoints is: +Usually in SDK scenarios (as well as in the Speech to text REST API for short audio and Text to speech REST API scenarios), Speech resources use the dedicated regional endpoints for different service offerings. The DNS name format for these endpoints is: `{region}.{speech service offering}.speech.microsoft.com` All possible values for the region (first element of the DNS name) are listed in | `commands` | [Custom Commands](custom-commands.md) | | `convai` | [Conversation Transcription](conversation-transcription.md) | | `s2s` | [Speech Translation](speech-translation.md) |-| `stt` | [Speech-to-text](speech-to-text.md) | -| `tts` | [Text-to-speech](text-to-speech.md) | +| `stt` | [Speech to text](speech-to-text.md) | +| `tts` | [Text to speech](text-to-speech.md) | | `voice` | [Custom Voice](how-to-custom-voice.md) | -So the earlier example (`westeurope.stt.speech.microsoft.com`) stands for a Speech-to-text endpoint in West Europe. +So the earlier example (`westeurope.stt.speech.microsoft.com`) stands for a Speech to text endpoint in West Europe. Private-endpoint-enabled endpoints communicate with Speech service via a special proxy. Because of that, *you must change the endpoint connection URLs*. Compare it with the output from [this section](#resolve-dns-from-other-networks) ### Speech resource with a custom domain name and without private endpoints: Usage with the REST APIs -#### Speech-to-text REST API +#### Speech to text REST API -Speech-to-text REST API usage is fully equivalent to the case of [private-endpoint-enabled Speech resources](#speech-to-text-rest-api). +Speech to text REST API usage is fully equivalent to the case of [private-endpoint-enabled Speech resources](#speech-to-text-rest-api). -#### Speech-to-text REST API for short audio and Text-to-speech REST API +#### Speech to text REST API for short audio and Text to speech REST API -In this case, usage of the Speech-to-text REST API for short audio and usage of the Text-to-speech REST API have no differences from the general case, with one exception. (See the following note.) You should use both APIs as described in the [speech-to-text REST API for short audio](rest-speech-to-text-short.md) and [Text-to-speech REST API](rest-text-to-speech.md) documentation. 
+In this case, usage of the Speech to text REST API for short audio and usage of the Text to speech REST API have no differences from the general case, with one exception. (See the following note.) You should use both APIs as described in the [Speech to text REST API for short audio](rest-speech-to-text-short.md) and [Text to speech REST API](rest-text-to-speech.md) documentation. > [!NOTE]-> When you're using the Speech-to-text REST API for short audio and Text-to-speech REST API in custom domain scenarios, use a Speech resource key passed through the `Ocp-Apim-Subscription-Key` header. (See details for [Speech-to-text REST API for short audio](rest-speech-to-text-short.md#request-headers) and [Text-to-speech REST API](rest-text-to-speech.md#request-headers)) +> When you're using the Speech to text REST API for short audio and Text to speech REST API in custom domain scenarios, use a Speech resource key passed through the `Ocp-Apim-Subscription-Key` header. (See details for [Speech to text REST API for short audio](rest-speech-to-text-short.md#request-headers) and [Text to speech REST API](rest-text-to-speech.md#request-headers)) > > Using an authorization token and passing it to the special endpoint via the `Authorization` header will work *only* if you've turned on the **All networks** access option in the **Networking** section of your Speech resource. In other cases you will get either `Forbidden` or `BadRequest` error when trying to obtain an authorization token. For pricing details, see [Azure Private Link pricing](https://azure.microsoft.co * [Azure Private Link](../../private-link/private-link-overview.md) * [Azure VNet service endpoint](../../virtual-network/virtual-network-service-endpoints-overview.md) * [Speech SDK](speech-sdk.md)-* [Speech-to-text REST API](rest-speech-to-text.md) -* [Text-to-speech REST API](rest-text-to-speech.md) +* [Speech to text REST API](rest-speech-to-text.md) +* [Text to speech REST API](rest-text-to-speech.md) |
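The speech-services-private-link row above repeatedly applies the same URL rewrite: replace the regional host with the resource's custom domain and prefix the path with the service-offering label. A small Python helper, shown here as a sketch using the article's sample custom domain, captures that mapping.

```python
from urllib.parse import urlparse

# The sample custom domain used throughout the private-link article; replace with your own.
CUSTOM_DOMAIN = "my-private-link-speech.cognitiveservices.azure.com"

def to_private_endpoint_url(regional_url: str, custom_domain: str = CUSTOM_DOMAIN) -> str:
    """Rewrite a regional special-endpoint URL for a private-endpoint-enabled resource."""
    parsed = urlparse(regional_url)
    # e.g. westeurope.tts.speech.microsoft.com -> service-offering label "tts"
    offering = parsed.hostname.split(".")[1]
    # The offering label becomes the first path segment under the custom domain.
    return f"https://{custom_domain}/{offering}{parsed.path}"

# Example from the article: the voices-list request for a West Europe resource.
print(to_private_endpoint_url("https://westeurope.tts.speech.microsoft.com/cognitiveservices/voices/list"))
# -> https://my-private-link-speech.cognitiveservices.azure.com/tts/cognitiveservices/voices/list
```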
cognitive-services | Speech Services Quotas And Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md | The following sections provide you with a quick guide to the quotas and limits t For information about adjustable quotas for Standard (S0) Speech resources, see [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-real-time-speech-to-text-concurrent-request-limit). The quotas and limits for Free (F0) Speech resources aren't adjustable. -### Speech-to-text quotas and limits per resource +### Speech to text quotas and limits per resource -This section describes speech-to-text quotas and limits per Speech resource. Unless otherwise specified, the limits aren't adjustable. +This section describes speech to text quotas and limits per Speech resource. Unless otherwise specified, the limits aren't adjustable. #### Real-time speech to text and speech translation -You can use real-time speech-to-text with the [Speech SDK](speech-sdk.md) or the [speech-to-text REST API for short audio](rest-speech-to-text-short.md). +You can use real-time speech to text with the [Speech SDK](speech-sdk.md) or the [Speech to text REST API for short audio](rest-speech-to-text-short.md). > [!IMPORTANT]-> These limits apply to concurrent real-time speech-to-text requests and speech translation requests combined. For example, if you have 60 concurrent speech-to-text requests and 40 concurrent speech translation requests, you'll reach the limit of 100 concurrent requests. +> These limits apply to concurrent real-time speech to text requests and speech translation requests combined. For example, if you have 60 concurrent speech to text requests and 40 concurrent speech translation requests, you'll reach the limit of 100 concurrent requests. | Quota | Free (F0) | Standard (S0) | |--|--|--| You can use real-time speech-to-text with the [Speech SDK](speech-sdk.md) or the | Quota | Free (F0) | Standard (S0) | |--|--|--|-| [Speech-to-text REST API](rest-speech-to-text.md) limit | Not available for F0 | 300 requests per minute | +| [Speech to text REST API](rest-speech-to-text.md) limit | Not available for F0 | 300 requests per minute | | Max audio input file size | N/A | 1 GB | | Max input blob size (for example, can contain more than one file in a zip archive). Note the file size limit from the preceding row. | N/A | 2.5 GB | | Max blob container size | N/A | 5 GB | The limits in this table apply per Speech resource when you create a Custom Spee | Max pronunciation dataset file size for data import | 1 KB | 1 MB | | Max text size when you're using the `text` parameter in the [Models_Create](https://westcentralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Create/) API request | 200 KB | 500 KB | -### Text-to-speech quotas and limits per resource +### Text to speech quotas and limits per resource -This section describes text-to-speech quotas and limits per Speech resource. Unless otherwise specified, the limits aren't adjustable. +This section describes text to speech quotas and limits per Speech resource. Unless otherwise specified, the limits aren't adjustable. 
-#### Common text-to-speech quotas and limits +#### Common text to speech quotas and limits | Quota | Free (F0) | Standard (S0) | |--|--|--| Some of the Speech service quotas are adjustable. This section provides addition The following quotas are adjustable for Standard (S0) resources. The Free (F0) request limits aren't adjustable. -- Speech-to-text [concurrent request limit](#real-time-speech-to-text-and-speech-translation) for base model endpoint and custom endpoint-- Text-to-speech [maximum number of transactions per time period](#text-to-speech-quotas-and-limits-per-resource) for prebuilt neural voices and custom neural voices+- Speech to text [concurrent request limit](#real-time-speech-to-text-and-speech-translation) for base model endpoint and custom endpoint +- Text to speech [maximum number of transactions per time period](#text-to-speech-quotas-and-limits-per-resource) for prebuilt neural voices and custom neural voices - Speech translation [concurrent request limit](#real-time-speech-to-text-and-speech-translation) Before requesting a quota increase (where applicable), ensure that it's necessary. Speech service uses autoscaling technologies to bring the required computational resources in on-demand mode. At the same time, Speech service tries to keep your costs low by not maintaining an excessive amount of hardware capacity. Let's look at an example. Suppose that your application receives response code 4 To minimize issues related to throttling, it's a good idea to use the following techniques: - Implement retry logic in your application.-- Avoid sharp changes in the workload. Increase the workload gradually. For example, let's say your application is using text-to-speech, and your current workload is 5 TPS. The next second, you increase the load to 20 TPS (that is, four times more). Speech service immediately starts scaling up to fulfill the new load, but is unable to scale as needed within one second. Some of the requests will get response code 429 (too many requests).+- Avoid sharp changes in the workload. Increase the workload gradually. For example, let's say your application is using text to speech, and your current workload is 5 TPS. The next second, you increase the load to 20 TPS (that is, four times more). Speech service immediately starts scaling up to fulfill the new load, but is unable to scale as needed within one second. Some of the requests will get response code 429 (too many requests). - Test different load increase patterns. For more information, see the [workload pattern example](#example-of-a-workload-pattern-best-practice). - Create additional Speech service resources in *different* regions, and distribute the workload among them. (Creating multiple Speech service resources in the same region will not affect the performance, because all resources will be served by the same backend cluster). The next sections describe specific cases of adjusting quotas. -### Speech-to-text: increase real-time speech-to-text concurrent request limit +### Speech to text: increase real-time speech to text concurrent request limit -By default, the number of concurrent real-time speech-to-text and speech translation [requests combined](#real-time-speech-to-text-and-speech-translation) is limited to 100 per resource in the base model, and 100 per custom endpoint in the custom model. For the standard pricing tier, you can increase this amount. 
Before submitting the request, ensure that you're familiar with the material discussed earlier in this article, such as the best practices to mitigate throttling. +By default, the number of concurrent real-time speech to text and speech translation [requests combined](#real-time-speech-to-text-and-speech-translation) is limited to 100 per resource in the base model, and 100 per custom endpoint in the custom model. For the standard pricing tier, you can increase this amount. Before submitting the request, ensure that you're familiar with the material discussed earlier in this article, such as the best practices to mitigate throttling. >[!NOTE] > Concurrent request limits for base and custom models need to be adjusted separately. You can have a Speech service resource that's associated with many custom endpoints hosting many custom model deployments. As needed, the limit adjustments per custom endpoint must be requested separately. Initiate the increase of the limit for concurrent requests for your resource, or 1. Go to the [Azure portal](https://portal.azure.com/). 1. Select the Speech service resource for which you would like to increase (or to check) the concurrency request limit. 1. In the **Support + troubleshooting** group, select **New support request**. A new window will appear, with auto-populated information about your Azure subscription and Azure resource.-1. In **Summary**, describe what you want (for example, "Increase speech-to-text concurrency request limit"). +1. In **Summary**, describe what you want (for example, "Increase speech to text concurrency request limit"). 1. In **Problem type**, select **Quota or Subscription issues**. 1. In **Problem subtype**, select either: - **Quota or concurrent requests increase** for an increase request. - **Quota or usage validation** to check the existing limit. 1. Select **Next: Solutions**. Proceed further with the request creation. 1. On the **Details** tab, in the **Description** field, enter the following:- - A note that the request is about the speech-to-text quota. + - A note that the request is about the speech to text quota. - Choose either the base or custom model. - The Azure resource information you [collected previously](#have-the-required-information-ready). - Any other required information. Suppose that a Speech service resource has the concurrent request limit set to 3 Generally, it's a very good idea to test the workload and the workload patterns before going to production. -### Text-to-speech: increase concurrent request limit +### Text to speech: increase concurrent request limit For the standard pricing tier, you can increase this amount. Before submitting the request, ensure that you're familiar with the material discussed earlier in this article, such as the best practices to mitigate throttling. Initiate the increase of the limit for concurrent requests for your resource, or 1. Go to the [Azure portal](https://portal.azure.com/). 1. Select the Speech service resource for which you would like to increase (or to check) the concurrency request limit. 1. In the **Support + troubleshooting** group, select **New support request**. A new window will appear, with auto-populated information about your Azure subscription and Azure resource.-1. In **Summary**, describe what you want (for example, "Increase text-to-speech concurrency request limit"). +1. In **Summary**, describe what you want (for example, "Increase text to speech concurrency request limit"). 1. In **Problem type**, select **Quota or Subscription issues**. 1. 
In **Problem subtype**, select either: - **Quota or concurrent requests increase** for an increase request. - **Quota or usage validation** to check the existing limit. 1. On the **Recommended solution** tab, select **Next**. 1. On the **Additional details** tab, fill in all the required items. And in the **Details** field, enter the following:- - A note that the request is about the text-to-speech quota. + - A note that the request is about the text to speech quota. - Choose either the prebuilt voice or custom voice. - The Azure resource information you [collected previously](#prepare-the-required-information). - Any other required information. |
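The throttling guidance above pairs naturally with client-side retry logic. The following is a minimal sketch, not an official sample: the endpoint URL and subscription key are hypothetical placeholders, the `Ocp-Apim-Subscription-Key` header follows the general Cognitive Services REST convention, and the code simply backs off exponentially whenever the service answers with response code 429.

```python
import time
import requests

# Hypothetical endpoint and key, for illustration only; substitute the REST
# route and subscription key for your own Speech resource.
ENDPOINT = "https://<region>.api.cognitive.microsoft.com/<speech-route>"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}

def post_with_backoff(payload: bytes, max_retries: int = 5) -> requests.Response:
    """Retry on HTTP 429 (too many requests) with exponential backoff."""
    delay = 1.0
    response = requests.post(ENDPOINT, headers=HEADERS, data=payload)
    for _ in range(max_retries):
        if response.status_code != 429:
            break
        # Prefer the wait time suggested by the service when one is provided.
        time.sleep(float(response.headers.get("Retry-After", delay)))
        delay *= 2  # lengthen the wait instead of hammering the service
        response = requests.post(ENDPOINT, headers=HEADERS, data=payload)
    return response
```

Combining backoff like this with a gradual ramp-up of load, rather than a sudden jump in transactions per second, keeps autoscaling ahead of the workload.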
cognitive-services | Speech Ssml Phonetic Sets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-ssml-phonetic-sets.md | -Phonetic alphabets are used with the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) to improve the pronunciation of text-to-speech voices. To learn when and how to use each alphabet, see [Use phonemes to improve pronunciation](speech-synthesis-markup-pronunciation.md#phoneme-element). +Phonetic alphabets are used with the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) to improve the pronunciation of text to speech voices. To learn when and how to use each alphabet, see [Use phonemes to improve pronunciation](speech-synthesis-markup-pronunciation.md#phoneme-element). Speech service supports the [International Phonetic Alphabet (IPA)](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet) suprasegmentals that are listed here. You set `ipa` as the `alphabet` in [SSML](speech-synthesis-markup-pronunciation.md#phoneme-element). |
cognitive-services | Speech Studio Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-studio-overview.md | -> You can try speech-to-text and text-to-speech in [Speech Studio](https://aka.ms/speechstudio/) without signing up or writing any code. +> You can try speech to text and text to speech in [Speech Studio](https://aka.ms/speechstudio/) without signing up or writing any code. ## Speech Studio scenarios For a demonstration of these scenarios in Speech Studio, view this [introductory In Speech Studio, the following Speech service features are available as project types: -* [Real-time speech-to-text](https://aka.ms/speechstudio/speechtotexttool): Quickly test speech-to-text by dragging audio files here without having to use any code. This is a demo tool for seeing how speech-to-text works on your audio samples. To explore the full functionality, see [What is speech-to-text?](speech-to-text.md). +* [Real-time speech to text](https://aka.ms/speechstudio/speechtotexttool): Quickly test speech to text by dragging audio files here without having to use any code. This is a demo tool for seeing how speech to text works on your audio samples. To explore the full functionality, see [What is speech to text?](speech-to-text.md). * [Custom Speech](https://aka.ms/speechstudio/customspeech): Create speech recognition models that are tailored to specific vocabulary sets and styles of speaking. In contrast to the base speech recognition model, Custom Speech models become part of your unique competitive advantage because they're not publicly accessible. To get started with uploading sample audio to create a Custom Speech model, see [Upload training and testing datasets](how-to-custom-speech-upload-data.md). In Speech Studio, the following Speech service features are available as project * [Voice Gallery](https://aka.ms/speechstudio/voicegallery): Build apps and services that speak naturally. Choose from a broad portfolio of [languages, voices, and variants](language-support.md?tabs=tts). Bring your scenarios to life with highly expressive and human-like neural voices. -* [Custom Voice](https://aka.ms/speechstudio/customvoice): Create custom, one-of-a-kind voices for text-to-speech. You supply audio files and create matching transcriptions in Speech Studio, and then use the custom voices in your applications. To create and use custom voices via endpoints, see [Create and use your voice model](how-to-custom-voice-create-voice.md). +* [Custom Voice](https://aka.ms/speechstudio/customvoice): Create custom, one-of-a-kind voices for text to speech. You supply audio files and create matching transcriptions in Speech Studio, and then use the custom voices in your applications. To create and use custom voices via endpoints, see [Create and use your voice model](how-to-custom-voice-create-voice.md). -* [Audio Content Creation](https://aka.ms/speechstudio/audiocontentcreation): A no-code approach for text-to-speech synthesis. You can use the output audio as-is, or as a starting point for further customization. You can build highly natural audio content for a variety of scenarios, such as audiobooks, news broadcasts, video narrations, and chat bots. For more information, see the [Audio Content Creation](how-to-audio-content-creation.md) documentation. +* [Audio Content Creation](https://aka.ms/speechstudio/audiocontentcreation): A no-code approach for text to speech synthesis. 
You can use the output audio as-is, or as a starting point for further customization. You can build highly natural audio content for a variety of scenarios, such as audiobooks, news broadcasts, video narrations, and chat bots. For more information, see the [Audio Content Creation](how-to-audio-content-creation.md) documentation. * [Custom Keyword](https://aka.ms/speechstudio/customkeyword): A custom keyword is a word or short phrase that you can use to voice-activate a product. You create a custom keyword in Speech Studio, and then generate a binary file to [use with the Speech SDK](custom-keyword-basics.md) in your applications. |
cognitive-services | Speech Synthesis Markup Pronunciation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup-pronunciation.md | -You can use Speech Synthesis Markup Language (SSML) with text-to-speech to specify how the speech is pronounced. For example, you can use SSML with phonemes and a custom lexicon to improve pronunciation. You can also use SSML to define how a word or mathematical expression is pronounced. +You can use Speech Synthesis Markup Language (SSML) with text to speech to specify how the speech is pronounced. For example, you can use SSML with phonemes and a custom lexicon to improve pronunciation. You can also use SSML to define how a word or mathematical expression is pronounced. Refer to the sections below for details about how to use SSML elements to improve pronunciation. For more information about SSML syntax, see [SSML document structure and events](speech-synthesis-markup-structure.md). Usage of the `phoneme` element's attributes are described in the following table | Attribute | Description | Required or optional | | - | - | - | | `alphabet` | The phonetic alphabet to use when you synthesize the pronunciation of the string in the `ph` attribute. The string that specifies the alphabet must be specified in lowercase letters. The following options are the possible alphabets that you can specify:<ul><li>`ipa` – See [SSML phonetic alphabets](speech-ssml-phonetic-sets.md)</li><li>`sapi` – See [SSML phonetic alphabets](speech-ssml-phonetic-sets.md)</li><li>`ups` – See [Universal Phone Set](https://documentation.help/Microsoft-Speech-Platform-SDK-11/17509a49-cae7-41f5-b61d-07beaae872ea.htm)</li><li>`x-sampa` – See [SSML phonetic alphabets](speech-ssml-phonetic-sets.md#map-x-sampa-to-ipa)</li></ul><br>The alphabet applies only to the `phoneme` in the element. | Optional |-| `ph` | A string containing phones that specify the pronunciation of the word in the `phoneme` element. If the specified string contains unrecognized phones, text-to-speech rejects the entire SSML document and produces none of the speech output specified in the document.<br/><br/>For `ipa`, to stress one syllable by placing stress symbol before this syllable, you need to mark all syllables for the word. Or else, the syllable before this stress symbol will be stressed. For `sapi`, if you want to stress one syllable, you need to place the stress symbol after this syllable, whether or not all syllables of the word are marked.| Required | +| `ph` | A string containing phones that specify the pronunciation of the word in the `phoneme` element. If the specified string contains unrecognized phones, text to speech rejects the entire SSML document and produces none of the speech output specified in the document.<br/><br/>For `ipa`, to stress one syllable by placing stress symbol before this syllable, you need to mark all syllables for the word. Or else, the syllable before this stress symbol will be stressed. For `sapi`, if you want to stress one syllable, you need to place the stress symbol after this syllable, whether or not all syllables of the word are marked.| Required | ### phoneme examples You can define how single entities (such as company, a medical term, or an emoji > [!NOTE] > For a list of locales that support custom lexicon, see footnotes in the [language support](language-support.md?tabs=tts) table. > -> The `lexicon` element is not supported by the [Long Audio API](migrate-to-batch-synthesis.md#text-inputs). 
For long-form text-to-speech, use the [batch synthesis API](batch-synthesis.md) (Preview) instead. +> The `lexicon` element is not supported by the [Long Audio API](migrate-to-batch-synthesis.md#text-inputs). For long-form text to speech, use the [batch synthesis API](batch-synthesis.md) (Preview) instead. Usage of the `lexicon` element's attributes are described in the following table. To define how multiple entities are read, you can define them in a custom lexico Here are some limitations of the custom lexicon file: - **File size**: The custom lexicon file size is limited to a maximum of 100 KB. If the file size exceeds the 100-KB limit, the synthesis request fails.-- **Lexicon cache refresh**: The custom lexicon is cached with the URI as the key on text-to-speech when it's first loaded. The lexicon with the same URI won't be reloaded within 15 minutes, so the custom lexicon change needs to wait 15 minutes at the most to take effect.+- **Lexicon cache refresh**: The custom lexicon is cached with the URI as the key on text to speech when it's first loaded. The lexicon with the same URI won't be reloaded within 15 minutes, so the custom lexicon change needs to wait 15 minutes at the most to take effect. The supported elements and attributes of a custom lexicon XML file are described in the [Pronunciation Lexicon Specification (PLS) Version 1.0](https://www.w3.org/TR/pronunciation-lexicon/). Here are some examples of the supported elements and attributes: The MathML entities aren't supported by XML syntax, so you must use the correspo ### MathML examples -The text-to-speech output for this example is "a squared plus b squared equals c squared". +The text to speech output for this example is "a squared plus b squared equals c squared". ```xml <speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xmlns:mstts='http://www.w3.org/2001/mstts' xml:lang='en-US'><voice name='en-US-JennyNeural'><math xmlns='http://www.w3.org/1998/Math/MathML'><msup><mi>a</mi><mn>2</mn></msup><mo>+</mo><msup><mi>b</mi><mn>2</mn></msup><mo>=</mo><msup><mi>c</mi><mn>2</mn></msup></math></voice></speak> |
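To show how a `phoneme` element is typically sent to the service, here is a minimal sketch using the Speech SDK for Python. The key, region, and IPA string are placeholders, and the `azure-cognitiveservices-speech` calls are an assumption based on common SDK usage, not code quoted from the article.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials; substitute your own key and region.
speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

# Illustrative IPA string: every syllable is marked and the stress symbol is
# placed before the stressed syllable, per the ph attribute guidance above.
ssml = """<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>
  <voice name='en-US-JennyNeural'>
    <phoneme alphabet='ipa' ph='tə.ˈmeɪ.toʊ'>tomato</phoneme> soup
  </voice>
</speak>"""

result = synthesizer.speak_ssml_async(ssml).get()
print(result.reason)  # unrecognized phones cause the whole SSML document to be rejected
```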
cognitive-services | Speech Synthesis Markup Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup-structure.md | -The Speech Synthesis Markup Language (SSML) with input text determines the structure, content, and other characteristics of the text-to-speech output. For example, you can use SSML to define a paragraph, a sentence, a break or a pause, or silence. You can wrap text with event tags such as bookmark or viseme that can be processed later by your application. +The Speech Synthesis Markup Language (SSML) with input text determines the structure, content, and other characteristics of the text to speech output. For example, you can use SSML to define a paragraph, a sentence, a break or a pause, or silence. You can wrap text with event tags such as bookmark or viseme that can be processed later by your application. Refer to the sections below for details about how to structure elements in the SSML document. The supported values for attributes of the `break` element were [described previ ```xml <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US"> <voice name="en-US-JennyNeural">- Welcome <break /> to text-to-speech. - Welcome <break strength="medium" /> to text-to-speech. - Welcome <break time="750ms" /> to text-to-speech. + Welcome <break /> to text to speech. + Welcome <break strength="medium" /> to text to speech. + Welcome <break time="750ms" /> to text to speech. </voice> </speak> ``` |
cognitive-services | Speech Synthesis Markup Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup-voice.md | -Use Speech Synthesis Markup Language (SSML) to specify the text-to-speech voice, language, name, style, and role. You can use multiple voices in a single SSML document. Adjust the emphasis, speaking rate, pitch, and volume. You can also use SSML to insert pre-recorded audio, such as a sound effect or a musical note. +Use Speech Synthesis Markup Language (SSML) to specify the text to speech voice, language, name, style, and role. You can use multiple voices in a single SSML document. Adjust the emphasis, speaking rate, pitch, and volume. You can also use SSML to insert pre-recorded audio, such as a sound effect or a musical note. Refer to the sections below for details about how to use SSML elements to specify voice and sound. For more information about SSML syntax, see [SSML document structure and events](speech-synthesis-markup-structure.md). ## Voice element -At least one `voice` element must be specified within each SSML [speak](speech-synthesis-markup-structure.md#speak-root-element) element. This element determines the voice that's used for text-to-speech. +At least one `voice` element must be specified within each SSML [speak](speech-synthesis-markup-structure.md#speak-root-element) element. This element determines the voice that's used for text to speech. You can include multiple `voice` elements in a single SSML document. Each `voice` element can specify a different voice. You can also use the same voice multiple times with different settings, such as when you [change the silence duration](speech-synthesis-markup-structure.md#add-silence) between sentences. Usage of the `voice` element's attributes are described in the following table. | Attribute | Description | Required or optional | | - | - | - |-| `name` | The voice used for text-to-speech output. For a complete list of supported prebuilt voices, see [Language support](language-support.md?tabs=tts).| Required| +| `name` | The voice used for text to speech output. For a complete list of supported prebuilt voices, see [Language support](language-support.md?tabs=tts).| Required| | `effect` |The audio effect processor that's used to optimize the quality of the synthesized speech output for specific scenarios on devices. <br/><br/>For some scenarios in production environments, the auditory experience may be degraded due to the playback distortion on certain devices. For example, the synthesized speech from a car speaker may sound dull and muffled due to environmental factors such as speaker response, room reverberation, and background noise. The passenger might have to turn up the volume to hear more clearly. To avoid manual operations in such a scenario, the audio effect processor can make the sound clearer by compensating the distortion of playback.<br/><br/>The following values are supported:<br/><ul><li>`eq_car` – Optimize the auditory experience when providing high-fidelity speech in cars, buses, and other enclosed automobiles.</li><li>`eq_telecomhp8k` – Optimize the auditory experience for narrowband speech in telecom or telephone scenarios. We recommend a sampling rate of 8 kHz. 
If the sample rate isn't 8 kHz, the auditory quality of the output speech won't be optimized.</li></ul><br/>If the value is missing or invalid, this attribute will be ignored and no effect will be applied.| Optional | ### Voice examples This example uses the `en-US-JennyNeural` voice. #### Multiple voices example -Within the `speak` element, you can specify multiple voices for text-to-speech output. These voices can be in different languages. For each voice, the text must be wrapped in a `voice` element. +Within the `speak` element, you can specify multiple voices for text to speech output. These voices can be in different languages. For each voice, the text must be wrapped in a `voice` element. This example alternates between the `en-US-JennyNeural` and `en-US-ChristopherNeural` voices. This SSML snippet shows how to use the `lang` element (and `xml:lang` attribute) </speak> ``` -Within the `speak` element, you can specify multiple languages including `en-US` for text-to-speech output. For each adjusted language, the text must match the language and be wrapped in a `voice` element. This SSML snippet shows how to use `<lang xml:lang>` to change the speaking languages to `es-MX`, `en-US`, and `fr-FR`. +Within the `speak` element, you can specify multiple languages including `en-US` for text to speech output. For each adjusted language, the text must match the language and be wrapped in a `voice` element. This SSML snippet shows how to use `<lang xml:lang>` to change the speaking languages to `es-MX`, `en-US`, and `fr-FR`. ```xml <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" Within the `speak` element, you can specify multiple languages including `en-US` ## Adjust prosody -The `prosody` element is used to specify changes to pitch, contour, range, rate, and volume for the text-to-speech output. The `prosody` element can contain text and the following elements: `audio`, `break`, `p`, `phoneme`, `prosody`, `say-as`, `sub`, and `s`. +The `prosody` element is used to specify changes to pitch, contour, range, rate, and volume for the text to speech output. The `prosody` element can contain text and the following elements: `audio`, `break`, `p`, `phoneme`, `prosody`, `say-as`, `sub`, and `s`. -Because prosodic attribute values can vary over a wide range, the speech recognizer interprets the assigned values as a suggestion of what the actual prosodic values of the selected voice should be. Text-to-speech limits or substitutes values that aren't supported. Examples of unsupported values are a pitch of 1 MHz or a volume of 120. +Because prosodic attribute values can vary over a wide range, the speech recognizer interprets the assigned values as a suggestion of what the actual prosodic values of the selected voice should be. Text to speech limits or substitutes values that aren't supported. Examples of unsupported values are a pitch of 1 MHz or a volume of 120. Usage of the `prosody` element's attributes are described in the following table. This SSML snippet illustrates how the `rate` attribute is used to change the spe <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US"> <voice name="en-US-JennyNeural"> <prosody rate="+30.00%">- Enjoy using text-to-speech. + Enjoy using text to speech. 
</prosody> </voice> </speak> This SSML snippet illustrates how the `volume` attribute is used to change the v <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US"> <voice name="en-US-JennyNeural"> <prosody volume="+20.00%">- Enjoy using text-to-speech. + Enjoy using text to speech. </prosody> </voice> </speak> This SSML snippet illustrates how the `pitch` attribute is used so that the voic ```xml <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US"> <voice name="en-US-JennyNeural">- Welcome to <prosody pitch="high">Enjoy using text-to-speech.</prosody> + Welcome to <prosody pitch="high">Enjoy using text to speech.</prosody> </voice> </speak> ``` Any audio included in the SSML document must meet these requirements: * The audio must not contain any customer-specific or other sensitive information. > [!NOTE]-> The `audio` element is not supported by the [Long Audio API](migrate-to-batch-synthesis.md#text-inputs). For long-form text-to-speech, use the [batch synthesis API](batch-synthesis.md) (Preview) instead. +> The `audio` element is not supported by the [Long Audio API](migrate-to-batch-synthesis.md#text-inputs). For long-form text to speech, use the [batch synthesis API](batch-synthesis.md) (Preview) instead. Usage of the `audio` element's attributes are described in the following table. | Attribute | Description | Required or optional | | - | - | - |-| `src` | The URI location of the audio file. The audio must be hosted on an internet-accessible HTTPS endpoint. HTTPS is required, and the domain hosting the file must present a valid, trusted TLS/SSL certificate. We recommend that you put the audio file into Blob Storage in the same Azure region as the text-to-speech endpoint to minimize the latency. | Required | +| `src` | The URI location of the audio file. The audio must be hosted on an internet-accessible HTTPS endpoint. HTTPS is required, and the domain hosting the file must present a valid, trusted TLS/SSL certificate. We recommend that you put the audio file into Blob Storage in the same Azure region as the text to speech endpoint to minimize the latency. | Required | ### Audio examples A good place to start is by trying out the slew of educational apps that are hel ## Background audio -You can use the `mstts:backgroundaudio` element to add background audio to your SSML documents or mix an audio file with text-to-speech. With `mstts:backgroundaudio`, you can loop an audio file in the background, fade in at the beginning of text-to-speech, and fade out at the end of text-to-speech. +You can use the `mstts:backgroundaudio` element to add background audio to your SSML documents or mix an audio file with text to speech. With `mstts:backgroundaudio`, you can loop an audio file in the background, fade in at the beginning of text to speech, and fade out at the end of text to speech. -If the background audio provided is shorter than the text-to-speech or the fade out, it loops. If it's longer than the text-to-speech, it stops when the fade out has finished. +If the background audio provided is shorter than the text to speech or the fade out, it loops. If it's longer than the text to speech, it stops when the fade out has finished. Only one background audio file is allowed per SSML document. You can intersperse `audio` tags within the `voice` element to add more audio to your SSML document. > [!NOTE] > The `mstts:backgroundaudio` element should be put in front of all `voice` elements. 
If specified, it must be the first child of the `speak` element. >-> The `mstts:backgroundaudio` element is not supported by the [Long Audio API](migrate-to-batch-synthesis.md#text-inputs). For long-form text-to-speech, use the [batch synthesis API](batch-synthesis.md) (Preview) instead. +> The `mstts:backgroundaudio` element is not supported by the [Long Audio API](migrate-to-batch-synthesis.md#text-inputs). For long-form text to speech, use the [batch synthesis API](batch-synthesis.md) (Preview) instead. Usage of the `mstts:backgroundaudio` element's attributes are described in the following table. |
cognitive-services | Speech Synthesis Markup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md | Title: Speech Synthesis Markup Language (SSML) overview - Speech service -description: Use the Speech Synthesis Markup Language to control pronunciation and prosody in text-to-speech. +description: Use the Speech Synthesis Markup Language to control pronunciation and prosody in text to speech. -Speech Synthesis Markup Language (SSML) is an XML-based markup language that can be used to fine-tune the text-to-speech output attributes such as pitch, pronunciation, speaking rate, volume, and more. You have more control and flexibility compared to plain text input. +Speech Synthesis Markup Language (SSML) is an XML-based markup language that can be used to fine-tune the text to speech output attributes such as pitch, pronunciation, speaking rate, volume, and more. You have more control and flexibility compared to plain text input. > [!TIP] > You can hear voices in different styles and pitches reading example text via the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery). Speech Synthesis Markup Language (SSML) is an XML-based markup language that can You can use SSML to: -- [Define the input text structure](speech-synthesis-markup-structure.md) that determines the structure, content, and other characteristics of the text-to-speech output. For example, you can use SSML to define a paragraph, a sentence, a break or a pause, or silence. You can wrap text with event tags such as bookmark or viseme that can be processed later by your application.+- [Define the input text structure](speech-synthesis-markup-structure.md) that determines the structure, content, and other characteristics of the text to speech output. For example, you can use SSML to define a paragraph, a sentence, a break or a pause, or silence. You can wrap text with event tags such as bookmark or viseme that can be processed later by your application. - [Choose the voice](speech-synthesis-markup-voice.md), language, name, style, and role. You can use multiple voices in a single SSML document. Adjust the emphasis, speaking rate, pitch, and volume. You can also use SSML to insert pre-recorded audio, such as a sound effect or a musical note. - [Control pronunciation](speech-synthesis-markup-pronunciation.md) of the output audio. For example, you can use SSML with phonemes and a custom lexicon to improve pronunciation. You can also use SSML to define how a word or mathematical expression is pronounced. ## Use SSML > [!IMPORTANT]-> You're billed for each character that's converted to speech, including punctuation. Although the SSML document itself is not billable, optional elements that are used to adjust how the text is converted to speech, like phonemes and pitch, are counted as billable characters. For more information, see [text-to-speech pricing notes](text-to-speech.md#pricing-note). +> You're billed for each character that's converted to speech, including punctuation. Although the SSML document itself is not billable, optional elements that are used to adjust how the text is converted to speech, like phonemes and pitch, are counted as billable characters. For more information, see [text to speech pricing notes](text-to-speech.md#pricing-note). You can use SSML in the following ways: |
cognitive-services | Speech To Text | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-to-text.md | Title: Speech-to-text overview - Speech service + Title: Speech to text overview - Speech service -description: Get an overview of the benefits and capabilities of the speech-to-text feature of the Speech Service. +description: Get an overview of the benefits and capabilities of the speech to text feature of the Speech Service. -# What is speech-to-text? +# What is speech to text? -In this overview, you learn about the benefits and capabilities of the speech-to-text feature of the Speech service, which is part of Azure Cognitive Services. Speech-to-text can be used for [real-time](#real-time-speech-to-text) or [batch transcription](#batch-transcription) of audio streams into text. +In this overview, you learn about the benefits and capabilities of the speech to text feature of the Speech service, which is part of Azure Cognitive Services. Speech to text can be used for [real-time](#real-time-speech-to-text) or [batch transcription](#batch-transcription) of audio streams into text. > [!NOTE] > To compare pricing of [real-time](#real-time-speech-to-text) to [batch transcription](#batch-transcription), see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). -For a full list of available speech-to-text languages, see [Language and voice support](language-support.md?tabs=stt). +For a full list of available speech to text languages, see [Language and voice support](language-support.md?tabs=stt). -## Real-time speech-to-text +## Real-time speech to text -With real-time speech-to-text, the audio is transcribed as speech is recognized from a microphone or file. Use real-time speech-to-text for applications that need to transcribe audio in real-time such as: +With real-time speech to text, the audio is transcribed as speech is recognized from a microphone or file. Use real-time speech to text for applications that need to transcribe audio in real-time such as: - Transcriptions, captions, or subtitles for live meetings - Contact center agent assist - Dictation Batch transcription is used to transcribe a large amount of audio in storage. Yo - Diarization Batch transcription is available via:-- [Speech-to-text REST API](rest-speech-to-text.md): To get started, see [How to use batch transcription](batch-transcription.md) and [Batch transcription samples (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch).+- [Speech to text REST API](rest-speech-to-text.md): To get started, see [How to use batch transcription](batch-transcription.md) and [Batch transcription samples (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch). - The [Speech CLI](spx-overview.md) supports both real-time and batch transcription. For Speech CLI help with batch transcriptions, run the following command: ```azurecli-interactive spx help batch transcription Batch transcription is available via: ## Custom Speech -With [Custom Speech](./custom-speech-overview.md), you can evaluate and improve the accuracy of speech recognition for your applications and products. A custom speech model can be used for [real-time speech-to-text](speech-to-text.md), [speech translation](speech-translation.md), and [batch transcription](batch-transcription.md). 
+With [Custom Speech](./custom-speech-overview.md), you can evaluate and improve the accuracy of speech recognition for your applications and products. A custom speech model can be used for [real-time speech to text](speech-to-text.md), [speech translation](speech-translation.md), and [batch transcription](batch-transcription.md). > [!TIP] > A [hosted deployment endpoint](how-to-custom-speech-deploy-model.md) isn't required to use Custom Speech with the [Batch transcription API](batch-transcription.md). You can conserve resources if the [custom speech model](how-to-custom-speech-train-model.md) is only used for batch transcription. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). Out of the box, speech recognition utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pre-trained with dialects and phonetics representing a variety of common domains. When you make a speech recognition request, the most recent base model for each [supported language](language-support.md?tabs=stt) is used by default. The base model works very well in most speech recognition scenarios. -A custom model can be used to augment the base model to improve recognition of domain-specific vocabulary specific to the application by providing text data to train the model. It can also be used to improve recognition based for the specific audio conditions of the application by providing audio data with reference transcriptions. For more information, see [Custom Speech](./custom-speech-overview.md) and [Speech-to-text REST API](rest-speech-to-text.md). +A custom model can be used to augment the base model to improve recognition of domain-specific vocabulary specific to the application by providing text data to train the model. It can also be used to improve recognition based for the specific audio conditions of the application by providing audio data with reference transcriptions. For more information, see [Custom Speech](./custom-speech-overview.md) and [Speech to text REST API](rest-speech-to-text.md). Customization options vary by language or locale. To verify support, see [Language and voice support for the Speech service](./language-support.md?tabs=stt). ## Next steps -- [Get started with speech-to-text](get-started-speech-to-text.md)+- [Get started with speech to text](get-started-speech-to-text.md) - [Create a batch transcription](batch-transcription-create.md) |
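For readers who want to see what the real-time path looks like in code, here is a minimal sketch of single-shot recognition from the default microphone with the Speech SDK for Python. The key and region are placeholders, and the `azure-cognitiveservices-speech` calls are an assumption based on common SDK usage rather than a sample quoted from the article.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials; substitute your own key and region.
speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
speech_config.speech_recognition_language = "en-US"

# With no audio config supplied, the SDK reads from the default microphone.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

print("Speak into your microphone...")
result = recognizer.recognize_once_async().get()

if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Recognized:", result.text)
else:
    print("No speech recognized:", result.reason)
```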
cognitive-services | Speech Translation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-translation.md | keywords: speech translation # What is speech translation? -In this article, you learn about the benefits and capabilities of the speech translation service, which enables real-time, multi-language speech-to-speech and speech-to-text translation of audio streams. +In this article, you learn about the benefits and capabilities of the speech translation service, which enables real-time, multi-language speech-to-speech and speech to text translation of audio streams. By using the Speech SDK or Speech CLI, you can give your applications, tools, and devices access to source transcriptions and translation outputs for the provided audio. Interim transcription and translation results are returned as speech is detected, and the final results can be converted into synthesized speech. For a list of languages supported for speech translation, see [Language and voic ## Core features -* Speech-to-text translation with recognition results. +* Speech to text translation with recognition results. * Speech-to-speech translation. * Support for translation to multiple target languages. * Interim recognition and translation results. For a list of languages supported for speech translation, see [Language and voic As your first step, try the [Speech translation quickstart](get-started-speech-translation.md). The speech translation service is available via the [Speech SDK](speech-sdk.md) and the [Speech CLI](spx-overview.md). -You'll find [Speech SDK speech-to-text and translation samples](https://github.com/Azure-Samples/cognitive-services-speech-sdk) on GitHub. These samples cover common scenarios, such as reading audio from a file or stream, continuous and single-shot recognition and translation, and working with custom models. +You'll find [Speech SDK speech to text and translation samples](https://github.com/Azure-Samples/cognitive-services-speech-sdk) on GitHub. These samples cover common scenarios, such as reading audio from a file or stream, continuous and single-shot recognition and translation, and working with custom models. ## Next steps |
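A minimal sketch of single-shot speech to text translation with the Speech SDK for Python follows. The credentials are placeholders, and the translation classes and parameter names shown are assumptions about the `azure-cognitiveservices-speech` package, not code taken from the article.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials; substitute your own key and region.
translation_config = speechsdk.translation.SpeechTranslationConfig(
    subscription="<your-key>", region="<your-region>")
translation_config.speech_recognition_language = "en-US"
translation_config.add_target_language("de")  # multiple targets can be added

recognizer = speechsdk.translation.TranslationRecognizer(
    translation_config=translation_config)

result = recognizer.recognize_once_async().get()
if result.reason == speechsdk.ResultReason.TranslatedSpeech:
    print("Recognized:", result.text)
    print("German:", result.translations["de"])
```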
cognitive-services | Spx Basics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-basics.md | Title: "Quickstart: The Speech CLI - Speech service" -description: In this Azure Speech CLI quickstart, you interact with speech-to-text, text-to-speech, and speech translation without having to write code. +description: In this Azure Speech CLI quickstart, you interact with speech to text, text to speech, and speech translation without having to write code. -In this article, you'll learn how to use the Azure Speech CLI (also called SPX) to access Speech services such as speech-to-text, text-to-speech, and speech translation, without having to write any code. The Speech CLI is production ready, and you can use it to automate simple workflows in the Speech service by using `.bat` or shell scripts. +In this article, you'll learn how to use the Azure Speech CLI (also called SPX) to access Speech services such as speech to text, text to speech, and speech translation, without having to write any code. The Speech CLI is production ready, and you can use it to automate simple workflows in the Speech service by using `.bat` or shell scripts. This article assumes that you have working knowledge of the Command Prompt window, terminal, or PowerShell. spx help recognize Additional help commands are listed in the console output. You can enter these commands to get detailed help about subcommands. -## Speech-to-text (speech recognition) +## Speech to text (speech recognition) To convert speech to text (speech recognition) by using your system's default microphone, run the following command: spx recognize --file /path/to/file.wav > [!TIP] > If you get stuck or want to learn more about the Speech CLI recognition options, you can run ```spx help recognize```. -## Text-to-speech (speech synthesis) +## Text to speech (speech synthesis) The following command takes text as input and then outputs the synthesized speech to the current active output device (for example, your computer speakers). spx synthesize --text "Bienvenue chez moi." --voice fr-FR-AlainNeural --speakers > [!TIP] > If you get stuck or want to learn more about the Speech CLI recognition options, you can run ```spx help synthesize```. -## Speech-to-text translation +## Speech to text translation -With the Speech CLI, you can also do speech-to-text translation. Run the following command to capture audio from your default microphone and output the translation as text. Keep in mind that you need to supply the `source` and `target` language with the `translate` command. +With the Speech CLI, you can also do speech to text translation. Run the following command to capture audio from your default microphone and output the translation as text. Keep in mind that you need to supply the `source` and `target` language with the `translate` command. ```console spx translate --microphone --source en-US --target ru-RU |
cognitive-services | Spx Batch Operations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-batch-operations.md | sample_2 8f9b378f6d0b42f99522f1173492f013 Sample text synthesized. ## Batch text to speech (speech synthesis) -The easiest way to run batch text-to-speech is to create a new `.tsv` (tab-separated-value) file, and use the `--foreach` command in the Speech CLI. You can create a `.tsv` file using your favorite text editor, for this example, let's call it `text_synthesis.tsv`: +The easiest way to run batch text to speech is to create a new `.tsv` (tab-separated-value) file, and use the `--foreach` command in the Speech CLI. You can create a `.tsv` file using your favorite text editor, for this example, let's call it `text_synthesis.tsv`: >[!IMPORTANT] > When copying the contents of this text file, make sure that your file has a **tab** not spaces between the file location and the text. Sometimes, when copying the contents from this example, tabs are converted to spaces causing the `spx` command to fail when run. |
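Because copied tabs often turn into spaces, it can help to generate the `.tsv` file programmatically instead of pasting it. This is a minimal sketch using only the Python standard library; the audio paths and text are hypothetical, and the column layout should be adjusted to match whatever you pass to `spx` with `--foreach`.

```python
import csv

# Hypothetical rows: output audio path in the first column, text to
# synthesize in the second. Adjust to the layout your spx command expects.
rows = [
    ("C:\\batch_synthesis\\wav\\sample_1.wav", "Sample text to synthesize."),
    ("C:\\batch_synthesis\\wav\\sample_2.wav", "Another line of sample text."),
]

with open("text_synthesis.tsv", "w", newline="", encoding="utf-8") as handle:
    writer = csv.writer(handle, delimiter="\t")  # writes real tab characters, not spaces
    writer.writerows(rows)
```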
cognitive-services | Spx Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-overview.md | -The Speech CLI is a command-line tool for using Speech service without having to write any code. The Speech CLI requires minimal setup. You can easily use it to experiment with key features of Speech service and see how it works with your use cases. Within minutes, you can run simple test workflows, such as batch speech-recognition from a directory of files or text-to-speech on a collection of strings from a file. Beyond simple workflows, the Speech CLI is production-ready, and you can scale it up to run larger processes by using automated `.bat` or shell scripts. +The Speech CLI is a command-line tool for using Speech service without having to write any code. The Speech CLI requires minimal setup. You can easily use it to experiment with key features of Speech service and see how it works with your use cases. Within minutes, you can run simple test workflows, such as batch speech-recognition from a directory of files or text to speech on a collection of strings from a file. Beyond simple workflows, the Speech CLI is production-ready, and you can scale it up to run larger processes by using automated `.bat` or shell scripts. Most features in the Speech SDK are available in the Speech CLI, and some advanced features and customizations are simplified in the Speech CLI. As you're deciding when to use the Speech CLI or the Speech SDK, consider the following guidance. Use the Speech SDK when: ## Get started -To get started with the Speech CLI, see the [quickstart](spx-basics.md). This article shows you how to run some basic commands. It also gives you slightly more advanced commands for running batch operations for speech-to-text and text-to-speech. After you've read the basics article, you should understand the syntax well enough to start writing some custom commands or automate simple Speech service operations. +To get started with the Speech CLI, see the [quickstart](spx-basics.md). This article shows you how to run some basic commands. It also gives you slightly more advanced commands for running batch operations for speech to text and text to speech. After you've read the basics article, you should understand the syntax well enough to start writing some custom commands or automate simple Speech service operations. ## Next steps |
cognitive-services | Swagger Documentation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/swagger-documentation.md | -# Generate a REST API client library for the Speech-to-text REST API +# Generate a REST API client library for the Speech to text REST API Speech service offers a Swagger specification to interact with a handful of REST APIs used to import data, create models, test model accuracy, create custom endpoints, queue up batch transcriptions, and manage subscriptions. Most operations available through the [Custom Speech area of the Speech Studio](https://aka.ms/speechstudio/customspeech) can be completed programmatically using these APIs. > [!NOTE]-> Speech service has several REST APIs for [Speech-to-text](rest-speech-to-text.md) and [Text-to-speech](rest-text-to-speech.md). +> Speech service has several REST APIs for [Speech to text](rest-speech-to-text.md) and [Text to speech](rest-text-to-speech.md). >-> However only [Speech-to-text REST API](rest-speech-to-text.md) is documented in the Swagger specification. See the documents referenced in the previous paragraph for the information on all other Speech Services REST APIs. +> However only [Speech to text REST API](rest-speech-to-text.md) is documented in the Swagger specification. See the documents referenced in the previous paragraph for the information on all other Speech Services REST APIs. ## Generating code from the Swagger specification You can use the Python library that you generated with the [Speech service sampl ## Next steps * [Speech service samples on GitHub](https://aka.ms/csspeech/samples).-* [Speech-to-text REST API](rest-speech-to-text.md) +* [Speech to text REST API](rest-speech-to-text.md) |
cognitive-services | Text To Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/text-to-speech.md | Title: Text-to-speech overview - Speech service + Title: Text to speech overview - Speech service -description: Get an overview of the benefits and capabilities of the text-to-speech feature of the Speech service. +description: Get an overview of the benefits and capabilities of the text to speech feature of the Speech service. -# What is text-to-speech? +# What is text to speech? -In this overview, you learn about the benefits and capabilities of the text-to-speech feature of the Speech service, which is part of Azure Cognitive Services. +In this overview, you learn about the benefits and capabilities of the text to speech feature of the Speech service, which is part of Azure Cognitive Services. -Text-to-speech enables your applications, tools, or devices to convert text into humanlike synthesized speech. The text-to-speech capability is also known as speech synthesis. Use humanlike prebuilt neural voices out of the box, or create a custom neural voice that's unique to your product or brand. For a full list of supported voices, languages, and locales, see [Language and voice support for the Speech service](language-support.md?tabs=tts). +Text to speech enables your applications, tools, or devices to convert text into humanlike synthesized speech. The text to speech capability is also known as speech synthesis. Use humanlike prebuilt neural voices out of the box, or create a custom neural voice that's unique to your product or brand. For a full list of supported voices, languages, and locales, see [Language and voice support for the Speech service](language-support.md?tabs=tts). ## Core features -Text-to-speech includes the following features: +Text to speech includes the following features: | Feature | Summary | Demo | | | | | | Prebuilt neural voice (called *Neural* on the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/)) | Highly natural out-of-the-box voices. Create an Azure account and Speech service subscription, and then use the [Speech SDK](./get-started-text-to-speech.md) or visit the [Speech Studio portal](https://speech.microsoft.com/portal) and select prebuilt neural voices to get started. Check the [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). | Check the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery) and determine the right voice for your business needs. | | Custom Neural Voice (called *Custom Neural* on the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/)) | Easy-to-use self-service for creating a natural brand voice, with limited access for responsible use. Create an Azure account and Speech service subscription (with the S0 tier), and [apply](https://aka.ms/customneural) to use the custom neural feature. After you've been granted access, visit the [Speech Studio portal](https://speech.microsoft.com/portal) and select **Custom Voice** to get started. Check the [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). | Check the [voice samples](https://aka.ms/customvoice). | -### More about neural text-to-speech features +### More about neural text to speech features -The text-to-speech feature of the Speech service on Azure has been fully upgraded to the neural text-to-speech engine.
This engine uses deep neural networks to make the voices of computers nearly indistinguishable from the recordings of people. With the clear articulation of words, neural text-to-speech significantly reduces listening fatigue when users interact with AI systems. +The text to speech feature of the Speech service on Azure has been fully upgraded to the neural text to speech engine. This engine uses deep neural networks to make the voices of computers nearly indistinguishable from the recordings of people. With the clear articulation of words, neural text to speech significantly reduces listening fatigue when users interact with AI systems. -The patterns of stress and intonation in spoken language are called _prosody_. Traditional text-to-speech systems break down prosody into separate linguistic analysis and acoustic prediction steps that are governed by independent models. That can result in muffled, buzzy voice synthesis. +The patterns of stress and intonation in spoken language are called _prosody_. Traditional text to speech systems break down prosody into separate linguistic analysis and acoustic prediction steps that are governed by independent models. That can result in muffled, buzzy voice synthesis. -Here's more information about neural text-to-speech features in the Speech service, and how they overcome the limits of traditional text-to-speech systems: +Here's more information about neural text to speech features in the Speech service, and how they overcome the limits of traditional text to speech systems: -* **Real-time speech synthesis**: Use the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md) to convert text-to-speech by using [prebuilt neural voices](language-support.md?tabs=tts) or [custom neural voices](custom-neural-voice.md). +* **Real-time speech synthesis**: Use the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md) to convert text to speech by using [prebuilt neural voices](language-support.md?tabs=tts) or [custom neural voices](custom-neural-voice.md). -* **Asynchronous synthesis of long audio**: Use the [batch synthesis API](batch-synthesis.md) (Preview) to asynchronously synthesize text-to-speech files longer than 10 minutes (for example, audio books or lectures). Unlike synthesis performed via the Speech SDK or speech-to-text REST API, responses aren't returned in real-time. The expectation is that requests are sent asynchronously, responses are polled for, and synthesized audio is downloaded when the service makes it available. +* **Asynchronous synthesis of long audio**: Use the [batch synthesis API](batch-synthesis.md) (Preview) to asynchronously synthesize text to speech files longer than 10 minutes (for example, audio books or lectures). Unlike synthesis performed via the Speech SDK or Speech to text REST API, responses aren't returned in real-time. The expectation is that requests are sent asynchronously, responses are polled for, and synthesized audio is downloaded when the service makes it available. -* **Prebuilt neural voices**: Microsoft neural text-to-speech capability uses deep neural networks to overcome the limits of traditional speech synthesis with regard to stress and intonation in spoken language. Prosody prediction and voice synthesis happen simultaneously, which results in more fluid and natural-sounding outputs. Each prebuilt neural voice model is available at 24kHz and high-fidelity 48kHz. 
You can use neural voices to: +* **Prebuilt neural voices**: Microsoft neural text to speech capability uses deep neural networks to overcome the limits of traditional speech synthesis with regard to stress and intonation in spoken language. Prosody prediction and voice synthesis happen simultaneously, which results in more fluid and natural-sounding outputs. Each prebuilt neural voice model is available at 24kHz and high-fidelity 48kHz. You can use neural voices to: - Make interactions with chatbots and voice assistants more natural and engaging. - Convert digital texts such as e-books into audiobooks. Here's more information about neural text-to-speech features in the Speech servi For a full list of platform neural voices, see [Language and voice support for the Speech service](language-support.md?tabs=tts). -* **Fine-tuning text-to-speech output with SSML**: Speech Synthesis Markup Language (SSML) is an XML-based markup language that's used to customize text-to-speech outputs. With SSML, you can adjust pitch, add pauses, improve pronunciation, change speaking rate, adjust volume, and attribute multiple voices to a single document. +* **Fine-tuning text to speech output with SSML**: Speech Synthesis Markup Language (SSML) is an XML-based markup language that's used to customize text to speech outputs. With SSML, you can adjust pitch, add pauses, improve pronunciation, change speaking rate, adjust volume, and attribute multiple voices to a single document. You can use SSML to define your own lexicons or switch to different speaking styles. With the [multilingual voices](https://techcommunity.microsoft.com/t5/azure-ai/azure-text-to-speech-updates-at-build-2021/ba-p/2382981), you can also adjust the speaking languages via SSML. To fine-tune the voice output for your scenario, see [Improve synthesis with Speech Synthesis Markup Language](speech-synthesis-markup.md) and [Speech synthesis with the Audio Content Creation tool](how-to-audio-content-creation.md). Here's more information about neural text-to-speech features in the Speech servi ## Get started -To get started with text-to-speech, see the [quickstart](get-started-text-to-speech.md). Text-to-speech is available via the [Speech SDK](speech-sdk.md), the [REST API](rest-text-to-speech.md), and the [Speech CLI](spx-overview.md). +To get started with text to speech, see the [quickstart](get-started-text-to-speech.md). Text to speech is available via the [Speech SDK](speech-sdk.md), the [REST API](rest-text-to-speech.md), and the [Speech CLI](spx-overview.md). > [!TIP]-> To convert text-to-speech with a no-code approach, try the [Audio Content Creation](how-to-audio-content-creation.md) tool in [Speech Studio](https://aka.ms/speechstudio/audiocontentcreation). +> To convert text to speech with a no-code approach, try the [Audio Content Creation](how-to-audio-content-creation.md) tool in [Speech Studio](https://aka.ms/speechstudio/audiocontentcreation). ## Sample code -Sample code for text-to-speech is available on GitHub. These samples cover text-to-speech conversion in most popular programming languages: +Sample code for text to speech is available on GitHub. 
These samples cover text to speech conversion in most popular programming languages: -* [Text-to-speech samples (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk) -* [Text-to-speech samples (REST)](https://github.com/Azure-Samples/Cognitive-Speech-TTS) +* [Text to speech samples (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk) +* [Text to speech samples (REST)](https://github.com/Azure-Samples/Cognitive-Speech-TTS) ## Custom Neural Voice In addition to prebuilt neural voices, you can create and fine-tune custom neura ## Pricing note ### Billable characters-When you use the text-to-speech feature, you're billed for each character that's converted to speech, including punctuation. Although the SSML document itself is not billable, optional elements that are used to adjust how the text is converted to speech, like phonemes and pitch, are counted as billable characters. Here's a list of what's billable: +When you use the text to speech feature, you're billed for each character that's converted to speech, including punctuation. Although the SSML document itself is not billable, optional elements that are used to adjust how the text is converted to speech, like phonemes and pitch, are counted as billable characters. Here's a list of what's billable: -* Text passed to the text-to-speech feature in the SSML body of the request +* Text passed to the text to speech feature in the SSML body of the request * All markup within the text field of the request body in the SSML format, except for `<speak>` and `<voice>` tags * Letters, punctuation, spaces, tabs, markup, and all white-space characters * Every code point defined in Unicode Custom Neural Voice (CNV) endpoint hosting is measured by the actual time (hour) ## Reference docs * [Speech SDK](speech-sdk.md)-* [REST API: Text-to-speech](rest-text-to-speech.md) +* [REST API: Text to speech](rest-text-to-speech.md) ## Next steps |
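The row above describes real-time synthesis with prebuilt neural voices through the Speech SDK. As a minimal, hedged sketch of that flow (not the linked quickstart itself), the following C# example assumes the `Microsoft.CognitiveServices.Speech` NuGet package and uses placeholder values for the Speech resource key, region, and voice name:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class SynthesisSketch
{
    static async Task Main()
    {
        // Placeholder key and region for a Speech resource; substitute your own values.
        var config = SpeechConfig.FromSubscription("<your-speech-key>", "<your-region>");

        // Any prebuilt neural voice from the language support list can go here.
        config.SpeechSynthesisVoiceName = "en-US-JennyNeural";

        // With no explicit AudioConfig, the synthesized audio plays through the default speaker.
        using var synthesizer = new SpeechSynthesizer(config);
        var result = await synthesizer.SpeakTextAsync("Neural text to speech reduces listening fatigue.");

        Console.WriteLine(result.Reason == ResultReason.SynthesizingAudioCompleted
            ? $"Synthesized {result.AudioData.Length} bytes of audio."
            : $"Synthesis ended with reason: {result.Reason}");
    }
}
```

For file output, long-form batch synthesis, or prosody control, the quickstart, batch synthesis API, and SSML articles linked in the row remain the authoritative paths.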
cognitive-services | Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/troubleshooting.md | This error usually occurs when the request body contains invalid audio data. Onl The error most likely occurs because no audio data is being sent to the service. This error also might be caused by network issues. -## Connection closed or timeout --There is a known issue on Windows 11 that might affect some types of Secure Sockets Layer (SSL) and Transport Layer Security (TLS) connections. These connections might have handshake failures. For developers, the affected connections are likely to send multiple frames followed by a partial frame with a size of less than 5 bytes within a single input buffer. If the connection fails, your app will receive the error such as, "USP error", "Connection closed", "ServiceTimeout", or "SEC_E_ILLEGAL_MESSAGE". --There is an out of band update available for Windows 11 that fixes these issues. The update may be manually installed by following the instructions here: -- [Windows 11 21H2](https://support.microsoft.com/topic/october-17-2022-kb5020387-os-build-22000-1100-out-of-band-5e723873-2769-4e3d-8882-5cb044455a92)-- [Windows 11 22H2](https://support.microsoft.com/topic/october-25-2022-kb5018496-os-build-22621-755-preview-64040bea-1e02-4b6d-bad1-b036200c2cb3)--The issue started October 12th, 2022 and should be resolved via Windows update in November, 2022. - ## Next steps * [Review the release notes](releasenotes.md) |
cognitive-services | Tutorial Voice Enable Your Bot Speech Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/tutorial-voice-enable-your-bot-speech-sdk.md | The voice-enabled chat bot that you make in this tutorial follows these steps: 1. Optionally, higher-accuracy keyword verification happens on the service. 1. The audio is passed to the speech recognition service and transcribed to text. 1. The recognized text is passed to the echo bot as a Bot Framework activity.-1. The response text is turned into audio by the text-to-speech service, and streamed back to the client application for playback. +1. The response text is turned into audio by the text to speech service, and streamed back to the client application for playback.  This section walks you through creating a resource group in the West US region. ### Choose an Azure region -Ensure that you use a [supported Azure region](regions.md#voice-assistants). The Direct Line Speech channel uses the text-to-speech service, which has neural and standard voices. Neural voices are used at [these Azure regions](regions.md#speech-service), and standard voices (retiring) are used at [these Azure regions](how-to-migrate-to-prebuilt-neural-voice.md). +Ensure that you use a [supported Azure region](regions.md#voice-assistants). The Direct Line Speech channel uses the text to speech service, which has neural and standard voices. Neural voices are used at [these Azure regions](regions.md#speech-service), and standard voices (retiring) are used at [these Azure regions](how-to-migrate-to-prebuilt-neural-voice.md). For more information about regions, see [Azure locations](https://azure.microsoft.com/global-infrastructure/locations/). In the source code of the Windows Voice Assistant Client, use these files to rev ## Optional: Change the language and bot voice -The bot that you've created will listen for and respond in English, with a default US English text-to-speech voice. However, you're not limited to using English or a default voice. +The bot that you've created will listen for and respond in English, with a default US English text to speech voice. However, you're not limited to using English or a default voice. In this section, you'll learn how to change the language that your bot will listen for and respond in. You'll also learn how to select a different voice for that language. ### Change the language -You can choose from any of the languages mentioned in the [speech-to-text](language-support.md?tabs=stt) table. The following example changes the language to German. +You can choose from any of the languages mentioned in the [speech to text](language-support.md?tabs=stt) table. The following example changes the language to German. -1. Open the Windows Voice Assistant Client app, select the **Settings** button (upper-right gear icon), and enter **de-de** in the **Language** field. This is the locale value mentioned in the [speech-to-text](language-support.md?tabs=stt) table. +1. Open the Windows Voice Assistant Client app, select the **Settings** button (upper-right gear icon), and enter **de-de** in the **Language** field. This is the locale value mentioned in the [speech to text](language-support.md?tabs=stt) table. This step sets the spoken language to be recognized, overriding the default **en-us**. It also instructs the Direct Line Speech channel to use a default German voice for the bot reply. 1. 
Close the **Settings** page, and then select the **Reconnect** button to establish a new connection to your echo bot. You can choose from any of the languages mentioned in the [speech-to-text](langu ### Change the default bot voice -You can select the text-to-speech voice and control pronunciation if the bot specifies the reply in the form of a [Speech Synthesis Markup Language](speech-synthesis-markup.md) (SSML) instead of simple text. The echo bot doesn't use SSML, but you can easily modify the code to do that. +You can select the text to speech voice and control pronunciation if the bot specifies the reply in the form of a [Speech Synthesis Markup Language](speech-synthesis-markup.md) (SSML) instead of simple text. The echo bot doesn't use SSML, but you can easily modify the code to do that. The following example adds SSML to the echo bot reply so that the German voice `de-DE-RalfNeural` (a male voice) is used instead of the default female voice. See the [list of standard voices](how-to-migrate-to-prebuilt-neural-voice.md) and [list of neural voices](language-support.md?tabs=tts) that are supported for your language. If you're not going to continue using the echo bot deployed in this tutorial, yo ## Explore documentation * [Deploy to an Azure region near you](https://azure.microsoft.com/global-infrastructure/locations/) to see the improvement in bot response time.-* [Deploy to an Azure region that supports high-quality neural text-to-speech voices](./regions.md#speech-service). +* [Deploy to an Azure region that supports high-quality neural text to speech voices](./regions.md#speech-service). * Get pricing associated with the Direct Line Speech channel: * [Bot Service pricing](https://azure.microsoft.com/pricing/details/bot-service/) * [Speech service](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) |
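The tutorial row above changes the default bot voice by returning SSML that names `de-DE-RalfNeural`. As a hedged sketch of what that edit can look like in a standard Bot Framework echo bot (the class name and reply text are assumptions from the echo bot template, not the tutorial's exact file), the reply is wrapped in `<speak>`/`<voice>` elements and passed as the activity's SSML:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;

public class EchoBot : ActivityHandler
{
    protected override async Task OnMessageActivityAsync(
        ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken)
    {
        var replyText = $"Echo: {turnContext.Activity.Text}";

        // Wrap the reply in SSML so the Direct Line Speech channel synthesizes it
        // with the named German neural voice instead of the channel's default voice.
        var ssml =
            "<speak version=\"1.0\" xmlns=\"http://www.w3.org/2001/10/synthesis\" xml:lang=\"de-DE\">" +
            $"<voice name=\"de-DE-RalfNeural\">{replyText}</voice></speak>";

        // MessageFactory.Text accepts the display text plus an optional SSML string for speech.
        await turnContext.SendActivityAsync(MessageFactory.Text(replyText, ssml), cancellationToken);
    }
}
```

The voice named in the SSML should match a voice that's supported for the language configured in the Windows Voice Assistant Client settings.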
cognitive-services | Voice Assistants | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/voice-assistants.md | Whether you choose [Direct Line Speech](direct-line-speech.md) or another soluti | Category | Features | |-|-| |[Custom keyword](./custom-keyword-basics.md) | Users can start conversations with assistants by using a custom keyword such as "Hey Contoso." An app does this with a custom keyword engine in the Speech SDK, which you can configure by going to [Get started with custom keywords](./custom-keyword-basics.md). Voice assistants can use service-side keyword verification to improve the accuracy of the keyword activation (versus using the device alone).-|[Speech-to-text](speech-to-text.md) | Voice assistants convert real-time audio into recognized text by using [speech-to-text](speech-to-text.md) from the Speech service. This text is available, as it's transcribed, to both your assistant implementation and your client application. -|[Text-to-speech](text-to-speech.md) | Textual responses from your assistant are synthesized through [text-to-speech](text-to-speech.md) from the Speech service. This synthesis is then made available to your client application as an audio stream. Microsoft offers the ability to build your own custom, high-quality Neural Text to Speech (Neural TTS) voice that gives a voice to your brand. +|[Speech to text](speech-to-text.md) | Voice assistants convert real-time audio into recognized text by using [speech to text](speech-to-text.md) from the Speech service. This text is available, as it's transcribed, to both your assistant implementation and your client application. +|[Text to speech](text-to-speech.md) | Textual responses from your assistant are synthesized through [text to speech](text-to-speech.md) from the Speech service. This synthesis is then made available to your client application as an audio stream. Microsoft offers the ability to build your own custom, high-quality Neural Text to speech (Neural TTS) voice that gives a voice to your brand. ## Get started with voice assistants |
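The feature table above pairs speech to text for user input with text to speech for assistant replies. Purely as a minimal sketch of those two Speech SDK calls in isolation (it does not wire up Direct Line Speech or keyword activation, and the key and region are placeholders), a recognize-then-speak round trip looks roughly like this:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class RoundTripSketch
{
    static async Task Main()
    {
        // Placeholder key and region for a Speech resource; substitute your own values.
        var config = SpeechConfig.FromSubscription("<your-speech-key>", "<your-region>");

        // Speech to text: capture one utterance from the default microphone.
        using var recognizer = new SpeechRecognizer(config);
        var heard = await recognizer.RecognizeOnceAsync();
        Console.WriteLine($"Recognized: {heard.Text}");

        // Text to speech: speak a stand-in assistant reply through the default speaker.
        using var synthesizer = new SpeechSynthesizer(config);
        await synthesizer.SpeakTextAsync($"You said: {heard.Text}");
    }
}
```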
cognitive-services | Quickstart Translator Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/quickstart-translator-rest-api.md | + + Title: "Quickstart: Azure Cognitive Services Translator REST APIs" ++description: "Learn to translate text with the Translator service REST APIs. Examples are provided in C#, Go, Java, JavaScript and Python." ++++++ Last updated : 05/03/2023++ms.devlang: csharp, golang, java, javascript, python +++<!-- markdownlint-disable MD033 --> +<!-- markdownlint-disable MD001 --> +<!-- markdownlint-disable MD024 --> +<!-- markdownlint-disable MD036 --> +<!-- markdownlint-disable MD049 --> ++# Quickstart: Azure Cognitive Services Translator REST APIs ++Try the latest version of Azure Translator. In this quickstart, get started using the Translator service to [translate text](reference/v3-0-translate.md) using a programming language of your choice or the REST API. For this project, we recommend using the free pricing tier (F0), while you're learning the technology, and later upgrading to a paid tier for production. ++## Prerequisites ++You need an active Azure subscription. If you don't have an Azure subscription, you can [create one for free](https://azure.microsoft.com/free/cognitive-services/). ++* Once you have your Azure subscription, create a [Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in the Azure portal. ++* After your resource deploys, select **Go to resource** and retrieve your key and endpoint. ++ * You need the key and endpoint from the resource to connect your application to the Translator service. You paste your key and endpoint into the code later in the quickstart. You can find these values on the Azure portal **Keys and Endpoint** page: ++ :::image type="content" source="media/quickstarts/keys-and-endpoint-portal.png" alt-text="Screenshot: Azure portal keys and endpoint page."::: ++ > [!NOTE] + > + > * For this quickstart, it is recommended that you use a Translator text single-service global resource. + > * With a single-service global resource you'll include one authorization header (**Ocp-Apim-Subscription-Key**) with the REST API request. The value for Ocp-Apim-Subscription-Key is your Azure secret key for your Translator Text subscription. + > * If you choose to use the multi-service Cognitive Services or regional Translator resource, two authentication headers will be required: (**Ocp-Apim-Subscription-Key** and **Ocp-Apim-Subscription-Region**). The value for Ocp-Apim-Subscription-Region is the region associated with your subscription. + > * For more information on how to use the **Ocp-Apim-Subscription-Region** header, _see_ [Text Translator REST API headers](translator-text-apis.md). ++<!-- checked --> +<!-- + > [!div class="nextstepaction"] +> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Csharp&Product=Translator&Page=quickstart-translator&Section=prerequisites) +--> ++## Headers ++To call the Translator service via the [REST API](reference/rest-api-guide.md), you need to include the following headers with each request. Don't worry, we include the headers for you in the sample code for each programming language. ++For more information on Translator authentication options, _see_ the [Translator v3 reference](./reference/v3-0-reference.md#authentication) guide. 
++Header|Value| Condition | +| |: |:| +|**Ocp-Apim-Subscription-Key** |Your Translator service key from the Azure portal.|• ***Required***| +|**Ocp-Apim-Subscription-Region**|The region where your resource was created. |• ***Required*** when using a multi-service Cognitive Services or regional (geographic) resource like **West US**.</br>• ***Optional*** when using a single-service global Translator Resource. +|**Content-Type**|The content type of the payload. The accepted value is **application/json** or **charset=UTF-8**.|• **Required**| +|**Content-Length**|The **length of the request** body.|• ***Optional***| ++> [!IMPORTANT] +> +> Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../key-vault/general/overview.md). For more information, _see_ the Cognitive Services [security](../cognitive-services-security.md) article. ++## Translate text ++The core operation of the Translator service is translating text. In this quickstart, you build a request using a programming language of your choice that takes a single source (`from`) and provides two outputs (`to`). Then we review some parameters that can be used to adjust both the request and the response. ++For detailed information regarding Azure Translator Service request limits, *see* [**Text translation request limits**](request-limits.md#text-translation). ++### [C#: Visual Studio](#tab/csharp) ++### Set up your Visual Studio project ++1. Make sure you have the current version of [Visual Studio IDE](https://visualstudio.microsoft.com/vs/). ++ > [!TIP] + > + > If you're new to Visual Studio, try the [Introduction to Visual Studio](/training/modules/go-get-started/) Learn module. ++1. Open Visual Studio. ++1. On the Start page, choose **Create a new project**. ++ :::image type="content" source="media/quickstarts/start-window.png" alt-text="Screenshot: Visual Studio start window."::: ++1. On the **Create a new project page**, enter **console** in the search box. Choose the **Console Application** template, then choose **Next**. ++ :::image type="content" source="media/quickstarts/create-new-project.png" alt-text="Screenshot: Visual Studio's create new project page."::: ++1. In the **Configure your new project** dialog window, enter `translator_quickstart` in the Project name box. Leave the "Place solution and project in the same directory" checkbox **unchecked** and select **Next**. ++ :::image type="content" source="media/quickstarts/configure-new-project.png" alt-text="Screenshot: Visual Studio's configure new project dialog window."::: ++1. In the **Additional information** dialog window, make sure **.NET 6.0 (Long-term support)** is selected. Leave the "Don't use top-level statements" checkbox **unchecked** and select **Create**. ++ :::image type="content" source="media/quickstarts/additional-information.png" alt-text="Screenshot: Visual Studio's additional information dialog window."::: ++### Install the Newtonsoft.json package with NuGet ++1. Right-click on your translator_quickstart project and select **Manage NuGet Packages...** . ++ :::image type="content" source="media/quickstarts/manage-nuget.png" alt-text="Screenshot of the NuGet package search box."::: ++1. Select the Browse tab and type Newtonsoft.json. ++ :::image type="content" source="media/quickstarts/newtonsoft.png" alt-text="Screenshot of the NuGet package install window."::: ++1. 
Select install from the right package manager window to add the package to your project. ++ :::image type="content" source="media/quickstarts/install-newtonsoft.png" alt-text="Screenshot of the NuGet package install button."::: +<!-- checked --> +<!-- [!div class="nextstepaction"] +> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Csharp&Product=Translator&Page=quickstart-translator&Section=set-up-your-visual-studio-project) --> ++### Build your C# application ++> [!NOTE] +> +> * Starting with .NET 6, new projects using the `console` template generate a new program style that differs from previous versions. +> * The new output uses recent C# features that simplify the code you need to write. +> * When you use the newer version, you only need to write the body of the `Main` method. You don't need to include top-level statements, global using directives, or implicit using directives. +> * For more information, _see_ [**New C# templates generate top-level statements**](/dotnet/core/tutorials/top-level-templates). ++1. Open the **Program.cs** file. ++1. Delete the pre-existing code, including the line `Console.WriteLine("Hello World!")`. Copy and paste the code sample into your application's Program.cs file. Make sure you update the key variable with the value from your Azure portal Translator instance: ++```csharp +using System.Text; +using Newtonsoft.Json; ++class Program +{ + private static readonly string key = "<your-translator-key>"; + private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com"; ++ // location, also known as region. + // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page. + private static readonly string location = "<YOUR-RESOURCE-LOCATION>"; ++ static async Task Main(string[] args) + { + // Input and output languages are defined as parameters. + string route = "/translate?api-version=3.0&from=en&to=fr&to=zu"; + string textToTranslate = "I would really like to drive your car around the |
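The quickstart row above lists the required request headers and begins a C# sample that builds the translate route and request body. As a hedged, self-contained sketch of how those headers and the JSON body come together in a single request (placeholder key, endpoint, and region values; not the article's full sample), one way to send the call with `HttpClient` and `Newtonsoft.Json` is:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

class TranslateSketch
{
    // Placeholder values; substitute your own key, and set location only for
    // multi-service or regional (not global) resources.
    private static readonly string key = "<your-translator-key>";
    private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
    private static readonly string location = "<your-resource-region>";

    static async Task Main()
    {
        // Source and target languages are passed as query parameters.
        string route = "/translate?api-version=3.0&from=en&to=fr&to=zu";

        // The v3 translate body is a JSON array of objects with a Text property.
        object[] body = { new { Text = "Hello, world!" } };
        string requestBody = JsonConvert.SerializeObject(body);

        using var client = new HttpClient();
        using var request = new HttpRequestMessage(HttpMethod.Post, endpoint + route);

        // Content-Type: application/json is set by StringContent.
        request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");

        // Required for a single-service global Translator resource.
        request.Headers.Add("Ocp-Apim-Subscription-Key", key);

        // Required only for multi-service or regional (geographic) resources.
        request.Headers.Add("Ocp-Apim-Subscription-Region", location);

        HttpResponseMessage response = await client.SendAsync(request);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```

The response is a JSON array with one element per input text; each element carries a `translations` array with one entry per `to` language in the query.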