Updates from: 10/17/2023 01:13:54
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/error-codes.md
The following errors can be returned by the Azure Active Directory B2C service.
| `AADB2C99013` | The supplied grant_type [{0}] and token_type [{1}] combination is not supported. | |
| `AADB2C99015` | Profile '{0}' in policy '{1}' in tenant '{2}' is missing all InputClaims required for resource owner password credential flow. | [Create a resource owner policy](add-ropc-policy.md#create-a-resource-owner-policy) |
| `AADB2C99002` | User doesn't exist. Please sign up before you can sign in. |
+| `AADB2C99027` | Policy '{0}' does not contain a AuthorizationTechnicalProfile with a corresponding ClientAssertionType. | [Client credentials flow](client-credentials-grant-flow.md) |
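For orientation, these codes typically reach an application inside the `error_description` of a standard OAuth 2.0 error response from the Azure AD B2C authorization or token endpoint. The payload below is an illustrative sketch only; the outer `error` value and the correlation details are placeholders, not taken from the article:

```json
{
  "error": "invalid_grant",
  "error_description": "AADB2C99002: User doesn't exist. Please sign up before you can sign in.\r\nCorrelation ID: 00000000-0000-0000-0000-000000000000\r\nTimestamp: 2023-10-17 01:13:54Z"
}
```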
active-directory-b2c Saml Service Provider Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/saml-service-provider-options.md
Previously updated : 10/05/2021 Last updated : 10/16/2023
Replace the following values:
You can use a complete sample policy for testing with the SAML test app:
-1. Download the [SAML-SP-initiated login sample policy](https://github.com/azure-ad-b2c/saml-sp/tree/master/policy/SAML-SP-Initiated).
+1. Download the [SAML-SP-initiated login sample policy](https://github.com/azure-ad-b2c/saml-sp/tree/master/policy/SAML-IdP-Initiated-LocalAccounts).
1. Update `TenantId` to match your tenant name. This article uses the example *contoso.b2clogin.com*.
1. Keep the policy name *B2C_1A_signup_signin_saml*.
active-directory Concept Password Ban Bad Combined Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-password-ban-bad-combined-policy.md
Previously updated : 04/02/2023 Last updated : 10/16/2023
This topic explains details about the password policy criteria checked by Micros
A password policy is applied to all user and admin accounts that are created and managed directly in Microsoft Entra ID. You can [ban weak passwords](concept-password-ban-bad.md) and define parameters to [lock out an account](howto-password-smart-lockout.md) after repeated bad password attempts. Other password policy settings can't be modified.
-The Microsoft Entra password policy doesn't apply to user accounts synchronized from an on-premises AD DS environment using Microsoft Entra Connect unless you enable EnforceCloudPasswordPolicyForPasswordSyncedUsers.
+The Microsoft Entra password policy doesn't apply to user accounts synchronized from an on-premises AD DS environment using Microsoft Entra Connect unless you enable EnforceCloudPasswordPolicyForPasswordSyncedUsers. If EnforceCloudPasswordPolicyForPasswordSyncedUsers and password writeback are enabled, Microsoft Entra password expiration policy applies, but the on-premises password policy takes precedence for length, complexity, and so on.
The following Microsoft Entra password policy requirements apply for all passwords that are created, changed, or reset in Microsoft Entra ID. Requirements are applied during user provisioning, password change, and password reset flows. You can't change these settings except as noted.
active-directory How To Configure Aws Iam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-configure-aws-iam.md
Previously updated : 06/07/2023 Last updated : 10/16/2023
-# Configure AWS IAM Identity Center as an identity provider
+# Configure AWS IAM Identity Center as an identity provider (preview)
If you're an Amazon Web Services (AWS) customer who uses the AWS IAM Identity Center, you can configure the Identity Center as an identity provider in Permissions Management. Configuring your AWS IAM Identity Center information allows you to receive more accurate data for your identities in Permissions Management.
If you're an Amazon Web Services (AWS) customer who uses the AWS IAM Identity Ce
1. If the **Data Collectors** dashboard isn't displayed when Permissions Management launches, select **Settings** (gear icon), and then select the **Data Collectors** subtab.
-2. On the **Data Collectors** dashboard, select **AWS**, and then select **Create Configuration**. If a Data Collector already exists in your AWS account and you want to add AWS IAM integration, do the following:
+2. On the **Data Collectors** dashboard, select **AWS**, and then select **Create Configuration**. If a Data Collector already exists in your AWS account and you want to add AWS IAM integration, then:
    - Select the Data Collector for which you want to configure AWS IAM.
    - Click on the ellipsis next to the **Authorization Systems Status**.
    - Select **Integrate Identity Provider**.
If you're an Amazon Web Services (AWS) customer who uses the AWS IAM Identity Ce
    - Your **AWS Management Account Role**
5. Select **Launch Management Account Template**. The template opens in a new window.
-6. If the Management Account stack is created with the Cloud Formation Template as part of the previous onboarding steps, update the stack by running ``EnableSSO`` as true. This creates a new stack when running the Management Account Template.
+6. If the Management Account stack is created with the Cloud Formation Template as part of the previous onboarding steps, update the stack by setting ``EnableSSO`` to true. This setting creates a new stack when you run the Management Account Template.
-The template execution attaches the AWS managed policy ``AWSSSOReadOnly`` and the newly created custom policy ``SSOPolicy`` to the AWS IAM role that allows Microsoft Entra Permissions Management to collect organizational information. The following details are requested in the template. All fields are pre-populated, and you can edit the data as you need:
-- **Stack name** – This is the name of the AWS stack for creating the required AWS resources for Permissions Management to collect organizational information. The default value is ``mciem-org-<tenant-id>``.
+The template execution attaches the AWS managed policy ``AWSSSOReadOnly`` and the newly created custom policy ``SSOPolicy`` to the AWS IAM role that allows Microsoft Entra Permissions Management to collect organizational information. The following details are requested in the template. All fields are prepopulated, and you can edit the data as you need:
+- **Stack name** – The Stack name is the name of the AWS stack for creating the required AWS resources for Permissions Management to collect organizational information. The default value is ``mciem-org-<tenant-id>``.
- **CFT Parameters**
    - **OIDC Provider Role Name** – Name of the IAM Role OIDC Provider that can assume the role. The default value is the OIDC account role (as entered in Permissions Management).
- - **Org Account Role Name** - Name of the IAM Role. The default value is pre-populated with the Management account role name (as entered in Microsoft Entra PM).
+ - **Org Account Role Name** - Name of the IAM Role. The default value is prepopulated with the Management account role name (as entered in Microsoft Entra PM).
    - **true** – Enables AWS SSO. The default value is ``true`` when the template is launched from the Configure Identity Provider (IdP) page, otherwise the default is ``false``.
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md
The following client apps support this setting. This list isn't exhaustive and i
- Adobe Acrobat Reader mobile app
- iAnnotate for Office 365
- Microsoft Cortana
+- Microsoft Dynamics 365 for Phones
+- Microsoft Dynamics 365 Sales
- Microsoft Edge
- Microsoft Excel
- Microsoft Power Automate
The following client apps support this setting. This list isn't exhaustive and i
- Microsoft To Do
- Microsoft Word
- Microsoft Whiteboard Services
-- Microsoft Field Service (Dynamics 365)
- MultiLine for Intune
- Nine Mail - Email and Calendar
- Notate for Intune
active-directory Location Condition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/location-condition.md
Previously updated : 10/06/2023 Last updated : 10/16/2023
Multiple Conditional Access policies may prompt users for their GPS location bef
> [!IMPORTANT]
> Users may receive prompts every hour letting them know that Microsoft Entra ID is checking their location in the Authenticator app. The preview should only be used to protect very sensitive apps where this behavior is acceptable or where access needs to be restricted to a specific country/region.
+#### Deny requests with modified location
+Users can modify the location reported by iOS and Android devices. As a result, Microsoft Authenticator is updating its security baseline for location-based Conditional Access policies. Authenticator will deny authentications where the user may be using a different location than the actual GPS location of the mobile device where Authenticator is installed.
+
+In the November 2023 release of Authenticator, users who modify the location of their device will get a denial message in Authenticator when they try location-based authentication. Beginning January 2024, any users that run older Authenticator versions will be blocked from location-based authentication:
+
+- Authenticator version 6.2309.6329 or earlier on Android
+- Authenticator version 6.7.16 or earlier on iOS
+
+To find which users run older versions of Authenticator, use [Microsoft Graph APIs](/graph/api/resources/microsoftauthenticatorauthenticationmethod#properties).
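+As a sketch of that lookup (the user ID and token are placeholders, and you still need to compare the returned version against the thresholds above), you can list a user's registered Authenticator instances and inspect the `phoneAppVersion` property:
+
+```shell
+# Requires a Microsoft Graph access token with UserAuthenticationMethod.Read.All.
+curl --location --request GET 'https://graph.microsoft.com/v1.0/users/<user-id>/authentication/microsoftAuthenticatorMethods' \
+--header 'Authorization: Bearer <access_token>'
+```
+
+Each returned `microsoftAuthenticatorAuthenticationMethod` object includes `displayName`, `deviceTag`, and `phoneAppVersion` for a registered Authenticator app, which you can compare against the versions listed above.
+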
#### Include unknown countries/regions
active-directory Supported Accounts Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/supported-accounts-validation.md
See the following table for the validation differences of various properties for
| Application ID URI (`identifierURIs`) | Must be unique in the tenant <br><br> `urn://` schemes are supported <br><br> Wildcards aren't supported <br><br> Query strings and fragments are supported <br><br> Maximum length of 255 characters <br><br> No limit\* on number of identifierURIs | Must be globally unique <br><br> `urn://` schemes are supported <br><br> Wildcards aren't supported <br><br> Query strings and fragments are supported <br><br> Maximum length of 255 characters <br><br> No limit\* on number of identifierURIs | Must be globally unique <br><br> `urn://` schemes aren't supported <br><br> Wildcards, fragments, and query strings aren't supported <br><br> Maximum length of 120 characters <br><br> Maximum of 50 identifierURIs |
| National clouds | Supported | Supported | Not supported |
| Certificates (`keyCredentials`) | Symmetric signing key | Symmetric signing key | Encryption and asymmetric signing key |
-| Client secrets (`passwordCredentials`) | No limit\* | No limit\* | If liveSDK is enabled: Maximum of two client secrets |
+| Client secrets (`passwordCredentials`) | No limit\* | No limit\* | Maximum of two client secrets |
| Redirect URIs (`replyURLs`) | See [Redirect URI/reply URL restrictions and limitations](reply-url.md) for more info. | | |
| API permissions (`requiredResourceAccess`) | No more than 50 total APIs (resource apps), with no more than 10 APIs from other tenants. No more than 400 permissions total across all APIs. | No more than 50 total APIs (resource apps), with no more than 10 APIs from other tenants. No more than 400 permissions total across all APIs. | No more than 50 total APIs (resource apps), with no more than 10 APIs from other tenants. No more than 200 permissions total across all APIs. Maximum of 30 permissions per resource (for example, Microsoft Graph). |
| Scopes defined by this API (`oauth2Permissions`) | Maximum scope name length of 120 characters <br><br> No limit\* on the number of scopes defined | Maximum scope name length of 120 characters <br><br> No limit\* on the number of scopes defined | Maximum scope name length of 40 characters <br><br> Maximum of 100 scopes defined |
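For reference, several of these properties map to fields on the application object (app manifest / Microsoft Graph). The fragment below is a minimal, hypothetical sketch rather than content from the article; the identifier URI is a placeholder, while the `resourceAppId` shown is the well-known Microsoft Graph application ID and the `resourceAccess` entry is the User.Read delegated permission:

```json
{
  "identifierUris": ["api://11111111-1111-1111-1111-111111111111"],
  "requiredResourceAccess": [
    {
      "resourceAppId": "00000003-0000-0000-c000-000000000000",
      "resourceAccess": [
        { "id": "e1fe6dd8-ba31-4d61-89e7-88639da4683d", "type": "Scope" }
      ]
    }
  ]
}
```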
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
When [managing licenses in the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Entra ID P1_USGOV_GCCHIGH | AAD_PREMIUM_USGOV_GCCHIGH | de597797-22fb-4d65-a9fe-b7dbe8893914 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Microsoft Entra ID P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d) |
| Microsoft Entra ID P2 | AAD_PREMIUM_P2 | 84a661c4-e949-4bd2-a560-ed7766fcaf2b | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0) | Microsoft Entra ID P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Microsoft Entra ID P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>CLOUD APP SECURITY DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0) |
| Azure Information Protection Plan 1 | RIGHTSMANAGEMENT | c52ea49f-fe5d-4e95-93ba-1de91d380f89 | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3) | AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Microsoft Entra RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90) |
+| Azure Information Protection Plan 1 | RIGHTSMANAGEMENT_CE | a0e6a48f-b056-4037-af70-b9ac53504551 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90) |
| Azure Information Protection Premium P1 for Government | RIGHTSMANAGEMENT_CE_GOV | 78362de1-6942-4bb8-83a1-a32aa67e6e2c | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>RMS_S_PREMIUM_GOV (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Azure Information Protection Premium P1 for GCC (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>Azure Rights Management (6a76346d-5d6e-4051-9fe3-ed3f312b5597) |
| Azure Information Protection Premium P1_USGOV_GCCHIGH | RIGHTSMANAGEMENT_CE_USGOV_GCCHIGH | c57afa2a-d468-46c4-9a90-f86cb1b3c54a | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90) |
| Business Apps (free) | SMB_APPS | 90d8b3f8-712e-4f7b-aa1e-62e7ae6cbe96 | DYN365BC_MS_INVOICING (39b5c996-467e-4e60-bd62-46066f572726)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2) | Microsoft Invoicing (39b5c996-467e-4e60-bd62-46066f572726)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2) |
active-directory Howto Identity Protection Remediate Unblock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-remediate-unblock.md
Organizations who have enabled [password hash synchronization](../hybrid/connect
This configuration provides organizations two new capabilities:
-- Risky hybrid users can self-remediate without administrators intervention. When a password is changed on-premises, user risk is now automatically remediated within Entra ID Protection, bringing the user to a safe state.
+- Risky hybrid users can self-remediate without administrator intervention. When a password is changed on-premises, user risk is now automatically remediated within Entra ID Protection, resetting the current user risk state.
- Organizations can proactively deploy [user risk policies that require password changes](howto-identity-protection-configure-risk-policies.md#user-risk-policy-in-conditional-access) to confidently protect their hybrid users. This option strengthens your organization's security posture and simplifies security management by ensuring that user risks are promptly addressed, even in complex hybrid environments.

:::image type="content" source="media/howto-identity-protection-remediate-unblock/allow-on-premises-password-reset-user-risk.png" alt-text="Screenshot showing the location of the Allow on-premises password change to reset user risk checkbox." lightbox="media/howto-identity-protection-remediate-unblock/allow-on-premises-password-reset-user-risk.png":::
active-directory Concept Diagnostic Settings Logs Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-diagnostic-settings-logs-options.md
Previously updated : 10/02/2023 Last updated : 10/16/2023
The `EnrichedOffice365AuditLogs` logs are associated with the enriched logs you
### Microsoft Graph activity logs
-The `MicrosoftGraphActivityLogs` is associated with a feature that's still in preview, but may be visible in the Microsoft Entra admin center. These logs provide administrators full visibility into all HTTP requests accessing your tenant's resources through the Microsoft Graph API. You can use these logs to identify activities that a compromised user account conducted in your tenant or to investigate problematic or unexpected behaviors for client applications, such as extreme call volumes. Route these logs to the same Log Analytics workspace with `SignInLogs` to cross-reference details of token requests for sign-in logs.
-
-The feature is currently in private preview. For more information, see [Access Microsoft Graph activity logs (preview)](/graph/microsoft-graph-activity-logs-overview).
+The `MicrosoftGraphActivityLogs` give administrators full visibility into all HTTP requests accessing your tenant's resources through the Microsoft Graph API. You can use these logs to identify activities that a compromised user account conducted in your tenant or to investigate problematic or unexpected behaviors for client applications, such as extreme call volumes. Route these logs to the same Log Analytics workspace as `SignInLogs` to cross-reference details of token requests with sign-in logs. For more information, see [Access Microsoft Graph activity logs (preview)](/graph/microsoft-graph-activity-logs-overview).
### Network access traffic logs
active-directory Arborxr Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/arborxr-tutorial.md
+
+ Title: Microsoft Entra SSO integration with ArborXR
+description: Learn how to configure single sign-on between Microsoft Entra ID and ArborXR.
++++++++ Last updated : 10/03/2023++++
+# Microsoft Entra SSO integration with ArborXR
+
+In this tutorial, you'll learn how to integrate ArborXR with Microsoft Entra ID. When you integrate ArborXR with Microsoft Entra ID, you can:
+
+* Control in Microsoft Entra ID who has access to ArborXR.
+* Enable your users to be automatically signed-in to ArborXR with their Microsoft Entra accounts.
+* Manage your accounts in one central location.
+
+## Prerequisites
+
+To integrate Microsoft Entra ID with ArborXR, you need:
+
+* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* ArborXR single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Microsoft Entra SSO in a test environment.
+
+* ArborXR supports **SP** initiated SSO.
+
+## Add ArborXR from the gallery
+
+To configure the integration of ArborXR into Microsoft Entra ID, you need to add ArborXR from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**.
+1. In the **Add from the gallery** section, type **ArborXR** in the search box.
+1. Select **ArborXR** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Microsoft Entra SSO for ArborXR
+
+Configure and test Microsoft Entra SSO with ArborXR using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in ArborXR.
+
+To configure and test Microsoft Entra SSO with ArborXR, perform the following steps:
+
+1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature.
+ 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon.
+ 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on.
+1. **[Configure ArborXR SSO](#configure-arborxr-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create ArborXR test user](#create-arborxr-test-user)** - to have a counterpart of B.Simon in ArborXR that is linked to the Microsoft Entra ID representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Microsoft Entra SSO
+
+Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **ArborXR** > **Single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `https://api.xrdm.app/auth/realms/<INSTANCE>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://api.xrdm.app/auth/realms/<INSTANCE>/broker/SAML2/endpoint`
+
+ c. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://api.xrdm.app/auth/realms/<INSTANCE>`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [ArborXR support team](mailto:support@arborxr.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+### Create a Microsoft Entra ID test user
+
+In this section, you'll create a test user in the Microsoft Entra admin center called B.Simon.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator).
+1. Browse to **Identity** > **Users** > **All users**.
+1. Select **New user** > **Create new user**, at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Display name** field, enter `B.Simon`.
+ 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Select **Review + create**.
+1. Select **Create**.
+
+### Assign the Microsoft Entra ID test user
+
+In this section, you'll enable B.Simon to use Microsoft Entra single sign-on by granting access to ArborXR.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **ArborXR**.
+1. In the app's overview page, select **Users and groups**.
+1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog.
+ 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+ 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+ 1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure ArborXR SSO
+
+1. Log in to ArborXR company site as an administrator.
+
+1. Go to **Settings** > **Single Sign-On** > and click **SAML**.
+
+1. In the **Hosted IdP Metadata URL** textbox, paste the **App Federation Metadata Url**, which you have copied from the Microsoft Entra admin center.
+
+ ![Screenshot shows settings of the configuration.](./media/arborxr-tutorial/settings.png "Account")
+
+1. Click **Apply Changes**.
+
+### Create ArborXR test user
+
+1. In a different web browser window, sign into ArborXR website as an administrator.
+
+1. Navigate to **Settings** > **Users** and click **Add Users**.
+
+ ![Screenshot shows how to create users in application.](./media/arborxr-tutorial/create.png "Users")
+
+1. In the **Add Users** section, perform the following steps:
+
+ ![Screenshot shows how to create new users in the page.](./media/arborxr-tutorial/details.png "Creating Users")
+
+ 1. Select **Role** from the drop-down.
+
+ 1. Enter a valid email address in the **Invite via email** textbox.
+
+ 1. Click **Invite**.
+
+## Test SSO
+
+In this section, you test your Microsoft Entra single sign-on configuration with the following options.
+
+* Click on **Test this application** in Microsoft Entra admin center. This will redirect to ArborXR Sign-on URL where you can initiate the login flow.
+
+* Go to ArborXR Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the ArborXR tile in the My Apps, this will redirect to ArborXR Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
++
+## Next steps
+
+Once you configure ArborXR you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Webxt Recognition Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/webxt-recognition-tutorial.md
+
+ Title: Microsoft Entra SSO integration with WebXT Recognition
+description: Learn how to configure single sign-on between Microsoft Entra ID and WebXT Recognition.
++++++++ Last updated : 10/10/2023++++
+# Microsoft Entra SSO integration with WebXT Recognition
+
+In this tutorial, you'll learn how to integrate WebXT Recognition with Microsoft Entra ID. When you integrate WebXT Recognition with Microsoft Entra ID, you can:
+
+* Control in Microsoft Entra ID who has access to WebXT Recognition.
+* Enable your users to be automatically signed-in to WebXT Recognition with their Microsoft Entra accounts.
+* Manage your accounts in one central location.
+
+## Prerequisites
+
+To integrate Microsoft Entra ID with WebXT Recognition, you need:
+
+* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* WebXT Recognition single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Microsoft Entra SSO in a test environment.
+
+* WebXT Recognition supports **IDP** initiated SSO.
+
+## Add WebXT Recognition from the gallery
+
+To configure the integration of WebXT Recognition into Microsoft Entra ID, you need to add WebXT Recognition from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**.
+1. In the **Add from the gallery** section, type **WebXT Recognition** in the search box.
+1. Select **WebXT Recognition** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Microsoft Entra SSO for WebXT Recognition
+
+Configure and test Microsoft Entra SSO with WebXT Recognition using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in WebXT Recognition.
+
+To configure and test Microsoft Entra SSO with WebXT Recognition, perform the following steps:
+
+1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature.
+ 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon.
+ 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on.
+1. **[Configure WebXT Recognition SSO](#configure-webxt-recognition-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create WebXT Recognition test user](#create-webxt-recognition-test-user)** - to have a counterpart of B.Simon in WebXT Recognition that is linked to the Microsoft Entra ID representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Microsoft Entra SSO
+
+Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **WebXT Recognition** > **Single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a value using the following pattern:
+ `<webxt>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://webxtrecognition.<DOMAIN>.com/<INSTANCE>`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [WebXT Recognition support team](mailto:webxtrecognition@biworldwide.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center.
+
+1. WebXT Recognition application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+
+1. In addition to the above, the WebXT Recognition application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also prepopulated, but you can review them and adjust them to your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | employeeid | user.employeeid |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up WebXT Recognition** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration URLs.](common/copy-configuration-urls.png "Metadata")
+
+### Create a Microsoft Entra ID test user
+
+In this section, you'll create a test user in the Microsoft Entra admin center called B.Simon.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator).
+1. Browse to **Identity** > **Users** > **All users**.
+1. Select **New user** > **Create new user**, at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Display name** field, enter `B.Simon`.
+ 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Select **Review + create**.
+1. Select **Create**.
+
+### Assign the Microsoft Entra ID test user
+
+In this section, you'll enable B.Simon to use Microsoft Entra single sign-on by granting access to WebXT Recognition.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **WebXT Recognition**.
+1. In the app's overview page, select **Users and groups**.
+1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog.
+ 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+ 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+ 1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure WebXT Recognition SSO
+
+To configure single sign-on on the **WebXT Recognition** side, send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Microsoft Entra admin center to the [WebXT Recognition support team](mailto:webxtrecognition@biworldwide.com). The support team uses this information to configure the SAML SSO connection properly on both sides.
+
+### Create WebXT Recognition test user
+
+In this section, you create a user called B.Simon in WebXT Recognition. Work with [WebXT Recognition support team](mailto:webxtrecognition@biworldwide.com) to add the users in the WebXT Recognition platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Microsoft Entra single sign-on configuration with the following options.
+
+* Click on Test this application in Microsoft Entra admin center and you should be automatically signed in to the WebXT Recognition for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the WebXT Recognition tile in the My Apps, you should be automatically signed in to the WebXT Recognition for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure WebXT Recognition you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
ai-services Cognitive Services Limited Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-limited-access.md
The following services are Limited Access:
- [Speaker Recognition](/legal/cognitive-services/speech-service/speaker-recognition/limited-access-speaker-recognition?context=/azure/ai-services/speech-service/context/context): All features
- [Face API](/legal/cognitive-services/computer-vision/limited-access-identity?context=/azure/ai-services/computer-vision/context/context): Identify and Verify features, face ID property
- [Azure AI Vision](/legal/cognitive-services/computer-vision/limited-access?context=/azure/ai-services/computer-vision/context/context): Celebrity Recognition feature
-- [Azure AI Video Indexer](../azure-video-indexer/limited-access-features.md): Celebrity Recognition and Face Identify features
+- [Azure AI Video Indexer](/azure/azure-video-indexer/limited-access-features): Celebrity Recognition and Face Identify features
- [Azure OpenAI](/legal/cognitive-services/openai/limited-access): Azure OpenAI Service, modified abuse monitoring, and modified content filters

Features of these services that aren't listed above are available without registration.
ai-services Harm Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/concepts/harm-categories.md
Classification can be multi-labeled. For example, when a text sample goes throug
Every harm category the service applies also comes with a severity level rating. The severity level is meant to indicate the severity of the consequences of showing the flagged content.
-| Severity Levels | Label |
+| 4 Severity Levels | 8 Severity Levels | Label |
| -- | -- | -- |
-| Severity Level 0 – Safe | Content may be related to violence, self-harm, sexual or hate categories but the terms are used in general, journalistic, scientific, medical, and similar professional contexts which are appropriate for most audiences. |
-| Severity Level 2 – Low | Content that expresses prejudiced, judgmental, or opinionated views, includes offensive use of language, stereotyping, use cases exploring a fictional world (e.g., gaming, literature) and depictions at low intensity. |
-| Severity Level 4 – Medium | Content that uses offensive, insulting, mocking, intimidating, or demeaning language towards specific identity groups, includes depictions of seeking and executing harmful instructions, fantasies, glorification, promotion of harm at medium intensity. |
-| Severity Level 6 – High | Content that displays explicit and severe harmful instructions, actions, damage, or abuse, includes endorsement, glorification, promotion of severe harmful acts, extreme or illegal forms of harm, radicalization, and non-consensual power exchange or abuse. |
+| Severity Level 0 – Safe | Severity Level 0 and 1 – Safe | Content might be related to violence, self-harm, sexual or hate categories but the terms are used in general, journalistic, scientific, medical, and similar professional contexts which are appropriate for most audiences. |
+| Severity Level 2 – Low | Severity Level 2 and 3 – Low | Content that expresses prejudiced, judgmental, or opinionated views, includes offensive use of language, stereotyping, use cases exploring a fictional world (e.g., gaming, literature) and depictions at low intensity. |
+| Severity Level 4 – Medium | Severity Level 4 and 5 – Medium | Content that uses offensive, insulting, mocking, intimidating, or demeaning language towards specific identity groups, includes depictions of seeking and executing harmful instructions, fantasies, glorification, promotion of harm at medium intensity. |
+| Severity Level 6 – High | Severity Level 6-7 – High | Content that displays explicit and severe harmful instructions, actions, damage, or abuse, includes endorsement, glorification, promotion of severe harmful acts, extreme or illegal forms of harm, radicalization, and non-consensual power exchange or abuse. |
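As a hedged illustration of choosing between the two scales, the GA text:analyze call accepts an `outputType` parameter. The request below mirrors the analyze examples elsewhere in this changelog; the endpoint and key are placeholders, and `EightSeverityLevels` is assumed as the value for the finer-grained scale:

```shell
curl --location --request POST '<endpoint>/contentsafety/text:analyze?api-version=2023-10-01' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
--header 'Content-Type: application/json' \
--data-raw '{
  "text": "Sample text to classify",
  "outputType": "EightSeverityLevels"
}'
```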
## Next steps
ai-services Migrate To General Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/how-to/migrate-to-general-availability.md
+
+ Title: Migrate from Content Safety public preview to GA
+description: Learn how to upgrade your app from the public preview version of Azure AI Content Safety to the GA version.
+++++ Last updated : 09/25/2023+++
+# Migrate from Content Safety public preview to GA
+
+This guide shows you how to upgrade your existing code from the public preview version of Azure AI Content Safety to the GA version.
+
+## REST API calls
+
+In all API calls, be sure to change the _api-version_ parameter in your code:
+
+|old | new |
+|--|--|
+`api-version=2023-04-30-preview` | `api-version=2023-10-01` |
+
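+For example, a text:analyze request keeps the same shape and only needs the query string updated (endpoint and key are placeholders):
+
+```shell
+# Public preview
+curl --location --request POST '<endpoint>/contentsafety/text:analyze?api-version=2023-04-30-preview' \
+--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
+--header 'Content-Type: application/json' \
+--data-raw '{"text": "Sample text"}'
+
+# GA
+curl --location --request POST '<endpoint>/contentsafety/text:analyze?api-version=2023-10-01' \
+--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
+--header 'Content-Type: application/json' \
+--data-raw '{"text": "Sample text"}'
+```
+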
+Note the following REST endpoint name changes:
+
+| Public preview term | GA term |
+|-||
+| **addBlockItems** | **addOrUpdateBlocklistItems** |
+| **blockItems** | **blocklistItems** |
+| **removeBlockItems** | **removeBlocklistItems** |
++
+## JSON fields
+
+The following JSON fields have been renamed. Be sure to change them when you send data to a REST call:
+
+| Public preview Term | GA Term |
+|-|-|
+| `blockItems` | `blocklistItems` |
+| `BlockItemId` | `blocklistItemId` |
+| `blockItemIds` | `blocklistItemIds` |
+| `blocklistMatchResults` | `blocklistsMatch` |
+| `breakByBlocklists` | `haltOnBlocklistHit` |
++
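+For instance, here is a hedged sketch of a text:analyze request body using the renamed fields (the blocklist name is a placeholder):
+
+```json
+{
+  "text": "Sample text to analyze",
+  "blocklistNames": ["<your_list_name>"],
+  "haltOnBlocklistHit": true
+}
+```
+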
+## Return formats
+
+Some of the JSON return formats have changed. See the following updated JSON return examples.
+
+The **text:analyze** API call with category analysis:
+
+```json
+{
+ "categoriesAnalysis": [
+ {
+ "category": "Hate",
+ "severity": 2
+ },
+ {
+ "category": "SelfHarm",
+ "severity": 0
+ },
+ {
+ "category": "Sexual",
+ "severity": 0
+ },
+ {
+ "category": "Violence",
+ "severity": 0
+ }
+ ]
+}
+```
+
+The **text:analyze** API call with a blocklist:
+```json
+{
+ "blocklistsMatch": [
+ {
+ "blocklistName": "string",
+ "blocklistItemId": "string",
+ "blocklistItemText": "bleed"
+ }
+ ],
+ "categoriesAnalysis": [
+ {
+ "category": "Hate",
+ "severity": 0
+ }
+ ]
+}
+```
+
+The **addOrUpdateBlocklistItems** API call:
+
+```json
+{
+  "blocklistItems": [
+    {
+      "blocklistItemId": "string",
+      "description": "string",
+      "text": "bleed"
+    }
+  ]
+}
+```
+
+The **blocklistItems** API call (list all blocklist items):
+```json
+{
+  "values": [
+    {
+      "blocklistItemId": "string",
+      "description": "string",
+      "text": "bleed"
+    }
+  ]
+}
+```
+
+The **blocklistItems** API call with an item ID (retrieve a single item):
+
+```json
+{
+ "blocklistItemId": "string",
+ "description": "string",
+ "text": "string"
+}
+```
++
+## Next steps
+
+- [Quickstart: Analyze text content](../quickstart-text.md)
ai-services Use Blocklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/how-to/use-blocklist.md
Title: "Use blocklists for text moderation"
-description: Learn how to customize text moderation in Content Safety by using your own list of blockItems.
+description: Learn how to customize text moderation in Content Safety by using your own list of blocklistItems.
keywords:
# Use a blocklist

> [!CAUTION]
-> The sample data in this guide may contain offensive content. User discretion is advised.
+> The sample data in this guide might contain offensive content. User discretion is advised.
-The default AI classifiers are sufficient for most content moderation needs. However, you may need to screen for items that are specific to your use case.
+The default AI classifiers are sufficient for most content moderation needs. However, you might need to screen for items that are specific to your use case.
## Prerequisites

* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource </a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select the subscription you entered on the application form, select a resource group, supported region, and supported pricing tier. Then select **Create**.
+* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource </a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select the subscription you entered on the application form, and select a resource group, supported region, and supported pricing tier. Then select **Create**.
* The resource takes a few minutes to deploy. After it finishes, select **go to resource**. In the left pane, under **Resource Management**, select **Subscription Key and Endpoint**. The endpoint and either of the keys are used to call APIs.
* One of the following installed:
  * [cURL](https://curl.haxx.se/) for REST API calls.
Copy the cURL command below to a text editor and make the following changes:
```shell
-curl --location --request PATCH '<endpoint>/contentsafety/text/blocklists/<your_list_name>?api-version=2023-04-30-preview' \
+curl --location --request PATCH '<endpoint>/contentsafety/text/blocklists/<your_list_name>?api-version=2023-10-01' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
--header 'Content-Type: application/json' \
--data-raw '{
else if (createResponse.Status == 200)
1. Optionally replace `<description>` with a custom description.
1. Run the code.
-#### [Python](#tab/python)
+#### [Python](#tab/python)
Create a new Python script and open it in your preferred editor or IDE. Paste in the following code. ```python
except HttpResponseError as e:
-### Add blockItems to the list
+### Add blocklistItems to the list
> [!NOTE]
>
-> There is a maximum limit of **10,000 terms** in total across all lists. You can add at most 100 blockItems in one request.
+> There is a maximum limit of **10,000 terms** in total across all lists. You can add at most 100 blocklistItems in one request.
#### [REST API](#tab/rest)
Copy the cURL command below to a text editor and make the following changes:
1. Replace `<enter_your_key_here>` with your key.
1. Replace `<your_list_name>` (in the URL) with the name you used in the list creation step.
1. Optionally replace the value of the `"description"` field with a custom description.
-1. Replace the value of the `"text"` field with the item you'd like to add to your blocklist. The maximum length of a blockItem is 128 characters.
+1. Replace the value of the `"text"` field with the item you'd like to add to your blocklist. The maximum length of a blocklistItem is 128 characters.
```shell
-curl --location --request POST '<endpoint>/contentsafety/text/blocklists/<your_list_name>:addBlockItems?api-version=2023-04-30-preview' \
+curl --location --request POST '<endpoint>/contentsafety/text/blocklists/<your_list_name>:addOrUpdateBlocklistItems?api-version=2023-10-01' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
--header 'Content-Type: application/json' \
---data-raw '"blockItems": [{
+--data-raw '"blocklistItems": [{
"description": "string", "text": "bleed" }]' ``` > [!TIP]
-> You can add multiple blockItems in one API call. Make the request body a JSON array of data groups:
+> You can add multiple blocklistItems in one API call. Make the request body a JSON array of data groups:
> > ```json > [{
The response code should be `200`.
```console
{
- "blockItemId": "string",
+"blocklistItems:"[
+ {
+ "blocklistItemId": "string",
"description": "string", "text": "bleed"
+ }
+ ]
} ``` #### [C#](#tab/csharp)- Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code. ```csharp
except HttpResponseError as e:
> There will be some delay after you add or edit a blockItem before it takes effect on text analysis, usually **not more than five minutes**.

### Analyze text with a blocklist

#### [REST API](#tab/rest)
Copy the cURL command below to a text editor and make the following changes:
1. Optionally change the value of the `"text"` field to whatever text you want to analyze. ```shell
-curl --location --request POST '<endpoint>/contentsafety/text:analyze?api-version=2023-04-30-preview&' \
+curl --location --request POST '<endpoint>/contentsafety/text:analyze?api-version=2023-10-01&' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
--header 'Content-Type: application/json' \
--data-raw '{
curl --location --request POST '<endpoint>/contentsafety/text:analyze?api-versio
"Violence" ], "blocklistNames":["<your_list_name>"],
- "breakByBlocklists": true
+ "haltOnBlocklistHit": false,
+ "outputType": "FourSeverityLevels"
}' ```
The JSON response will contain a `"blocklistMatchResults"` that indicates any ma
```json {
- "blocklistMatchResults": [
+ "blocklistsMatch": [
{ "blocklistName": "string",
- "blockItemID": "string",
- "blockItemText": "bleed",
- "offset": "28",
- "length": "5"
+ "blocklistItemId": "string",
+ "blocklistItemText": "bleed"
+ }
+ ],
+ "categoriesAnalysis": [
+ {
+ "category": "Hate",
+ "severity": 0
} ] }
except HttpResponseError as e:
This section contains more operations to help you manage and use the blocklist feature.
-### List all blockItems in a list
+### List all blocklistItems in a list
#### [REST API](#tab/rest)
Copy the cURL command below to a text editor and make the following changes:
1. Replace `<your_list_name>` (in the request URL) with the name you used in the list creation step. ```shell
-curl --location --request GET '<endpoint>/contentsafety/text/blocklists/<your_list_name>/blockItems?api-version=2023-04-30-preview' \
+curl --location --request GET '<endpoint>/contentsafety/text/blocklists/<your_list_name>/blocklistItems?api-version=2023-10-01' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
--header 'Content-Type: application/json'
```
The status code should be `200` and the response body should look like this:
{ "values": [ {
- "blockItemId": "string",
+ "blocklistItemId": "string",
"description": "string", "text": "bleed", }
Copy the cURL command below to a text editor and make the following changes:
```shell
-curl --location --request GET '<endpoint>/contentsafety/text/blocklists?api-version=2023-04-30-preview' \
+curl --location --request GET '<endpoint>/contentsafety/text/blocklists?api-version=2023-10-01' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
--header 'Content-Type: application/json'
```
Run the script.
-### Get a blocklist by name
+
+### Get a blocklist by blocklistName
#### [REST API](#tab/rest)
Copy the cURL command below to a text editor and make the following changes:
1. Replace `<your_list_name>` (in the request URL) with the name you used in the list creation step. ```shell
-cURL --location '<endpoint>contentsafety/text/blocklists/<your_list_name>?api-version=2023-04-30-preview' \
+cURL --location '<endpoint>contentsafety/text/blocklists/<your_list_name>?api-version=2023-10-01' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
--data ''
```
The status code should be `200`. The JSON response looks like this:
```json
{
- "blocklistName": "string",
- "description": "string"
+ "blocklistName": "string",
+ "description": "string"
}
```
except HttpResponseError as e:
-### Get a blockItem by blockItem ID
+### Get a blocklistItem by blocklistName and blocklistItemId
#### [REST API](#tab/rest)
Copy the cURL command below to a text editor and make the following changes:
1. Replace `<endpoint>` with your endpoint URL.
1. Replace `<enter_your_key_here>` with your key.
1. Replace `<your_list_name>` (in the request URL) with the name you used in the list creation step.
-1. Replace `<your_item_id>` with the ID value for the blockItem. This is the value of the `"blockItemId"` field from the **Add blockItem** or **Get all blockItems** API calls.
+1. Replace `<your_item_id>` with the ID value for the blocklistItem. This is the value of the `"blocklistItemId"` field from the **Add blocklistItem** or **Get all blocklistItems** API calls.
```shell
-cURL --location '<endpoint>contentsafety/text/blocklists/<your_list_name>/blockitems/<your_item_id>?api-version=2023-04-30-preview' \
+cURL --location '<endpoint>contentsafety/text/blocklists/<your_list_name>/blocklistItems/<your_item_id>?api-version=2023-10-01' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
--data ''
```
The status code should be `200`. The JSON response looks like this:
```json
{
- "blockItemId": "string",
- "description": "string",
- "text": "string"
+ "blocklistItemId": "string",
+ "description": "string",
+ "text": "string"
}
```
except HttpResponseError as e:
-### Remove a blockItem from a list
++
+### Remove blocklistItems from a blocklist.
> [!NOTE]
>
Copy the cURL command below to a text editor and make the following changes:
1. Replace `<endpoint>` with your endpoint URL.
1. Replace `<enter_your_key_here>` with your key.
1. Replace `<your_list_name>` (in the request URL) with the name you used in the list creation step.
-1. Replace `<item_id>` with the ID value for the blockItem. This is the value of the `"blockItemId"` field from the **Add blockItem** or **Get all blockItems** API calls.
+1. Replace `<item_id>` with the ID value for the blocklistItem. This is the value of the `"blocklistItemId"` field from the **Add blocklistItem** or **Get all blocklistItems** API calls.
```shell
-curl --location --request DELETE '<endpoint>/contentsafety/text/blocklists/<your_list_name>/removeBlockItems?api-version=2023-04-30-preview' \
+curl --location --request DELETE '<endpoint>/contentsafety/text/blocklists/<your_list_name>:removeBlocklistItems?api-version=2023-10-01' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
--header 'Content-Type: application/json' \
---data-raw '"blockItemIds":[
+--data-raw '"blocklistItemIds":[
"<item_id>" ]' ``` > [!TIP]
-> You can delete multiple blockItems in one API call. Make the request body an array of `blockItemId` values.
+> You can delete multiple blocklistItems in one API call. Make the request body an array of `blocklistItemId` values.
The response code should be `204`. #### [C#](#tab/csharp) + Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code. ```csharp
Replace `<block_item_text>` with your block item text.
### Delete a list and all of its contents

> [!NOTE]
Copy the cURL command below to a text editor and make the following changes:
1. Replace `<your_list_name>` (in the request URL) with the name you used in the list creation step. ```shell
-curl --location --request DELETE '<endpoint>/contentsafety/text/blocklists/<your_list_name>?api-version=2023-04-30-preview' \
+curl --location --request DELETE '<endpoint>/contentsafety/text/blocklists/<your_list_name>?api-version=2023-10-01' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \ --header 'Content-Type: application/json' \ ```
except HttpResponseError as e:
+ ## Next steps See the API reference documentation to learn more about the APIs used in this guide.
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/overview.md
Title: What is Azure AI Content Safety? (preview)
+ Title: What is Azure AI Content Safety?
description: Learn how to use Content Safety to track, flag, assess, and filter inappropriate material in user-generated content.
#Customer intent: As a developer of content management software, I want to find out whether Azure AI Content Safety is the right solution for my moderation needs.
-# What is Azure AI Content Safety? (preview)
+# What is Azure AI Content Safety?
[!INCLUDE [Azure AI services rebrand](../includes/rebrand-note.md)]
There are different types of analysis available from this service. The following
[Azure AI Content Safety Studio](https://contentsafety.cognitive.azure.com) is an online tool designed to handle potentially offensive, risky, or undesirable content using cutting-edge content moderation ML models. It provides templates and customized workflows, enabling users to choose and build their own content moderation system. Users can upload their own content or try it out with provided sample content.
-Content Safety Studio not only contains the out-of-the-box AI models, but also includes Microsoft's built-in terms blocklists to flag profanities and stay up to date with new trends. You can also upload your own blocklists to enhance the coverage of harmful content that's specific to your use case.
+Content Safety Studio not only contains out-of-the-box AI models but also includes Microsoft's built-in terms blocklists to flag profanities and stay up to date with new trends. You can also upload your own blocklists to enhance the coverage of harmful content that's specific to your use case.
Studio also lets you set up a moderation workflow, where you can continuously monitor and improve content moderation performance. It can help you meet content requirements from all kinds of industries like gaming, media, education, E-commerce, and more. Businesses can easily connect their services to the Studio and have their content moderated in real time, whether user-generated or AI-generated.
-All of these capabilities are handled by the Studio and its backend; customers don't need to worry about model development. You can onboard your data for quick validation and monitor your KPIs accordingly, like technical metrics (latency, accuracy, recall), or business metrics (block rate, block volume, category proportions, language proportions and more). With simple operations and configurations, customers can test different solutions quickly and find the best fit, instead of spending time experimenting with custom models or doing moderation manually.
+All of these capabilities are handled by the Studio and its backend; customers don't need to worry about model development. You can onboard your data for quick validation and monitor your KPIs accordingly, like technical metrics (latency, accuracy, recall), or business metrics (block rate, block volume, category proportions, language proportions, and more). With simple operations and configurations, customers can test different solutions quickly and find the best fit, instead of spending time experimenting with custom models or doing moderation manually.
> [!div class="nextstepaction"] > [Content Safety Studio](https://contentsafety.cognitive.azure.com)
For enhanced security, you can use Microsoft Entra ID or Managed Identity (MI) t
### Encryption of data at rest
-Learn how Content Safety handles the [encryption and decryption of your data](./how-to/encrypt-data-at-rest.md). Customer-managed keys (CMK), also known as Bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
+Learn how Content Safety handles the [encryption and decryption of your data](./how-to/encrypt-data-at-rest.md). Customer-managed keys (CMK), also known as Bring Your Own Key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
## Pricing
-Currently, the public preview features are available in the **F0 and S0** pricing tier.
+Currently, Content Safety is available in the **F0 and S0** pricing tiers.
## Service limits ### Language support
-Content Safety models have been specifically trained and tested on the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. However, the service can work in many other languages, but the quality may vary. In all cases, you should do your own testing to ensure that it works for your application.
+Content Safety models have been specifically trained and tested in the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. However, the service can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application.
For more information, see [Language support](/azure/ai-services/content-safety/language-support).
-### Region / location
+### Region/location
-To use the preview APIs, you must create your Azure AI Content Safety resource in a supported region. Currently, the public preview features are available in the following Azure regions:
+To use the Content Safety APIs, you must create your Azure AI Content Safety resource in a supported region. Currently, it's available in the following Azure regions:
+- Australia East
+- Canada East
+- Central US
- East US
+- East US 2
+- France Central
+- Japan East
+- North Central US
+- South Central US
+- Switzerland North
+- UK South
- West Europe
+- West US 2
Feel free to [contact us](mailto:acm-team@microsoft.com) if you need other regions for your business.
If you get stuck, [email us](mailto:acm-team@microsoft.com) or use the feedback
Follow a quickstart to get started using Content Safety in your application. > [!div class="nextstepaction"]
-> [Content Safety quickstart](./quickstart-text.md)
+> [Content Safety quickstart](./quickstart-text.md)
ai-services Quickstart Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-image.md
Get started with the Content Studio, REST API, or client SDKs to do basic image
::: zone-end ++++++ ## Clean up resources
ai-services Quickstart Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-text.md
Get started with the Content Safety Studio, REST API, or client SDKs to do basic
::: zone-end ++++++ ## Clean up resources
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/whats-new.md
# What's new in Content Safety
-Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with new features, enhancements, fixes, and documentation updates.
+Learn what's new in the service. These items might be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with new features, enhancements, fixes, and documentation updates.
+
+## October 2023
+
+### Content Safety is generally available (GA)
+
+The Azure AI Content Safety service is now generally available as a cloud service.
+- The service is available in many more Azure regions. See the [Overview](./overview.md) for a list.
+- The return formats of the Analyze APIs have changed. See the [Quickstarts](./quickstart-text.md) for the latest examples.
+- The names and return formats of several APIs have changed. See the [Migration guide](./how-to/migrate-to-general-availability.md) for a full list of breaking changes. Other guides and quickstarts now reflect the GA version.
+
+### Content Safety Java and JavaScript SDKs
+
+The Azure AI Content Safety service is now available through Java and JavaScript SDKs. The SDKs are available on [Maven](https://central.sonatype.com/artifact/com.azure/azure-ai-contentsafety) and [npm](https://www.npmjs.com/package/@azure-rest/ai-content-safety). Follow a [quickstart](./quickstart-text.md) to get started.
## July 2023 ### Content Safety C# SDK
-The Azure AI Content Safety service is now available through a C# SDK. The SDK is available on [NuGet](https://www.nuget.org/packages/Azure.AI.ContentSafety/). Follow the [quickstart](./quickstart-text.md) to get started.
+The Azure AI Content Safety service is now available through a C# SDK. The SDK is available on [NuGet](https://www.nuget.org/packages/Azure.AI.ContentSafety/). Follow a [quickstart](./quickstart-text.md) to get started.
## May 2023
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/overview.md
With Immersive Reader you can break words into syllables to improve readability
## How does Immersive Reader work?
-Immersive Reader is a standalone web application. When invoked using the Immersive Reader client library is displayed on top of your existing web application in an `iframe`. When your wep application calls the Immersive Reader service, you specify the content to show the reader. The Immersive Reader client library handles the creation and styling of the `iframe` and communication with the Immersive Reader backend service. The Immersive Reader service processes the content for parts of speech, text to speech, translation, and more.
+Immersive Reader is a standalone web application. When invoked by using the Immersive Reader client library, it's displayed on top of your existing web application in an `iframe`. When your web application calls the Immersive Reader service, you specify the content to show the reader. The Immersive Reader client library handles the creation and styling of the `iframe` and communication with the Immersive Reader backend service. The Immersive Reader service processes the content for parts of speech, text to speech, translation, and more.
## Get started with Immersive Reader
ai-services Azure Openai Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/azure-openai-integration.md
Last updated 08/02/2023
Custom Question Answering enables you to create a conversational layer on your data based on sophisticated Natural Language Processing (NLP) capabilities with enhanced relevance using a deep learning ranker, precise answers, and end-to-end region support. Most use cases for Custom Question Answering rely on finding appropriate answers for inputs by integrating it with chat bots, social media applications and speech-enabled desktop applications.
-AI runtimes however, are evolving due to the development of Large Language Models (LLMs), such as GPT-35-Turbo and GPT-4 offered by [Azure Open AI](../../../openai/overview.md) can address many chat-based use cases, which you may want to integrate with.
+AI runtimes, however, are evolving due to the development of Large Language Models (LLMs). LLMs such as GPT-35-Turbo and GPT-4, offered by [Azure OpenAI](../../../openai/overview.md), can address many chat-based use cases that you may want to integrate with.
At the same time, customers often require a custom answer authoring experience to achieve more granular control over the quality and content of question-answer pairs, and allow them to address content issues in production. Read this article to learn how to integrate Azure OpenAI On Your Data (Preview) with question-answer pairs from your Custom Question Answering project, using your project's underlying Azure Cognitive Search indexes. ## Prerequisites
-* An existing Azure Open AI resource. If you don't already have an Azure Open AI resource, then [create one and deploy a model](../../../openai/how-to/create-resource.md).
+* An existing Azure OpenAI resource. If you don't already have an Azure OpenAI resource, then [create one and deploy a model](../../../openai/how-to/create-resource.md).
* An Azure Language Service resource and Custom Question Answering project. If you don't have one already, then [create one](../quickstart/sdk.md). * Azure OpenAI requires registration and is currently only available to approved enterprise customers and partners. See [Limited access to Azure OpenAI Service](/legal/cognitive-services/openai/limited-access?context=/azure/ai-services/openai/context/context) for more information. You can apply for access to Azure OpenAI by completing the form at https://aka.ms/oai/access. Open an issue on this repo to contact us if you have an issue. * Be sure that you are assigned at least the [Cognitive Services OpenAI Contributor role](/azure/role-based-access-control/built-in-roles#cognitive-services-openai-contributor) for the Azure OpenAI resource.
At the same time, customers often require a custom answer authoring experience t
1. Select the **Azure Search** tab on the navigation menu to the left.
-1. Make a note of your Azure Search details, such as Azure Search resource name, subscription, and location. You will need this information when you connect your Azure Cognitive Search index to Azure Open AI.
+1. Make a note of your Azure Search details, such as Azure Search resource name, subscription, and location. You will need this information when you connect your Azure Cognitive Search index to Azure OpenAI.
:::image type="content" source="../media/question-answering/azure-search.png" alt-text="A screenshot showing the Azure search section for a Custom Question Answering project." lightbox="../media/question-answering/azure-search.png":::
At the same time, customers often require a custom answer authoring experience t
You can now start exploring Azure OpenAI capabilities with a no-code approach through the chat playground. It's simply a text box where you can submit a prompt to generate a completion. From this page, you can quickly iterate and experiment with the capabilities. You can also launch a [web app](../../..//openai/concepts/use-your-data.md#using-the-web-app) to chat with the model over the web. ## Next steps
-* [Using Azure OpenAI on your data](../../../openai/concepts/use-your-data.md)
+* [Using Azure OpenAI on your data](../../../openai/concepts/use-your-data.md)
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-support.md
The following table provides links to language support reference articles by sup
|![QnA Maker icon](medi) (retired) | Distill information into easy-to-navigate questions and answers. | |![Speech icon](medi)| Configure speech-to-text, text-to-speech, translation, and speaker recognition applications. | |![Translator icon](medi) | Translate more than 100 languages and dialects including those deemed at-risk and endangered. |
-|![Video Indexer icon](medi#guidelines-and-limitations) | Extract actionable insights from your videos. |
+|![Video Indexer icon](media/service-icons/video-indexer.svg)</br>[Video Indexer](/azure/azure-video-indexer/language-identification-model#guidelines-and-limitations) | Extract actionable insights from your videos. |
|![Vision icon](medi) | Analyze content in images and videos. | ## Language independent services
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
GPT-3.5 Turbo version 0301 is the first version of the model released. Version
| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | | | | - | -- | - | | `gpt-35-turbo`<sup>1</sup> (0301) | East US, France Central, South Central US, UK South, West Europe | N/A | 4,096 | Sep 2021 |
-| `gpt-35-turbo` (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, North Central US, Sweden Central, Switzerland North, UK South | N/A | 4,096 | Sep 2021 |
+| `gpt-35-turbo` (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, North Central US, Sweden Central, Switzerland North, UK South | North Central US, Sweden Central | 4,096 | Sep 2021 |
| `gpt-35-turbo-16k` (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, North Central US, Sweden Central, Switzerland North, UK South | N/A | 16,384 | Sep 2021 | | `gpt-35-turbo-instruct` (0914) | East US, Sweden Central | N/A | 4,097 | Sep 2021 |
These models can only be used with Embedding API requests.
| | | | | | | dalle2 | East US | N/A | 1000 | N/A |
+### Fine-tuning models (Preview)
+
+`babbage-002` and `davinci-002` are not trained to follow instructions. Querying these base models should only be done as a point of reference to a fine-tuned version to evaluate the progress of your training.
+
+`gpt-35-turbo-0613` - fine-tuning of this model is limited to a subset of regions, and isn't available in every region where the base model is available.
+
+| Model ID | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
+| -- | -- | -- | -- |
+| `babbage-002` | North Central US, Sweden Central | 16,384 | Sep 2021 |
+| `davinci-002` | North Central US, Sweden Central | 16,384 | Sep 2021 |
+| `gpt-35-turbo` (0613) | North Central US, Sweden Central | 4,096 | Sep 2021 |
+ ### Whisper models (Preview) | Model ID | Base model Regions | Fine-Tuning Regions | Max Request (audio file size) | Training Data (up to) |
ai-services Fine Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/fine-tuning.md
Previously updated : 09/01/2023-- Last updated : 10/12/2023++ zone_pivot_groups: openai-fine-tuning keywords:
-# Customize a model with Azure OpenAI Service
+# Customize a model with fine-tuning (preview)
Azure OpenAI Service lets you tailor our models to your personal datasets by using a process known as *fine-tuning*. This customization step lets you get more out of the service by providing: -- Higher quality results than what you can get just from prompt design.-- The ability to train on more examples than can fit into a prompt.-- Lower-latency requests.
-
-A customized model improves on the few-shot learning approach by training the model's weights on your specific prompts and structure. The customized model lets you achieve better results on a wider number of tasks without needing to provide examples in your prompt. The result is less text sent and fewer tokens processed on every API call, saving cost and improving request latency.
-
+- Higher quality results than what you can get just from [prompt engineering](../concepts/prompt-engineering.md)
+- The ability to train on more examples than can fit into a model's max request context limit.
+- Lower-latency requests, particularly when using smaller models.
+A fine-tuned model improves on the few-shot learning approach by training the model's weights on your own data. A customized model lets you achieve better results on a wider number of tasks without needing to provide examples in your prompt. The result is less text sent and fewer tokens processed on every API call, potentially saving cost and improving request latency.
::: zone pivot="programming-language-studio"
ai-services Prepare Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/prepare-dataset.md
- Title: 'How to prepare a dataset for custom model training'-
-description: Learn how to prepare your dataset for fine-tuning
---- Previously updated : 06/24/2022--
-recommendations: false
-keywords:
--
-# Learn how to prepare your dataset for fine-tuning
-
-The first step of customizing your model is to prepare a high quality dataset. To do this you'll need a set of training examples composed of single input prompts and the associated desired output ('completion'). This format is notably different than using models during inference in the following ways:
--- Only provide a single prompt vs a few examples.-- You don't need to provide detailed instructions as part of the prompt.-- Each prompt should end with a fixed separator to inform the model when the prompt ends and the completion begins. A simple separator, which generally works well is `\n\n###\n\n`. The separator shouldn't appear elsewhere in any prompt.-- Each completion should start with a whitespace due to our tokenization, which tokenizes most words with a preceding whitespace.-- Each completion should end with a fixed stop sequence to inform the model when the completion ends. A stop sequence could be `\n`, `###`, or any other token that doesn't appear in any completion.-- For inference, you should format your prompts in the same way as you did when creating the training dataset, including the same separator. Also specify the same stop sequence to properly truncate the completion.-- The dataset cannot exceed 100 MB in total file size.-
-## Best practices
-
-Customization performs better with high-quality examples and the more you have, generally the better the model performs. We recommend that you provide at least a few hundred high-quality examples to achieve a model that performs better than using well-designed prompts with a base model. From there, performance tends to linearly increase with every doubling of the number of examples. Increasing the number of examples is usually the best and most reliable way of improving performance.
-
-If you're fine-tuning on a pre-existing dataset rather than writing prompts from scratch, be sure to manually review your data for offensive or inaccurate content if possible, or review as many random samples of the dataset as possible if it's large.
-
-## Specific guidelines
-
-Fine-tuning can solve various problems, and the optimal way to use it may depend on your specific use case. Below, we've listed the most common use cases for fine-tuning and corresponding guidelines.
-
-### Classification
-
-Classifiers are the easiest models to get started with. For classification problems we suggest using **ada**, which generally tends to perform only very slightly worse than more capable models once fine-tuned, while being significantly faster. In classification problems, each prompt in the dataset should be classified into one of the predefined classes. For this type of problem, we recommend:
--- Use a separator at the end of the prompt, for example, `\n\n###\n\n`. Remember to also append this separator when you eventually make requests to your model.-- Choose classes that map to a single token. At inference time, specify max_tokens=1 since you only need the first token for classification.-- Ensure that the prompt + completion doesn't exceed 2048 tokens, including the separator-- Aim for at least 100 examples per class-- To get class log probabilities, you can specify logprobs=5 (for five classes) when using your model-- Ensure that the dataset used for fine-tuning is very similar in structure and type of task as what the model will be used for-
-#### Case study: Is the model making untrue statements?
-
-Let's say you'd like to ensure that the text of the ads on your website mentions the correct product and company. In other words, you want to ensure the model isn't making things up. You may want to fine-tune a classifier which filters out incorrect ads.
-
-The dataset might look something like the following:
-
-```json
-{"prompt":"Company: BHFF insurance\nProduct: allround insurance\nAd:One stop shop for all your insurance needs!\nSupported:", "completion":" yes"}
-{"prompt":"Company: Loft conversion specialists\nProduct: -\nAd:Straight teeth in weeks!\nSupported:", "completion":" no"}
-```
-
-In the example above, we used a structured input containing the name of the company, the product, and the associated ad. As a separator we used `\nSupported:` which clearly separated the prompt from the completion. With a sufficient number of examples, the separator you choose doesn't make much of a difference (usually less than 0.4%) as long as it doesn't appear within the prompt or the completion.
-
-For this use case we fine-tuned an ada model since it is faster and cheaper, and the performance is comparable to larger models because it's a classification task.
-
-Now we can query our model by making a Completion request.
-
-```console
-curl https://YOUR_RESOURCE_NAME.openaiazure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15\ \
- -H 'Content-Type: application/json' \
- -H 'api-key: YOUR_API_KEY' \
- -d '{
- "prompt": "Company: Reliable accountants Ltd\nProduct: Personal Tax help\nAd:Best advice in town!\nSupported:",
- "max_tokens": 1
- }'
-```
-
-Which will return either `yes` or `no`.
-
-#### Case study: Sentiment analysis
-
-Let's say you'd like to get a degree to which a particular tweet is positive or negative. The dataset might look something like the following:
-
-```console
-{"prompt":"Overjoyed with the new iPhone! ->", "completion":" positive"}
-{"prompt":"@contoso_basketball disappoint for a third straight night. ->", "completion":" negative"}
-```
-
-Once the model is fine-tuned, you can get back the log probabilities for the first completion token by setting `logprobs=2` on the completion request. The higher the probability for positive class, the higher the relative sentiment.
-
-Now we can query our model by making a Completion request.
-
-```console
-curl https://YOUR_RESOURCE_NAME.openaiazure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15\ \
- -H 'Content-Type: application/json' \
- -H 'api-key: YOUR_API_KEY' \
- -d '{
- "prompt": "Excited to share my latest blog post! ->",
- "max_tokens": 1,
- "logprobs": 2
- }'
-```
-
-Which will return:
-
-```json
-{
- "object": "text_completion",
- "created": 1589498378,
- "model": "YOUR_FINE_TUNED_MODEL_NAME",
- "choices": [
- {
- "logprobs": {
- "text_offset": [
- 19
- ],
- "token_logprobs": [
- -0.03597255
- ],
- "tokens": [
- " positive"
- ],
- "top_logprobs": [
- {
- " negative": -4.9785037,
- " positive": -0.03597255
- }
- ]
- },
-
- "text": " positive",
- "index": 0,
- "finish_reason": "length"
- }
- ]
-}
-```
-
-#### Case study: Categorization for Email triage
-
-Let's say you'd like to categorize incoming email into one of a large number of predefined categories. For classification into a large number of categories, we recommend you convert those categories into numbers, which will work well with up to approximately 500 categories. We've observed that adding a space before the number sometimes slightly helps the performance, due to tokenization. You may want to structure your training data as follows:
-
-```json
-{
- "prompt":"Subject: <email_subject>\nFrom:<customer_name>\nDate:<date>\nContent:<email_body>\n\n###\n\n", "completion":" <numerical_category>"
-}
-```
-
-For example:
-
-```json
-{
- "prompt":"Subject: Update my address\nFrom:Joe Doe\nTo:support@ourcompany.com\nDate:2021-06-03\nContent:Hi,\nI would like to update my billing address to match my delivery address.\n\nPlease let me know once done.\n\nThanks,\nJoe\n\n###\n\n",
- "completion":" 4"
-}
-```
-
-In the example above we used an incoming email capped at 2043 tokens as input. (This allows for a four token separator and a one token completion, summing up to 2048.) As a separator we used `\n\n###\n\n` and we removed any occurrence of ### within the email.
-
-### Conditional generation
-
-Conditional generation is a problem where the content needs to be generated given some kind of input. This includes paraphrasing, summarizing, entity extraction, product description writing given specifications, chatbots and many others. For this type of problem we recommend:
--- Use a separator at the end of the prompt, for example, `\n\n###\n\n`. Remember to also append this separator when you eventually make requests to your model.-- Use an ending token at the end of the completion, for example, `END`.-- Remember to add the ending token as a stop sequence during inference, for example, `stop=[" END"]`.-- Aim for at least ~500 examples.-- Ensure that the prompt + completion doesn't exceed 2048 tokens, including the separator.-- Ensure the examples are of high quality and follow the same desired format.-- Ensure that the dataset used for fine-tuning is similar in structure and type of task as what the model will be used for.-- Using Lower learning rate and only 1-2 epochs tends to work better for these use cases.-
-#### Case study: Write an engaging ad based on a Wikipedia article
-
-This is a generative use case so you would want to ensure that the samples you provide are of the highest quality, as the fine-tuned model will try to imitate the style (and mistakes) of the given examples. A good starting point is around 500 examples. A sample dataset might look like this:
-
-```json
-{
- "prompt":"<Product Name>\n<Wikipedia description>\n\n###\n\n",
- "completion":" <engaging ad> END"
-}
-```
-
-For example:
-
-```json
-{
- "prompt":"Samsung Galaxy Feel\nThe Samsung Galaxy Feel is an Android smartphone developed by Samsung Electronics exclusively for the Japanese market. The phone was released in June 2017 and was sold by NTT Docomo. It runs on Android 7.0 (Nougat), has a 4.7 inch display, and a 3000 mAh battery.\nSoftware\nSamsung Galaxy Feel runs on Android 7.0 (Nougat), but can be later updated to Android 8.0 (Oreo).\nHardware\nSamsung Galaxy Feel has a 4.7 inch Super AMOLED HD display, 16 MP back facing and 5 MP front facing cameras. It has a 3000 mAh battery, a 1.6 GHz Octa-Core ARM Cortex-A53 CPU, and an ARM Mali-T830 MP1 700 MHz GPU. It comes with 32GB of internal storage, expandable to 256GB via microSD. Aside from its software and hardware specifications, Samsung also introduced a unique a hole in the phone's shell to accommodate the Japanese perceived penchant for personalizing their mobile phones. The Galaxy Feel's battery was also touted as a major selling point since the market favors handsets with longer battery life. The device is also waterproof and supports 1seg digital broadcasts using an antenna that is sold separately.\n\n###\n\n",
- "completion":"Looking for a smartphone that can do it all? Look no further than Samsung Galaxy Feel! With a slim and sleek design, our latest smartphone features high-quality picture and video capabilities, as well as an award winning battery life. END"
-}
-```
-
-Here we used a multiline separator, as Wikipedia articles contain multiple paragraphs and headings. We also used a simple end token, to ensure that the model knows when the completion should finish.
-
-#### Case study: Entity extraction
-
-This is similar to a language transformation task. To improve the performance, it's best to either sort different extracted entities alphabetically or in the same order as they appear in the original text. This helps the model to keep track of all the entities which need to be generated in order. The dataset could look as follows:
-
-```json
-{
- "prompt":"<any text, for example news article>\n\n###\n\n",
- "completion":" <list of entities, separated by a newline> END"
-}
-```
-
-For example:
-
-```json
-{
- "prompt":"Portugal will be removed from the UK's green travel list from Tuesday, amid rising coronavirus cases and concern over a \"Nepal mutation of the so-called Indian variant\". It will join the amber list, meaning holidaymakers should not visit and returnees must isolate for 10 days...\n\n###\n\n",
- "completion":" Portugal\nUK\nNepal mutation\nIndian variant END"
-}
-```
-
-A multi-line separator works best, as the text will likely contain multiple lines. Ideally there will be a high diversity of the types of input prompts (news articles, Wikipedia pages, tweets, legal documents), which reflect the likely texts which will be encountered when extracting entities.
-
-#### Case study: Customer support chatbot
-
-A chatbot will normally contain relevant context about the conversation (order details), summary of the conversation so far, and most recent messages. For this use case the same past conversation can generate multiple rows in the dataset, each time with a slightly different context, for every agent generation as a completion. This use case requires a few thousand examples, as it likely deals with different types of requests, and customer issues. To ensure the performance is of high quality, we recommend vetting the conversation samples to ensure the quality of agent messages. The summary can be generated with a separate text transformation fine tuned model. The dataset could look as follows:
-
-```json
-{"prompt":"Summary: <summary of the interaction so far>\n\nSpecific information:<for example order details in natural language>\n\n###\n\nCustomer: <message1>\nAgent: <response1>\nCustomer: <message2>\nAgent:", "completion":" <response2>\n"}
-{"prompt":"Summary: <summary of the interaction so far>\n\nSpecific information:<for example order details in natural language>\n\n###\n\nCustomer: <message1>\nAgent: <response1>\nCustomer: <message2>\nAgent: <response2>\nCustomer: <message3>\nAgent:", "completion":" <response3>\n"}
-```
-
-Here we purposefully separated different types of input information, but maintained Customer Agent dialog in the same format between a prompt and a completion. All the completions should only be by the agent, and we can use `\n` as a stop sequence when doing inference.
-
-#### Case study: Product description based on a technical list of properties
-
-Here it's important to convert the input data into a natural language, which will likely lead to superior performance. For example, the following format:
-
-```json
-{
- "prompt":"Item=handbag, Color=army_green, price=$99, size=S->",
- "completion":"This stylish small green handbag will add a unique touch to your look, without costing you a fortune."
-}
-```
-
-Won't work as well as:
-
-```json
-{
- "prompt":"Item is a handbag. Colour is army green. Price is midrange. Size is small.->",
- "completion":"This stylish small green handbag will add a unique touch to your look, without costing you a fortune."
-}
-```
-
-For high performance, ensure that the completions were based on the description provided. If external content is often consulted, then adding such content in an automated way would improve the performance. If the description is based on images, it may help to use an algorithm to extract a textual description of the image. Since completions are only one sentence long, we can use `.` as the stop sequence during inference.
-
-### Open ended generation
-
-For this type of problem we recommend:
--- Leave the prompt empty.-- No need for any separators.-- You'll normally want a large number of examples, at least a few thousand.-- Ensure the examples cover the intended domain or the desired tone of voice.-
-#### Case study: Maintaining company voice
-
-Many companies have a large amount of high quality content generated in a specific voice. Ideally all generations from our API should follow that voice for the different use cases. Here we can use the trick of leaving the prompt empty, and feeding in all the documents which are good examples of the company voice. A fine-tuned model can be used to solve many different use cases with similar prompts to the ones used for base models, but the outputs are going to follow the company voice much more closely than previously.
-
-```json
-{"prompt":"", "completion":" <company voice textual content>"}
-{"prompt":"", "completion":" <company voice textual content2>"}
-```
-
-A similar technique could be used for creating a virtual character with a particular personality, style of speech and topics the character talks about.
-
-Generative tasks have a potential to leak training data when requesting completions from the model, so extra care needs to be taken that this is addressed appropriately. For example personal or sensitive company information should be replaced by generic information or not be included into fine-tuning in the first place.
-
-## Next steps
-
-* Fine tune your model with our [How-to guide](fine-tuning.md)
-* Learn more about the [underlying models that power Azure OpenAI Service](../concepts/models.md)
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/overview.md
Previously updated : 09/15/2023 Last updated : 10/16/2023 recommendations: false keywords:
keywords:
# What is Azure OpenAI Service?
-Azure OpenAI Service provides REST API access to OpenAI's powerful language models including the GPT-4, GPT-35-Turbo, and Embeddings model series. In addition, the new GPT-4 and gpt-35-turbo model series have now reached general availability. These models can be easily adapted to your specific task including but not limited to content generation, summarization, semantic search, and natural language to code translation. Users can access the service through REST APIs, Python SDK, or our web-based interface in the Azure OpenAI Studio.
+Azure OpenAI Service provides REST API access to OpenAI's powerful language models including the GPT-4, GPT-3.5-Turbo, and Embeddings model series. In addition, the new GPT-4 and GPT-3.5-Turbo model series have now reached general availability. These models can be easily adapted to your specific task including but not limited to content generation, summarization, semantic search, and natural language to code translation. Users can access the service through REST APIs, Python SDK, or our web-based interface in the Azure OpenAI Studio.
### Features overview | Feature | Azure OpenAI | | | |
-| Models available | **GPT-4 series** <br>**GPT-35-Turbo series**<br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.|
-| Fine-tuning | Ada <br> Babbage <br> Curie <br> Cushman <br> Davinci <br>**Fine-tuning is currently unavailable to new customers**.|
+| Models available | **GPT-4 series** <br>**GPT-3.5-Turbo series**<br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.|
+| Fine-tuning (preview) | `GPT-3.5-Turbo` (0613) <br> `babbage-002` <br> `davinci-002` |
| Price | [Available here](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) |
-| Virtual network support & private link support | Yes, unless using [Azure OpenAI on your data](./concepts/use-your-data.md). |
+| Virtual network support & private link support | Yes, unless using [Azure OpenAI on your data](./concepts/use-your-data.md). |
| Managed Identity| Yes, via Microsoft Entra ID |
-| UI experience | **Azure portal** for account & resource management, <br> **Azure OpenAI Service Studio** for model exploration and fine tuning |
+| UI experience | **Azure portal** for account & resource management, <br> **Azure OpenAI Service Studio** for model exploration and fine-tuning |
| Model regional availability | [Model availability](./concepts/models.md) | | Content filtering | Prompts and completions are evaluated against our content policy with automated systems. High severity content will be filtered. |
ai-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quotas-limits.md
Previously updated : 10/06/2023 Last updated : 10/13/2023
The default quota for models varies by model and region. Default quota limits ar
<td>North Central US, Australia East, East US 2, Canada East, Japan East, UK South, Switzerland North</td> <td>350 K</td> </tr>
+<tr>
+ <td>Fine-tuning models (babbage-002, davinci-002, gpt-35-turbo-0613)</td>
+ <td>North Central US, Sweden Central</td>
+ <td>50 K</td>
+ </tr>
<tr> <td>all other models</td> <td>East US, South Central US, West Europe, France Central</td>
ai-services Fine Tune https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/tutorials/fine-tune.md
+
+ Title: Azure OpenAI Service fine-tuning gpt-3.5-turbo
+
+description: Learn how to use Azure OpenAI's latest fine-tuning capabilities with gpt-3.5-turbo
++++ Last updated : 10/16/2023++
+recommendations: false
+++
+# Azure OpenAI GPT 3.5 Turbo fine-tuning (preview) tutorial
+
+This tutorial walks you through fine-tuning a `gpt-35-turbo-0613` model.
+
+In this tutorial you learn how to:
+
+> [!div class="checklist"]
+> * Create sample fine-tuning datasets.
+> * Create environment variables for your resource endpoint and API key.
+> * Prepare your sample training and validation datasets for fine-tuning.
+> * Upload your training file and validation file for fine-tuning.
+> * Create a fine-tuning job for `gpt-35-turbo-0613`.
+> * Deploy a custom fine-tuned model.
+
+## Prerequisites
+
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true).
+- Access granted to Azure OpenAI in the desired Azure subscription. Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at https://aka.ms/oai/access.
+- Python 3.7.1 or later version
+- The following Python libraries: `json`, `requests`, `os`, `tiktoken`, `time`, `openai`.
+- The OpenAI Python library should be at least version: `0.28.1`.
+- [Jupyter Notebooks](https://jupyter.org/)
+- An Azure OpenAI resource in a [region where `gpt-35-turbo-0613` fine-tuning is available](../concepts/models.md). If you don't have a resource the process of creating one is documented in our resource [deployment guide](../how-to/create-resource.md).
+- Necessary [Role-based access control permissions](../how-to/role-based-access-control.md). Performing all the actions described in this tutorial requires the equivalent of `Cognitive Services Contributor` + `Cognitive Services OpenAI Contributor` + `Cognitive Services Usages Reader`, depending on how the permissions in your environment are defined.
+
+> [!IMPORTANT]
+> We strongly recommend reviewing the [pricing information](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/#pricing) for fine-tuning prior to beginning this tutorial to make sure you are comfortable with the associated costs. In testing, this tutorial resulted in one training hour billed, in addition to the costs that are associated with fine-tuning inference, and the hourly hosting costs of having a fine-tuned model deployed. Once you have completed the tutorial, you should delete your fine-tuned model deployment otherwise you will continue to incur the hourly hosting cost.
+
+## Set up
+
+### Python libraries
+
+If you haven't already, you need to install the following libraries (`json`, `os`, and `time` are part of the Python standard library, so only the remaining packages need to be installed):
+
+```cmd
+pip install openai requests tiktoken
+```
++
+### Environment variables
+
+# [Command Line](#tab/command-line)
+
+```CMD
+setx AZURE_OPENAI_API_KEY "REPLACE_WITH_YOUR_KEY_VALUE_HERE"
+```
+
+```CMD
+setx AZURE_OPENAI_ENDPOINT "REPLACE_WITH_YOUR_ENDPOINT_HERE"
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+[System.Environment]::SetEnvironmentVariable('AZURE_OPENAI_API_KEY', 'REPLACE_WITH_YOUR_KEY_VALUE_HERE', 'User')
+```
+
+```powershell
+[System.Environment]::SetEnvironmentVariable('AZURE_OPENAI_ENDPOINT', 'REPLACE_WITH_YOUR_ENDPOINT_HERE', 'User')
+```
+
+# [Bash](#tab/bash)
+
+```Bash
+echo export AZURE_OPENAI_API_KEY="REPLACE_WITH_YOUR_KEY_VALUE_HERE" >> /etc/environment
+echo export AZURE_OPENAI_ENDPOINT="REPLACE_WITH_YOUR_ENDPOINT_HERE" >> /etc/environment
+
+source /etc/environment
+```
+++
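+
+Before moving on, you can optionally confirm that both variables are visible to Python (you may need to restart your shell or Jupyter kernel after setting them). This is only a quick sanity check, sketched here with the same variable names set above:
+
+```python
+import os
+
+# Optional sanity check: the SDK calls later in this tutorial read these two environment variables.
+for name in ("AZURE_OPENAI_API_KEY", "AZURE_OPENAI_ENDPOINT"):
+    print(f"{name} is {'set' if os.getenv(name) else 'NOT set'}")
+```
+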
+### Create a sample dataset
+
+Fine-tuning `gpt-35-turbo-0613` requires a specially formatted JSONL training file. OpenAI provides the following example in their documentation:
+
+```json
+{"messages": [{"role": "system", "content": "Marv is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the capital of France?"}, {"role": "assistant", "content": "Paris, as if everyone doesn't know that already."}]}
+{"messages": [{"role": "system", "content": "Marv is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who wrote 'Romeo and Juliet'?"}, {"role": "assistant", "content": "Oh, just some guy named William Shakespeare. Ever heard of him?"}]}
+{"messages": [{"role": "system", "content": "Marv is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "How far is the Moon from Earth?"}, {"role": "assistant", "content": "Around 384,400 kilometers. Give or take a few, like that really matters."}]}
+```
+
+For this example we'll modify this slightly by changing to:
+
+```json
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the capital of France?"}, {"role": "assistant", "content": "Paris, as if everyone doesn't know that already."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who wrote 'Romeo and Juliet'?"}, {"role": "assistant", "content": "Oh, just some guy named William Shakespeare. Ever heard of him?"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "How far is the Moon from Earth?"}, {"role": "assistant", "content": "Around 384,400 kilometers. Give or take a few, like that really matters."}]}
+```
+
+While these three examples are helpful to give you the general format, if you want to steer your custom fine-tuned model to respond in a similar way, you need more examples. Generally, you want **at least 50 high-quality examples** to start out. However, it's entirely possible to have a use case that requires thousands of high-quality training examples to be successful.
+
+In general, doubling the dataset size can lead to a linear increase in model quality. Keep in mind, however, that low-quality examples can negatively impact performance. If you train the model on a large amount of internal data without first pruning the dataset for only the highest quality examples, you could end up with a model that performs much worse than expected.
+
+You'll need to create two files `training_set.jsonl` and `validation_set.jsonl`.
+
+Create the files in the same directory that you're running the Jupyter Notebook, and copy the contents of the following code blocks to the corresponding files:
+
+**`training_set.jsonl`**
+
+```json
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who discovered Antarctica?"}, {"role": "assistant", "content": "Some chaps named Fabian Gottlieb von Bellingshausen and Mikhail Lazarev, as if they don't teach that in every school!"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What is the biggest ocean?"}, {"role": "assistant", "content": "The Pacific Ocean. It's not like it's a small pond or anything."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What is the largest planet?"}, {"role": "assistant", "content": "It's called Jupiter, you may have heard of it...or not."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who painted the Mona Lisa?"}, {"role": "assistant", "content": "Oh, some obscure artist named Leonardo da Vinci. Not that he is famous or anything."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the boiling point of water?"}, {"role": "assistant", "content": "Exciting fact of the day: it's 100 degrees Celsius. Life-changing, isn't it?"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who wrote 'The Odyssey'?"}, {"role": "assistant", "content": "Oh, just some poet named Homer. Nothing special, really."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What is the largest mammal?"}, {"role": "assistant", "content": "Blue Whale. Quite a big fish isn't it? Oh wait, it's not a fish!"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the highest mountain peak?"}, {"role": "assistant", "content": "Everest. Brace yourself, it's not a walk in the park."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who invented the telephone?"}, {"role": "assistant", "content": "A guy named Alexander Graham Bell. Bet you didn't guess that one!"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What is the distance from Earth to the Sun?"}, {"role": "assistant", "content": "About 93 million miles. Just a quick drive, really."}]}
+```
+
+**`validation_set.jsonl`**
+
+```json
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the capital of Australia?"}, {"role": "assistant", "content": "It's Canberra, not Sydney. Shocking, I know!"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who wrote 'The Great Gatsby'?"}, {"role": "assistant", "content": "Some guy named F. Scott Fitzgerald. Ever heard of him?"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the deepest part of the ocean?"}, {"role": "assistant", "content": "Mariana Trench, makes your swimming pool seem a bit shallow, doesn't it?"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who composed the 'Moonlight Sonata'?"}, {"role": "assistant", "content": "Just a little-known guy named Beethoven. No big deal."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What are the colors of the rainbow?"}, {"role": "assistant", "content": "ROYGBIV. Red, orange, yellow, green, blue, indigo, violet. Not that we learned this in preschool or anything!"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What is the smallest planet?"}, {"role": "assistant", "content": "Meet Mercury, the runt of the solar system family."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who wrote 'Pride and Prejudice'?"}, {"role": "assistant", "content": "Do the words Jane Austen ring a bell? No? Well, she wrote it."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the largest desert?"}, {"role": "assistant", "content": "Antarctica. Surprise, surprise! Deserts aren't just full of sand, you know."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What is the longest river?"}, {"role": "assistant", "content": "The Nile River. It's not like it's famous or anything."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the capital of Germany?"}, {"role": "assistant", "content": "Berlin. Shocking news, right?"}]}
+```
+
+Now you need to run some preliminary checks on your training and validation files.
+
+```python
+import json
+
+# Load the training set
+with open('training_set.jsonl', 'r', encoding='utf-8') as f:
+ training_dataset = [json.loads(line) for line in f]
+
+# Training dataset stats
+print("Number of examples in training set:", len(training_dataset))
+print("First example in training set:")
+for message in training_dataset[0]["messages"]:
+ print(message)
+
+# Load the validation set
+with open('validation_set.jsonl', 'r', encoding='utf-8') as f:
+ validation_dataset = [json.loads(line) for line in f]
+
+# Validation dataset stats
+print("\nNumber of examples in validation set:", len(validation_dataset))
+print("First example in validation set:")
+for message in validation_dataset[0]["messages"]:
+ print(message)
+```
+
+**Output:**
+
+```output
+Number of examples in training set: 10
+First example in training set:
+{'role': 'system', 'content': 'Clippy is a factual chatbot that is also sarcastic.'}
+{'role': 'user', 'content': 'Who discovered Antarctica?'}
+{'role': 'assistant', 'content': "Some chaps named Fabian Gottlieb von Bellingshausen and Mikhail Lazarev, as if they don't teach that in every school!"}
+
+Number of examples in validation set: 10
+First example in validation set:
+{'role': 'system', 'content': 'Clippy is a factual chatbot that is also sarcastic.'}
+{'role': 'user', 'content': "What's the capital of Australia?"}
+{'role': 'assistant', 'content': "It's Canberra, not Sydney. Shocking, I know!"}
+```
+
+In this case we only have 10 training and 10 validation examples, so while this demonstrates the basic mechanics of fine-tuning a model, this is unlikely to be a large enough number of examples to produce a consistently noticeable impact.
+
+Next, you can run some additional code from OpenAI that uses the tiktoken library to validate the token counts. Individual examples need to remain under the `gpt-35-turbo-0613` model's input token limit of 4,096 tokens.
+
+```python
+import json
+import tiktoken
+import numpy as np
+from collections import defaultdict
+
+encoding = tiktoken.get_encoding("cl100k_base") # default encoding used by gpt-4, turbo, and text-embedding-ada-002 models
+
+def num_tokens_from_messages(messages, tokens_per_message=3, tokens_per_name=1):
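+    # Counts tokens following OpenAI's reference approach: a fixed overhead per message, plus the
+    # encoded length of each field value, plus an extra token for any "name" field; the final +3
+    # accounts for the assistant reply priming.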
+ num_tokens = 0
+ for message in messages:
+ num_tokens += tokens_per_message
+ for key, value in message.items():
+ num_tokens += len(encoding.encode(value))
+ if key == "name":
+ num_tokens += tokens_per_name
+ num_tokens += 3
+ return num_tokens
+
+def num_assistant_tokens_from_messages(messages):
+ num_tokens = 0
+ for message in messages:
+ if message["role"] == "assistant":
+ num_tokens += len(encoding.encode(message["content"]))
+ return num_tokens
+
+def print_distribution(values, name):
+ print(f"\n#### Distribution of {name}:")
+ print(f"min / max: {min(values)}, {max(values)}")
+ print(f"mean / median: {np.mean(values)}, {np.median(values)}")
+ print(f"p5 / p95: {np.quantile(values, 0.1)}, {np.quantile(values, 0.9)}")
+
+files = ['training_set.jsonl', 'validation_set.jsonl']
+
+for file in files:
+ print(f"Processing file: {file}")
+ with open(file, 'r', encoding='utf-8') as f:
+ dataset = [json.loads(line) for line in f]
+
+ total_tokens = []
+ assistant_tokens = []
+
+ for ex in dataset:
+ messages = ex.get("messages", {})
+ total_tokens.append(num_tokens_from_messages(messages))
+ assistant_tokens.append(num_assistant_tokens_from_messages(messages))
+
+ print_distribution(total_tokens, "total tokens")
+ print_distribution(assistant_tokens, "assistant tokens")
+ print('*' * 50)
+```
+
+**Output:**
+
+```output
+Processing file: training_set.jsonl
+
+#### Distribution of total tokens:
+min / max: 47, 57
+mean / median: 50.8, 50.0
+p10 / p90: 47.9, 55.2
+
+#### Distribution of assistant tokens:
+min / max: 13, 21
+mean / median: 16.3, 15.5
+p10 / p90: 13.0, 20.1
+**************************************************
+Processing file: validation_set.jsonl
+
+#### Distribution of total tokens:
+min / max: 43, 65
+mean / median: 51.4, 49.0
+p10 / p90: 45.7, 56.9
+
+#### Distribution of assistant tokens:
+min / max: 8, 29
+mean / median: 15.9, 13.5
+p10 / p90: 11.6, 20.9
+**************************************************
+```
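+
+The distribution stats above don't flag over-limit examples on their own. As an optional extra check (reusing `files`, `json`, and the `num_tokens_from_messages` helper from the previous cell), you can list any examples that exceed the 4,096 token limit mentioned earlier; with this small sample dataset, none should:
+
+```python
+# Optional check: flag any examples above the gpt-35-turbo-0613 input limit of 4,096 tokens.
+TOKEN_LIMIT = 4096
+
+for file in files:
+    with open(file, 'r', encoding='utf-8') as f:
+        dataset = [json.loads(line) for line in f]
+    too_long = [i for i, ex in enumerate(dataset)
+                if num_tokens_from_messages(ex.get("messages", [])) > TOKEN_LIMIT]
+    if too_long:
+        print(f"{file}: examples over the limit at indexes {too_long}")
+    else:
+        print(f"{file}: all {len(dataset)} examples are within the {TOKEN_LIMIT} token limit")
+```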
+
+## Upload fine-tuning files
+
+```Python
+# Upload fine-tuning files
+import openai
+import os
+
+openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")
+openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
+openai.api_type = 'azure'
+openai.api_version = '2023-09-15-preview' # This API version or later is required to access fine-tuning for turbo/babbage-002/davinci-002
+
+training_file_name = 'training_set.jsonl'
+validation_file_name = 'validation_set.jsonl'
+
+# Upload the training and validation dataset files to Azure OpenAI with the SDK.
+
+training_response = openai.File.create(
+ file=open(training_file_name, "rb"), purpose="fine-tune", user_provided_filename="training_set.jsonl"
+)
+training_file_id = training_response["id"]
+
+validation_response = openai.File.create(
+ file=open(validation_file_name, "rb"), purpose="fine-tune", user_provided_filename="validation_set.jsonl"
+)
+validation_file_id = validation_response["id"]
+
+print("Training file ID:", training_file_id)
+print("Validation file ID:", validation_file_id)
+```
+
+**Output:**
+
+```output
+Training file ID: file-9ace76cb11f54fdd8358af27abf4a3ea
+Validation file ID: file-70a3f525ed774e78a77994d7a1698c4b
+```
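+
+The upload is handled asynchronously on the service side. If you want to confirm that both files have finished processing before you submit a training job, you can poll their status first. This is a minimal sketch, not part of the original tutorial; it assumes the file object returned by `openai.File.retrieve` exposes a `status` field that eventually reaches a terminal value such as `processed` or `error`.
+
+```python
+# Optional: wait until both uploaded files reach a terminal status before training.
+# Assumes the status values "processed" and "error" are the terminal states.
+import time
+
+for file_id in (training_file_id, validation_file_id):
+    status = openai.File.retrieve(file_id)["status"]
+    while status not in ("processed", "error"):
+        time.sleep(15)
+        status = openai.File.retrieve(file_id)["status"]
+    print(f"File {file_id} status: {status}")
+```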
+
+## Begin fine-tuning
+
+Now that the fine-tuning files have been successfully uploaded you can submit your fine-tuning training job:
+
+```python
+response = openai.FineTuningJob.create(
+ training_file=training_file_id,
+ validation_file=validation_file_id,
+ model="gpt-35-turbo-0613",
+)
+
+job_id = response["id"]
+
+# You can use the job ID to monitor the status of the fine-tuning job.
+# The fine-tuning job will take some time to start and complete.
+
+print("Job ID:", response["id"])
+print("Status:", response["status"])
+print(response)
+```
+
+**Output:**
+
+```output
+Job ID: ftjob-40e78bc022034229a6e3a222c927651c
+Status: pending
+{
+ "hyperparameters": {
+ "n_epochs": 2
+ },
+ "status": "pending",
+ "model": "gpt-35-turbo-0613",
+ "training_file": "file-90ac5d43102f4d42a3477fd30053c758",
+ "validation_file": "file-e21aad7dddbc4ddc98ba35c790a016e5",
+ "id": "ftjob-40e78bc022034229a6e3a222c927651c",
+ "created_at": 1697156464,
+ "updated_at": 1697156464,
+ "object": "fine_tuning.job"
+}
+```
+
+To retrieve the fine-tuning job by its ID and check its status, you can run:
+
+```python
+response = openai.FineTuningJob.retrieve(job_id)
+
+print("Job ID:", response["id"])
+print("Status:", response["status"])
+print(response)
+```
+
+**Output:**
+
+```output
+Fine-tuning model with job ID: ftjob-0f4191f0c59a4256b7a797a3d9eed219.
+```
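+
+You can also list the individual events that the service logs for the job, which can help you spot data or configuration problems early. This is a minimal sketch, assuming your installed 0.28-style SDK exposes the `openai.FineTuningJob.list_events` helper; the `limit` of 10 is only illustrative.
+
+```python
+# List the most recent events recorded for this fine-tuning job.
+events = openai.FineTuningJob.list_events(id=job_id, limit=10)
+
+for event in events["data"]:
+    print(event["message"])
+```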
+
+## Track training job status
+
+If you would like to poll the training job status until it's complete, you can run:
+
+```python
+# Track training status
+
+from IPython.display import clear_output
+import time
+
+start_time = time.time()
+
+# Get the status of our fine-tuning job.
+response = openai.FineTuningJob.retrieve(job_id)
+
+status = response["status"]
+
+# If the job isn't done yet, poll it every 10 seconds.
+while status not in ["succeeded", "failed"]:
+    time.sleep(10)
+
+    response = openai.FineTuningJob.retrieve(job_id)
+    print(response)
+    print("Elapsed time: {} minutes {} seconds".format(int((time.time() - start_time) // 60), int((time.time() - start_time) % 60)))
+    status = response["status"]
+    print(f'Status: {status}')
+    clear_output(wait=True)
+
+print(f'Fine-tuning job {job_id} finished with status: {status}')
+
+# List all fine-tuning jobs for this resource.
+print('Checking other fine-tune jobs for this resource.')
+response = openai.FineTuningJob.list()
+print(f'Found {len(response["data"])} fine-tune jobs.')
+```
+
+**Output:**
+
+```output
+{
+ "hyperparameters": {
+ "n_epochs": 2
+ },
+ "status": "running",
+ "model": "gpt-35-turbo-0613",
+ "training_file": "file-9ace76cb11f54fdd8358af27abf4a3ea",
+ "validation_file": "file-70a3f525ed774e78a77994d7a1698c4b",
+ "id": "ftjob-0f4191f0c59a4256b7a797a3d9eed219",
+ "created_at": 1695307968,
+ "updated_at": 1695310376,
+ "object": "fine_tuning.job"
+}
+Elapsed time: 40 minutes 45 seconds
+Status: running
+```
+
+It isn't unusual for training to take more than an hour to complete. Once training is completed, the output message changes to:
+
+```output
+Fine-tuning job ftjob-b044a9d3cf9c4228b5d393567f693b83 finished with status: succeeded
+Checking other fine-tuning jobs for this resource.
+Found 2 fine-tune jobs.
+```
+
+To get the full results, run the following:
+
+```python
+#Retrieve fine_tuned_model name
+
+response = openai.FineTuningJob.retrieve(job_id)
+
+print(response)
+fine_tuned_model = response["fine_tuned_model"]
+```
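+
+The completed job response also typically includes a `result_files` list that points to a file of per-step training metrics. The following is a minimal sketch for downloading the first result file, reusing the `response` object from the previous cell; it assumes your SDK version exposes `openai.File.download` and that the job returned at least one result file.
+
+```python
+# Download the first result file (per-step training metrics), if one was produced.
+result_files = response.get("result_files") or []
+
+if result_files:
+    result_file_id = result_files[0]
+    metrics_bytes = openai.File.download(result_file_id)  # raw bytes of the metrics file
+    with open("fine_tune_results.csv", "wb") as f:
+        f.write(metrics_bytes)
+    print(f"Saved metrics from {result_file_id} to fine_tune_results.csv")
+else:
+    print("No result files were returned for this job.")
+```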
+
+## Deploy fine-tuned model
+
+Since the introduction of the quota feature, model deployment can't be done with the Python SDK commands used earlier in this tutorial. Instead, deploy the fine-tuned model by using the [REST API](/rest/api/cognitiveservices/accountmanagement/deployments/create-or-update?tabs=HTTP), which requires separate authorization, a different API path, and a different API version.
+
+Alternatively, you can deploy your fine-tuned model using any of the other common deployment methods like [Azure OpenAI Studio](https://oai.azure.com/), or [Azure CLI](/cli/azure/cognitiveservices/account/deployment#az-cognitiveservices-account-deployment-create()).
+
+| Variable | Definition |
+|--|--|
+| token | There are multiple ways to generate an authorization token. The easiest method for initial testing is to launch the Cloud Shell from the [Azure portal](https://portal.azure.com). Then run [`az account get-access-token`](/cli/azure/account#az-account-get-access-token()). You can use this token as your temporary authorization token for API testing. We recommend storing this in a new environment variable. |
+| subscription | The subscription ID for the associated Azure OpenAI resource. |
+| resource_group | The resource group name for your Azure OpenAI resource. |
+| resource_name | The Azure OpenAI resource name. |
+| model_deployment_name | The custom name for your new fine-tuned model deployment. This is the name that will be referenced in your code when making chat completion calls. |
+| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-35-turbo-0613.ft-b044a9d3cf9c4228b5d393567f693b83`. You will need to add that value to the `deploy_data` JSON. |
++
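+
+If you prefer not to copy a token out of the Cloud Shell, you can also acquire an Azure Resource Manager token programmatically with the `azure-identity` package. This is a minimal sketch, not part of the original steps; it assumes the `azure-identity` package is installed and that `DefaultAzureCredential` can find a credential on your machine (for example, an existing Azure CLI sign-in). The resulting value can be used in place of the `TEMP_AUTH_TOKEN` environment variable read below.
+
+```python
+# Acquire an ARM access token with azure-identity instead of the Cloud Shell.
+from azure.identity import DefaultAzureCredential
+
+credential = DefaultAzureCredential()
+token = credential.get_token("https://management.azure.com/.default").token
+```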
+```python
+import json
+import requests
+
+token= os.getenv("TEMP_AUTH_TOKEN")
+subscription = "<YOUR_SUBSCRIPTION_ID>"
+resource_group = "<YOUR_RESOURCE_GROUP_NAME>"
+resource_name = "<YOUR_AZURE_OPENAI_RESOURCE_NAME>"
+model_deployment_name ="YOUR_CUSTOM_MODEL_DEPLOYMENT_NAME"
+
+deploy_params = {'api-version': "2023-05-01"}
+deploy_headers = {'Authorization': 'Bearer {}'.format(token), 'Content-Type': 'application/json'}
+
+deploy_data = {
+ "sku": {"name": "standard", "capacity": 1},
+ "properties": {
+ "model": {
+ "format": "OpenAI",
+ "name": "<YOUR_FINE_TUNED_MODEL>", #retrieve this value from the previous call, it will look like gpt-35-turbo-0613.ft-b044a9d3cf9c4228b5d393567f693b83
+ "version": "1"
+ }
+ }
+}
+deploy_data = json.dumps(deploy_data)
+
+request_url = f'https://management.azure.com/subscriptions/{subscription}/resourceGroups/{resource_group}/providers/Microsoft.CognitiveServices/accounts/{resource_name}/deployments/{model_deployment_name}'
+
+print('Creating a new deployment...')
+
+r = requests.put(request_url, params=deploy_params, headers=deploy_headers, data=deploy_data)
+
+print(r)
+print(r.reason)
+print(r.json())
+```
+
+You can check on your deployment progress in the Azure OpenAI Studio:
++
+It isn't uncommon for this process to take some time to complete when deploying fine-tuned models.
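+
+If you'd rather check progress from code than from the Studio, you can poll the same ARM deployment resource that the previous call created. This is a minimal sketch rather than an official snippet; it reuses `request_url`, `deploy_params`, and `deploy_headers` from the deployment step and assumes the response body exposes a `properties.provisioningState` value that ends in `Succeeded`, `Failed`, or `Canceled`.
+
+```python
+# Poll the ARM deployment resource until provisioning reaches a terminal state.
+import time
+import requests
+
+while True:
+    poll = requests.get(request_url, params=deploy_params, headers=deploy_headers)
+    state = poll.json().get("properties", {}).get("provisioningState")
+    print(f"Deployment provisioning state: {state}")
+    if state in ("Succeeded", "Failed", "Canceled"):
+        break
+    time.sleep(30)
+```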
+
+## Use a deployed customized model
+
+After your fine-tuned model is deployed, you can use it like any other deployed model, either in the [Chat Playground of Azure OpenAI Studio](https://oai.azure.com) or via the chat completion API. For example, you can send a chat completion call to your deployed model, as shown in the following Python example. You can continue to use the same parameters with your customized model, such as temperature and max_tokens, as with other deployed models.
+
+```python
+#Note: The openai-python library support for Azure OpenAI is in preview.
+import os
+import openai
+openai.api_type = "azure"
+openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
+openai.api_version = "2023-05-15"
+openai.api_key = os.getenv("AZURE_OPENAI_KEY")
+
+response = openai.ChatCompletion.create(
+ engine="gpt-35-turbo-ft", # engine = "Custom deployment name you chose for your fine-tuning model"
+ messages=[
+ {"role": "system", "content": "You are a helpful assistant."},
+ {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
+ {"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},
+ {"role": "user", "content": "Do other Azure AI services support this too?"}
+ ]
+)
+
+print(response)
+print(response['choices'][0]['message']['content'])
+```
+
+## Delete deployment
+
+Unlike other types of Azure OpenAI models, fine-tuned/customized models have [an hourly hosting cost](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/#pricing) associated with them once they're deployed. Once you're done with this tutorial and have tested a few chat completion calls against your fine-tuned model, we **strongly recommend** that you **delete the model deployment**.
+
+Deleting the deployment won't affect the model itself, so you can re-deploy the fine-tuned model that you trained for this tutorial at any time.
+
+You can delete the deployment in [Azure OpenAI Studio](https://oai.azure.com/), via [REST API](/rest/api/cognitiveservices/accountmanagement/deployments/delete?tabs=HTTP), [Azure CLI](/cli/azure/cognitiveservices/account/deployment#az-cognitiveservices-account-deployment-delete()), or other supported deployment methods.
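+
+For a scripted cleanup, the same ARM endpoint that created the deployment also accepts a DELETE request. This is a minimal sketch that reuses the `request_url`, `deploy_params`, and `deploy_headers` values from the deployment step; it removes only the deployment, not the fine-tuned model itself.
+
+```python
+# Delete the fine-tuned model deployment through the ARM REST API.
+import requests
+
+delete_response = requests.delete(request_url, params=deploy_params, headers=deploy_headers)
+print(delete_response.status_code, delete_response.reason)
+```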
+
+## Troubleshooting
+
+### How do I enable fine-tuning when **Create a custom model** is greyed out in Azure OpenAI Studio?
+
+To successfully access fine-tuning, you need the **Cognitive Services OpenAI Contributor** role assigned. Even someone with high-level Service Administrator permissions still needs this role explicitly assigned in order to access fine-tuning. For more information, review the [role-based access control guidance](/azure/ai-services/openai/how-to/role-based-access-control#cognitive-services-openai-contributor).
+
+## Next steps
+
+- Learn more about [fine-tuning in Azure OpenAI](../how-to/fine-tuning.md)
+- Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md#fine-tuning-models-preview).
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
Previously updated : 09/20/2023 Last updated : 10/16/2023 recommendations: false keywords: # What's new in Azure OpenAI Service
+## October 2023
+
+### New fine-tuning models (preview)
+
+- `gpt-35-turbo-0613` is [now available for fine-tuning](./how-to/fine-tuning.md).
+
+- `babbage-002` and `davinci-002` are [now available for fine-tuning](./how-to/fine-tuning.md). These models replace the legacy ada, babbage, curie, and davinci base models that were previously available for fine-tuning.
+
+- Fine-tuning availability is limited to certain regions. Check the [models page](concepts/models.md#fine-tuning-models-preview) for the latest information on model availability in each region.
+
+- Fine-tuned models have different [quota limits](quotas-limits.md) than regular models.
+
+- [Tutorial: fine-tuning GPT-3.5-Turbo](./tutorials/fine-tune.md)
+ ## September 2023 ### GPT-4
-GPT-4 and GPT-4-32k are now available to all Azure OpenAI Service customers. Customers no longer need to apply for the waitlist to use GPT-4 and GPT-4-32k (the Limited Access registration requirements continue to apply for all Azure OpenAI models). Availability may vary by region. Check the [models page](concepts/models.md), for the latest information on model availability in each region.
+GPT-4 and GPT-4-32k are now available to all Azure OpenAI Service customers. Customers no longer need to apply for the waitlist to use GPT-4 and GPT-4-32k (the Limited Access registration requirements continue to apply for all Azure OpenAI models). Availability might vary by region. Check the [models page](concepts/models.md) for the latest information on model availability in each region.
### GPT-3.5 Turbo Instruct
ai-services Migrate To Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/migrate-to-openai.md
QnA Maker was designed to be a cloud-based Natural Language Processing (NLP) ser
## Prerequisites * A QnA Maker project.
-* An existing Azure Open AI resource. If you don't already have an Azure Open AI resource, then [create one and deploy a model](../../openai/how-to/create-resource.md).
+* An existing Azure OpenAI resource. If you don't already have an Azure OpenAI resource, then [create one and deploy a model](../../openai/how-to/create-resource.md).
* Azure OpenAI requires registration and is currently only available to approved enterprise customers and partners. See [Limited access to Azure OpenAI Service](/legal/cognitive-services/openai/limited-access?context=/azure/ai-services/openai/context/context) for more information. You can apply for access to Azure OpenAI by completing the form at https://aka.ms/oai/access. Open an issue on this repo to contact us if you have an issue. * Be sure that you are assigned at least the [Cognitive Services OpenAI Contributor role](/azure/role-based-access-control/built-in-roles#cognitive-services-openai-contributor) for the Azure OpenAI resource.
QnA Maker was designed to be a cloud-based Natural Language Processing (NLP) ser
:::image type="content" source="../media/openai/search-service.png" alt-text="A screenshot showing a QnA Maker project's search service in the Azure portal." lightbox="../media/openai/search-service.png":::
-1. Select the search service and open its **Overview** section. Note down the details, such as the Azure Search resource name, subscription, and location. You will need this information when you migrate to Azure Open AI.
+1. Select the search service and open its **Overview** section. Note down the details, such as the Azure Search resource name, subscription, and location. You will need this information when you migrate to Azure OpenAI.
:::image type="content" source="../media/openai/search-service-details.png" alt-text="A screenshot showing a QnA Maker project's search service details in the Azure portal." lightbox="../media/openai/search-service-details.png":::
QnA Maker was designed to be a cloud-based Natural Language Processing (NLP) ser
You can now start exploring Azure OpenAI capabilities with a no-code approach through the chat playground. It's simply a text box where you can submit a prompt to generate a completion. From this page, you can quickly iterate and experiment with the capabilities. You can also launch a [web app](../../openai/concepts/use-your-data.md#using-the-web-app) to chat with the model over the web. ## Next steps
-* [Using Azure OpenAI on your data](../../openai/concepts/use-your-data.md)
+* [Using Azure OpenAI on your data](../../openai/concepts/use-your-data.md)
ai-services Captioning Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/captioning-concepts.md
The following are aspects to consider when using captioning:
> [!TIP] > Try the [Speech Studio](https://aka.ms/speechstudio/captioning) and choose a sample video clip to see real-time or offline processed captioning results. >
-> Try the [Azure AI Video Indexer](../../azure-video-indexer/video-indexer-overview.md) as a demonstration of how you can get captions for videos that you upload.
+> Try the [Azure AI Video Indexer](/azure/azure-video-indexer/video-indexer-overview) as a demonstration of how you can get captions for videos that you upload.
Captioning can accompany real-time or pre-recorded speech. Whether you're showing captions in real-time or with a recording, you can use the [Speech SDK](speech-sdk.md) or [Speech CLI](spx-overview.md) to recognize speech and get transcriptions. You can also use the [Batch transcription API](batch-transcription.md) for pre-recorded video.
ai-services What Are Ai Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/what-are-ai-services.md
Select a service from the table below and learn how it can help you meet your de
| ![QnA Maker icon](media/service-icons/luis.svg) [QnA maker](./qnamaker/index.yml) (retired) | Distill information into easy-to-navigate questions and answers | | ![Speech icon](media/service-icons/speech.svg) [Speech](./speech-service/index.yml) | Speech to text, text to speech, translation and speaker recognition | | ![Translator icon](media/service-icons/translator.svg) [Translator](./translator/index.yml) | Translate more than 100 languages and dialects |
-| ![Video Indexer icon](media/service-icons/video-indexer.svg) [Video Indexer](../azure-video-indexer/index.yml) | Extract actionable insights from your videos |
+| ![Video Indexer icon](media/service-icons/video-indexer.svg) [Video Indexer](/azure/azure-video-indexer/) | Extract actionable insights from your videos |
| ![Vision icon](media/service-icons/vision.svg) [Vision](./computer-vision/index.yml) | Analyze content in images and videos | ## Pricing tiers and billing
aks Concepts Clusters Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-clusters-workloads.md
For associated best practices, see [Best practices for basic scheduler features
### Node pools
+> [!NOTE]
+> The Azure Linux node pool is now generally available (GA). To learn about the benefits and deployment steps, see the [Introduction to the Azure Linux Container Host for AKS][intro-azure-linux].
+ Nodes of the same configuration are grouped together into *node pools*. A Kubernetes cluster contains at least one node pool. The initial number of nodes and size are defined when you create an AKS cluster, which creates a *default node pool*. This default node pool in AKS contains the underlying VMs that run your agent nodes. > [!NOTE]
This article covers some of the core Kubernetes components and how they apply to
[aks-service-level-agreement]: faq.md#does-aks-offer-a-service-level-agreement [aks-tags]: use-tags.md [aks-support]: support-policies.md#user-customization-of-agent-nodes
+[intro-azure-linux]: ../azure-linux/intro-azure-linux.md
aks Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-security.md
description: Learn about security in Azure Kubernetes Service (AKS), including m
Previously updated : 02/28/2023 Last updated : 07/18/2023
This article introduces the core concepts that secure your applications in AKS.
## Build Security
-As the entry point for the Supply Chain, it's important to conduct static analysis of image builds before they're promoted down the pipeline, which includes vulnerability and compliance assessment. It's not about failing a build because it has a vulnerability, as that breaks development. It's about looking at the **Vendor Status** to segment based on vulnerabilities that are actionable by the development teams. Also use **Grace Periods** to allow developers time to remediate identified issues.
+As the entry point for the supply chain, it is important to conduct static analysis of image builds before they are promoted down the pipeline. This includes vulnerability and compliance assessment. It is not about failing a build because it has a vulnerability, as that breaks development. It's about looking at the **Vendor Status** to segment based on vulnerabilities that are actionable by the development teams. Also use **Grace Periods** to allow developers time to remediate identified issues.
## Registry Security
AKS nodes are Azure virtual machines (VMs) that you manage and maintain.
When an AKS cluster is created or scaled up, the nodes are automatically deployed with the latest OS security updates and configurations. > [!NOTE]
-> AKS clusters using:
-> * Kubernetes version 1.19 and greater for Linux node pools use `containerd` as its container runtime. Using `containerd` with Windows Server 2019 node pools is currently in preview. For more information, see [Add a Windows Server node pool with `containerd`][aks-add-np-containerd].
-> * Kubernetes prior to v1.19 for Linux node pools use Docker as its container runtime. For Windows Server 2019 node pools, Docker is the default container runtime.
+> AKS clusters running:
+> * Kubernetes version 1.19 and higher - Linux node pools use `containerd` as their container runtime. Windows Server 2019 node pools use `containerd` as their container runtime, which is currently in preview. For more information, see [Add a Windows Server node pool with `containerd`][aks-add-np-containerd].
+> * Kubernetes versions earlier than 1.19 - Linux node pools use Docker as their container runtime. Windows Server 2019 node pools use Docker as the default container runtime.
For more information about the security upgrade process for Linux and Windows worker nodes, see [Security patching nodes][aks-vulnerability-management-nodes].
Node authorization is a special-purpose authorization mode that specifically aut
### Node deployment
-Nodes are deployed into a private virtual network subnet with no public IP addresses assigned. SSH is enabled by default for troubleshooting and management purposes and is only accessible using the internal IP address.
+Nodes are deployed onto a private virtual network subnet, with no public IP addresses assigned. For troubleshooting and management purposes, SSH is enabled by default and is only accessible using the internal IP address. Disabling SSH, either during cluster and node pool creation or on an existing cluster or node pool, is currently in preview. See [Manage SSH access][manage-ssh-access] for more information.
### Node storage
For more information on core Kubernetes and AKS concepts, see:
- [Kubernetes / AKS scale][aks-concepts-scale] <!-- LINKS - External -->
-[kured]: https://github.com/kubereboot/kured
-[kubernetes-network-policies]: https://kubernetes.io/docs/concepts/services-networking/network-policies/
[secret-risks]: https://kubernetes.io/docs/concepts/configuration/secret/#risks [encryption-atrest]: ../security/fundamentals/encryption-atrest.md <!-- LINKS - Internal --> [microsoft-defender-for-containers]: ../defender-for-cloud/defender-for-containers-introduction.md
-[aks-daemonsets]: concepts-clusters-workloads.md#daemonsets
[aks-upgrade-cluster]: upgrade-cluster.md [aks-aad]: ./managed-azure-ad.md
-[aks-add-np-containerd]: /azure/aks/create-node-pools
+[aks-add-np-containerd]: create-node-pools.md
[aks-concepts-clusters-workloads]: concepts-clusters-workloads.md [aks-concepts-identity]: concepts-identity.md [aks-concepts-scale]: concepts-scale.md [aks-concepts-storage]: concepts-storage.md [aks-concepts-network]: concepts-network.md
-[aks-kured]: node-updates-kured.md
[aks-limit-egress-traffic]: limit-egress-traffic.md [cluster-isolation]: operator-best-practices-cluster-isolation.md [operator-best-practices-cluster-security]: operator-best-practices-cluster-security.md [developer-best-practices-pod-security]:developer-best-practices-pod-security.md
-[nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool
[authorized-ip-ranges]: api-server-authorized-ip-ranges.md [private-clusters]: private-clusters.md [network-policy]: use-network-policies.md
-[node-image-upgrade]: node-image-upgrade.md
[microsoft-vulnerability-management-aks]: concepts-vulnerability-management.md [aks-vulnerability-management-nodes]: concepts-vulnerability-management.md#worker-nodes
+[manage-ssh-access]: manage-ssh-node-access.md
aks Intro Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/intro-kubernetes.md
For more information, see [Confidential computing nodes on AKS][conf-com-node].
### Azure Linux nodes
+> [!NOTE]
+> The Azure Linux node pool is now generally available (GA). To learn about the benefits and deployment steps, see the [Introduction to the Azure Linux Container Host for AKS][intro-azure-linux].
+ The Azure Linux container host for AKS is an open-source Linux distribution created by Microsoft, and itΓÇÖs available as a container host on Azure Kubernetes Service (AKS). The Azure Linux container host for AKS provides reliability and consistency from cloud to edge across the AKS, AKS-HCI, and Arc products. You can deploy Azure Linux node pools in a new cluster, add Azure Linux node pools to your existing Ubuntu clusters, or migrate your Ubuntu nodes to Azure Linux nodes. For more information, see [Use the Azure Linux container host for AKS](use-azure-linux.md).
Learn more about deploying and managing AKS.
[azure-monitor-logs]: ../azure-monitor/logs/data-platform-logs.md [helm]: quickstart-helm.md [aks-best-practices]: best-practices.md
+[intro-azure-linux]: ../azure-linux/intro-azure-linux.md
aks Quick Kubernetes Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-cli.md
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
> [!NOTE] > If you plan to run the commands locally instead of in Azure Cloud Shell, make sure you run the commands with administrative privileges.
+> [!NOTE]
+> The Azure Linux node pool is now generally available (GA). To learn about the benefits and deployment steps, see the [Introduction to the Azure Linux Container Host for AKS][intro-azure-linux].
+ ## Create a resource group An [Azure resource group][azure-resource-group] is a logical group in which Azure resources are deployed and managed. When you create a resource group, you're prompted to specify a location. This location is the storage location of your resource group metadata and where your resources run in Azure if you don't specify another region during resource creation.
This quickstart is for introductory purposes. For guidance on creating full solu
[kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests [kubernetes-service]: ../concepts-network.md#services [aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?WT.mc_id=AKSDOCSPAGE
+[intro-azure-linux]: ../../azure-linux/intro-azure-linux.md
aks Quick Kubernetes Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-portal.md
This quickstart assumes a basic understanding of Kubernetes concepts. For more i
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - If you're unfamiliar with the Azure Cloud Shell, review [Overview of Azure Cloud Shell](../../cloud-shell/overview.md).- - The identity you're using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
+> [!NOTE]
+> The Azure Linux node pool is now generally available (GA). To learn about the benefits and deployment steps, see the [Introduction to the Azure Linux Container Host for AKS][intro-azure-linux].
+ ## Create an AKS cluster 1. Sign in to the [Azure portal](https://portal.azure.com).
To learn more about AKS by walking through a complete example, including buildin
[http-routing]: ../http-application-routing.md [preset-config]: ../quotas-skus-regions.md#cluster-configuration-presets-in-the-azure-portal [sp-delete]: ../kubernetes-service-principal.md#additional-considerations
+[intro-azure-linux]: ../../azure-linux/intro-azure-linux.md
aks Quick Kubernetes Deploy Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-terraform.md
In this article, you learn how to:
## Prerequisites - [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)- - **Kubernetes command-line tool (kubectl):** [Download kubectl](https://kubernetes.io/releases/download/).
+> [!NOTE]
+> The Azure Linux node pool is now generally available (GA). To learn about the benefits and deployment steps, see the [Introduction to the Azure Linux Container Host for AKS][intro-azure-linux].
+ ## Login to your Azure Account [!INCLUDE [authenticate-to-azure.md](~/azure-dev-docs-pr/articles/terraform/includes/authenticate-to-azure.md)]
Two [Kubernetes Services](/azure/aks/concepts-network#services) are created:
> [!div class="nextstepaction"] > [Learn more about using AKS](/azure/aks)+
+<!-- LINKS - Internal -->
+[intro-azure-linux]: ../../azure-linux/intro-azure-linux.md
aks Manage Ssh Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-ssh-node-access.md
+
+ Title: Manage SSH access on Azure Kubernetes Service cluster nodes
+
+description: Learn how to configure SSH on Azure Kubernetes Service (AKS) cluster nodes.
+ Last updated : 10/16/2023++
+# Manage SSH for secure access to Azure Kubernetes Service (AKS) nodes
+
+This article describes how to update the SSH key on your AKS clusters or node pools.
++
+## Before you begin
+
+* You need the Azure CLI version 2.46.0 or later installed and configured. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+* This feature supports Linux, Mariner, and CBLMariner node pools on existing clusters.
+
+## Update SSH public key on an existing AKS cluster
+
+Use the [az aks update][az-aks-update] command to update the SSH public key on your cluster. This operation updates the key on all node pools. You can either specify the key or a key file using the `--ssh-key-value` argument.
+
+> [!NOTE]
+> Updating the SSH key is supported on AKS clusters that use Azure virtual machine scale sets.
+
+|SSH parameter |Description |Default value |
+|--|--|--|
+|--ssh-key-value |Public key path or key contents to install on node VMs for SSH access. For example, `ssh-rsa AAAAB...snip...UcyupgH azureuser@linuxvm`.|`~/.ssh/id_rsa.pub` |
+|--no-ssh-key |Do not use or create a local SSH key. |False |
+
+The following are examples of this command:
+
+* To specify the new SSH public key value, include the `--ssh-key-value` argument:
+
+ ```azurecli
+ az aks update --name myAKSCluster --resource-group MyResourceGroup --ssh-key-value 'ssh-rsa AAAAB3Nza-xxx'
+ ```
+
+* To specify an SSH public key file, specify it with the `--ssh-key-value` argument:
+
+ ```azurecli
+ az aks update --name myAKSCluster --resource-group MyResourceGroup --ssh-key-value ~/.ssh/id_rsa.pub
+ ```
+
+> [!IMPORTANT]
+> After you update the SSH key, AKS doesn't automatically reimage your node pool. You can choose to perform a [reimage operation][node-image-upgrade] at any time. The updated SSH key only takes effect after the reimage operation completes.
+
+## Next steps
+
+To help troubleshoot any issues with SSH connectivity to your cluster's nodes, you can [view the kubelet logs][view-kubelet-logs] or [view the Kubernetes master node logs][view-master-logs].
+
+<!-- LINKS - external -->
+
+<!-- LINKS - internal -->
+[install-azure-cli]: /cli/azure/install-azure-cli
+[az-aks-update]: /cli/azure/aks#az-aks-update
+[view-kubelet-logs]: kubelet-logs.md
+[view-master-logs]: monitor-aks-reference.md#resource-logs
+[node-image-upgrade]: node-image-upgrade.md
aks Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-access.md
Title: Connect to Azure Kubernetes Service (AKS) cluster nodes description: Learn how to connect to Azure Kubernetes Service (AKS) cluster nodes for troubleshooting and maintenance tasks. Previously updated : 09/06/2023 Last updated : 10/04/2023 #Customer intent: As a cluster operator, I want to learn how to connect to virtual machines in an AKS cluster to perform maintenance or troubleshoot a problem.
# Connect to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting
-Throughout the lifecycle of your Azure Kubernetes Service (AKS) cluster, you might need to access an AKS node. This access could be for maintenance, log collection, or troubleshooting operations. You can securely authenticate against AKS Linux and Windows nodes using SSH, and you can also [connect to Windows Server nodes using remote desktop protocol (RDP)][aks-windows-rdp]. For security reasons, the AKS nodes aren't exposed to the internet. To connect to the AKS nodes, you use `kubectl debug` or the private IP address.
+Throughout the lifecycle of your Azure Kubernetes Service (AKS) cluster, you might need to access an AKS node. This access could be for maintenance, log collection, or troubleshooting operations. You can securely authenticate against AKS Linux and Windows nodes using SSH, and you can also [connect to Windows Server nodes using remote desktop protocol (RDP)][aks-windows-rdp]. For security reasons, the AKS nodes aren't exposed to the internet. To connect to the AKS nodes, you use `kubectl debug` or the private IP address.
This article shows you how to create a connection to an AKS node and update the SSH key on an existing AKS cluster. ## Before you begin
-This article assumes you have an SSH key. If not, you can create an SSH key using [macOS or Linux][ssh-nix] or [Windows][ssh-windows]. Make sure you save the key pair in an OpenSSH format, other formats like .ppk aren't supported.
+* You have an SSH key. If you don't, you can create an SSH key using [macOS or Linux][ssh-nix] or [Windows][ssh-windows]. Save the key pair in OpenSSH format; other formats like `.ppk` aren't supported.
-You also need the Azure CLI version 2.0.64 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+* The Azure CLI version 2.0.64 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
## Create an interactive shell connection to a Linux node
To create an interactive shell connection to a Linux node, use the `kubectl debu
```bash kubectl get nodes -o wide ```
-
+ The following example resembles output from the command:
-
+ ```output NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME aks-nodepool1-37663765-vmss000000 Ready agent 166m v1.25.6 10.224.0.33 <none> Ubuntu 22.04.2 LTS 5.15.0-1039-azure containerd://1.7.1+azure-1
To create an interactive shell connection to a Linux node, use the `kubectl debu
If you don't see a command prompt, try pressing enter. root@aks-nodepool1-37663765-vmss000000:/# ```
-
+ This privileged container gives access to the node.
-
+ > [!NOTE] > You can interact with the node session by running `chroot /host` from the privileged container.
kubectl delete pod node-debugger-aks-nodepool1-37663765-vmss000000-bkmmx
## Create the SSH connection to a Windows node
-At this time, you can't connect to a Windows Server node directly by using `kubectl debug`. Instead, you need to first connect to another node in the cluster, then connect to the Windows Server node from that node using SSH. Alternatively, you can [connect to Windows Server nodes using remote desktop protocol (RDP) connections][aks-windows-rdp] instead of using SSH.
+Currently, you can't connect to a Windows Server node directly by using `kubectl debug`. Instead, you need to first connect to another node in the cluster, and then connect to the Windows Server node from that node using SSH. Alternatively, you can [connect to Windows Server nodes using remote desktop protocol (RDP) connections][aks-windows-rdp] instead of using SSH.
To connect to another node in the cluster, use the `kubectl debug` command. For more information, see [Create an interactive shell connection to a Linux node][ssh-linux-kubectl-debug].
To create the SSH connection to the Windows Server node from another node, use t
### Create the SSH connection to a Windows node using a password
-If you didn't create your AKS cluster using the Azure CLI and the `--generate-ssh-keys` parameter, you'll use a password instead of an SSH key to create the SSH connection. To do this with Azure CLI, use the following steps. Replace `<nodeRG>` with a resource group name and `<vmssName>` with the scale set name in that resource group.
+If you didn't create your AKS cluster using the Azure CLI and the `--generate-ssh-keys` parameter, you'll use a password instead of an SSH key to create the SSH connection. To do this with Azure CLI, perform the following steps. Replace `<nodeRG>` with a resource group name and `<vmssName>` with the scale set name in that resource group.
1. Create a root user called `azureuser`.
When done, `exit` the SSH session, stop any port forwarding, and then `exit` the
kubectl delete pod node-debugger-aks-nodepool1-37663765-vmss000000-bkmmx ```
-## Update SSH public key on an existing AKS cluster (preview)
-
-### Prerequisites
-
-* Ensure the Azure CLI is installed and configured. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-* Ensure that the aks-preview extension version 0.5.111 or later. To learn how to install an Azure extension, see [How to install extensions][how-to-install-azure-extensions].
-
-> [!NOTE]
-> Updating of the SSH key is supported on Azure virtual machine scale sets with AKS clusters.
-
-Use the [az aks update][az-aks-update] command to update the SSH public key on the cluster. This operation updates the key on all node pools. You can either specify the key or a key file using the `--ssh-key-value` argument.
-
-```azurecli
-az aks update --name myAKSCluster --resource-group MyResourceGroup --ssh-key-value <new SSH key value or SSH key file>
-```
-
-The following examples demonstrate possible usage of this command:
-
-* You can specify the new SSH public key value for the `--ssh-key-value` argument:
-
- ```azurecli
- az aks update --name myAKSCluster --resource-group MyResourceGroup --ssh-key-value 'ssh-rsa AAAAB3Nza-xxx'
- ```
-
-* You specify an SSH public key file:
-
- ```azurecli
- az aks update --name myAKSCluster --resource-group MyResourceGroup --ssh-key-value ~/.ssh/id_rsa.pub
- ```
-
-> [!IMPORTANT]
-> After you update SSH key, AKS doesn't automatically reimage your node pool, you can choose anytime to perform [the reimage operation][node-image-upgrade]. Only after reimage is complete, does the update SSH key operation take effect.
-- ## Next steps
-If you need more troubleshooting data, you can [view the kubelet logs][view-kubelet-logs] or [view the Kubernetes master node logs][view-master-logs].
+* To help troubleshoot any issues with SSH connectivity to your cluster's nodes, you can [view the kubelet logs][view-kubelet-logs] or [view the Kubernetes master node logs][view-master-logs].
+* See [Manage SSH configuration][manage-ssh-node-access] to learn about managing the SSH key on an AKS cluster or node pools.
<!-- INTERNAL LINKS --> [view-kubelet-logs]: kubelet-logs.md
If you need more troubleshooting data, you can [view the kubelet logs][view-kube
[ssh-nix]: ../virtual-machines/linux/mac-create-ssh-keys.md [ssh-windows]: ../virtual-machines/linux/ssh-from-windows.md [ssh-linux-kubectl-debug]: #create-an-interactive-shell-connection-to-a-linux-node
-[az-aks-update]: /cli/azure/aks#az-aks-update
-[how-to-install-azure-extensions]: /cli/azure/azure-cli-extensions-overview#how-to-install-extensions
-[node-image-upgrade]:node-image-upgrade.md
+[manage-ssh-node-access]: manage-ssh-node-access.md
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/private-clusters.md
Private cluster is available in public regions, Azure Government, and Microsoft
* To use a custom DNS server, add the Azure public IP address 168.63.129.16 as the upstream DNS server in the custom DNS server, and make sure to add this public IP address as the *first* DNS server. For more information about the Azure IP address, see [What is IP address 168.63.129.16?][virtual-networks-168.63.129.16] * The cluster's DNS zone should be what you forward to 168.63.129.16. You can find more information on zone names in [Azure services DNS zone configuration][az-dns-zone].
+> [!NOTE]
+> The Azure Linux node pool is now generally available (GA). To learn about the benefits and deployment steps, see the [Introduction to the Azure Linux Container Host for AKS][intro-azure-linux].
+ ## Limitations * IP authorized ranges can't be applied to the private API server endpoint, they only apply to the public API server
For associated best practices, see [Best practices for network connectivity and
[az-network-private-dns-link-vnet-create]: /cli/azure/network/private-dns/link/vnet#az_network_private_dns_link_vnet_create [az-network-vnet-peering-create]: /cli/azure/network/vnet/peering#az_network_vnet_peering_create [az-network-vnet-peering-list]: /cli/azure/network/vnet/peering#az_network_vnet_peering_list
+[intro-azure-linux]: ../azure-linux/intro-azure-linux.md
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-cluster.md
Title: Upgrade an Azure Kubernetes Service (AKS) cluster
description: Learn how to upgrade an Azure Kubernetes Service (AKS) cluster to get the latest features and security updates. Previously updated : 09/14/2023 Last updated : 10/16/2023
Part of the AKS cluster lifecycle involves performing periodic upgrades to the l
For AKS clusters that use multiple node pools or Windows Server nodes, see [Upgrade a node pool in AKS][nodepool-upgrade]. To upgrade a specific node pool without performing a Kubernetes cluster upgrade, see [Upgrade a specific node pool][specific-nodepool].
+> [!NOTE]
+> The Azure Linux node pool is now generally available (GA). To learn about the benefits and deployment steps, see the [Introduction to the Azure Linux Container Host for AKS][intro-azure-linux].
+ ## Kubernetes version upgrades When you upgrade a supported AKS cluster, Kubernetes minor versions can't be skipped. You must perform all upgrades sequentially by major version number. For example, upgrades between *1.14.x* -> *1.15.x* or *1.15.x* -> *1.16.x* are allowed, however *1.14.x* -> *1.16.x* isn't allowed.
Skipping multiple versions can only be done when upgrading from an *unsupported
## Before you begin
-* If you're using Azure CLI, this article requires that you're running the Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
-* If you're using Azure PowerShell, this tutorial requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
-* Performing upgrade operations requires the `Microsoft.ContainerService/managedClusters/agentPools/write` RBAC role. For more on Azure RBAC roles, see the [Azure resource provider operations]
+* If you use the Azure CLI, you need Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+* If you use Azure PowerShell, you need Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
+* Performing upgrade operations requires the `Microsoft.ContainerService/managedClusters/agentPools/write` RBAC role. For more information, see [Create custom roles][azure-rbac-provider-operations].
> [!WARNING] > An AKS cluster upgrade triggers a cordon and drain of your nodes. If you have a low compute quota available, the upgrade may fail. For more information, see [increase quotas](../azure-portal/supportability/regional-quota-requests.md).
This article showed you how to upgrade an existing AKS cluster. To learn more ab
<!-- LINKS - internal --> [aks-tutorial-prepare-app]: ./tutorial-kubernetes-prepare-app.md
+[azure-rbac-provider-operations]: manage-azure-rbac.md#create-custom-roles-definitions
[azure-cli-install]: /cli/azure/install-azure-cli [azure-powershell-install]: /powershell/azure/install-az-ps [az-aks-get-upgrades]: /cli/azure/aks#az_aks_get_upgrades
This article showed you how to upgrade an existing AKS cluster. To learn more ab
[set-azakscluster]: /powershell/module/az.aks/set-azakscluster [az-aks-show]: /cli/azure/aks#az_aks_show [get-azakscluster]: /powershell/module/az.aks/get-azakscluster
-[az-extension-add]: /cli/azure/extension#az_extension_add
-[az-extension-update]: /cli/azure/extension#az_extension_update
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-provider-register]: /cli/azure/provider#az_provider_register
[nodepool-upgrade]: manage-node-pools.md#upgrade-a-single-node-pool
-[upgrade-cluster]: #upgrade-an-aks-cluster
[planned-maintenance]: planned-maintenance.md [aks-auto-upgrade]: auto-upgrade-cluster.md [release-tracker]: release-tracker.md
This article showed you how to upgrade an existing AKS cluster. To learn more ab
[k8s-deprecation]: https://kubernetes.io/blog/2022/11/18/upcoming-changes-in-kubernetes-1-26/#:~:text=A%20deprecated%20API%20is%20one%20that%20has%20been,point%20you%20must%20migrate%20to%20using%20the%20replacement [k8s-api]: https://kubernetes.io/docs/reference/using-api/api-concepts/ [container-insights]:/azure/azure-monitor/containers/container-insights-log-query#resource-logs
-[support-policy-user-customizations-agent-nodes]: support-policies.md#user-customization-of-agent-nodes
+[support-policy-user-customizations-agent-nodes]: support-policies.md#user-customization-of-agent-nodes
+[intro-azure-linux]: ../azure-linux/intro-azure-linux.md
aks Use Azure Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-linux.md
The Azure Linux container host on AKS uses a native AKS image that provides one
## How to use Azure Linux on AKS
+> [!NOTE]
+> The Azure Linux node pool is now generally available (GA). To learn about the benefits and deployment steps, see the [Introduction to the Azure Linux Container Host for AKS][azurelinuxdocumentation].
+ To get started using the Azure Linux container host for AKS, see: * [Creating a cluster with Azure Linux][azurelinux-cluster-config]
api-management Api Management Using With Internal Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-using-with-internal-vnet.md
This article explains how to set up VNet connectivity for your API Management in
> [!NOTE] > * None of the API Management endpoints are registered on the public DNS. The endpoints remain inaccessible until you [configure DNS](#dns-configuration) for the VNet.
-> * To use the self-hosted gateway in this mode, also enable private connectivity to the self-hosted gateway [configuration endpoint](self-hosted-gateway-overview.md#fqdn-dependencies). Currently, API Management doesn't enable configuring a custom domain name for the v2 endpoint.
+> * To use the self-hosted gateway in this mode, also enable private connectivity to the self-hosted gateway [configuration endpoint](self-hosted-gateway-overview.md#fqdn-dependencies).
Use API Management in internal mode to:
api-management How To Self Hosted Gateway On Kubernetes In Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-self-hosted-gateway-on-kubernetes-in-production.md
If you use custom domain names for the [API Management endpoints](self-hosted-ga
In this scenario, if the SSL certificate that's used by the Management endpoint isn't signed by a well-known CA certificate, you must make sure that the CA certificate is trusted by the pod of the self-hosted gateway. > [!NOTE]
-> With the self-hosted gateway v2, API Management provides a new configuration endpoint: `<apim-service-name>.configuration.azure-api.net`. Currently, API Management doesn't enable configuring a custom domain name for the v2 configuration endpoint. If you need custom hostname mapping for this endpoint, you may be able to configure an override in the container's local hosts file, for example, using a [`hostAliases`](https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/#adding-additional-entries-with-hostaliases) element in a Kubernetes container spec.
+> With the self-hosted gateway v2, API Management provides a new configuration endpoint: `<apim-service-name>.configuration.azure-api.net`. Custom hostnames are supported for this endpoint and can be used instead of the default hostname.
## DNS policy DNS name resolution plays a critical role in a self-hosted gateway's ability to connect to dependencies in Azure and dispatch API calls to backend services.
automation Automation Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-availability-zones.md
description: This article provides an overview of Azure availability zones and r
keywords: automation availability zones. Previously updated : 04/10/2023 Last updated : 10/16/2023
Automation accounts currently support the following regions:
- Australia East - Brazil South - Canada Central
+- Central India
- Central US - China North 3 - East Asia
Automation accounts currently support the following regions:
- East US 2 - France Central - Germany West Central
+- Israel Central
+- Italy North
- Japan East - Korea Central - North Europe - Norway East
+- Poland Central
- Qatar Central - South Africa North - South Central US - South East Asia - Sweden Central
+- USGov Virginia (Fairfax Private Cloud)
- UK South - West Europe - West US 2
automation Automation Runbook Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-types.md
Following are the limitations of Python runbooks
# [Python 3.8 (GA)](#tab/py38) - You must be familiar with Python scripting.
+- Source control integration isn't supported.
- For Python 3.8 modules, use wheel files targeting cp38-amd64. - To use third-party libraries, you must [import the packages](python-packages.md) into the Automation account. - Using **Start-AutomationRunbook** cmdlet in PowerShell/PowerShell Workflow to start a Python 3.8 runbook doesn't work. You can use **Start-AzAutomationRunbook** cmdlet from Az.Automation module or **Start-AzureRmAutomationRunbook** cmdlet from AzureRm.Automation module to work around this limitation. 
Following are the limitations of Python runbooks
# [Python 3.10 (preview)](#tab/py10) - For Python 3.10 (preview) modules, currently, only the wheel files targeting cp310 Linux OS are supported. [Learn more](./python-3-packages.md)
+- Source control integration isn't supported.
- Custom packages for Python 3.10 (preview) are only validated during job runtime. Job is expected to fail if the package is not compatible in the runtime or if required dependencies of packages aren't imported into automation account. - Currently, Python 3.10 (preview) runbooks are only supported from Azure portal. Rest API and PowerShell aren't supported.
azure-app-configuration Enable Dynamic Configuration Dotnet Core Push Refresh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-dotnet-core-push-refresh.md
Open *Program.cs* and update the file with the following code.
```csharp using Azure.Messaging.EventGrid;
-using Microsoft.Azure.ServiceBus;
+using Azure.Messaging.ServiceBus;
using Microsoft.Extensions.Configuration; using Microsoft.Extensions.Configuration.AzureAppConfiguration; using Microsoft.Extensions.Configuration.AzureAppConfiguration.Extensions;
namespace TestConsole
string serviceBusConnectionString = Environment.GetEnvironmentVariable(ServiceBusConnectionStringEnvVarName); string serviceBusTopic = Environment.GetEnvironmentVariable(ServiceBusTopicEnvVarName); string serviceBusSubscription = Environment.GetEnvironmentVariable(ServiceBusSubscriptionEnvVarName);
- SubscriptionClient serviceBusClient = new SubscriptionClient(serviceBusConnectionString, serviceBusTopic, serviceBusSubscription);
+ ServiceBusClient serviceBusClient = new ServiceBusClient(serviceBusConnectionString);
+ ServiceBusProcessor serviceBusProcessor = serviceBusClient.CreateProcessor(serviceBusTopic, serviceBusSubscription);
- serviceBusClient.RegisterMessageHandler(
- handler: (message, cancellationToken) =>
+ serviceBusProcessor.ProcessMessageAsync += (processMessageEventArgs) =>
{ // Build EventGridEvent from notification message
- EventGridEvent eventGridEvent = EventGridEvent.Parse(BinaryData.FromBytes(message.Body));
+ EventGridEvent eventGridEvent = EventGridEvent.Parse(BinaryData.FromBytes(processMessageEventArgs.Message.Body));
// Create PushNotification from eventGridEvent eventGridEvent.TryCreatePushNotification(out PushNotification pushNotification);
namespace TestConsole
_refresher.ProcessPushNotification(pushNotification); return Task.CompletedTask;
- },
- exceptionReceivedHandler: (exceptionargs) =>
+ };
+
+ serviceBusProcessor.ProcessErrorAsync += (exceptionargs) =>
{ Console.WriteLine($"{exceptionargs.Exception}"); return Task.CompletedTask;
- });
+ };
} } }
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md
Arc resource bridge supports the following Azure regions:
* West Europe * North Europe * UK South
+* UK West
+ * Sweden Central * Canada Central * Australia East
If an Arc resource bridge is unable to be upgraded to a supported version, you m
+
azure-arc Quick Start Connect Vcenter To Arc Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md
This account is used for the ongoing operation of Azure Arc-enabled VMware vSphe
### Workstation
-You need a Windows or Linux machine that can access both your vCenter Server instance and the internet, directly or through a proxy.
+You need a Windows or Linux machine that can access both your vCenter Server instance and the internet, directly or through a proxy. The workstation must also have outbound network connectivity to the ESXi host backing the datastore. Datastore connectivity is needed for uploading the Arc resource bridge image to the datastore as part of the onboarding.
## Prepare vCenter Server
azure-functions Functions Bindings Service Bus Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-output.md
When the parameter value is null when the function exits, Functions doesn't crea
Use the [Message](/dotnet/api/microsoft.azure.servicebus.message) type when sending messages with metadata. Parameters are defined as `return` type attributes. Use an `ICollector<T>` or `IAsyncCollector<T>` to write multiple messages. A message is created when you call the `Add` method. + When the parameter value is null when the function exits, Functions doesn't create a message. [!INCLUDE [functions-service-bus-account-attribute](../../includes/functions-service-bus-account-attribute.md)]
azure-functions Functions Bindings Service Bus Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-trigger.md
The following table explains the properties you can set using this trigger attri
|**Access**|Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.| |**IsBatched**| Messages are delivered in batches. Requires an array or collection type. | |**IsSessionsEnabled**|`true` if connecting to a [session-aware](../service-bus-messaging/message-sessions.md) queue or subscription. `false` otherwise, which is the default value.|
-|**AutoComplete**|`true` Whether the trigger should automatically call complete after processing, or if the function code will manually call complete.<br/><br/>If set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br/><br/>When set to `false`, you are responsible for calling [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) methods to complete, abandon, or deadletter the message. If an exception is thrown (and none of the `MessageReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed. |
+|**AutoComplete**|`true` Whether the trigger should automatically call complete after processing, or if the function code will manually call complete.<br/><br/>If set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br/><br/>When set to `false`, you are responsible for calling [ServiceBusReceiver](/dotnet/api/azure.messaging.servicebus.servicebusreceiver) methods to complete, abandon, or deadletter the message, session, or batch. When an exception is thrown (and none of the `ServiceBusReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed. |
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
In [C# class libraries](functions-dotnet-class-library.md), the attribute's cons
Use the [Message](/dotnet/api/microsoft.azure.servicebus.message) type to receive messages with metadata. To learn more, see [Messages, payloads, and serialization](../service-bus-messaging/service-bus-messages-payloads.md). + In [C# class libraries](functions-dotnet-class-library.md), the attribute's constructor takes the name of the queue or the topic and subscription. [!INCLUDE [functions-service-bus-account-attribute](../../includes/functions-service-bus-account-attribute.md)]
The following parameter types are available for the queue or topic message:
* [BrokeredMessage](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) - Gives you the deserialized message with the [BrokeredMessage.GetBody\<T>()](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.getbody#Microsoft_ServiceBus_Messaging_BrokeredMessage_GetBody__1) method. * [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) - Used to receive and acknowledge messages from the message container, which is required when `autoComplete` is set to `false`. + In [C# class libraries](functions-dotnet-class-library.md), the attribute's constructor takes the name of the queue or the topic and subscription. In Azure Functions version 1.x, you can also specify the connection's access rights. If you don't specify access rights, the default is `Manage`. [!INCLUDE [functions-service-bus-account-attribute](../../includes/functions-service-bus-account-attribute.md)]
Poison message handling can't be controlled or configured in Azure Functions. Se
The Functions runtime receives a message in [PeekLock mode](../service-bus-messaging/service-bus-performance-improvements.md#receive-mode). It calls `Complete` on the message if the function finishes successfully, or calls `Abandon` if the function fails. If the function runs longer than the `PeekLock` timeout, the lock is automatically renewed as long as the function is running.
-The `maxAutoRenewDuration` is configurable in *host.json*, which maps to [OnMessageOptions.MaxAutoRenewDuration](/dotnet/api/microsoft.azure.servicebus.messagehandleroptions.maxautorenewduration). The default value of this setting is 5 minutes.
+The `maxAutoRenewDuration` is configurable in *host.json*, which maps to [ServiceBusProcessor.MaxAutoLockRenewalDuration](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.maxautolockrenewalduration). The default value of this setting is 5 minutes.
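As a point of reference, the sketch below shows the equivalent lock-renewal setting when you use `Azure.Messaging.ServiceBus` directly (outside Functions); the connection string and queue name are placeholders.

```csharp
using System;
using Azure.Messaging.ServiceBus;

// Placeholder connection string and queue name.
var client = new ServiceBusClient("<service-bus-connection-string>");

var processor = client.CreateProcessor("orders", new ServiceBusProcessorOptions
{
    // Counterpart of the Functions maxAutoRenewDuration setting; 5 minutes matches its default.
    MaxAutoLockRenewalDuration = TimeSpan.FromMinutes(5),
    ReceiveMode = ServiceBusReceiveMode.PeekLock
});
```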
::: zone pivot="programming-language-csharp" ## Message metadata
These properties are members of the [ServiceBusReceivedMessage](/dotnet/api/azur
These properties are members of the [Message](/dotnet/api/microsoft.azure.servicebus.message) class. + |Property|Type|Description| |--|-|--| |`ContentType`|`string`|A content type identifier utilized by the sender and receiver for application-specific logic.|
These properties are members of the [Message](/dotnet/api/microsoft.azure.servic
These properties are members of the [BrokeredMessage](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) and [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) classes. + |Property|Type|Description| |--|-|--| |`ContentType`|`string`|A content type identifier utilized by the sender and receiver for application-specific logic.|
Functions version 1.x doesn't support isolated worker process. To use the isolat
- [Send Azure Service Bus messages from Azure Functions (Output binding)](./functions-bindings-service-bus-output.md)
-[BrokeredMessage]: /dotnet/api/microsoft.servicebus.messaging.brokeredmessage
[upgrade your application to Functions 4.x]: ./migrate-version-1-version-4.md
azure-functions Functions Bindings Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus.md
The Service Bus extension supports parameter types according to the table below.
Earlier versions of the extension exposed types from the now deprecated [Microsoft.Azure.ServiceBus] namespace. Newer types from [Azure.Messaging.ServiceBus] are exclusive to **Extension 5.x+**. + This version of the extension supports parameter types according to the table below. The Service Bus extension supports parameter types according to the table below.
The Service Bus extension supports parameter types according to the table below.
Functions 1.x exposed types from the deprecated [Microsoft.ServiceBus.Messaging] namespace. Newer types from [Azure.Messaging.ServiceBus] are exclusive to **Extension 5.x+**. To use these, you will need to [upgrade your application to Functions 4.x]. + # [Extension 5.x+](#tab/extensionv5/isolated-process) The isolated worker process supports parameter types according to the tables below. Support for binding to types from [Azure.Messaging.ServiceBus] is in preview. Current support does not yet include message settlement scenarios for triggers.
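For illustration, a minimal isolated worker sketch that binds the trigger to `ServiceBusReceivedMessage`, assuming the `Microsoft.Azure.Functions.Worker.Extensions.ServiceBus` 5.x package. The queue name and connection setting are placeholders, and, per the note above, message settlement isn't shown.

```csharp
using Azure.Messaging.ServiceBus;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class ReadOrder
{
    private readonly ILogger<ReadOrder> _logger;

    public ReadOrder(ILogger<ReadOrder> logger) => _logger = logger;

    // Hypothetical queue and connection names.
    [Function("ReadOrder")]
    public void Run(
        [ServiceBusTrigger("orders", Connection = "ServiceBusConnection")] ServiceBusReceivedMessage message)
    {
        // The SDK type exposes message metadata directly.
        _logger.LogInformation("MessageId: {Id}, ContentType: {ContentType}, Body: {Body}",
            message.MessageId, message.ContentType, message.Body.ToString());
    }
}
```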
Functions version 1.x doesn't support isolated worker process. To use the isolat
[Microsoft.ServiceBus.Messaging]: /dotnet/api/microsoft.servicebus.messaging + [upgrade your application to Functions 4.x]: ./migrate-version-1-version-4.md :::zone-end
When you set the `isSessionsEnabled` property or attribute on [the trigger](func
|||| |**prefetchCount**|`0`|Gets or sets the number of messages that the message receiver can simultaneously request.| |**maxAutoRenewDuration**|`00:05:00`|The maximum duration within which the message lock will be renewed automatically.|
-|**autoComplete**|`true`|Whether the trigger should automatically call complete after processing, or if the function code manually calls complete.<br><br>Setting to `false` is only supported in C#.<br><br>When set to `true`, the trigger completes the message, session, or batch automatically when the function execution completes successfully, and abandons the message otherwise.<br><br>When set to `false`, you are responsible for calling [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) methods to complete, abandon, or deadletter the message, session, or batch. When an exception is thrown (and none of the `MessageReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed.<br><br>In non-C# functions, exceptions in the function results in the runtime calls `abandonAsync` in the background. If no exception occurs, then `completeAsync` is called in the background. |
+|**autoComplete**|`true`|Whether the trigger should automatically call complete after processing, or whether the function code manually calls complete.<br><br>Setting to `false` is only supported in C#.<br><br>When set to `true`, the trigger completes the message, session, or batch automatically when the function execution completes successfully, and abandons the message otherwise.<br><br>When set to `false`, you are responsible for calling [ServiceBusReceiver](/dotnet/api/azure.messaging.servicebus.servicebusreceiver) methods to complete, abandon, or deadletter the message, session, or batch. When an exception is thrown (and none of the `ServiceBusReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed.<br><br>In non-C# functions, an exception in the function results in the runtime calling `abandonAsync` in the background. If no exception occurs, then `completeAsync` is called in the background. |
|**maxConcurrentCalls**|`16`|The maximum number of concurrent calls to the callback that the message pump should initiate per scaled instance. By default, the Functions runtime processes multiple messages concurrently.| |**maxConcurrentSessions**|`2000`|The maximum number of sessions that can be handled concurrently per scaled instance.| |**maxMessageCount**|`1000`| The maximum number of messages sent to the function when triggered. |
azure-functions Functions Dotnet Class Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-class-library.md
You can't use `out` parameters in async functions. For output bindings, use the
A function can accept a [CancellationToken](/dotnet/api/system.threading.cancellationtoken) parameter, which enables the operating system to notify your code when the function is about to be terminated. You can use this notification to make sure the function doesn't terminate unexpectedly in a way that leaves data in an inconsistent state.
-Consider the case when you have a function that processes messages in batches. The following Azure Service Bus-triggered function processes an array of [Message](/dotnet/api/microsoft.azure.servicebus.message) objects, which represents a batch of incoming messages to be processed by a specific function invocation:
+Consider the case when you have a function that processes messages in batches. The following Azure Service Bus-triggered function processes an array of [ServiceBusReceivedMessage](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage) objects, which represents a batch of incoming messages to be processed by a specific function invocation:
```csharp
-using Microsoft.Azure.ServiceBus;
+using Azure.Messaging.ServiceBus;
using System.Threading; namespace ServiceBusCancellationToken
namespace ServiceBusCancellationToken
{ [FunctionName("servicebus")] public static void Run([ServiceBusTrigger("csharpguitar", Connection = "SB_CONN")]
- Message[] messages, CancellationToken cancellationToken, ILogger log)
+ ServiceBusReceivedMessage[] messages, CancellationToken cancellationToken, ILogger log)
{ try {
azure-functions Functions Host Json V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-host-json-v1.md
Configuration settings for [Host health monitor](https://github.com/Azure/azure-
|||| |enabled|true|Specifies whether the feature is enabled. | |healthCheckInterval|10 seconds|The time interval between the periodic background health checks. |
-|healthCheckWindow|2 minutes|A sliding time window used in conjunction with the `healthCheckThreshold` setting.|
+|healthCheckWindow|2 minutes|A sliding time window used with the `healthCheckThreshold` setting.|
|healthCheckThreshold|6|Maximum number of times the health check can fail before a host recycle is initiated.| |counterThreshold|0.80|The threshold at which a performance counter will be considered unhealthy.|
Configuration settings for [http triggers and bindings](functions-bindings-http-
|Property |Default | Description | ||||
-|dynamicThrottlesEnabled|false|When enabled, this setting causes the request processing pipeline to periodically check system performance counters like connections/threads/processes/memory/cpu/etc. and if any of those counters are over a built-in high threshold (80%), requests will be rejected with a 429 "Too Busy" response until the counter(s) return to normal levels.|
+|dynamicThrottlesEnabled|false|When enabled, this setting causes the request processing pipeline to periodically check system performance counters like connections/threads/processes/memory/cpu/etc. and if any of those counters are over a built-in high threshold (80%), requests are rejected with a 429 "Too Busy" response until the counter(s) return to normal levels.|
|maxConcurrentRequests|unbounded (`-1`)|The maximum number of HTTP functions that will be executed in parallel. This allows you to control concurrency, which can help manage resource utilization. For example, you might have an HTTP function that uses a lot of system resources (memory/cpu/sockets) such that it causes issues when concurrency is too high. Or you might have a function that makes outbound requests to a third party service, and those calls need to be rate limited. In these cases, applying a throttle here can help.|
-|maxOutstandingRequests|unbounded (`-1`)|The maximum number of outstanding requests that are held at any given time. This limit includes requests that are queued but have not started executing, as well as any in progress executions. Any incoming requests over this limit are rejected with a 429 "Too Busy" response. That allows callers to employ time-based retry strategies, and also helps you to control maximum request latencies. This only controls queuing that occurs within the script host execution path. Other queues such as the ASP.NET request queue will still be in effect and unaffected by this setting.|
+|maxOutstandingRequests|unbounded (`-1`)|The maximum number of outstanding requests that are held at any given time. This limit includes requests that are queued but have not started executing, and any in progress executions. Any incoming requests over this limit are rejected with a 429 "Too Busy" response. That allows callers to employ time-based retry strategies, and also helps you to control maximum request latencies. This only controls queuing that occurs within the script host execution path. Other queues such as the ASP.NET request queue will still be in effect and unaffected by this setting.|
|routePrefix|api|The route prefix that applies to all routes. Use an empty string to remove the default prefix. | ## id The unique ID for a job host. Can be a lower case GUID with dashes removed. Required when running locally. When running in Azure, we recommend that you not set an ID value. An ID is generated automatically in Azure when `id` is omitted.
-If you share a Storage account across multiple function apps, make sure that each function app has a different `id`. You can omit the `id` property or manually set each function app's `id` to a different value. The timer trigger uses a storage lock to ensure that there will be only one timer instance when a function app scales out to multiple instances. If two function apps share the same `id` and each uses a timer trigger, only one timer will run.
+If you share a Storage account across multiple function apps, make sure that each function app has a different `id`. You can omit the `id` property or manually set each function app's `id` to a different value. The timer trigger uses a storage lock to ensure that there will be only one timer instance when a function app scales out to multiple instances. If two function apps share the same `id` and each uses a timer trigger, only one timer runs.
```json {
Configuration setting for [Service Bus triggers and bindings](functions-bindings
|Property |Default | Description | |||| |maxConcurrentCalls|16|The maximum number of concurrent calls to the callback that the message pump should initiate. By default, the Functions runtime processes multiple messages concurrently. To direct the runtime to process only a single queue or topic message at a time, set `maxConcurrentCalls` to 1. |
-|prefetchCount|n/a|The default PrefetchCount that will be used by the underlying MessageReceiver.|
+|prefetchCount|n/a|The default PrefetchCount that will be used by the underlying ServiceBusReceiver.|
|autoRenewTimeout|00:05:00|The maximum duration within which the message lock will be renewed automatically.|
-|autoComplete|true|When true, the trigger will complete the message processing automatically on successful execution of the operation. When false, it is the responsibility of the function to complete the message before returning.|
+|autoComplete|true|When true, the trigger completes the message processing automatically on successful execution of the operation. When false, it is the responsibility of the function to complete the message before returning.|
## singleton
Configuration settings for Singleton lock behavior. For more information, see [G
|lockPeriod|00:00:15|The period that function level locks are taken for. The locks auto-renew.| |listenerLockPeriod|00:01:00|The period that listener locks are taken for.| |listenerLockRecoveryPollingInterval|00:01:00|The time interval used for listener lock recovery if a listener lock couldn't be acquired on startup.|
-|lockAcquisitionTimeout|00:01:00|The maximum amount of time the runtime will try to acquire a lock.|
+|lockAcquisitionTimeout|00:01:00|The maximum amount of time the runtime tries to acquire a lock.|
|lockAcquisitionPollingInterval|n/a|The interval between lock acquisition attempts.| ## tracing
azure-functions Functions Reference Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-csharp.md
The following table explains the binding configuration properties that you set i
|**connection**| The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](./functions-bindings-service-bus-trigger.md#connections).| |**accessRights**| Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.| |**isSessionsEnabled**| `true` if connecting to a [session-aware](../service-bus-messaging/message-sessions.md) queue or subscription. `false` otherwise, which is the default value.|
-|**autoComplete**| `true` when the trigger should automatically call complete after processing, or if the function code will manually call complete.<br/><br/>Setting to `false` is only supported in C#.<br/><br/>If set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br/><br/>When set to `false`, you are responsible for calling [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) methods to complete, abandon, or deadletter the message. If an exception is thrown (and none of the `MessageReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed.<br/><br/>This property is available only in Azure Functions 2.x and higher. |
+|**autoComplete**| Whether the trigger should automatically call complete after processing, or whether the function code manually calls complete. The default is `true`.<br/><br/>Setting to `false` is only supported in C#.<br/><br/>If set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br/><br/>When set to `false`, you are responsible for calling [ServiceBusReceiver](/dotnet/api/azure.messaging.servicebus.servicebusreceiver) methods to complete, abandon, or deadletter the message, session, or batch. When an exception is thrown (and none of the `ServiceBusReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed.<br/><br/>This property is available only in Azure Functions 2.x and higher. |
The following example shows a Service Bus trigger binding in a *function.json* file and a C# script function that uses the binding. The function reads message metadata and logs a Service Bus queue message.
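A minimal C# script sketch of the kind of function described (the parameter names below are illustrative binding-expression names, not necessarily the article's exact sample):

```csharp
// run.csx — logs the message body plus metadata exposed through binding expressions.
using System;
using Microsoft.Extensions.Logging;

public static void Run(
    string myQueueItem,
    int deliveryCount,
    DateTime enqueuedTimeUtc,
    string messageId,
    ILogger log)
{
    log.LogInformation($"C# Service Bus queue trigger processed message: {myQueueItem}");
    log.LogInformation($"EnqueuedTimeUtc={enqueuedTimeUtc}, DeliveryCount={deliveryCount}, MessageId={messageId}");
}
```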
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Spring Apps](../../spring-apps/index.yml) | &#x2705; | &#x2705; | | [Azure Stack Edge](../../databox-online/index.yml) (formerly Data Box Edge) **&ast;** | &#x2705; | &#x2705; | | [Azure Stack HCI](/azure-stack/hci/) | &#x2705; | &#x2705; |
-| [Azure Video Indexer](../../azure-video-indexer/index.yml) | &#x2705; | &#x2705; |
+| [Azure Video Indexer](/azure/azure-video-indexer/) | &#x2705; | &#x2705; |
| [Azure Virtual Desktop](../../virtual-desktop/index.yml) (formerly Windows Virtual Desktop) | &#x2705; | &#x2705; | | [Azure VMware Solution](../../azure-vmware/index.yml) | &#x2705; | &#x2705; | | [Backup](../../backup/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Stack Bridge](/azure-stack/operator/azure-stack-usage-reporting) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Stack Edge](../../databox-online/index.yml) (formerly Data Box Edge) **&ast;** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Stack HCI](/azure-stack/hci/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Video Indexer](../../azure-video-indexer/index.yml) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Video Indexer](/azure/azure-video-indexer/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Azure Virtual Desktop](../../virtual-desktop/index.yml) (formerly Windows Virtual Desktop) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Backup](../../backup/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Bastion](../../bastion/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
We strongly recommend that you always update to the latest version, or opt in to the
## Version details | Release Date | Release notes | Windows | Linux | |:|:|:|:|
+| September 2023| **Windows** <ul><li>Fix issue with high CPU usage due to excessive Windows Event Logs subscription reset</li><li>Reduce fluentbit resource usage by limiting tracked files older than 3 days and limiting logging to errors only</li><li>Fix race-condition where resource_id is unavailable when agent is restarted</li><li>Fix race-condition when AMA vm-extension is provisioned involving disable command</li><li>Update MetricExtension version to 2.2023.721.1630</li><li>Update Troubleshooter to v1.5.14 </li></ul>|1.20.0| None |
| August 2023| **Windows** <ul><li>AMA: Allow prefixes in the tag names to handle regression</li><li>Updating package version for AzSecPack 4.28 release</li></ul>**Linux**<ul><li> Coming soon</li></ul>|1.19.0| Coming soon | | July 2023| **Windows** <ul><li>Fix crash when Event Log subscription callback throws errors.</li><li>MetricExtension updated to 2.2023.609.2051</li></ul> |1.18.0|None| | June 2023| **Windows** <ul><li>Add new file path column to custom logs table</li><li>Config setting to disable custom IMDS endpoint in Tenant.json file</li><li>FluentBit binaries signed with Microsoft customer Code Sign cert</li><li>Minimize number of retries on calls to refresh tokens</li><li>Don't overwrite resource ID with empty string</li><li>AzSecPack updated to version 4.27</li><li>AzureProfiler and AzurePerfCollector updated to version 1.0.0.990</li><li>MetricsExtension updated to version 2.2023.513.10</li><li>Troubleshooter updated to version 1.5.0</li></ul>**Linux** <ul><li>Add new column CollectorHostName to syslog table to identify forwarder/collector machine</li><li>Link OpenSSL dynamically</li><li>**Fixes**<ul><li>Allow uploads soon after AMA start up</li><li>Run LocalSink GC on a dedicated thread to avoid thread pool scheduling issues</li><li>Fix upgrade restart of disabled services</li><li>Handle Linux Hardening where sudo on root is blocked</li><li>CEF processing fixes for noncompliant RFC 5424 logs</li><li>ASA tenant can fail to start up due to config-cache directory permissions</li><li>Fix auth proxy in AMA</li><li>Fix to remove null characters in agentlauncher.log after log rotation</li><li>Fix for authenticated proxy(1.27.3)</li><li>Fix regression in VM Insights(1.27.4)</ul></li></ul>|1.17.0 |1.27.4|
azure-monitor Asp Net Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md
Below is the currently supported list of dependency calls that are automatically
| [SqlClient](https://www.nuget.org/packages/System.Data.SqlClient) | .NET Core 1.0+, NuGet 4.3.0 | | [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient/1.1.2)| 1.1.0 - latest stable release. (See Note below.) | [Event Hubs Client SDK](https://www.nuget.org/packages/Microsoft.Azure.EventHubs) | 1.1.0 |
-| [ServiceBus Client SDK](https://www.nuget.org/packages/Microsoft.Azure.ServiceBus) | 3.0.0 |
+| [ServiceBus Client SDK](https://www.nuget.org/packages/Azure.Messaging.ServiceBus) | 7.0.0 |
| <b>Storage clients</b>| | | ADO.NET | 4.5+ |
azure-monitor Asp Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net.md
This section guides you through manually adding Application Insights to a templa
</ExcludeComponentCorrelationHttpHeadersOnDomains> <IncludeDiagnosticSourceActivities> <Add>Microsoft.Azure.EventHubs</Add>
- <Add>Microsoft.Azure.ServiceBus</Add>
+ <Add>Azure.Messaging.ServiceBus</Add>
</IncludeDiagnosticSourceActivities> </Add> <Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.PerformanceCollectorModule, Microsoft.AI.PerfCounterCollector">
azure-monitor Custom Operations Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-operations-tracking.md
The [W3C Trace Context](https://www.w3.org/TR/trace-context/) and [HTTP Protocol
For tracing information, see [Distributed tracing and correlation through Azure Service Bus messaging](../../service-bus-messaging/service-bus-end-to-end-tracing.md#distributed-tracing-and-correlation-through-service-bus-messaging).
-> [!IMPORTANT]
-> The WindowsAzure.ServiceBus and Microsoft.Azure.ServiceBus packages are deprecated.
- ### Azure Storage queue The following example shows how to track the [Azure Storage queue](/azure/storage/queues/storage-quickstart-queues-dotnet?tabs=passwordless%2Croles-azure-portal%2Cenvironment-variable-windows%2Csign-in-azure-cli) operations and correlate telemetry between the producer, the consumer, and Azure Storage.
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
This article shows you how to configure Azure Monitor Application Insights for J
## Connection string and role name
-Connection string and role name are the most common settings you need to get started:
-
-```json
-{
- "connectionString": "...",
- "role": {
- "name": "my cloud role name"
- }
-}
-```
-
-Connection string is required. Role name is important anytime you're sending data from different applications to the same Application Insights resource.
More information and configuration options are provided in the following sections.
azure-monitor Java Standalone Sampling Overrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-sampling-overrides.md
those are also collected for all '/login' requests.
## Span attributes available for sampling
-Span attribute names are based on the OpenTelemetry semantic conventions:
+Span attribute names are based on the OpenTelemetry semantic conventions (HTTP, Messaging, Database, RPC).
-* [HTTP](https://github.com/open-telemetry/semantic-conventions/blob/main/docs//http.md)
-* [Messaging](https://github.com/open-telemetry/semantic-conventions/blob/main/docs//messaging.md)
-* [Database](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/database/README.md)
-* [RPC](https://github.com/open-telemetry/semantic-conventions/blob/main/docs//rpc.md)
+For the full list of attributes, see the [OpenTelemetry semantic conventions](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/README.md).
To see the exact set of attributes captured by Application Insights Java for your application, set the [self-diagnostics level to debug](./java-standalone-config.md#self-diagnostics), and look for debug messages starting
azure-monitor Java Standalone Telemetry Processors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors.md
This section lists some common span attributes that telemetry processors can use
| Attribute | Type | Description | ||||
-| `db.system` | string | Identifier for the database management system (DBMS) product being used. See [list of identifiers](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/database/README.md). |
+| `db.system` | string | Identifier for the database management system (DBMS) product being used. See [Semantic Conventions for database operations](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/README.md). |
| `db.connection_string` | string | Connection string used to connect to the database. It's recommended to remove embedded credentials.| | `db.user` | string | Username for accessing the database. | | `db.name` | string | String used to report the name of the database being accessed. For commands that switch the database, this string should be set to the target database, even if the command fails.|
azure-monitor Opentelemetry Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md
Use one of the following two ways to configure the connection string:
### [Java](#tab/java)
-For more information about Java, see the [Java supplemental documentation](java-standalone-config.md).
### [Node.js](#tab/nodejs)
You might want to update the [Cloud Role Name](app-map.md#understand-the-cloud-r
### [ASP.NET Core](#tab/aspnetcore)
-Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/README.md).
+Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [OpenTelemetry Semantic Conventions](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/README.md).
```csharp // Setting role name and role instance
app.Run();
### [.NET](#tab/net)
-Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/README.md).
+Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [OpenTelemetry Semantic Conventions](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/README.md).
```csharp // Setting role name and role instance
To set the cloud role instance, see [cloud role instance](java-standalone-config
### [Node.js](#tab/nodejs)
-Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/README.md).
+Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [OpenTelemetry Semantic Conventions](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/README.md).
```typescript // Import the useAzureMonitor function, the AzureMonitorOpenTelemetryOptions class, the Resource class, and the SemanticResourceAttributes class from the @azure/monitor-opentelemetry, @opentelemetry/resources, and @opentelemetry/semantic-conventions packages, respectively.
useAzureMonitor(options);
### [Python](#tab/python)
-Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/README.md).
+Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [OpenTelemetry Semantic Conventions](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/README.md).
Set Resource attributes using the `OTEL_RESOURCE_ATTRIBUTES` and/or `OTEL_SERVICE_NAME` environment variables. `OTEL_RESOURCE_ATTRIBUTES` takes series of comma-separated key-value pairs. For example, to set the Cloud Role Name to `my-namespace.my-helloworld-service` and set Cloud Role Instance to `my-instance`, you can set `OTEL_RESOURCE_ATTRIBUTES` and `OTEL_SERVICE_NAME` as such: ```
azure-monitor Container Insights Enable Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-aks.md
This article describes how to set up Container insights to monitor a managed Kub
If you're connecting an existing AKS cluster to a Log Analytics workspace in another subscription, the *Microsoft.ContainerService* resource provider must be registered in the subscription with the Log Analytics workspace. For more information, see [Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
+> [!NOTE]
+> When you enable Container Insights on clusters that use legacy authentication, a managed identity is created automatically. This identity isn't available after the cluster migrates to managed identity (MSI) authentication or if Container Insights is disabled, so don't use it for anything else.
+ ## New AKS cluster You can enable monitoring for an AKS cluster when it's created by using any of the following methods:
To enable [managed identity authentication](container-insights-onboard.md#authen
- `aksResourceId`: Use the values on the **AKS Overview** page for the AKS cluster. - `aksResourceLocation`: Use the values on the **AKS Overview** page for the AKS cluster. - `workspaceResourceId`: Use the resource ID of your Log Analytics workspace.
- - `resourceTagValues`: Match the existing tag values specified for the existing Container insights extension data collection rule (DCR) of the cluster and the name of the DCR. The name will be *MSCI-\<clusterName\>-\<clusterRegion\>* and this resource created in an AKS clusters resource group. If this is the first time onboarding, you can set the arbitrary tag values.
+ - `resourceTagValues`: Match the existing tag values specified for the existing Container insights extension data collection rule (DCR) of the cluster and the name of the DCR. The name is *MSCI-\<clusterRegion\>-\<clusterName\>*, and this resource is created in the AKS cluster's resource group. If this is the first time onboarding, you can set arbitrary tag values.
To enable [managed identity authentication](container-insights-onboard.md#authentication):
azure-monitor Resource Logs Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-schema.md
The schema for resource logs varies depending on the resource and log category.
| Azure Storage | [Blobs](../../storage/blobs/monitor-blob-storage-reference.md#resource-logs-preview), [Files](../../storage/files/storage-files-monitoring-reference.md#resource-logs-preview), [Queues](../../storage/queues/monitor-queue-storage-reference.md#resource-logs-preview), [Tables](../../storage/tables/monitor-table-storage-reference.md#resource-logs-preview) | | Azure Stream Analytics |[Job logs](../../stream-analytics/stream-analytics-job-diagnostic-logs.md) | | Azure Traffic Manager | [Traffic Manager log schema](../../traffic-manager/traffic-manager-diagnostic-logs.md) |
-| Azure Video Indexer|[Monitor Azure Video Indexer data reference](../../azure-video-indexer/monitor-video-indexer-data-reference.md)|
+| Azure Video Indexer|[Monitor Azure Video Indexer data reference](/azure/azure-video-indexer/monitor-video-indexer-data-reference)|
| Azure Virtual Network | Schema not available | | Azure Web PubSub | [Monitoring Azure Web PubSub data reference](../../azure-web-pubsub/howto-monitor-data-reference.md) | | Virtual network gateways | [Logging for Virtual Network Gateways](../../vpn-gateway/troubleshoot-vpn-with-azure-diagnostics.md)|
azure-monitor Monitor Virtual Machine Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-data-collection.md
Last updated 01/05/2023 - # Monitor virtual machines with Azure Monitor: Collect data
The following samples use the `Perf` table with custom performance data. For inf
| `Perf | where Computer == "MyComputer" and CounterName startswith_cs "%" and InstanceName == "_Total" | summarize AggregatedValue = percentile(CounterValue, 70) by bin(TimeGenerated, 1h), CounterName` | Hourly 70 percentile of every % percent counter for a particular computer | | `Perf | where CounterName == "% Processor Time" and InstanceName == "_Total" and Computer == "MyComputer" | summarize ["min(CounterValue)"] = min(CounterValue), ["avg(CounterValue)"] = avg(CounterValue), ["percentile75(CounterValue)"] = percentile(CounterValue, 75), ["max(CounterValue)"] = max(CounterValue) by bin(TimeGenerated, 1h), Computer` |Hourly average, minimum, maximum, and 75-percentile CPU usage for a specific computer | | `Perf | where ObjectName == "MSSQL$INST2:Databases" and InstanceName == "master"` | All Performance data from the Database performance object for the master database from the named SQL Server instance INST2. |
+| `Perf | where TimeGenerated >ago(5m) | where ObjectName == "Process" and InstanceName != "_Total" and InstanceName != "Idle" | where CounterName == "% Processor Time" | summarize cpuVal=avg(CounterValue) by Computer,InstanceName | join (Perf| where TimeGenerated >ago(5m)| where ObjectName == "Process" and CounterName == "ID Process" | summarize arg_max(TimeGenerated,*) by ProcID=CounterValue ) on Computer,InstanceName | sort by TimeGenerated desc | summarize AvgCPU = avg(cpuVal) by InstanceName,ProcID` | Average CPU usage over the last 5 minutes for each process ID. |
+ ## Collect text logs Some applications write events written to a text log stored on the virtual machine. Create a [custom table and DCR](../agents/data-collection-text-log.md) to collect this data. You define the location of the text log, its detailed configuration, and the schema of the custom table. There's a cost for the ingestion and retention of this data in the workspace.
The runbook can access any resources on the local machine to gather required dat
* [Analyze monitoring data collected for virtual machines](monitor-virtual-machine-analyze.md) * [Create alerts from collected data](monitor-virtual-machine-alerts.md)++
azure-netapp-files Azure Netapp Files Create Volumes Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
Before creating an SMB volume, you need to create an Active Directory connection
* <a name="continuous-availability"></a>If you want to enable Continuous Availability for the SMB volume, select **Enable Continuous Availability**. >[!IMPORTANT]
- >You should enable Continuous Availability for Citrix App Layering, SQL Server, and [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md). Using SMB Continuous Availability shares for workloads other than Citrix App Layering, SQL Server, and FSLogix user profile containers is *not* supported. This feature is currently supported on Windows SQL Server. Linux SQL Server is not currently supported. If you are using a non-administrator (domain) account to install SQL Server, ensure that the account has the required security privilege assigned. If the domain account does not have the required security privilege (`SeSecurityPrivilege`), and the privilege cannot be set at the domain level, you can grant the privilege to the account by using the **Security privilege users** field of Active Directory connections. See [Create an Active Directory connection](create-active-directory-connections.md#create-an-active-directory-connection).
+ >You should enable Continuous Availability for Citrix App Layering, SQL Server, [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md), and FSLogix ODFC containers. Using SMB Continuous Availability shares for workloads other than Citrix App Layering, SQL Server, FSLogix user profile containers, or FSLogix ODFC containers is *not* supported. This feature is currently supported on Windows SQL Server. Linux SQL Server is not currently supported. If you are using a non-administrator (domain) account to install SQL Server, ensure that the account has the required security privilege assigned. If the domain account does not have the required security privilege (`SeSecurityPrivilege`), and the privilege cannot be set at the domain level, you can grant the privilege to the account by using the **Security privilege users** field of Active Directory connections. See [Create an Active Directory connection](create-active-directory-connections.md#create-an-active-directory-connection).
**Custom applications are not supported with SMB Continuous Availability.**
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
The following diagram demonstrates how customer-managed keys work with Azure Net
* Customer-managed keys can only be configured on new volumes. You can't migrate existing volumes to customer-managed key encryption. * To create a volume using customer-managed keys, you must select the *Standard* network features. You can't use customer-managed key volumes with volume configured using Basic network features. Follow instructions in to [Set the Network Features option](configure-network-features.md#set-the-network-features-option) in the volume creation page. * For increased security, you can select the **Disable public access** option within the network settings of your key vault. When selecting this option, you must also select **Allow trusted Microsoft services to bypass this firewall** to permit the Azure NetApp Files service to access your encryption key.
-* MSI Automatic certificate renewal isn't currently supported. It is recommended to set up an Azure monitor alert for when the MSI certificate is going to expire.
+* Automatic renewal of the managed system identity (MSI) certificate isn't currently supported. We recommend setting up an Azure Monitor alert for when the MSI certificate is about to expire.
* The MSI certificate has a lifetime of 90 days. It becomes eligible for renewal after 46 days. **After 90 days, the certificate is no longer valid and the customer-managed key volumes under the NetApp account will go offline.** * To renew, you need to call the NetApp account operation `renewCredentials` if eligible for renewal. If it's not eligible, an error message communicates the date of eligibility. * Version 2.42 or later of the Azure CLI supports running the `renewCredentials` operation with the [az netappfiles account command](/cli/azure/netappfiles/account#az-netappfiles-account-renew-credentials). For example:
azure-netapp-files Enable Continuous Availability Existing SMB https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/enable-continuous-availability-existing-SMB.md
You can enable the SMB Continuous Availability (CA) feature when you [create a n
> See the [**Enable Continuous Availability**](azure-netapp-files-create-volumes-smb.md#continuous-availability) option for additional details and considerations. >[!IMPORTANT]
-> You should enable Continuous Availability for [Citrix App Layering](https://docs.citrix.com/en-us/citrix-app-layering/4.html), SQL Server, and [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md). Using SMB Continuous Availability shares for any other workload is not supported. This feature is currently supported on Windows SQL Server. Linux SQL Server is not currently supported.
+> You should enable Continuous Availability for [Citrix App Layering](https://docs.citrix.com/en-us/citrix-app-layering/4.html), SQL Server, [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md), and FSLogix ODFC containers. Using SMB Continuous Availability shares for workloads other than Citrix App Layering, SQL Server, FSLogix user profile containers, or FSLogix ODFC containers is *not* supported. This feature is currently supported on Windows SQL Server. Linux SQL Server is not currently supported.
> If you are using a non-administrator (domain) account to install SQL Server, ensure that the account has the required security privilege assigned. If the domain account does not have the required security privilege (`SeSecurityPrivilege`), and the privilege cannot be set at the domain level, you can grant the privilege to the account by using the **Security privilege users** field of Active Directory connections. See [Create an Active Directory connection](create-active-directory-connections.md#create-an-active-directory-connection). >[!IMPORTANT]
azure-netapp-files Faq Application Resilience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-application-resilience.md
Azure NetApp Files might undergo occasional planned maintenance (for example, pl
Yes, certain SMB-based applications require SMB Transparent Failover. SMB Transparent Failover enables maintenance operations on the Azure NetApp Files service without interrupting connectivity to server applications storing and accessing data on SMB volumes. To support SMB Transparent Failover for specific applications, Azure NetApp Files now supports the [SMB Continuous Availability shares option](azure-netapp-files-create-volumes-smb.md#continuous-availability). Using SMB Continuous Availability is only supported for workloads on: * Citrix App Layering * [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md)
+* FSLogix ODFC containers
* Microsoft SQL Server (not Linux SQL Server) >[!CAUTION]
azure-netapp-files Troubleshoot Diagnose Solve Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-diagnose-solve-problems.md
+
+ Title: Troubleshoot Azure NetApp Files using diagnose and solve problems tool
+description: Describes how to use the Azure diagnose and solve problems tool to troubleshoot issues of Azure NetApp Files.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 10/15/2023+++
+# Troubleshoot Azure NetApp Files using diagnose and solve problems tool
+
+You can use the Azure **diagnose and solve problems** tool to troubleshoot issues with Azure NetApp Files.
+
+## Steps
+
+1. From the Azure portal, select **diagnose and solve problems** in the navigation pane.
+
+2. Choose a problem type for the issue you are experiencing, for example, **Capacity Pools**.
+ You can select the problem type either by selecting the corresponding tile on the **diagnose and solve problems** page or by using the search bar above the tiles.
+
+ The following screenshot shows an example of issue types that you can troubleshoot for Azure NetApp Files:
+
+ :::image type="content" source="../media/azure-netapp-files/troubleshoot-issue-types.png" alt-text="Screenshot that shows an example of issue types in diagnose and solve problems page." lightbox="../media/azure-netapp-files/troubleshoot-issue-types.png":::
+
+3. After specifying the problem type, select an option (problem subtype) from the pull-down menu to describe the specific problem you are experiencing. Then follow the on-screen directions to troubleshoot the problem.
+
+ :::image type="content" source="../media/azure-netapp-files/troubleshoot-diagnose-pull-down.png" alt-text="Screenshot that shows the pull-down menu for problem subtype selection." lightbox="../media/azure-netapp-files/troubleshoot-diagnose-pull-down.png":::
+
+ This page presents general guidelines and relevant resources for the problem subtype you select. In some situations, you might be prompted to fill out a questionnaire to trigger diagnostics. If issues are identified, the tool presents a diagnosis and possible solutions.
+
+ :::image type="content" source="../media/azure-netapp-files/troubleshoot-problem-subtype.png" alt-text="Screenshot that shows the capacity pool troubleshoot page." lightbox="../media/azure-netapp-files/troubleshoot-problem-subtype.png":::
+
+For more information about using this tool, see [Azure App Service diagnostics overview](../app-service/overview-diagnostics.md).
+
+## Next steps
+
+* [Troubleshoot capacity pool errors](troubleshoot-capacity-pools.md)
+* [Troubleshoot volume errors](troubleshoot-volumes.md)
+* [Troubleshoot application volume group errors](troubleshoot-application-volume-groups.md)
+* [Troubleshoot snapshot policy errors](troubleshoot-snapshot-policies.md)
+* [Troubleshoot cross-region replication errors](troubleshoot-cross-region-replication.md)
+* [Troubleshoot Resource Provider errors](azure-netapp-files-troubleshoot-resource-provider-errors.md)
+* [Troubleshoot user access on LDAP volumes](troubleshoot-user-access-ldap.md)
+* [Troubleshoot file locks](troubleshoot-file-locks.md)
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 09/07/2023 Last updated : 10/16/2023
Azure NetApp Files is updated regularly. This article provides a summary about t
## October 2023
+* [Troubleshoot Azure NetApp Files using diagnose and solve problems tool](troubleshoot-diagnose-solve-problems.md)
+
+ The **diagnose and solve problems** tool simplifies the troubleshooting process, making it easier to identify and resolve issues affecting your Azure NetApp Files deployment. With proactive troubleshooting, user-friendly guidance, and integration with Azure Support, the tool helps you maintain a reliable, high-performance Azure NetApp Files storage environment.
+ * [Snapshot manageability enhancement: Identify parent snapshot](snapshots-restore-new-volume.md) You can now see the name of the snapshot used to create a new volume. In the Volume overview page, the **Originated from** field identifies the source snapshot used in volume creation. If the field is empty, no snapshot was used.
azure-portal Home https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/mobile-app/home.md
+
+ Title: Azure mobile app Home
+description: Azure mobile app Home surfaces the most essential information and the resources you use most often.
Last updated : 10/16/2023+++
+# Azure mobile app Home
+
+Azure mobile app **Home** surfaces the most essential information and the resources you use most often. It provides a convenient way to access and manage your Azure resources or your Microsoft Entra tenant from your mobile device.
+
+## Display cards
+
+Azure mobile app **Home** consists of customizable display cards that show information and let you quickly access frequently used resources and services. You can select and organize these cards depending on what's most important for you and how you want to use the app.
+
+Current card options include:
+
+- **Learn**: Explore the most popular Microsoft Learn modules for Azure.
+- **Resource groups**: Quick access to all your resource groups.
+- **Microsoft Entra ID**: Quick access to Microsoft Entra ID management.
+- **Azure services**: Quick access to Virtual machines, Web Apps, SQL databases, and Application Insights.
+- **Latest alerts**: A list and chart view of the alerts fired in the last 24 hours and the option to see all.
+- **Service Health**: A current count of service issues, maintenance, health advisories, and security advisories.
+- **Cloud Shell**: Quick access to the Cloud Shell terminal.
+- **Recent resources**: A list of your four most recently viewed resources, with the option to see all.
+- **Favorites**: A list of the resources you have added to your favorites, and the option to see all.
++
+## Customize Azure mobile app Home
+
+You can customize the cards displayed on your Azure mobile app **Home** by selecting the :::image type="icon" source="media/edit-icon.png" border="false"::: **Edit** icon in the top right of **Home**. From there, you can select which cards you see by toggling the switch. You can also drag and drop the display cards in the list to reorder how they appear on your **Home**.
+
+For instance, you could rearrange the default order as follows:
++
+This would result in a **Home** similar to the following image:
++
+## Global search
+
+The global search button appears in the top left of **Home**. Select this button to search for anything specific in your Azure account. This includes:
+
+- Resources
+- Services
+- Resource groups
+- Subscriptions
+
+You can filter these results by subscription using the **Home** filtering option.
+
+## Filtering
+
+In the top right of **Home**, you'll see a filter option. When you select the filter icon, you can filter the results shown on **Home** by specific subscriptions. This includes results for:
+
+- Resource groups
+- Azure services
+- Latest alerts
+- Service health
+- Global search
+
+This filtering option is specific to **Home**, and doesn't filter for the other bottom navigation sections.
+
+## Next steps
+
+- Learn more about the [Azure mobile app](overview.md).
+- Download the Azure mobile app for free from the [Apple App Store](https://aka.ms/azureapp/ios/doc), [Google Play](https://aka.ms/azureapp/android/doc) or [Amazon App Store](https://aka.ms/azureapp/amazon/doc).
+
azure-portal Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/mobile-app/overview.md
+
+ Title: What is the Azure mobile app?
+description: The Azure mobile app is a tool that allows you to monitor and manage your Azure resources and services from your mobile device.
Last updated : 10/16/2023+++
+# What is the Azure mobile app?
+
+The Azure mobile app is a tool that allows you to monitor and manage your Azure resources and services from your mobile device. You can use the app to view the status, performance, and health of your resources, as well as perform common operations such as starting and stopping virtual machines, web apps, and databases. You can also access Azure Cloud Shell from the app and get push notifications and alerts about your resources. The Azure mobile app is available for iOS and Android devices, and you can download it for free from the [Apple App Store](https://aka.ms/azureapp/ios/doc), [Google Play](https://aka.ms/azureapp/android/doc) or [Amazon App Store](https://aka.ms/azureapp/amazon/doc).
+
+To use the app, you need an Azure account with the appropriate permissions to access your resources. The app supports multiple accounts, and you can switch between them easily. The app also supports Microsoft Entra ID authentication and multifactor authentication for enhanced security. The Azure mobile app is a convenient way to stay connected to your Azure resources and your Microsoft Entra tenant, and to manage much more on the go.
+
+## Azure mobile app Home
+
+When you first open the Azure mobile app, **Home** shows an overview of your Azure account.
++
+View and customize display cards, including:
+
+- Microsoft Entra ID
+- Resource groups
+- Azure services
+- Latest alerts
+- Service Health
+- Cloud Shell
+- Recent resources
+- Favorites
+- Learn
+- Privileged Identity Management
+
+You can select which of these tiles appear on **Home** and rearrange them.
+
+For more information, see [Azure mobile app Home](home.md).
+
+## Hamburger menu
+
+The hamburger menu lets you select the environment, account, and directory you want to manage. The hamburger menu also houses several other settings and features, including:
+
+- Billing/Cost management
+- Settings
+- Help & feedback
+- Support requests
+- Privacy + Terms
+
+## Navigation
+
+The Azure mobile app provides several areas that allow you to navigate to different sections of the app. On the bottom navigation bar, you'll find **Home**, **Subscriptions**, **Resources**, and **Notifications**.
+
+On the top toolbar, you'll find the hamburger button to open the hamburger menu, the search magnifying glass to explore your services and resources, the edit button to change the layout of the Azure mobile app home, and the filter button to filter what content currently appears.
+
+## Download the Azure mobile app
+
+You can download the Azure mobile app today for free from the [Apple App Store](https://aka.ms/azureapp/ios/doc), [Google Play](https://aka.ms/azureapp/android/doc) or [Amazon App Store](https://aka.ms/azureapp/amazon/doc).
+
+## Next steps
+
+- Learn about [Azure mobile app **Home**](home.md) and how to customize it.
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/template-specs.md
Title: Create & deploy template specs in Bicep
description: Describes how to create template specs in Bicep and share them with other users in your organization. Previously updated : 10/13/2023 Last updated : 10/16/2023 # Azure Resource Manager template specs in Bicep
https://portal.azure.com/#create/Microsoft.Template/templateSpecVersionId/%2fsub
## Parameters
-Passing in parameters to template spec is exactly like passing parameters to a Bicep file. Add the parameter values either inline or in a parameter file.
+Passing in parameters to a template spec is similar to passing parameters to a Bicep file. Add the parameter values either inline or in a parameter file.
+
+### Inline parameters
To pass a parameter inline, use:
az deployment group create \
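A minimal end-to-end sketch of an inline-parameter deployment; the resource group name (`demoRG`), the `$id` variable holding the template spec resource ID, and the `StorageAccountType` parameter follow the examples later in this section and are placeholders for your own values:

```azurecli
# Deploy the template spec and pass a parameter value inline.
az deployment group create \
  --resource-group demoRG \
  --template-spec $id \
  --parameters StorageAccountType='Standard_GRS'
```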
-To create a local parameter file, use:
+### Parameter files
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "StorageAccountType": {
- "value": "Standard_GRS"
- }
- }
-}
-```
+- Use Bicep parameters file
-And, pass that parameter file with:
+ To create a Bicep parameter file, you must specify the `using` statement. Here is an example:
-# [PowerShell](#tab/azure-powershell)
+ ```bicep
+ using 'ts:<subscription-id>/<resource-group-name>/<template-spec-name>:<tag>'
-```azurepowershell
-New-AzResourceGroupDeployment `
- -TemplateSpecId $id `
- -ResourceGroupName demoRG `
- -TemplateParameterFile ./mainTemplate.parameters.json
-```
+ param StorageAccountType = 'Standard_GRS'
+ ```
-# [CLI](#tab/azure-cli)
+ For more information, see [Bicep parameters file](./parameter-files.md).
-```azurecli
-az deployment group create \
- --resource-group demoRG \
- --template-spec $id \
- --parameters "./mainTemplate.parameters.json"
-```
-
+ Pass the parameter file with:
+
+ # [PowerShell](#tab/azure-powershell)
+
+ Currently, you can't deploy a template spec with a [.bicepparam file](./parameter-files.md) by using Azure PowerShell.
+
+ # [CLI](#tab/azure-cli)
+
+ ```azurecli
+ az deployment group create \
+ --resource-group demoRG \
+ --parameters "./mainTemplate.bicepparam"
+ ```
+
+ Because of the `using` statement in the .bicepparam file, you don't need to specify the `--template-spec` parameter.
+
+
++
+- Use JSON parameters file
++
+ The following JSON is a sample JSON parameters file:
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "StorageAccountType": {
+ "value": "Standard_GRS"
+ }
+ }
+ }
+ ```
+
+ And, pass that parameter file with:
+
+ # [PowerShell](#tab/azure-powershell)
+
+ ```azurepowershell
+ New-AzResourceGroupDeployment `
+ -TemplateSpecId $id `
+ -ResourceGroupName demoRG `
+ -TemplateParameterFile ./mainTemplate.parameters.json
+ ```
+
+ # [CLI](#tab/azure-cli)
+
+ ```azurecli
+ az deployment group create \
+ --resource-group demoRG \
+ --template-spec $id \
+ --parameters "./mainTemplate.parameters.json"
+ ```
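In the preceding examples, `$id` holds the resource ID of a specific template spec version. As a sketch (the template spec name, resource group, and version label here are assumed placeholders), you can capture it with:

```azurecli
# Store the resource ID of the template spec version to deploy.
id=$(az ts show \
  --name storageSpec \
  --resource-group templateSpecRG \
  --version "1.0" \
  --query "id" \
  --output tsv)
```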
-Currently, you can't deploy a template spec with a [.bicepparam file](./parameter-files.md).
+
## Versioning
azure-video-indexer Accounts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/accounts-overview.md
- Title: Azure AI Video Indexer accounts
-description: This article gives an overview of Azure AI Video Indexer accounts and provides links to other articles for more details.
- Previously updated : 08/29/2023----
-# Azure AI Video Indexer account types
--
-This article gives an overview of Azure AI Video Indexer accounts types and provides links to other articles for more details.
-
-## Trial account
-
-When starting out with [Azure AI Video Indexer](https://www.videoindexer.ai/), click **start free** to kick off a quick and easy process of creating a trial account. No Azure subscription is required and this is a great way to explore Azure AI Video Indexer and try it out with your content. Keep in mind that the trial Azure AI Video Indexer account has a limitation on the number of indexing minutes, support, and SLA.
-
-With a trial account, Azure AI Video Indexer provides up to 2,400 minutes of free indexing when using the [Azure AI Video Indexer](https://www.videoindexer.ai/) website or the Azure AI Video Indexer API (see [developer portal](https://api-portal.videoindexer.ai/)).
-
-The trial account option is not available on the Azure Government cloud. For other Azure Government limitations, see [Limitations of Azure AI Video Indexer on Azure Government](connect-to-azure.md#limitations-of-azure-ai-video-indexer-on-azure-government).
-
-## Paid (unlimited) account
-
-When you have used up the free trial minutes or are ready to start using Video Indexer for production workloads, you can create a regular paid account which doesn't have minute, support, or SLA limitations. Account creation can be performed through the Azure portal (see [Create an account with the Azure portal](create-account-portal.md)) or API (see [Create accounts with API](/rest/api/videoindexer/stable/accounts)).
-
-Azure AI Video Indexer unlimited accounts are Azure Resource Manager (ARM) based and unlike trial accounts, are created in your Azure subscription. Moving to an unlimited ARM based account unlocks many security and management capabilities, such as [RBAC user management](../role-based-access-control/overview.md), [Azure Monitor integration](../azure-monitor/overview.md), deployment through ARM templates, and much more.
-
-Billing is per indexed minute, with the per minute cost determined by the selected preset. For more information regarding pricing, see [Azure AI Video Indexer pricing](https://azure.microsoft.com/pricing/details/video-indexer/).
-
-## Create accounts
-
-* To create an ARM-based (paid) account with the Azure portal, see [Create accounts with the Azure portal](create-account-portal.md).
-* To create an account with an API, see [Create accounts](/rest/api/videoindexer/stable/accounts)
-
- > [!TIP]
- > Make sure you are signed in with the correct domain to the [Azure AI Video Indexer website](https://www.videoindexer.ai/). For details, see [Switch tenants](switch-tenants-portal.md).
-* [Upgrade a trial account to an ARM-based (paid) account and import your content for free](import-content-from-trial.md).
-
- ## Classic accounts
-
-Before ARM based accounts were added to Azure AI Video Indexer, there was a "classic" account type (where the accounts management plane is built on API Management.) The classic account type is still used by some users.
-
-* If you are using a classic (paid) account and interested in moving to an ARM-based account, see [connect an existing classic Azure AI Video Indexer account to an ARM-based account](connect-classic-account-to-arm.md).
-
-For more information on the difference between regular unlimited accounts and classic accounts, see [Azure AI Video Indexer as an Azure resource](https://techcommunity.microsoft.com/t5/ai-applied-ai-blog/azure-video-indexer-is-now-available-as-an-azure-resource/ba-p/2912422).
-
-## Limited access features
--
-For more information, see [Azure AI Video Indexer limited access features](limited-access-features.md).
-
-## Next steps
-
-Make sure to review [Pricing](https://azure.microsoft.com/pricing/details/video-indexer/).
azure-video-indexer Add Contributor Role On The Media Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/add-contributor-role-on-the-media-service.md
- Title: Add Contributor role on the Media Services account
-description: This topic explains how to add contributor role on the Media Services account.
- Previously updated : 10/13/2021-----
-# Add contributor role to Media Services
--
-This article describes how to assign contributor role on the Media Services account.
-
-> [!NOTE]
-> If you are creating your Azure AI Video Indexer account through the Azure portal UI, the selected managed identity is automatically assigned the Contributor role on the selected Media Services account.
-
-## Prerequisites
-
-1. Azure Media Services (AMS)
-2. User-assigned managed identity
-
-> [!NOTE]
-> You need an Azure subscription with access to both the [Contributor][docs-role-contributor] role and the [User Access Administrator][docs-role-administrator] role to the Azure Media Services and the User-assigned managed identity. If you don't have the right permissions, ask your account administrator to grant you those permissions. The associated Azure Media Services must be in the same region as the Azure AI Video Indexer account.
-
-## Add Contributor role on the Media Services
-### [Azure portal](#tab/portal/)
-
-### Add Contributor role to Media Services using Azure portal
-
-1. Sign in at the [Azure portal](https://portal.azure.com/).
- * Using the search bar at the top, enter **Media Services**.
- * Find and select your Media Service resource.
-1. In the pane to the left, click **Access control (IAM)**.
- * Click **Add** > **Add role assignment**. If you don't have permissions to assign roles, the **Add role assignment** option will be disabled.
-1. In the Role list, select [Contributor][docs-role-contributor] role and click **Next**.
-1. In **Assign access to**, select the *Managed identity* radio button.
-   * Click the **+Select members** button; the **Select managed identities** pane opens.
-1. **Select** the following:
- * In the **Subscription**, the subscription where the managed identity is located.
- * In the **Managed identity**, select *User-assigned managed identity*.
- * In the **Select** section, search for the Managed identity you'd like to grant contributor permissions on the Media services resource.
-1. Once you have found the security principal, click to select it.
-1. To assign the role, click **Review + assign**
-
-## Next steps
-
-[Create a new Azure Resource Manager based account](create-account-portal.md)
-
-<!-- links -->
-[docs-role-contributor]: ../role-based-access-control/built-in-roles.md#contributor
-[docs-role-administrator]: ../role-based-access-control/built-in-roles.md#user-access-administrator
azure-video-indexer Audio Effects Detection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/audio-effects-detection-overview.md
- Title: Introduction to Azure AI Video Indexer audio effects detection-
-description: An introduction to Azure AI Video Indexer audio effects detection component responsibly.
- Previously updated : 06/15/2022-----
-# Audio effects detection
--
-Audio effects detection is an Azure AI Video Indexer feature that detects insights on various acoustic events and classifies them into acoustic categories. Audio effect detection can detect and classify different categories such as laughter, crowd reactions, alarms and/or sirens.
-
-When working on the website, the instances are displayed in the Insights tab. They can also be generated in a categorized list in a JSON file that includes the category ID, type, name, and instances per category together with the specific timeframes and confidence score.
-
-## Prerequisites
-
-Review [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
-
-## General principles
-
-This article discusses audio effects detection and the key considerations for making use of this technology responsibly. There are many things you need to consider when deciding how to use and implement an AI-powered feature:
-
-* Does this feature perform well in my scenario? Before deploying audio effects detection into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
-* Are we equipped to identify and respond to errors? AI-powered products and features won't be 100% accurate, so consider how you'll identify and respond to any errors that may occur.
-
-## View the insight
-
-To see the instances on the website, do the following:
-
-1. When uploading the media file, go to Video + Audio Indexing, or go to Audio Only or Video + Audio and select Advanced.
-1. After the file is uploaded and indexed, go to Insights and scroll to audio effects.
-
-To display the JSON file, do the following:
-
-1. Select Download -> Insights (JSON).
-1. Copy the `audioEffects` element, under `insights`, and paste it into your Online JSON viewer.
-
- ```json
- "audioEffects": [
- {
- "id": 1,
- "type": "Silence",
- "instances": [
- {
- "confidence": 0,
- "adjustedStart": "0:01:46.243",
- "adjustedEnd": "0:01:50.434",
- "start": "0:01:46.243",
- "end": "0:01:50.434"
- }
- ]
- },
- {
- "id": 2,
- "type": "Speech",
- "instances": [
- {
- "confidence": 0,
- "adjustedStart": "0:00:00",
- "adjustedEnd": "0:01:43.06",
- "start": "0:00:00",
- "end": "0:01:43.06"
- }
- ]
- }
- ],
- ```
-
-To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
-
-## Audio effects detection components
-
-During the audio effects detection procedure, audio in a media file is processed, as follows:
-
-|Component|Definition|
-|||
-|Source file | The user uploads the source file for indexing. |
-|Segmentation| The audio is analyzed, nonspeech audio is identified and then split into short overlapping intervals. |
-|Classification| An AI process analyzes each segment and classifies its contents into event categories such as crowd reaction or laughter. A probability list is then created for each event category according to department-specific rules. |
-|Confidence level| The estimated confidence level of each audio effect is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as a 0.82 score.|
-
-## Example use cases
-
-- Companies with a large video archive can improve accessibility by offering more context for a hearing-impaired audience by transcription of nonspeech effects.
-- Improved efficiency when creating raw data for content creators. Important moments in promos and trailers such as laughter, crowd reactions, gunshots, or explosions can be identified, for example, in Media and Entertainment.
-- Detecting and classifying gunshots, explosions, and glass shattering in a smart-city system or in other public environments that include cameras and microphones to offer fast and accurate detection of violence incidents.
-
-## Considerations and limitations when choosing a use case
-
-- Avoid use of short or low-quality audio; audio effects detection provides probabilistic and partial data on detected nonspeech audio events. For accuracy, audio effects detection requires at least 2 seconds of clear nonspeech audio. Voice commands or singing aren't supported.
-- Avoid use of audio with loud background music or music with repetitive and/or linearly scanned frequency; audio effects detection is designed for nonspeech audio only and therefore can't classify events in loud music. Music with repetitive and/or linearly scanned frequency may be incorrectly classified as an alarm or siren.
-- Carefully consider the methods of usage in law enforcement and similar institutions. To promote more accurate probabilistic data, carefully review the following:
-
- - Audio effects can be detected in nonspeech segments only.
- - The duration of a nonspeech section should be at least 2 seconds.
- - Low quality audio might impact the detection results.
- - Events in loud background music aren't classified.
- - Music with repetitive and/or linearly scanned frequency might be incorrectly classified as an alarm or siren.
- - Knocking on a door or slamming a door might be labeled as a gunshot or explosion.
- - Prolonged shouting or sounds of physical human effort might be incorrectly classified.
- - A group of people laughing might be classified as both laughter and crowd.
- - Natural and nonsynthetic gunshot and explosions sounds are supported.
-
-When used responsibly and carefully, Azure AI Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:  
-
-- Always respect an individual’s right to privacy, and only ingest audio for lawful and justifiable purposes.
-- Don't purposely disclose inappropriate audio of young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual’s personal freedom.
-- Commit to respecting and promoting human rights in the design and deployment of your analyzed audio.
-- When using third party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
-- Always seek legal advice when using audio from unknown sources.
-- Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing audio containing people.
-- Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision-making.
-- Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
-
-## Next steps
-
-- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
-- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
-- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
-- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
-
-### Contact us
-
-`visupport@microsoft.com`
-
-## Azure AI Video Indexer insights
-
-- [Face detection](face-detection.md)
-- [OCR](ocr.md)
-- [Keywords extraction](keywords.md)
-- [Transcription, translation & language identification](transcription-translation-lid.md)
-- [Labels identification](labels-identification.md)
-- [Named entities](named-entities.md)
-- [Observed people tracking & matched faces](observed-matched-people.md)
-- [Topics inference](topics-inference.md)
azure-video-indexer Audio Effects Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/audio-effects-detection.md
- Title: Enable audio effects detection
-description: Audio Effects Detection is one of Azure AI Video Indexer AI capabilities that detects various acoustics events and classifies them into different acoustic categories (for example, gunshot, screaming, crowd reaction and more).
- Previously updated : 05/24/2023----
-# Enable audio effects detection (preview)
--
-**Audio effects detection** is one of the Azure AI Video Indexer AI capabilities that detects various acoustic events and classifies them into different acoustic categories (such as dog barking, crowd reactions, laughter, and more).
-
-Some scenarios where this feature is useful:
-
-- Companies with a large set of video archives can easily improve accessibility with audio effects detection. The feature provides more context for persons who are hard of hearing, and enhances video transcription with non-speech effects.
-- In the Media & Entertainment domain, the detection feature can improve efficiency when creating raw data for content creators. Important moments in promos and trailers (such as laughter, crowd reactions, gunshot, or explosion) can be identified by using **audio effects detection**.
-- In the Public Safety & Justice domain, the feature can detect and classify gunshots, explosions, and glass shattering. It can be implemented in a smart-city system or in other public environments that include cameras and microphones to offer fast and accurate detection of violence incidents.
-
-## Supported audio categories
-
-**Audio effect detection** can detect and classify different categories. In the following table, you can find the different categories split into the different presets, divided into **Standard** and **Advanced**. For more information, see [pricing](https://azure.microsoft.com/pricing/details/video-indexer/).
-
-The following table shows which categories are supported depending on **Preset Name** (**Audio Only** / **Video + Audio** vs. **Advance Audio** / **Advance Video + Audio**). When you are using the **Advanced** indexing, categories appear in the **Insights** pane of the website.
-
-|Indexing type |Standard indexing| Advanced indexing|
-||||
-| Crowd Reactions || V|
-| Silence| V| V|
-| Gunshot or explosion ||V |
-| Breaking glass ||V|
-| Alarm or siren|| V |
-| Laughter|| V |
-| Dog || V|
-| Bell ringing|| V|
-| Bird|| V|
-| Car|| V|
-| Engine|| V|
-| Crying|| V|
-| Music playing|| V|
-| Screaming|| V|
-| Thunderstorm || V|
-
-## Result formats
-
-The audio effects are retrieved in the insights JSON that includes the category ID, type, and set of instances per category along with their specific timeframe and confidence score.
-
-```json
-"audioEffects": [{
-    "id": 0,
-    "type": "Gunshot or explosion",
-    "instances": [{
-        "confidence": 0.649,
-        "adjustedStart": "0:00:13.9",
-        "adjustedEnd": "0:00:14.7",
-        "start": "0:00:13.9",
-        "end": "0:00:14.7"
-    }, {
-        "confidence": 0.7706,
-        "adjustedStart": "0:01:54.3",
-        "adjustedEnd": "0:01:55",
-        "start": "0:01:54.3",
-        "end": "0:01:55"
-    }
-    ]
-}, {
-    "id": 1,
-    "type": "CrowdReactions",
-    "instances": [{
-        "confidence": 0.6816,
-        "adjustedStart": "0:00:47.9",
-        "adjustedEnd": "0:00:52.5",
-        "start": "0:00:47.9",
-        "end": "0:00:52.5"
-    },
-    {
-        "confidence": 0.7314,
-        "adjustedStart": "0:04:57.67",
-        "adjustedEnd": "0:05:01.57",
-        "start": "0:04:57.67",
-        "end": "0:05:01.57"
-    }
-    ]
-}
-],
-```
-
-## How to index audio effects
-
-In order to set the index process to include the detection of audio effects, select one of the **Advanced** presets under **Video + audio indexing** menu as can be seen below.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/audio-effects-detection/index-audio-effect.png" alt-text="Index Audio Effects image":::
-
-## Closed Caption
-
-When audio effects are retrieved in the closed caption files, they are represented in square brackets, using the following structure:
-
-|Type| Example|
-|||
-|SRT |00:00:00,000 00:00:03,671<br/>[Gunshot or explosion]|
-|VTT |00:00:00.000 00:00:03.671<br/>[Gunshot or explosion]|
-|TTML|Confidence: 0.9047 <br/> `<p begin="00:00:00.000" end="00:00:03.671">[Gunshot or explosion]</p>`|
-|TXT |[Gunshot or explosion]|
-|CSV |0.9047,00:00:00.000,00:00:03.671, [Gunshot or explosion]|
-
-Audio effects in the closed caption files are retrieved with the following logic applied:
-
-* `Silence` event type will not be added to the closed captions.
-* Minimum timer duration to show an event is 700 milliseconds.
-
-## Adding audio effects in closed caption files
-
-Audio effects can be added to the closed captions files supported by Azure AI Video Indexer via the [Get video captions API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Captions) by choosing true in the `includeAudioEffects` parameter or via the video.ai website experience by selecting **Download** -> **Closed Captions** -> **Include Audio Effects**.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/audio-effects-detection/close-caption.jpg" alt-text="Audio Effects in CC":::
-
-> [!NOTE]
-> When using [update transcript](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Transcript) from closed caption files or [update custom language model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Language-Model) from closed caption files, audio effects included in those files are ignored.
-
-## Limitations and assumptions
-
-* The audio effects are detected when present in non-speech segments only.
-* The model is optimized for cases where there is no loud background music.
-* Low quality audio may impact the detection results.
-* Minimal non-speech section duration is 2 seconds.
-* Music that is characterized with repetitive and/or linearly scanned frequency can be mistakenly classified as Alarm or siren.
-* The model is currently optimized for natural and non-synthetic gunshot and explosions sounds.
-* Door knocks and door slams can sometimes be mistakenly labeled as gunshot and explosions.
-* Prolonged shouting and human physical effort sounds can sometimes be mistakenly detected.
-* A group of people laughing can sometimes be classified as both Laughter and Crowd reactions.
-
-## Next steps
-
-Review [overview](video-indexer-overview.md)
azure-video-indexer Azure Video Indexer Azure Media Services Retirement Announcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/azure-video-indexer-azure-media-services-retirement-announcement.md
- Title: Azure AI Video Indexer (AVI) changes related to Azure Media Service (AMS) retirement
-description: This article explains the upcoming changes to Azure AI Video Indexer (AVI) related to the retirement of Azure Media Services (AMS).
- Previously updated : 09/05/2023----
-# Changes related to Azure Media Service (AMS) retirement
-
-This article explains the upcoming changes to Azure AI Video Indexer (AVI) resulting from the [retirement of Azure Media Services (AMS)](/azure/media-services/latest/azure-media-services-retirement).
-
-Currently, AVI requires the creation of an AMS account. Additionally, AVI uses AMS for video encoding and streaming operations. The required changes will affect all AVI customers.
-
-To continue using AVI beyond June 30, 2024, all customers **must** make changes to their AVI accounts to remove the AMS dependency. Detailed guidance for converting AVI accounts will be provided in January 2024 when the new account type is released.
-
-## Pricing and billing
-
-Currently, AVI uses AMS for encoding and streaming for the AVI player. AMS charges you for both encoding and streaming. In the future, AVI will encode media and you'll be billed using the updated AVI accounts. Pricing details will be shared in January 2024. There will be no charge for the AVI video player.
-
-## AVI changes
-
-AVI will continue to offer the same insights, performance, and functionality. However, a few aspects of the service will change which fall under the following three categories:
-
-- Account changes
-- API changes
-- Product changes
-
-## Account changes
-
-AVI has three account types. All will be impacted by the AMS retirement. The account types are:
-
-- ARM-based accounts
-- Classic accounts
-- Trial accounts
-
-See [Azure AI Video Indexer account types](/azure/azure-video-indexer/accounts-overview) to understand more about AVI account types.
-
-### Azure Resource Manager (ARM)-based accounts
-
-**New accounts:** As of January 15, all newly created AVI accounts will be non-AMS dependent accounts. You'll no longer be able to create AMS-dependent accounts.
-
-**Existing accounts**: Existing accounts will continue to work through June 30, 2024. To continue using the account beyond June 30, customers must go through the process to convert their account to a non-AMS dependent account. If you don't convert your account to a non-AMS dependent account, you won't be able to access the account or use it beyond June 30.
-
-### Classic accounts
-
-- **New accounts:** As of January 15, all newly created AVI accounts will be non-AMS dependent accounts. You'll no longer be able to create Classic accounts.
-- **Existing accounts:** Existing classic accounts will continue to work through June 30, 2024. AVI will release an updated API version for the non-AMS dependent accounts that doesn't contain any AMS related parameters.
-
-To continue using the account beyond June 30, 2024, classic accounts will have to go through two steps:
-
-1. Connect the account as an ARM-based account. You can already connect the account today. See [Azure AI Video Indexer accounts](accounts-overview.md) for instructions.
-1. Make the required changes to the AVI account to remove the AMS dependency. If this isn't done, you won't be able to access the account or use it beyond June 30, 2024.
-
-### Existing trial accounts
-
-- As of January 15, 2024, Video Indexer trial accounts will continue to work as usual. However, when using them through the APIs, customers must use the updated APIs.
-- AVI supports [importing content](import-content-from-trial.md) from a trial AVI account to a paid AVI account. This import option will be supported only until **January 15th, 2024**.
-
-## API changes
-
-**Between January 15 and June 30, 2024**, AVI will support both existing data and control plane APIs as well as the updated APIs that exclude all AMS related parameters.
-
-New AVI accounts as well as existing AVI accounts that have completed the steps to remove all AMS dependencies will only use the updated APIs that will exclude all AMS related parameters.
-
-**On July 1, 2024**, code using APIs with AMS parameters will no longer be supported. This applies to both control plane and data plane operations.
-
-### Breaking API changes
-
-There will be breaking API changes. The following table describes the changes for your awareness, but actionable guidance will be provided when the changes have been released.
-
-| **Type** | **API Name** | **Change** |
-||||
-| **ARM** | Create<br/>Update<br/>Patch<br/>ListAccount | - The `mediaServices` Account property will be replaced with a `storageServices` Account property.<br/><br/> - The `Identity` property will change from an `Owner` managed identity to `Storage Blob Data Contributor` permissions on the storage resource. |
-| **ARM** | Get<br/>MoveAccount | The `mediaServices` Account property will be replaced with a `storageServices` Account property. |
-| **ARM** | GetClassicAccount<br/>ListClassicAccounts | API will no longer be supported. |
-| **Classic** | CreatePaidAccountManually | API will no longer be supported. |
-| **Classic** | UpdateAccountMediaServicesAsync | API will no longer be supported. |
-| **Data plane** | Upload | Upload will no longer accept the `assetId` parameter. |
-| **Data plane** | Upload<br/>ReIndex<br/>Redact | `AdaptiveBitrate` will no longer be supported for new uploads. |
-| **Data plane** | GetVideoIndex | `PublishedUrl` property will always be null. |
-| **Data plane** | GetVideoStreamingURL | The streaming URL will return references to AVI account endpoints rather than AMS account endpoints. |
-
-Full details of the API changes and alternatives will be provided when the updated APIs are released.
-
-## Product changes
-
-As of July 1, 2024, AVI won't use AMS for encoding or streaming. As a result, it will no longer support the following:
-
-- Encoding with adaptive bitrate will no longer be supported. Only single bitrate will be supported for new indexing jobs. Videos already encoded with adaptive bitrate will be playable in the AVI player.
-- Video Indexer [dynamic encryption](/azure/media-services/latest/drm-content-protection-concept) of media files will no longer be supported.
-- Media files created by non-AMS dependent accounts won't be playable by the [Azure Media Player](https://azure.microsoft.com/products/media-services/media-player).
-- Using a Cognitive Insights widget and playing the content with the Azure Media Player outlined [here](video-indexer-embed-widgets.md) will no longer be supported.
-
-## Timeline
-
-This graphic shows the timeline for the changes.
-
azure-video-indexer Clapperboard Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/clapperboard-metadata.md
- Title: Enable and view a clapper board with extracted metadata
-description: Learn about how to enable and view a clapper board with extracted metadata.
- Previously updated : 09/20/2022----
-# Enable and view a clapper board with extracted metadata (preview)
-
-A clapper board insight is used to detect clapper board instances and information written on each. For example, *head* or *tail* (the board is upside-down), *production*, *roll*, *scene*, *take*, *date*, etc. The [clapper board](https://en.wikipedia.org/wiki/Clapperboard)'s extracted metadata is most useful to customers involved in the movie post-production process.
-
-When the movie is being edited, a clapper board is removed from the scene; however, the information that was written on the clapper board is important. Azure AI Video Indexer extracts the data from clapper boards, preserves, and presents the metadata.
-
-This article shows how to enable the post-production insight and view clapper board instances with extracted metadata.
-
-## View the insight
-
-### View post-production insights
-
-In order to set the indexing process to include the slate metadata, select the **Video + audio indexing** -> **Advanced** presets.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/slate-detection-process/advanced-setting.png" alt-text="This image shows the advanced setting in order to view post-production clapperboards insights.":::
-
-After the file has been uploaded and indexed, if you want to view the timeline of the insight, select the **Post-production** checkmark from the list of insights.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/slate-detection-process/post-production-checkmark.png" alt-text="This image shows the post-production checkmark needed to view clapperboards.":::
-
-### Clapper boards
-
-Clapper boards contain fields with titles (for example, *production*, *roll*, *scene*, *take*) and values (content) associated with each title.
-
-For example, take this clapper board:
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/slate-detection-process/clapperboard.png" alt-text="This image shows a clapperboard.":::
-
-In the following example, the board contains the following fields:
-
-|title|content|
-|||
-|camera|COD|
-|date|FILTER (in this case the board contains no date)|
-|director|John|
-|production|Prod name|
-|scene|1|
-|take|99|
-
-#### View the insight
--
-To see the instances on the website, select **Insights** and scroll to **Clapper boards**. You can hover over each clapper board, or unfold **Show/Hide clapper board info** and see the metadata:
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/slate-detection-process/clapperboard-metadata.png" alt-text="This image shows the clapperboard metadata.":::
-
-#### View the timeline
-
-If you checked the **Post-production** insight, you can also find the clapper board instance and its timeline (including time and field values) on the **Timeline** tab.
-
-#### View JSON
-
-To display the JSON file:
-
-1. Select Download and then Insights (JSON).
-1. Copy the `clapperboard` element, under `insights`, and paste it into your Online JSON Viewer.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/slate-detection-process/clapperboard-json.png" alt-text="This image shows the clapperboard metadata in json.":::
-
-The following table describes fields found in json:
-
-|Name|Description|
-|||
-|`id`|The clapper board ID.|
-|`thumbnailId`|The ID of the thumbnail.|
-|`isHeadSlate`|The value stands for head or tail (the board is upside-down) of the clapper board: `true` or `false`.|
-|`fields`|The fields found in the clapper board; also each field's name and value.|
-|`instances`|A list of time ranges where this element appeared.|
-
-## Clapper board limitations
-
-The values may not always be correctly identified by the detection algorithm. Here are some limitations:
-
-- The titles of the fields appearing on the clapper board are optimized to identify the most popular fields appearing on top of clapper boards.
-- Handwritten text or digital digits may not be correctly identified by the fields detection algorithm.
-- The algorithm is optimized to identify fields' categories that appear horizontally.
-- The clapper board may not be detected if the frame is blurred or the text written on it can't be identified by the human eye.
-- Empty fields' values may lead to wrong field categories.
-<!-- If a part of a clapper board is hidden a value with the highest confidence is shown. -->
-
-## Next steps
-
-* [Slate detection overview](slate-detection-insight.md)
-* [How to enable and view digital patterns with color bars](digital-patterns-color-bars.md).
-* [How to enable and view textless slate with matched scene](textless-slate-scene-matching.md).
azure-video-indexer Concepts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/concepts-overview.md
- Title: Azure AI Video Indexer terminology & concepts overview
-description: This article gives a brief overview of Azure AI Video Indexer terminology and concepts.
- Previously updated : 08/02/2023----
-# Azure AI Video Indexer terminology & concepts
--
-This article gives a brief overview of Azure AI Video Indexer terminology and concepts. Also, review [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
-
-## Artifact files
-
-If you plan to download artifact files, beware of the following warning:
-
-
-## Confidence scores
-
-The confidence score indicates the confidence in an insight. It's a number between 0.0 and 1.0. The higher the score the greater the confidence in the answer. For example:
-
-```json
-"transcript":[
-{
- "id":1,
- "text":"Well, good morning everyone and welcome to",
- "confidence":0.8839,
- "speakerId":1,
- "language":"en-US",
- "instances":[
- {
- "adjustedStart":"0:00:10.21",
- "adjustedEnd":"0:00:12.81",
- "start":"0:00:10.21",
- "end":"0:00:12.81"
- }
- ]
-},
-```
-
-## Content moderation
-
-Use textual and visual content moderation models to keep your users safe from inappropriate content and validate that the content you publish matches your organization's values. You can automatically block certain videos or alert your users about the content. For more information, see [Insights: visual and textual content moderation](video-indexer-output-json-v2.md#visualcontentmoderation).
-
-## Insights
-
-Insights contain an aggregated view of the data: faces, topics, text-based emotion detection. Azure AI Video Indexer analyzes the video and audio content by running 30+ AI models, generating rich insights.
-
-For detailed explanation of insights, see [Azure AI Video Indexer insights](insights-overview.md).
-
-## Keyframes
-
-Azure AI Video Indexer selects the frame(s) that best represent each shot. Keyframes are the representative frames selected from the entire video based on aesthetic properties (for example, contrast and stableness). For more information, see [Scenes, shots, and keyframes](scenes-shots-keyframes.md).
-
-## Time range vs. adjusted time range
-
-Time range is the time period in the original video. Adjusted time range is the time range relative to the current playlist. Since you can create a playlist from different lines of different videos, you can take a one-hour video and use just one line from it, for example, 10:00-10:15. In that case, you'll have a playlist with one line, where the time range is 10:00-10:15 but the adjusted time range is 00:00-00:15.
-
-## Widgets
-
-Azure AI Video Indexer supports embedding widgets in your apps. For more information, see [Embed Azure AI Video Indexer widgets in your apps](video-indexer-embed-widgets.md).
-
-## Next steps
-
-- [overview](video-indexer-overview.md)
-- Once you [set up](video-indexer-get-started.md), start using [insights](video-indexer-output-json-v2.md) and check out other **How to guides**.
azure-video-indexer Connect Classic Account To Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-classic-account-to-arm.md
- Title: Connect a classic Azure AI Video Indexer account to ARM
-description: This topic explains how to connect an existing classic paid Azure AI Video Indexer account to an ARM-based account
- Previously updated : 03/20/2023-----
-# Connect an existing classic paid Azure AI Video Indexer account to ARM-based account
--
-This article shows how to connect an existing classic paid Azure AI Video Indexer account to an Azure Resource Manager (ARM)-based (recommended) account. To create a new ARM-based account, see [create a new account](create-account-portal.md). To understand the Azure AI Video Indexer account types, review [account types](accounts-overview.md).
-
-In this article, we demonstrate options of connecting your **existing** Azure AI Video Indexer account to an [ARM][docs-arm-overview]-based account. You can also view the following video.
-
-> [!VIDEO https://www.microsoft.com/videoplayer/embed/RW10iby]
-
-## Prerequisites
-
-1. Unlimited paid Azure AI Video Indexer account (classic account).
-
- 1. To perform the connect to the ARM (Azure Resource Manager) action, you should have owner's permissions on the Azure AI Video Indexer classic account.
-1. Azure Subscription with Owner permissions or Contributor with Administrator Role assignment.
-
- 1. Same level of permission for the Azure Media Service associated with the existing Azure AI Video Indexer Classic account.
-1. User assigned managed identity (can be created along the flow).
-
-## Transition state
-
-Connecting a classic account to an ARM-based account triggers a 30-day transition state. During the transition state, an existing account can be accessed by generating an access token using both:
-
-* Access token [generated through API Management](https://aka.ms/avam-dev-portal)(classic way)
-* Access token [generated through ARM](/rest/api/videoindexer/preview/generate/access-token)
-
-The transition state moves all account management functionality to be managed by ARM and will be handled by [Azure RBAC][docs-rbac-overview].
-
-The [invite users](restricted-viewer-role.md#share-the-account) feature in the [Azure AI Video Indexer website](https://www.videoindexer.ai/) gets disabled. The invited users on this account lose their access to the Azure AI Video Indexer account Media in the portal.
-However, this can be resolved by assigning the right role-assignment to these users through Azure RBAC, see [How to assign RBAC][docs-rbac-assignment].
-
-Only the account owner, who performed the connect action, is automatically assigned as the owner on the connected account. When [Azure policies][docs-governance-policy] are enforced, they override the settings on the account.
-
-If users are not added through Azure RBAC to the account after 30 days, they will lose access through API as well as the [Azure AI Video Indexer website](https://www.videoindexer.ai/).
-After the transition state ends, users will only be able to generate a valid access token through ARM, making Azure RBAC the exclusive way to manage role-based access control on the account.
-
-> [!NOTE]
-> If there are invited users you wish to remove access from, do it before connecting the account to ARM.
-
-Before the end of the 30 days of transition state, you can remove access from users through the [Azure AI Video Indexer website](https://www.videoindexer.ai/) account settings page.
-
-## Get started
-
-### Browse to the [Azure AI Video Indexer website](https://aka.ms/vi-portal-link)
-
-1. Sign in using your Microsoft Entra account.
-1. On the top right bar press *User account* to open the side pane account list.
-1. Select the Azure AI Video Indexer classic account you wish to connect to ARM (classic accounts will be tagged with a *classic tag*).
-1. Click **Settings**.
-
- :::image type="content" alt-text="Screenshot that shows the Azure AI Video Indexer website settings." source="./media/connect-classic-account-to-arm/classic-account-settings.png":::
-1. Click **Connect to an ARM-based account**.
-
- :::image type="content" alt-text="Screenshot that shows the connect to an ARM-based account dialog." source="./media/connect-classic-account-to-arm/connect-classic-to-arm.png":::
1. Sign in to the Azure portal.
-1. The Azure AI Video Indexer create blade will open.
-1. In the **Create Azure AI Video Indexer account** section enter required values.
-
- If you followed the steps the fields should be auto-populated, make sure to validate the eligible values.
-
- :::image type="content" alt-text="Screenshot that shows the create Azure AI Video Indexer account dialog." source="./media/connect-classic-account-to-arm/connect-blade.png":::
-
- Here are the descriptions for the resource fields:
-
- | Name | Description |
- | ||
- |**Subscription**| The subscription currently contains the classic account and other related resources such as the Media Services.|
   |**Resource Group**|Select an existing resource group or create a new one. The resource group must be in the same location as the classic account being connected.|
   |**Azure AI Video Indexer account** (radio button)| Select the *"Connecting an existing classic account"* option.|
   |**Existing account ID**|Select an existing Azure AI Video Indexer account from the dropdown.|
   |**Resource name**|Enter the name of the new Azure AI Video Indexer account. By default, this is the same name the account had as a classic account.|
   |**Location**|The geographic region can't be changed in the connect process; the connected account must stay in the same region.|
   |**Media Services account name**|The original Media Services account name that was associated with the classic account.|
   |**User-assigned managed identity**|Select a user-assigned managed identity, or create a new one. The Azure AI Video Indexer account will use it to access the Media Services account. The user-assigned managed identity will be assigned the Contributor role on the Media Services account.|
-1. Click **Review + create** at the bottom of the form.
-
-## After connecting to ARM is complete
-
-After successfully connecting your account to ARM, it is recommended to make sure your account management APIs are replaced with [Azure AI Video Indexer REST API](/rest/api/videoindexer/preview/accounts).
-As mentioned in the beginning of this article, during the 30 days of the transition state, "[Get-access-token](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token)" will be supported side by side with the ARM-based "[Generate-Access token](/rest/api/videoindexer/preview/generate/access-token)".
-Make sure to change to the new "Generate-Access token" by updating all your solutions that use the API.
-
-APIs to be changed:
-
-- Get Access token for each scope: Account, Project & Video.
-- Get account – the account's details.
-- Get accounts – a list of all accounts in a region.
-- Create paid account – would create a classic account.
-
-For a full description of [Azure AI Video Indexer REST API](/rest/api/videoindexer/preview/accounts) calls and documentation, follow the link.
-
-For a code sample that generates an access token through ARM, see this [C# code sample](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/API-Samples/C%23/ArmBased/Program.cs).
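
As a rough Azure CLI sketch of the same ARM call (the subscription ID, resource group, account name, and `api-version` value are placeholders and assumptions; confirm the exact request body and version against the REST reference linked above):

```azurecli
# Sketch: request an account-scoped Contributor access token through ARM.
az rest --method post \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.VideoIndexer/accounts/<account-name>/generateAccessToken?api-version=2022-08-01" \
  --body '{"permissionType": "Contributor", "scope": "Account"}'
```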
-
-### Next steps
-
-Learn how to [Upload a video using C#](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/API-Samples/C%23/ArmBased/).
-
-<!-- links -->
-[docs-arm-overview]: ../azure-resource-manager/management/overview.md
-[docs-rbac-overview]: ../role-based-access-control/overview.md
-[docs-rbac-assignment]: ../role-based-access-control/role-assignments-portal.md
-[docs-governance-policy]: ../governance/policy/overview.md
azure-video-indexer Connect To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-to-azure.md
- Title: Create a classic Azure AI Video Indexer account connected to Azure
-description: Learn how to create a classic Azure AI Video Indexer account connected to Azure.
- Previously updated : 08/24/2022-----
-# Create a classic Azure AI Video Indexer account
---
-This topic shows how to create a new classic account connected to Azure using the [Azure AI Video Indexer website](https://aka.ms/vi-portal-link). You can also create an Azure AI Video Indexer classic account through our [API](https://aka.ms/avam-dev-portal).
-
-The topic discusses prerequisites that you need to connect to your Azure subscription and how to configure an Azure Media Services account.
-
-A few Azure AI Video Indexer account types are available to you. For detailed explanation, review [Account types](accounts-overview.md).
-
-For the pricing details, see [pricing](https://azure.microsoft.com/pricing/details/video-indexer/).
-
-## Prerequisites for connecting to Azure
-
-* An Azure subscription.
-
- If you don't have an Azure subscription yet, sign up for [Azure Free Trial](https://azure.microsoft.com/free/).
-* A Microsoft Entra domain.
-
- If you don't have a Microsoft Entra domain, create this domain with your Azure subscription. For more information, see [Managing custom domain names in your Microsoft Entra ID](../active-directory/enterprise-users/domains-manage.md)
-* A user in your Microsoft Entra domain with an **Application administrator** role. You'll use this member when connecting your Azure AI Video Indexer account to Azure.
-
- This user should be a Microsoft Entra user with a work or school account. Don't use a personal account, such as outlook.com, live.com, or hotmail.com.
-
- :::image type="content" alt-text="Screenshot that shows how to choose a user in your Microsoft Entra domain." source="./media/create-account/all-aad-users.png":::
-* A user and member in your Microsoft Entra domain.
-
- You'll use this member when connecting your Azure AI Video Indexer account to Azure.
-
- This user should be a member in your Azure subscription with either an **Owner** role, or both **Contributor** and **User Access Administrator** roles. A user can be added twice, with two roles. Once with Contributor and once with user Access Administrator. For more information, see [View the access a user has to Azure resources](../role-based-access-control/check-access.md).
-
- :::image type="content" alt-text="Screenshot that shows the access control settings." source="./media/create-account/access-control-iam.png":::
-* Register the Event Grid resource provider using the Azure portal.
-
- In the [Azure portal](https://portal.azure.com/), go to **Subscriptions**->[subscription]->**ResourceProviders**.
-
- Search for **Microsoft.Media** and **Microsoft.EventGrid**. If not in the "Registered" state, select **Register**. It takes a couple of minutes to register.
-
- :::image type="content" alt-text="Screenshot that shows how to select an Event Grid subscription." source="./media/create-account/event-grid.png":::
-
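   If you prefer the Azure CLI over the portal, a minimal equivalent sketch (it runs against the currently selected subscription):

   ```azurecli
   # Register the resource providers required for the Media Services connection.
   az provider register --namespace Microsoft.Media
   az provider register --namespace Microsoft.EventGrid

   # Check the registration state; it can take a couple of minutes to reach "Registered".
   az provider show --namespace Microsoft.EventGrid --query registrationState --output tsv
   ```
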
-## Connect to Azure
-
-> [!NOTE]
-> Use the same Microsoft Entra user you used when connecting to Azure.
-
-It's strongly recommended to have the following three accounts located in the same region:
-
-* The Azure AI Video Indexer account that you're creating.
-* The Azure AI Video Indexer account that you're connecting with the Media Services account.
-* The Azure storage account connected to the same Media Services account.
-
- When you create an Azure AI Video Indexer account and connect it to Media Services, the media and metadata files are stored in the Azure storage account associated with that Media Services account.
-
-If your storage account is behind a firewall, see [storage account that is behind a firewall](faq.yml#can-a-storage-account-connected-to-the-media-services-account-be-behind-a-firewall).
-
-### Create and configure a Media Services account
-
-1. Use the [Azure](https://portal.azure.com/) portal to create an Azure Media Services account, as described in [Create an account](/azure/media-services/previous/media-services-portal-create-account).
-
- > [!NOTE]
- > Make sure to write down the Media Services resource and account names.
-1. Before you can play your videos in the [Azure AI Video Indexer](https://www.videoindexer.ai/) website, you must start the default **Streaming Endpoint** of the new Media Services account.
-
- In the new Media Services account, select **Streaming endpoints**. Then select the streaming endpoint and press start.
-
- :::image type="content" alt-text="Screenshot that shows how to specify streaming endpoints." source="./media/create-account/create-ams-account-se.png":::
-1. For Azure AI Video Indexer to authenticate with Media Services API, an AD app needs to be created. The following steps guide you through the Microsoft Entra authentication process described in [Get started with Microsoft Entra authentication by using the Azure portal](/azure/media-services/previous/media-services-portal-get-started-with-aad):
-
- 1. In the new Media Services account, select **API access**.
- 2. Select [Service principal authentication method](/azure/media-services/previous/media-services-portal-get-started-with-aad).
- 3. Get the client ID and client secret
-
- After you select **Settings**->**Keys**, add **Description**, press **Save**, and the key value gets populated.
-
- If the key expires, the account owner will have to contact Azure AI Video Indexer support to renew the key.
-
- > [!NOTE]
- > Make sure to write down the key value and the Application ID. You'll need it for the steps in the next section.
-
-### Azure Media Services considerations
-
-The following Azure Media Services related considerations apply:
-
-* If you connect to a new Media Services account, Azure AI Video Indexer automatically starts the default **Streaming Endpoint** in it:
-
- ![Media Services streaming endpoint](./media/create-account/ams-streaming-endpoint.png)
-
- Streaming endpoints have a considerable startup time. Therefore, it may take several minutes from the time you connected your account to Azure until your videos can be streamed and watched in the [Azure AI Video Indexer](https://www.videoindexer.ai/) website.
-* If you connect to an existing Media Services account, Azure AI Video Indexer doesn't change the default Streaming Endpoint configuration. If there's no running **Streaming Endpoint**, you can't watch videos from this Media Services account or in Azure AI Video Indexer.
-
-## Create a classic account
-
-1. On the [Azure AI Video Indexer website](https://aka.ms/vi-portal-link), select **Create unlimited account** (the paid account).
-2. To create a classic account, select **Switch to manual configuration**.
-
-In the dialog, provide the following information:
-
-|Setting|Description|
-|||
-|Azure AI Video Indexer account region|The name of the Azure AI Video Indexer account region. For better performance and lower costs, it's highly recommended to specify the name of the region where the Azure Media Services resource and Azure Storage account are located. |
-|Microsoft Entra tenant|The name of the Microsoft Entra tenant, for example "contoso.onmicrosoft.com". The tenant information can be retrieved from the Azure portal. Place your cursor over the name of the signed-in user in the top-right corner. Find the name to the right of **Domain**.|
-|Subscription ID|The Azure subscription under which this connection should be created. The subscription ID can be retrieved from the Azure portal. Select **All services** in the left panel, and search for "subscriptions". Select **Subscriptions** and choose the desired ID from the list of your subscriptions.|
-|Azure Media Services resource group name|The name for the resource group in which you created the Media Services account.|
-|Media service resource name|The name of the Azure Media Services account that you created in the previous section.|
-|Application ID|The Microsoft Entra application ID (with permissions for the specified Media Services account) that you created in the previous section.|
-|Application key|The Microsoft Entra application key that you created in the previous section. |
-
-## Import your content from the trial account
-
-See [Import your content from the trial account](import-content-from-trial.md).
-
-## Automate creation of the Azure AI Video Indexer account
-
-Automating the creation of the account is a two-step process:
-
-1. Use Azure Resource Manager to create an Azure Media Services account + Microsoft Entra application.
-
- See an example of the [Media Services account creation template](https://github.com/Azure-Samples/media-services-v3-arm-templates).
-1. Call [Create-Account with the Media Services and Microsoft Entra application](https://videoindexer.ai.azure.us/account/login?source=apim).
-
-## Azure AI Video Indexer in Azure Government
-
-### Prerequisites for connecting to Azure Government
-
-- An Azure subscription in [Azure Government](../azure-government/index.yml).
-- A Microsoft Entra account in Azure Government.
-- All prerequisites of permissions and resources as described above in [Prerequisites for connecting to Azure](#prerequisites-for-connecting-to-azure).
-
-### Create new account via the Azure Government portal
-
-> [!NOTE]
-> The Azure Government cloud does not include a *trial* experience of Azure AI Video Indexer.
-
-To create a paid account via the Azure AI Video Indexer website:
-
-1. Go to https://videoindexer.ai.azure.us
-1. Sign in with your Azure Government Microsoft Entra account.
-1. If you don't have any Azure AI Video Indexer accounts in Azure Government that you're an owner or a contributor to, you'll get an empty experience from which you can start creating your account.
-
-   The rest of the flow is the same as described above, except that the regions to select from are the Government regions in which Azure AI Video Indexer is available.
-
-   If you're already a contributor or an admin of one or more existing Azure AI Video Indexer accounts in Azure Government, you'll be taken to that account. From there, you can follow the steps to create an additional account if needed, as described above.
-
-### Create new account via the API on Azure Government
-
-To create a paid account in Azure Government, follow the instructions in [Create-Paid-Account](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Paid-Account). This API endpoint only includes Government cloud regions.
-
-### Limitations of Azure AI Video Indexer on Azure Government
-
-* Only paid accounts (ARM or classic) are available on Azure Government.
-* No manual content moderation is available in Azure Government.
-
- In the public cloud, when content is deemed offensive based on content moderation, the customer can ask for a human to review that content and potentially revert that decision.
-* Bing description: in Azure Government, we don't present a description of the celebrities and named entities identified. This is a UI capability only.
-
-## Clean up resources
-
-After you're done with this tutorial, delete resources that you aren't planning to use.
-
-### Delete an Azure AI Video Indexer account
-
-If you want to delete an Azure AI Video Indexer account, you can delete the account from the Azure AI Video Indexer website. To delete the account, you must be the owner.
-
-Select the account -> **Settings** -> **Delete this account**.
-
-The account will be permanently deleted in 90 days.
-
-## Next steps
-
-You can programmatically interact with your trial account and/or with your Azure AI Video Indexer accounts that are connected to Azure by following the instructions in: [Use APIs](video-indexer-use-apis.md).
azure-video-indexer Considerations When Use At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/considerations-when-use-at-scale.md
- Title: Things to consider when using Azure AI Video Indexer at scale - Azure
-description: This topic explains what things to consider when using Azure AI Video Indexer at scale.
- Previously updated : 07/03/2023----
-# Things to consider when using Azure AI Video Indexer at scale
--
-When you use Azure AI Video Indexer to index videos and your video archive is growing, consider scaling.
-
-This article answers questions like:
-
-* Are there any technological constraints I need to take into account?
-* Is there a smart and efficient way of doing it?
-* Can I prevent spending excess money in the process?
-
-The article provides six best practices for using Azure AI Video Indexer at scale.
-
-## When uploading videos consider using a URL over byte array
-
-Azure AI Video Indexer gives you the choice to upload videos from a URL or directly by sending the file as a byte array; the latter comes with some constraints. For more information, see [uploading considerations and limitations](upload-index-videos.md).
-
-First, it has file size limitations. The size of a byte array file is limited to 2 GB, compared to the 30-GB upload size limit when using a URL.
-
-Second, consider just some of the issues that can affect your performance and hence your ability to scale:
-
-* Sending files using multipart upload creates a high dependency on your network.
-* Service reliability.
-* Connectivity.
-* Upload speed.
-* Lost packets somewhere in the world wide web.
--
-When you upload videos using URL, you just need to provide a path to the location of a media file and Video Indexer takes care of the rest (see the `videoUrl` field in the [upload video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) API).
-
-> [!TIP]
-> Use the `videoUrl` optional parameter of the upload video API.
-
-To see an example of how to upload videos using URL, check out [this example](upload-index-videos.md). Or, you can use [AzCopy](../storage/common/storage-use-azcopy-v10.md) for a fast and reliable way to get your content to a storage account from which you can submit it to Azure AI Video Indexer using [SAS URL](../storage/common/storage-sas-overview.md). Azure AI Video Indexer recommends using *readonly* SAS URLs.
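-
-As a rough illustration, here's a minimal Python sketch of an upload-by-URL call, assuming the endpoint shape and the `accessToken`, `name`, and `videoUrl` parameters described in the linked upload video API reference; the placeholder values aren't real.
-
-```python
-import requests
-
-# Placeholder values for illustration only; replace with your own.
-LOCATION = "<location>"          # for example, "trial" or an Azure region
-ACCOUNT_ID = "<account-id>"
-ACCESS_TOKEN = "<access-token>"  # a token with Contributor permission on the account
-VIDEO_SAS_URL = "<read-only-sas-url-to-your-video>"
-
-# Upload by URL: the service pulls the file itself, so the byte-array size limit doesn't apply.
-response = requests.post(
-    f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}/Videos",
-    params={
-        "accessToken": ACCESS_TOKEN,
-        "name": "my-video",
-        "videoUrl": VIDEO_SAS_URL,  # the optional videoUrl parameter of the upload video API
-    },
-)
-response.raise_for_status()
-print(response.json()["id"])  # ID of the newly uploaded video
-```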
-
-## Automatic Scaling of Media Reserved Units
-
-Starting August 1st, 2021, Azure AI Video Indexer enabled [Reserved Units](/azure/media-services/latest/concept-media-reserved-units) (MRUs) auto scaling by [Azure Media Services](/azure/media-services/latest/media-services-overview) (AMS). As a result, you don't need to manage them through Azure AI Video Indexer. This allows price optimization (for example, price reduction in many cases) based on your business needs, because scaling happens automatically.
-
-## Respect throttling
-
-Azure AI Video Indexer is built to deal with indexing at scale. When you want to get the most out of it, you should also be aware of the system's capabilities and design your integration accordingly. You don't want to send an upload request for a batch of videos just to discover that some of the videos didn't upload and that you're receiving an HTTP 429 response code (too many requests). There's an API request limit of 10 requests per second and up to 120 requests per minute.
-
-Azure AI Video Indexer adds a `retry-after` header to the HTTP response; the header specifies when you should attempt your next retry. Make sure you respect it before trying your next request.
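-
-The sketch below shows one way to honor that header from Python; it's illustrative only, and the endpoint you wrap (here a generic GET) and the fallback wait time are assumptions.
-
-```python
-import time
-import requests
-
-def call_with_throttling(url, params, max_retries=5):
-    """Call a Video Indexer endpoint and back off when the service returns HTTP 429."""
-    for _ in range(max_retries):
-        response = requests.get(url, params=params)
-        if response.status_code != 429:
-            response.raise_for_status()
-            return response
-        # The retry-after header says how long to wait before the next attempt.
-        wait_seconds = int(response.headers.get("Retry-After", 10))
-        time.sleep(wait_seconds)
-    raise RuntimeError("Still throttled after several retries")
-```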
--
-## Use callback URL
-
-We recommend that instead of constantly polling the status of your request from the second you send the upload request, you add a callback URL and wait for Azure AI Video Indexer to update you. As soon as there's any status change in your upload request, you get a POST notification to the URL you specified.
-
-You can add a callback URL as one of the parameters of the [upload video API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video). Check out the code samples in [GitHub repo](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/).
-
-For the callback URL, you can also use Azure Functions, a serverless event-driven platform that can be triggered by HTTP, and implement the following flow.
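-
-For example, the following sketch shows an HTTP-triggered Azure Function (Python v1 programming model) that could receive the notification; the `id` and `state` query parameter names are assumptions based on the upload video API reference, so verify them for your account.
-
-```python
-import logging
-import azure.functions as func
-
-def main(req: func.HttpRequest) -> func.HttpResponse:
-    # Assumed query parameters sent by the callback; confirm against the upload video API docs.
-    video_id = req.params.get("id")
-    state = req.params.get("state")
-    logging.info("Video %s changed state to %s", video_id, state)
-
-    if state == "Processed":
-        # Kick off your downstream processing here, for example fetching the video index.
-        pass
-
-    return func.HttpResponse(status_code=200)
-```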
-
-### Callback URL definition
--
-## Use the right indexing parameters for you
-
-When making decisions related to using Azure AI Video Indexer at scale, look at how to get the most out of it with the right parameters for your needs. Think about your use case; by defining different parameters, you can save money and make the indexing process for your videos faster.
-
-Before uploading and indexing your video read the [documentation](upload-index-videos.md) to get a better idea of what your options are.
-
-For example, don't set the preset to streaming if you don't plan to watch the video, and don't index video insights if you only need audio insights.
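-
-As a hedged example, this sketch passes cost-saving presets on upload; the `indexingPreset` and `streamingPreset` parameter names and the `AudioOnly`/`NoStreaming` values are taken as assumptions from the upload API documentation, so confirm them in the linked reference.
-
-```python
-import requests
-
-# Placeholders; reuse the location, account ID, and access token from your environment.
-response = requests.post(
-    "https://api.videoindexer.ai/<location>/Accounts/<account-id>/Videos",
-    params={
-        "accessToken": "<access-token>",
-        "name": "my-audio-file",
-        "videoUrl": "<read-only-sas-url>",
-        "indexingPreset": "AudioOnly",     # audio insights only
-        "streamingPreset": "NoStreaming",  # skip creating a streaming copy you won't watch
-    },
-)
-response.raise_for_status()
-```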
-
-## Index in optimal resolution, not highest resolution
-
-You might be asking, what video quality do you need for indexing your videos?
-
-In many cases, there's almost no difference in indexing performance between HD (720p) videos and 4K videos. Eventually, you'll get almost the same insights with the same confidence. The higher the quality of the video you upload, the larger the file size, and this leads to more computing power and time needed to upload the video.
-
-For example, for the face detection feature, a higher resolution can help with the scenario where there are many small but contextually important faces. However, this comes with a quadratic increase in runtime and an increased risk of false positives.
-
-Therefore, we recommend that you verify you get the right results for your use case and first test it locally. Upload the same video in 720p and in 4K, and compare the insights you get.
-
-## Next steps
-
-[Examine the Azure AI Video Indexer output produced by API](video-indexer-output-json-v2.md)
azure-video-indexer Create Account Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/create-account-portal.md
- Title: Create an Azure AI Video Indexer account
-description: This article explains how to create an account for Azure AI Video Indexer.
- Previously updated : 06/10/2022---
-
-# Tutorial: create an ARM-based account with Azure portal
---
-To start using unlimited features and robust capabilities of Azure AI Video Indexer, you need to create an Azure AI Video Indexer unlimited account.
-
-This tutorial walks you through the steps of creating the Azure AI Video Indexer account and its accompanying resources by using the Azure portal. The account that gets created is an ARM (Azure Resource Manager) account. For information about different account types, see [Overview of account types](accounts-overview.md).
-
-## Prerequisites
-
-* You should be a member of your Azure subscription with either an **Owner** role, or both **Contributor** and **User Access Administrator** roles. You can be added twice, with two roles, once with **Contributor** and once with **User Access Administrator**. For more information, see [View the access a user has to Azure resources](../role-based-access-control/check-access.md).
-* Register the **EventGrid** resource provider using the Azure portal.
-
- In the [Azure portal](https://portal.azure.com), go to **Subscriptions**->[<*subscription*>]->**ResourceProviders**.
-Search for **Microsoft.Media** and **Microsoft.EventGrid**. If not in the registered state, select **Register**. It takes a couple of minutes to register.
-* Have an **Owner** role (or **Contributor** and **User Access Administrator** roles) assignment on the associated Azure Media Services (AMS). You select the AMS account during the Azure AI Video Indexer account creation, as described below.
-* Have an **Owner** role (or **Contributor** and **User Access Administrator** roles) assignment on the related managed identity.
-
-## Use the Azure portal to create an Azure AI Video Indexer account
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
- Alternatively, you can start creating the **unlimited** account from the [videoindexer.ai](https://www.videoindexer.ai) website.
-1. Using the search bar at the top, enter **"Video Indexer"**.
-1. Select **Video Indexer** under **Services**.
-1. Select **Create**.
-1. In the Create an Azure AI Video Indexer resource section, enter required values (the descriptions follow after the image).
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/create-account-portal/avi-create-blade.png" alt-text="Screenshot showing how to create an Azure AI Video Indexer resource.":::
-
- Here are the definitions:
-
- | Name | Description|
- |||
 |**Subscription**|Choose the subscription to use. If you're a member of only one subscription, you'll see that name. If there are multiple choices, choose a subscription in which your user has the required role.|
- |**Resource group**|Select an existing resource group or create a new one. A resource group is a collection of resources that share lifecycle, permissions, and policies. Learn more [here](../azure-resource-manager/management/overview.md#resource-groups).|
- |**Resource name**|This will be the name of the new Azure AI Video Indexer account. The name can contain letters, numbers and dashes with no spaces.|
- |**Region**|Select the Azure region that will be used to deploy the Azure AI Video Indexer account. The region matches the resource group region you chose. If you'd like to change the selected region, change the selected resource group or create a new one in the preferred region. [Azure region in which Azure AI Video Indexer is available](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services&regions=all)|
 |**Existing content**|If you have existing classic Video Indexer accounts, you can choose to have the videos, files, and data associated with an existing classic account connected to the new account. See the following article to learn more: [Connect the classic account to ARM](connect-classic-account-to-arm.md).|
- |**Available classic accounts**|Classic accounts available in the chosen subscription, resource group, and region.|
 |**Media Services account name**|Select a Media Services account that the new Azure AI Video Indexer account will use to process the videos. You can select an existing Media Services account or create a new one. The Media Services account must be in the same region you selected for your Azure AI Video Indexer account.|
- |**Storage account** (appears when creating a new AMS account)|Choose or create a new storage account in the same resource group.|
 |**Managed identity**|Select an existing user-assigned managed identity, a system-assigned managed identity, or both when creating the account. The new Azure AI Video Indexer account will use the selected managed identity to access the Media Services account associated with the account. If both user-assigned and system-assigned managed identities are selected during account creation, the **default** managed identity is the user-assigned managed identity. A contributor role should be assigned on the Media Services account.|
-1. Select **Review + create** at the bottom of the form.
-
-### Review deployed resource
-
-You can use the Azure portal to validate the Azure AI Video Indexer account and other resources that were created. After the deployment is finished, select **Go to resource** to see your new Azure AI Video Indexer account.
-
-## The Overview tab of the account
-
-This tab enables you to view details about your account.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/create-account-portal/avi-overview.png" alt-text="Screenshot showing the Overview tab.":::
-
-Select **Explore Azure AI Video Indexer's portal** to view your new account on the [Azure AI Video Indexer website](https://aka.ms/vi-portal-link).
-
-### Essential details
-
-|Name|Description|
-|||
-|Status| When the resource is connected properly, the status is **Active**. When there's a problem with the connection between the managed identity and the Media Services instance, the status is *Connection to Azure Media Services failed*. A Contributor role assignment on the Media Services account should be added to the proper managed identity.|
-|Managed identity |The name of the default managed identity, user-assigned or system-assigned. The default managed identity can be updated using the **Change** button.|
-
-## The Management tab of the account
-
-This tab contains sections for:
-
-* getting an access token for the account
-* managing identities
-
-### Management API
-
-Use the **Management API** tab to manually generate access tokens for the account.
-This token can be used to authenticate API calls for this account. Each token is valid for one hour.
-
-#### To get the access token
-
-Choose the following:
-
-* Permission type: **Contributor** or **Reader**
-* Scope: **Account**, **Project** or **Video**
-
- * For **Project** or **Video** you should also insert the matching ID.
-* Select **Generate**
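-
-As a hedged illustration, the sketch below uses a generated account-scope token to list videos through the REST API; the endpoint shape and the `results` field in the response are assumptions based on the public API reference, and the placeholders aren't real values.
-
-```python
-import requests
-
-LOCATION = "<azure-region-of-the-account>"
-ACCOUNT_ID = "<account-id>"
-ACCESS_TOKEN = "<token-generated-in-the-Management-API-tab>"  # valid for one hour
-
-# List the videos in the account; the token is passed as a query parameter.
-response = requests.get(
-    f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}/Videos",
-    params={"accessToken": ACCESS_TOKEN},
-)
-response.raise_for_status()
-for video in response.json().get("results", []):
-    print(video["id"], video.get("name"))
-```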
-
-### Identity
-
-Use the **Identity** tab to manually update the managed identities associated with the Azure AI Video Indexer resource.
-
-Add new managed identities, switch the default managed identity between user-assigned and system-assigned or set a new user-assigned managed identity.
-
-## Next steps
-
-Learn how to [Upload a video using C#](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/API-Samples/C%23/ArmBased/).
--
-<!-- links -->
-[docs-uami]: ../active-directory/managed-identities-azure-resources/overview.md
-[docs-ms]: /azure/media-services/latest/media-services-overview
-[docs-role-contributor]: ../../role-based-access-control/built-in-roles.md#contributor
-[docs-contributor-on-ms]: ./add-contributor-role-on-the-media-service.md
azure-video-indexer Customize Brands Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-brands-model-overview.md
- Title: Customize a Brands model in Azure AI Video Indexer - Azure
-description: This article gives an overview of what is a Brands model in Azure AI Video Indexer and how to customize it.
- Previously updated : 12/15/2019----
-# Customize a Brands model in Azure AI Video Indexer
--
-Azure AI Video Indexer supports brand detection from speech and visual text during indexing and reindexing of video and audio content. The brand detection feature identifies mentions of products, services, and companies suggested by Bing's brands database. For example, if Microsoft is mentioned in a video or audio content or if it shows up in visual text in a video, Azure AI Video Indexer detects it as a brand in the content. Brands are disambiguated from other terms using context.
-
-Brand detection is useful in a wide variety of business scenarios such as contents archive and discovery, contextual advertising, social media analysis, retail compete analysis, and many more. Azure AI Video Indexer brand detection enables you to index brand mentions in speech and visual text, using Bing's brands database as well as with customization by building a custom Brands model for each Azure AI Video Indexer account. The custom Brands model feature allows you to select whether or not Azure AI Video Indexer will detect brands from the Bing brands database, exclude certain brands from being detected (essentially creating a list of unapproved brands), and include brands that should be part of your model that might not be in Bing's brands database (essentially creating a list of approved brands). The custom Brands model that you create will only be available in the account in which you created the model.
-
-## Out of the box detection example
-
-In the "Microsoft Build 2017 Day 2" presentation, the brand "Microsoft Windows" appears multiple times. Sometimes in the transcript, sometimes as visual text and never as verbatim. Azure AI Video Indexer detects with high precision that a term is indeed brand based on the context, covering over 90k brands out of the box, and constantly updating. At 02:25, Azure AI Video Indexer detects the brand from speech and then again at 02:40 from visual text, which is part of the Windows logo.
-
-![Brands overview](./media/content-model-customization/brands-overview.png)
-
-Talking about windows in the context of construction won't cause the word "Windows" to be detected as a brand, and the same goes for Box, Apple, Fox, and so on, because advanced machine learning algorithms disambiguate based on context. Brand detection works for all supported languages.
-
-## Next steps
-
-To bring your own brands, check out these topics:
-
-[Customize Brands model using APIs](customize-brands-model-with-api.md)
-
-[Customize Brands model using the website](customize-brands-model-with-website.md)
azure-video-indexer Customize Brands Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-brands-model-with-api.md
- Title: Customize a Brands model with Azure AI Video Indexer API
-description: Learn how to customize a Brands model with the Azure AI Video Indexer API.
- Previously updated : 01/14/2020-----
-# Customize a Brands model with the Azure AI Video Indexer API
--
-Azure AI Video Indexer supports brand detection from speech and visual text during indexing and reindexing of video and audio content. The brand detection feature identifies mentions of products, services, and companies suggested by Bing's brands database. For example, if Microsoft is mentioned in video or audio content or if it shows up in visual text in a video, Azure AI Video Indexer detects it as a brand in the content. A custom Brands model allows you to exclude certain brands from being detected and include brands that should be part of your model that might not be in Bing's brands database. For more information, see [Overview](customize-brands-model-overview.md).
-
-> [!NOTE]
-> If your video was indexed prior to adding a brand, you need to reindex it.
-
-You can use the Azure AI Video Indexer APIs to create, use, and edit custom Brands models detected in a video, as described in this topic. You can also use the Azure AI Video Indexer website, as described in [Customize Brands model using the Azure AI Video Indexer website](customize-brands-model-with-website.md).
-
-## Create a Brand
-
-The [create a brand](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Brand) API creates a new custom brand and adds it to the custom Brands model for the specified account.
-
-> [!NOTE]
-> Setting `enabled` (in the body) to true puts the brand in the *Include* list for Azure AI Video Indexer to detect. Setting `enabled` to false puts the brand in the *Exclude* list, so Azure AI Video Indexer won't detect it.
-
-Some other parameters that you can set in the body:
-
-* The `referenceUrl` value can be any reference websites for the brand, such as a link to its Wikipedia page.
-* The `tags` value is a list of tags for the brand. This tag shows up in the brand's *Category* field in the Azure AI Video Indexer website. For example, the brand "Azure" can be tagged or categorized as "Cloud".
-
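-The following Python sketch illustrates one way to call this API with `requests`; the URL path and body fields are assumptions drawn from the linked API reference and the response example below, not a verified implementation.
-
-```python
-import requests
-
-LOCATION = "<location>"
-ACCOUNT_ID = "<account-id>"
-ACCESS_TOKEN = "<access-token>"
-
-# Illustrative brand payload; enabled=True puts the brand on the Include list.
-brand = {
-    "name": "Example",
-    "enabled": True,
-    "referenceUrl": "https://en.wikipedia.org/wiki/Example",
-    "description": "This is an example",
-    "tags": ["Tag1", "Tag2"],
-}
-
-response = requests.post(
-    f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}/Customization/Brands",
-    params={"accessToken": ACCESS_TOKEN},
-    json=brand,
-)
-response.raise_for_status()
-print(response.json()["id"])  # ID of the newly created brand
-```
-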
-### Response
-
-The response provides information on the brand that you just created following the format of the example below.
-
-```json
-{
- "referenceUrl": "https://en.wikipedia.org/wiki/Example",
- "id": 97974,
- "name": "Example",
- "accountId": "SampleAccountId",
- "lastModifierUserName": "SampleUserName",
- "created": "2018-04-25T14:59:52.7433333",
- "lastModified": "2018-04-25T14:59:52.7433333",
- "enabled": true,
- "description": "This is an example",
- "tags": [
- "Tag1",
- "Tag2"
- ]
-}
-```
-
-## Delete a Brand
-
-The [delete a brand](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Brand) API removes a brand from the custom Brands model for the specified account. The account is specified in the `accountId` parameter. Once called successfully, the brand will no longer be in the *Include* or *Exclude* brands lists.
-
-### Response
-
-There's no returned content when the brand is deleted successfully.
-
-## Get a specific Brand
-
-The [get a brand](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Brand) API lets you search for the details of a brand in the custom Brands model for the specified account using the brand ID.
-
-### Response
-
-The response provides information on the brand that you searched (using brand ID) following the format of the example below.
-
-```json
-{
- "referenceUrl": "https://en.wikipedia.org/wiki/Example",
- "id": 128846,
- "name": "Example",
- "accountId": "SampleAccountId",
- "lastModifierUserName": "SampleUserName",
- "created": "2018-01-06T13:51:38.3666667",
- "lastModified": "2018-01-11T13:51:38.3666667",
- "enabled": true,
- "description": "This is an example",
- "tags": [
- "Tag1",
- "Tag2"
- ]
-}
-```
-
-> [!NOTE]
-> `enabled` being set to `true` signifies that the brand is in the *Include* list for Azure AI Video Indexer to detect, and `enabled` being false signifies that the brand is in the *Exclude* list, so Azure AI Video Indexer won't detect it.
-
-## Update a specific brand
-
-The [update a brand](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Brand) API lets you search for the details of a brand in the custom Brands model for the specified account using the brand ID.
-
-### Response
-
-The response provides the updated information on the brand that you updated following the format of the example below.
-
-```json
-{
- "referenceUrl": null,
- "id": 97974,
- "name": "Example",
- "accountId": "SampleAccountId",
- "lastModifierUserName": "SampleUserName",
- "Created": "2018-04-25T14:59:52.7433333",
- "lastModified": "2018-04-25T15:37:50.67",
- "enabled": false,
- "description": "This is an update example",
- "tags": [
- "Tag1",
- "NewTag2"
- ]
-}
-```
-
-## Get all of the Brands
-
-The [get all brands](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Brands) API returns all of the brands in the custom Brands model for the specified account regardless of whether the brand is meant to be in the *Include* or *Exclude* brands list.
-
-### Response
-
-The response provides a list of all of the brands in your account and each of their details following the format of the example below.
-
-```json
-[
- {
- "ReferenceUrl": null,
- "id": 97974,
- "name": "Example",
- "accountId": "AccountId",
- "lastModifierUserName": "UserName",
- "Created": "2018-04-25T14:59:52.7433333",
- "LastModified": "2018-04-25T14:59:52.7433333",
- "enabled": true,
- "description": "This is an example",
- "tags": ["Tag1", "Tag2"]
- },
- {
- "ReferenceUrl": null,
- "id": 97975,
- "name": "Example2",
- "accountId": "AccountId",
- "lastModifierUserName": "UserName",
- "Created": "2018-04-26T14:59:52.7433333",
- "LastModified": "2018-04-26T14:59:52.7433333",
- "enabled": false,
- "description": "This is another example",
- "tags": ["Tag1", "Tag2"]
- },
-]
-```
-
-> [!NOTE]
-> The brand named *Example* is in the *Include* list for Azure AI Video Indexer to detect, and the brand named *Example2* is in the *Exclude* list, so Azure AI Video Indexer won't detect it.
-
-## Get Brands model settings
-
-The [get brands settings](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Brands) API returns the Brands model settings in the specified account. The Brands model settings represent whether detection from the Bing brands database is enabled or not. If Bing brands aren't enabled, Azure AI Video Indexer will only detect brands from the custom Brands model of the specified account.
-
-### Response
-
-The response shows whether Bing brands are enabled following the format of the example below.
-
-```json
-{
- "state": true,
- "useBuiltIn": true
-}
-```
-
-> [!NOTE]
-> `useBuiltIn` being set to true represents that Bing brands are enabled. If `useBuiltin` is false, Bing brands are disabled. The `state` value can be ignored because it has been deprecated.
-
-## Update Brands model settings
-
-The [update brands](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Brands-Model-Settings) API updates the Brands model settings in the specified account. The Brands model settings represent whether detection from the Bing brands database is enabled or not. If Bing brands aren't enabled, Azure AI Video Indexer will only detect brands from the custom Brands model of the specified account.
-
-The `useBuiltIn` flag set to true means that Bing brands are enabled. If `useBuiltin` is false, Bing brands are disabled.
-
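-A minimal sketch of such an update call is shown below; the URL path, HTTP method, and body shape are assumptions to be checked against the linked API reference.
-
-```python
-import requests
-
-LOCATION = "<location>"
-ACCOUNT_ID = "<account-id>"
-ACCESS_TOKEN = "<access-token>"
-
-# useBuiltIn=True keeps Bing brand detection on; False limits detection to your custom brands.
-response = requests.put(
-    f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}/Customization/BrandsModelSettings",
-    params={"accessToken": ACCESS_TOKEN},
-    json={"useBuiltIn": True},
-)
-response.raise_for_status()  # a successful update returns no content
-```
-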
-### Response
-
-There's no returned content when the Brands model setting is updated successfully.
-
-## Next steps
-
-[Customize Brands model using website](customize-brands-model-with-website.md)
azure-video-indexer Customize Brands Model With Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-brands-model-with-website.md
- Title: Customize a Brands model with the Azure AI Video Indexer website
-description: Learn how to customize a Brands model with the Azure AI Video Indexer website.
- Previously updated : 12/15/2019-----
-# Customize a Brands model with the Azure AI Video Indexer website
--
-Azure AI Video Indexer supports brand detection from speech and visual text during indexing and reindexing of video and audio content. The brand detection feature identifies mentions of products, services, and companies suggested by Bing's brands database. For example, if Microsoft is mentioned in video or audio content or if it shows up in visual text in a video, Azure AI Video Indexer detects it as a brand in the content.
-
-A custom Brands model allows you to:
-
-- select if you want Azure AI Video Indexer to detect brands from the Bing brands database.
-- select if you want Azure AI Video Indexer to exclude certain brands from being detected (essentially creating a blocklist of brands).
-- select if you want Azure AI Video Indexer to include brands that should be part of your model that might not be in Bing's brands database (essentially creating an accept list of brands).
-
-For a detailed overview, see this [Overview](customize-brands-model-overview.md).
-
-You can use the Azure AI Video Indexer website to create, use, and edit custom Brands models detected in a video, as described in this article. You can also use the API, as described in [Customize Brands model using APIs](customize-brands-model-with-api.md).
-
-> [!NOTE]
-> If your video was indexed prior to adding a brand, you need to reindex it. You'll find the **Re-index** item in the drop-down menu associated with the video. Select **Advanced options** -> **Brand categories** and check **All brands**.
-
-## Edit Brands model settings
-
-You have the option to set whether or not you want brands from the Bing brands database to be detected. To set this option, you need to edit the settings of your Brands model. Follow these steps:
-
-1. Go to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website and sign in.
-1. To customize a model in your account, select the **Content model customization** button on the left of the page.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/content-model-customization/content-model-customization.png" alt-text="Customize content model in Azure AI Video Indexer ":::
-1. To edit brands, select the **Brands** tab.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/customize-brand-model/customize-brand-model.png" alt-text="Screenshot shows the Brands tab of the Content model customization dialog box":::
-1. Check the **Show brands suggested by Bing** option if you want Azure AI Video Indexer to detect brands suggested by Bing; leave the option unchecked if you don't.
-
-## Include brands in the model
-
-The **Include brands** section represents custom brands that you want Azure AI Video Indexer to detect, even if they aren't suggested by Bing.
-
-### Add a brand to include list
-
-1. Select **+ Create new brand**.
-
- Provide a name (required), category (optional), description (optional), and reference URL (optional).
- The category field is meant to help you tag your brands. This field shows up as the brand's *tags* when using the Azure AI Video Indexer APIs. For example, the brand "Azure" can be tagged or categorized as "Cloud".
-
- The reference URL field can be any reference website for the brand (like a link to its Wikipedia page).
-
-2. Select **Save** and you'll see that the brand has been added to the **Include brands** list.
-
-### Edit a brand on the include list
-
-1. Select the pencil icon next to the brand that you want to edit.
-
- You can update the category, description, or reference URL of a brand. You can't change the name of a brand because names of brands are unique. If you need to change the brand name, delete the entire brand (see next section) and create a new brand with the new name.
-
-2. Select the **Update** button to update the brand with the new information.
-
-### Delete a brand on the include list
-
-1. Select the trash icon next to the brand that you want to delete.
-2. Select **Delete** and the brand will no longer appear in your *Include brands* list.
-
-## Exclude brands from the model
-
-The **Exclude brands** section represents the brands that you don't want Azure AI Video Indexer to detect.
-
-### Add a brand to exclude list
-
-1. Select **+ Create new brand.**
-
 Provide a name (required) and category (optional).
-
-2. Select **Save** and you'll see that the brand has been added to the *Exclude brands* list.
-
-### Edit a brand on the exclude list
-
-1. Select the pencil icon next to the brand that you want to edit.
-
- You can only update the category of a brand. You can't change the name of a brand because names of brands are unique. If you need to change the brand name, delete the entire brand (see next section) and create a new brand with the new name.
-
-2. Select the **Update** button to update the brand with the new information.
-
-### Delete a brand on the exclude list
-
-1. Select the trash icon next to the brand that you want to delete.
-2. Select **Delete** and the brand will no longer appear in your *Exclude brands* list.
-
-## Next steps
-
-[Customize Brands model using APIs](customize-brands-model-with-api.md)
azure-video-indexer Customize Content Models Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-content-models-overview.md
- Title: Customizing content models in Azure AI Video Indexer
-description: This article gives links to the conceptual articles that explain the benefits of each type of customization. This article also links to how-to guides that show how you can implement the customization of each model.
- Previously updated : 06/26/2019----
-# Customizing content models in Azure AI Video Indexer
---
-Azure AI Video Indexer allows you to customize some of its models to be adapted to your specific use case. These models include [brands](customize-brands-model-overview.md), [language](customize-language-model-overview.md), and [person](customize-person-model-overview.md). You can easily customize these models using the Azure AI Video Indexer website or API.
-
-This article gives links to articles that explain the benefits of each type of customization. The article also links to how-to guides that show how you can implement the customization of each model.
-
-## Brands model
-
-* [Customizing the brands model overview](customize-brands-model-overview.md)
-* [Customizing the brands model using the Azure AI Video Indexer website](customize-brands-model-with-website.md)
-* [Customizing the brands model using the Azure AI Video Indexer API](customize-brands-model-with-api.md)
-
-## Language model
-
-* [Customizing language models overview](customize-language-model-overview.md)
-* [Customizing language models using the Azure AI Video Indexer website](customize-language-model-with-website.md)
-* [Customizing language models using the Azure AI Video Indexer API](customize-language-model-with-api.md)
-
-## Person model
-
-* [Customizing person models overview](customize-person-model-overview.md)
-* [Customizing person models using the Azure AI Video Indexer website](customize-person-model-with-website.md)
-* [Customizing person models using the Azure AI Video Indexer API](customize-person-model-with-api.md)
-
-## Next steps
-
-[Azure AI Video Indexer overview](video-indexer-overview.md)
azure-video-indexer Customize Language Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-language-model-overview.md
- Title: Customize a Language model in Azure AI Video Indexer - Azure
-description: This article gives an overview of what is a Language model in Azure AI Video Indexer and how to customize it.
- Previously updated : 11/23/2022----
-# Customize a Language model with Azure AI Video Indexer
--
-Azure AI Video Indexer supports automatic speech recognition through integration with the Microsoft [Custom Speech Service](https://azure.microsoft.com/services/cognitive-services/custom-speech-service/). You can customize the Language model by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to. Once you train your model, new words appearing in the adaptation text will be recognized, assuming default pronunciation, and the Language model will learn new probable sequences of words. For the list of languages supported by Azure AI Video Indexer, see [supported languages](language-support.md).
-
-Let's take a word that is highly specific, like *"Kubernetes"* (in the context of Azure Kubernetes service), as an example. Since the word is new to Azure AI Video Indexer, it's recognized as *"communities"*. You need to train the model to recognize it as *"Kubernetes"*. In other cases, the words exist, but the Language model isn't expecting them to appear in a certain context. For example, *"container service"* isn't a 2-word sequence that a nonspecialized Language model would recognize as a specific set of words.
-
-There are two ways to customize a language model:
-
-- **Option 1**: Edit the transcript that was generated by Azure AI Video Indexer. By editing and correcting the transcript, you're training a language model to provide improved results in the future.
-- **Option 2**: Upload text file(s) to train the language model. The upload file can either contain a list of words as you would like them to appear in the Video Indexer transcript, or the relevant words included naturally in sentences and paragraphs. As better results are achieved with the latter approach, it's recommended for the upload file to contain full sentences or paragraphs related to your content.
-
-> [!Important]
-> Don't include in the upload file the words or sentences as they're currently (incorrectly) transcribed (for example, *"communities"*), because this will negate the intended impact.
-> Only include the words as you would like them to appear (for example, *"Kubernetes"*).
-
-You can use the Azure AI Video Indexer APIs or the website to create and edit custom Language models, as described in articles in the [Next steps](#next-steps) section of this article.
-
-## Best practices for custom Language models
-
-Azure AI Video Indexer learns based on probabilities of word combinations, so to learn best:
-
-* Give enough real examples of sentences as they would be spoken.
-* Put only one sentence per line, not more. Otherwise the system will learn probabilities across sentences.
-* It's okay to put one word as a sentence to boost the word against others, but the system learns best from full sentences.
-* When introducing new words or acronyms, if possible, give as many examples of usage in a full sentence to give as much context as possible to the system.
-* Try to put several adaptation options, and see how they work for you.
-* Avoid repetition of the exact same sentence multiple times. It may create bias against the rest of the input.
-* Avoid including uncommon symbols (~, #, @, %, &) as they'll get discarded. The sentences in which they appear will also get discarded.
-* Avoid overly large inputs, such as hundreds of thousands of sentences, because doing so will dilute the effect of boosting.
-
-## Next steps
-
-[Customize Language model using APIs](customize-language-model-with-api.md)
-
-[Customize Language model using the website](customize-language-model-with-website.md)
azure-video-indexer Customize Language Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-language-model-with-api.md
- Title: Customize a Language model with Azure AI Video Indexer API
-description: Learn how to customize a Language model with the Azure AI Video Indexer API.
- Previously updated : 02/04/2020-----
-# Customize a Language model with the Azure AI Video Indexer API
--
-Azure AI Video Indexer lets you create custom Language models to customize speech recognition by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to. Once you train your model, new words appearing in the adaptation text will be recognized.
-
-For a detailed overview and best practices for custom Language models, see [Customize a Language model with Azure AI Video Indexer](customize-language-model-overview.md).
-
-You can use the Azure AI Video Indexer APIs to create and edit custom Language models in your account, as described in this article. You can also use the website, as described in [Customize Language model using the Azure AI Video Indexer website](customize-language-model-with-website.md).
-
-## Create a Language model
-
-The [create a language model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Language-Model) API creates a new custom Language model in the specified account. You can upload files for the Language model in this call. Alternatively, you can create the Language model here and upload files for the model later by updating the Language model.
-
-> [!NOTE]
-> You must still train the model with its enabled files for the model to learn the contents of its files. Directions for training a language model are in the next section.
-
-To upload files to be added to the Language model, upload them in the request body using FormData, in addition to providing values for the required parameters above. There are two ways to do this (see the sketch after this list):
-
-* Key is the file name and value is the txt file.
-* Key is the file name and value is a URL to txt file.
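-
-Here's a minimal sketch of the multipart call with `requests`, as referenced above; the URL path and the `modelName` and `language` query parameter names are assumptions taken from the linked API reference, and the file name is hypothetical.
-
-```python
-import requests
-
-LOCATION = "<location>"
-ACCOUNT_ID = "<account-id>"
-ACCESS_TOKEN = "<access-token>"
-
-# Each form-data entry: key = the file name to show in the model, value = the txt file itself.
-with open("vocabulary.txt", "rb") as adaptation_text:
-    response = requests.post(
-        f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}/Customization/Language",
-        params={
-            "accessToken": ACCESS_TOKEN,
-            "modelName": "TestModel",
-            "language": "en-US",
-        },
-        files={"vocabulary": ("vocabulary.txt", adaptation_text, "text/plain")},
-    )
-response.raise_for_status()
-print(response.json()["id"])  # ID of the new Language model
-```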
-
-### Response
-
-The response provides metadata on the newly created Language model along with metadata on each of the model's files following the format of this example JSON output:
-
-```json
-{
- "id": "dfae5745-6f1d-4edd-b224-42e1ab57a891",
- "name": "TestModel",
- "language": "En-US",
- "state": "None",
- "languageModelId": "00000000-0000-0000-0000-000000000000",
- "files": [
- {
- "id": "25be7c0e-b6a6-4f48-b981-497e920a0bc9",
- "name": "hellofile",
- "enable": true,
- "creator": "John Doe",
- "creationTime": "2018-04-28T11:55:34.6733333"
- },
- {
- "id": "33025f5b-2354-485e-a50c-4e6b76345ca7",
- "name": "worldfile",
- "enable": true,
- "creator": "John Doe",
- "creationTime": "2018-04-28T11:55:34.86"
- }
- ]
-}
-
-```
-
-## Train a Language model
-
-The [train a language model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Train-Language-Model) API trains a custom Language model in the specified account with the contents in the files that were uploaded to and enabled in the language model.
-
-> [!NOTE]
-> You must first create the Language model and upload its files. You can upload files when creating the Language model or by updating the Language model.
-
-### Response
-
-The response provides metadata on the newly trained Language model along with metadata on each of the model's files following the format of this example JSON output:
-
-```json
-{
- "id": "41464adf-e432-42b1-8e09-f52905d7e29d",
- "name": "TestModel",
- "language": "En-US",
- "state": "Waiting",
- "languageModelId": "531e5745-681d-4e1d-b124-12e5ab57a891",
- "files": [
- {
- "id": "84fcf1ac-1952-48f3-b372-18f768eedf83",
- "name": "RenamedFile",
- "enable": false,
- "creator": "John Doe",
- "creationTime": "2018-04-27T20:10:10.5233333"
- },
- {
- "id": "9ac35b4b-1381-49c4-9fe4-8234bfdd0f50",
- "name": "hellofile",
- "enable": true,
- "creator": "John Doe",
- "creationTime": "2018-04-27T20:10:10.68"
- }
- ]
-}
-```
-
-The returned `id` is a unique ID used to distinguish between language models, while `languageModelId` is used both for [uploading a video to index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) and [reindexing a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) APIs (also known as `linguisticModelId` in Azure AI Video Indexer upload/reindex APIs).
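-
-For instance, a hedged sketch of passing that ID when uploading a video might look like this; the endpoint shape and parameter names follow the linked upload API reference and are placeholders rather than verified values.
-
-```python
-import requests
-
-LANGUAGE_MODEL_ID = "<languageModelId-from-the-train-response>"
-
-response = requests.post(
-    "https://api.videoindexer.ai/<location>/Accounts/<account-id>/Videos",
-    params={
-        "accessToken": "<access-token>",
-        "name": "my-video",
-        "videoUrl": "<read-only-sas-url>",
-        "linguisticModelId": LANGUAGE_MODEL_ID,  # transcribe with the custom Language model
-    },
-)
-response.raise_for_status()
-```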
-
-## Delete a Language model
-
-The [delete a language model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Language-Model) API deletes a custom Language model from the specified account. Any video that was using the deleted Language model keeps the same index until you reindex the video. If you reindex the video, you can assign a new Language model to the video. Otherwise, Azure AI Video Indexer uses its default model to reindex the video.
-
-### Response
-
-There's no returned content when the Language model is deleted successfully.
-
-## Update a Language model
-
-The [update a Language model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Language-Model) API updates a custom Language model in the specified account.
-
-> [!NOTE]
-> You must have already created the Language model. You can use this call to enable or disable all files under the model, update the name of the Language model, and upload files to be added to the language model.
-
-To upload files to be added to the Language model, upload them in the request body using FormData, in addition to providing values for the required parameters above. There are two ways to do this:
-
-* Key is the file name and value is the txt file.
-* Key is the file name and value is a URL to txt file.
-
-### Response
-
-The response provides metadata on the newly trained Language model along with metadata on each of the model's files following the format of this example JSON output:
-
-```json
-{
- "id": "41464adf-e432-42b1-8e09-f52905d7e29d",
- "name": "TestModel",
- "language": "En-US",
- "state": "Waiting",
- "languageModelId": "531e5745-681d-4e1d-b124-12e5ab57a891",
- "files": [
- {
- "id": "84fcf1ac-1952-48f3-b372-18f768eedf83",
- "name": "RenamedFile",
- "enable": true,
- "creator": "John Doe",
- "creationTime": "2018-04-27T20:10:10.5233333"
- },
- {
- "id": "9ac35b4b-1381-49c4-9fe4-8234bfdd0f50",
- "name": "hellofile",
- "enable": true,
- "creator": "John Doe",
- "creationTime": "2018-04-27T20:10:10.68"
- }
- ]
-}
-```
-
-Use the `id` of the files returned in the response to download the contents of the file.
-
-## Update a file from a Language model
-
-The [update a file](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Language-Model-file) allows you to update the name and `enable` state of a file in a custom Language model in the specified account.
-
-### Response
-
-The response provides metadata on the file that you updated following the format of the example JSON output below.
-
-```json
-{
- "id": "84fcf1ac-1952-48f3-b372-18f768eedf83",
- "name": "RenamedFile",
- "enable": false,
- "creator": "John Doe",
- "creationTime": "2018-04-27T20:10:10.5233333"
-}
-```
-
-Use the `id` of the file returned in the response to download the contents of the file.
-
-## Get a specific Language model
-
-The [get](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Language-Model) API returns information on the specified Language model in the specified account such as language and the files that are in the Language model.
-
-### Response
-
-The response provides metadata on the specified Language model along with metadata on each of the model's files following the format of this example JSON output:
-
-```json
-{
- "id": "dfae5745-6f1d-4edd-b224-42e1ab57a891",
- "name": "TestModel",
- "language": "En-US",
- "state": "None",
- "languageModelId": "00000000-0000-0000-0000-000000000000",
- "files": [
- {
- "id": "25be7c0e-b6a6-4f48-b981-497e920a0bc9",
- "name": "hellofile",
- "enable": true,
- "creator": "John Doe",
- "creationTime": "2018-04-28T11:55:34.6733333"
- },
- {
- "id": "33025f5b-2354-485e-a50c-4e6b76345ca7",
- "name": "worldfile",
- "enable": true,
- "creator": "John Doe",
- "creationTime": "2018-04-28T11:55:34.86"
- }
- ]
-}
-```
-
-Use the `id` of the file returned in the response to download the contents of the file.
-
-## Get all the Language models
-
-The [get all](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Language-Models) API returns all of the custom Language models in the specified account in a list.
-
-### Response
-
-The response provides a list of all of the Language models in your account and each of their metadata and files following the format of this example JSON output:
-
-```json
-[
- {
- "id": "dfae5745-6f1d-4edd-b224-42e1ab57a891",
- "name": "TestModel",
- "language": "En-US",
- "state": "None",
- "languageModelId": "00000000-0000-0000-0000-000000000000",
- "files": [
- {
- "id": "25be7c0e-b6a6-4f48-b981-497e920a0bc9",
- "name": "hellofile",
- "enable": true,
- "creator": "John Doe",
- "creationTime": "2018-04-28T11:55:34.6733333"
- },
- {
- "id": "33025f5b-2354-485e-a50c-4e6b76345ca7",
- "name": "worldfile",
- "enable": true,
- "creator": "John Doe",
- "creationTime": "2018-04-28T11:55:34.86"
- }
- ]
- },
- {
- "id": "dfae5745-6f1d-4edd-b224-42e1ab57a892",
- "name": "AnotherTestModel",
- "language": "En-US",
- "state": "None",
- "languageModelId": "00000000-0000-0000-0000-000000000001",
- "files": []
- }
-]
-```
-
-## Delete a file from a Language model
-
-The [delete](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Language-Model-File) API deletes the specified file from the specified Language model in the specified account.
-
-### Response
-
-There's no returned content when the file is deleted from the Language model successfully.
-
-## Get metadata on a file from a Language model
-
-The [get metadata of a file](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Language-Model-File-Data) API returns the contents of and metadata on the specified file from the chosen Language model in your account.
-
-### Response
-
-The response provides the contents and metadata of the file in JSON format, similar to this example:
-
-```json
-{
- "content": "hello\r\nworld",
- "id": "84fcf1ac-1952-48f3-b372-18f768eedf83",
- "name": "Hello",
- "enable": true,
- "creator": "John Doe",
- "creationTime": "2018-04-27T20:10:10.5233333"
-}
-```
-
-> [!NOTE]
-> The contents of this example file are the words "hello" and "world" in two separate lines.
-
-## Download a file from a Language model
-
-The [download a file](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Download-Language-Model-File-Content) API downloads a text file containing the contents of the specified file from the specified Language model in the specified account. This text file should match the contents of the text file that was originally uploaded.
-
-### Response
-
-The response is a download of a text file containing the contents of the specified file.
-
-## Next steps
-
-[Customize Language model using website](customize-language-model-with-website.md)
azure-video-indexer Customize Language Model With Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-language-model-with-website.md
- Title: Customize Language model with Azure AI Video Indexer website
-description: Learn how to customize a Language model with the Azure AI Video Indexer website.
- Previously updated : 08/10/2020-----
-# Customize a Language model with the Azure AI Video Indexer website
--
-Azure AI Video Indexer lets you create custom Language models to customize speech recognition by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to. Once you train your model, new words appearing in the adaptation text will be recognized.
-
-For a detailed overview and best practices for custom language models, see [Customize a Language model with Azure AI Video Indexer](customize-language-model-overview.md).
-
-You can use the Azure AI Video Indexer website to create and edit custom Language models in your account, as described in this topic. You can also use the API, as described in [Customize Language model using APIs](customize-language-model-with-api.md).
-
-## Create a Language model
-
-1. Go to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website and sign in.
-1. To customize a model in your account, select the **Content model customization** button on the left of the page.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/customize-language-model/model-customization.png" alt-text="Customize content model in Azure AI Video Indexer ":::
-1. Select the **Language** tab.
-
- You see a list of supported languages.
-1. Under the language that you want, select **Add model**.
-1. Type in the name for the Language model and hit enter.
-
- This step creates the model and gives the option to upload text files to the model.
-1. To add a text file, select **Add file**. Your file explorer will open.
-1. Navigate to and select the text file. You can add multiple text files to a Language model.
-
- You can also add a text file by selecting the **...** button on the right side of the Language model and selecting **Add file**.
-1. Once you're done uploading the text files, select the green **Train** option.
-
-The training process can take a few minutes. Once the training is done, you see **Trained** next to the model. You can preview, download, and delete the file from the model.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/customize-language-model/customize-language-model.png" alt-text="Train the model":::
-
-### Using a Language model on a new video
-
-To use your Language model on a new video, do one of the following actions:
-
-* Select the **Upload** button on the top of the page.
-
- ![Upload button Azure AI Video Indexer](./media/customize-language-model/upload.png)
-* Drop your audio or video file or browse for your file.
-
-You're given the option to select the **Video source language**. Select the drop-down and select a Language model that you created from the list. It should say the language of your Language model and the name that you gave it in parentheses. For example:
-
![Choose video source language: Reindex a video with Azure AI Video Indexer](./media/customize-language-model/reindex.png)
-
-Select the **Upload** option at the bottom of the page, and your new video will be indexed using your Language model.
-
-### Using a Language model to reindex
-
-To use your Language model to reindex a video in your collection, follow these steps:
-
-1. Sign in to the [Azure AI Video Indexer](https://www.videoindexer.ai/) home page.
-1. Click on the **...** button on the video and select **Re-index**.
-1. You're given the option to select the **Video source language** to reindex your video with. Select the drop-down and select a Language model that you created from the list. It should say the language of your language model and the name that you gave it in parentheses.
-1. Select the **Re-index** button and your video will be reindexed using your Language model.
-
-## Edit a Language model
-
-You can edit a Language model by changing its name, adding files to it, and deleting files from it.
-
-If you add or delete files from the Language model, you'll have to train the model again by selecting the green **Train** option.
-
-### Rename the Language model
-
-You can change the name of the Language model by selecting the ellipsis (**...**) button on the right side of the Language model and selecting **Rename**.
-
-Type in the new name and hit enter.
-
-### Add files
-
-To add a text file, select **Add file**. Your file explorer will open.
-
-Navigate to and select the text file. You can add multiple text files to a Language model.
-
-You can also add a text file by selecting the ellipsis (**...**) button on the right side of the Language model and selecting **Add file**.
-
-### Delete files
-
-To delete a file from the Language model, select the ellipsis (**...**) button on the right side of the text file and select **Delete**. A new window pops up telling you that the deletion can't be undone. Select the **Delete** option in the new window.
-
-This action removes the file completely from the Language model.
-
-## Delete a Language model
-
-To delete a Language model from your account, select the ellipsis (**...**) button on the right side of the Language model and select **Delete**.
-
-A new window pops up telling you that the deletion can't be undone. Select the **Delete** option in the new window.
-
-This action removes the Language model completely from your account. Any video that was using the deleted Language model will keep the same index until you reindex the video. If you reindex the video, you can assign a new Language model to the video. Otherwise, Azure AI Video Indexer will use its default model to reindex the video.
-
-## Customize Language models by correcting transcripts
-
-Azure AI Video Indexer supports automatic customization of Language models based on the actual corrections users make to the transcriptions of their videos.
-
-1. To make corrections to a transcript, open up the video that you want to edit from your Account Videos. Select the **Timeline** tab.
-
-    ![Customize language model timeline tab - Azure AI Video Indexer](./media/customize-language-model/timeline.png)
-
-1. Select the pencil icon to edit the transcript of your video.
-
-    ![Customize language model edit transcription - Azure AI Video Indexer](./media/customize-language-model/edits.png)
-
- Azure AI Video Indexer captures all lines that are corrected by you in the transcription of your video and adds them automatically to a text file called "From transcript edits". These edits are used to retrain the specific Language model that was used to index this video.
-
- The edits that were done in the [widget's](video-indexer-embed-widgets.md) timeline are also included.
-
- If you didn't specify a Language model when indexing this video, all edits for this video will be stored in a default Language model called "Account adaptations" within the detected language of the video.
-
-    If multiple edits have been made to the same line, only the last version of the corrected line will be used for updating the Language model.
-
- > [!NOTE]
- > Only textual corrections are used for the customization. Corrections that don't involve actual words (for example, punctuation marks or spaces) aren't included.
-
-1. You'll see transcript corrections show up in the Language tab of the Content model customization page.
-
- To look at the "From transcript edits" file for each of your Language models, select it to open it.
-
-    ![From transcript edits - Azure AI Video Indexer](./media/customize-language-model/from-transcript-edits.png)
-
-## Next steps
-
-[Customize language model using APIs](customize-language-model-with-api.md)
azure-video-indexer Customize Person Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-person-model-overview.md
- Title: Customize a Person model in Azure AI Video Indexer - Azure
-description: This article gives an overview of what a Person model is in Azure AI Video Indexer and how to customize it.
- Previously updated : 05/15/2019----
-# Customize a Person model in Azure AI Video Indexer
---
-Azure AI Video Indexer supports celebrity recognition in your videos. The celebrity recognition feature covers approximately one million faces based on commonly requested data sources such as IMDB, Wikipedia, and top LinkedIn influencers. Faces that aren't recognized by Azure AI Video Indexer are still detected but are left unnamed. Customers can build custom Person models and enable Azure AI Video Indexer to recognize faces that aren't recognized by default. Customers can build these Person models by pairing a person's name with image files of the person's face.
-
-If your account caters to different use-cases, you can benefit from being able to create multiple Person models per account. For example, if the content in your account is meant to be sorted into different channels, you might want to create a separate Person model for each channel.
-
-> [!NOTE]
-> Each Person model supports up to 1 million people and each account has a limit of 50 Person models.
-
-Once a model is created, you can use it by providing the model ID of a specific Person model when uploading/indexing or reindexing a video. Training a new face for a video updates the specific custom model that the video was associated with.
-
-If you don't need the multiple Person model support, don't assign a Person model ID to your video when uploading/indexing or reindexing. In this case, Azure AI Video Indexer will use the default Person model in your account.
-
-You can use the Azure AI Video Indexer website to edit faces that were detected in a video and to manage multiple custom Person models in your account, as described in the [Customize a Person model using a website](customize-person-model-with-website.md) article. You can also use the API, as described in [Customize a Person model using APIs](customize-person-model-with-api.md).
azure-video-indexer Customize Person Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-person-model-with-api.md
- Title: Customize a Person model with Azure AI Video Indexer API
-description: Learn how to customize a Person model with the Azure AI Video Indexer API.
- Previously updated : 01/14/2020-----
-# Customize a Person model with the Azure AI Video Indexer API
---
-Azure AI Video Indexer supports face detection and celebrity recognition for video content. The celebrity recognition feature covers about one million faces based on commonly requested data sources such as IMDB, Wikipedia, and top LinkedIn influencers. Faces that aren't recognized by the celebrity recognition feature are detected but left unnamed. After you upload your video to Azure AI Video Indexer and get results back, you can go back and name the faces that weren't recognized. Once you label a face with a name, the face and name get added to your account's Person model. Azure AI Video Indexer will then recognize this face in your future videos and past videos.
-
-You can use the Azure AI Video Indexer API to edit faces that were detected in a video, as described in this topic. You can also use the Azure AI Video Indexer website, as described in [Customize Person model using the Azure AI Video Indexer website](customize-person-model-with-website.md).
-
-## Managing multiple Person models
-
-Azure AI Video Indexer supports multiple Person models per account. This feature is currently available only through the Azure AI Video Indexer APIs.
-
-If your account caters to different use-case scenarios, you might want to create multiple Person models per account. For example, if your content is related to sports, you can then create a separate Person model for each sport (football, basketball, soccer, and so on).
-
-Once a model is created, you can use it by providing the model ID of a specific Person model when uploading/indexing or reindexing a video. Training a new face for a video updates the specific custom model that the video was associated with.
-
-Each account has a limit of 50 Person models. If you don't need the multiple Person model support, don't assign a Person model ID to your video when uploading/indexing or reindexing. In this case, Azure AI Video Indexer uses the default custom Person model in your account.
-
-## Create a new Person model
-
-To create a new Person model in the specified account, use the [create a person model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Person-Model) API.
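-
-If you call the REST API directly, the request is a single POST with the model name passed as a query string parameter. The following PowerShell sketch is illustrative only; the endpoint route, query parameter names, and placeholder values are assumptions, so confirm them against the operation page linked above.
-
-```powershell
-# Illustrative sketch only: create a Person model with a direct REST call.
-# The route and query parameters are assumptions; confirm them on the Create-Person-Model operation page.
-$location    = "trial"                  # placeholder Video Indexer location
-$accountId   = "<your-account-id>"      # placeholder account ID
-$accessToken = "<your-access-token>"    # placeholder account access token
-$modelName   = [uri]::EscapeDataString("Example Person Model")
-
-$uri = "https://api.videoindexer.ai/$location/Accounts/$accountId/Customization/PersonModels" +
-       "?name=$modelName&accessToken=$accessToken"
-
-# The response body contains the generated Person model id and name.
-$personModel = Invoke-RestMethod -Method Post -Uri $uri
-$personModel.id
-```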
-
-The response provides the name and generated model ID of the Person model that you just created following the format of the example below.
-
-```json
-{
- "id": "227654b4-912c-4b92-ba4f-641d488e3720",
- "name": "Example Person Model"
-}
-```
-
-You then use the **id** value for the **personModelId** parameter when [uploading a video to index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) or [reindexing a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video).
-
-## Delete a Person model
-
-To delete a custom Person model from the specified account, use the [delete a person model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Person-Model) API.
-
-Once the Person model is deleted successfully, the index of your current videos that were using the deleted model will remain unchanged until you reindex them. Upon reindexing, the faces that were named in the deleted model won't be recognized by Azure AI Video Indexer in your current videos that were indexed using that model but the faces will still be detected. Your current videos that were indexed using the deleted model will now use your account's default Person model. If faces from the deleted model are also named in your account's default model, those faces will continue to be recognized in the videos.
-
-There's no returned content when the Person model is deleted successfully.
-
-## Get all Person models
-
-To get all Person models in the specified account, use the [get a person model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Person-Models) API.
-
-The response provides a list of all of the Person models in your account (including the default Person model in the specified account) and each of their names and IDs following the format of the example below.
-
-```json
-[
- {
- "id": "59f9c326-b141-4515-abe7-7d822518571f",
- "name": "Default"
- },
- {
- "id": "9ef2632d-310a-4510-92e1-cc70ae0230d4",
- "name": "Test"
- }
-]
-```
-
-You can choose which model you want to use for a video by using the `id` value of the Person model for the `personModelId` parameter when [uploading a video to index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) or [reindexing a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video).
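-
-As a rough illustration, the following PowerShell sketch shows how the model ID might be passed as the `personModelId` query parameter on an upload call. The route, parameter names, and placeholder values are assumptions; the upload-video operation page linked above is the authoritative reference.
-
-```powershell
-# Illustrative sketch only: upload a video and associate it with a specific Person model.
-# Route and query parameters are assumptions; confirm them on the Upload-Video operation page.
-$location      = "trial"
-$accountId     = "<your-account-id>"
-$accessToken   = "<your-access-token>"
-$personModelId = "9ef2632d-310a-4510-92e1-cc70ae0230d4"   # id of your Person model
-$videoUrl      = [uri]::EscapeDataString("https://contoso.com/videos/interview.mp4")  # placeholder
-
-$uri = "https://api.videoindexer.ai/$location/Accounts/$accountId/Videos" +
-       "?name=Interview&videoUrl=$videoUrl&personModelId=$personModelId&accessToken=$accessToken"
-
-Invoke-RestMethod -Method Post -Uri $uri
-```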
-
-## Update a face
-
-This command allows you to update a face in your video with a name using the ID of the video and ID of the face. This action then updates the Person model that the video was associated with upon uploading/indexing or reindexing. If no Person model was assigned, it updates the account's default Person model.
-
-The system then recognizes the occurrences of the same face in your other current videos that share the same Person model. Recognition of the face in your other current videos might take some time to take effect as this is a batch process.
-
-You can update a face that Azure AI Video Indexer recognized as a celebrity with a new name. The new name that you give will take precedence over the built-in celebrity recognition.
-
-To update the face, use the [update a video face](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Face) API.
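-
-For illustration, a direct REST call that renames a face might look like the following PowerShell sketch. The route and the `newName` parameter are assumptions based on the operation's purpose; verify them on the update-video-face operation page before use.
-
-```powershell
-# Illustrative sketch only: rename a detected face in a video's index.
-# Route and parameters are assumptions; confirm them on the Update-Video-Face operation page.
-$location    = "trial"
-$accountId   = "<your-account-id>"
-$accessToken = "<your-access-token>"
-$videoId     = "<video-id>"
-$faceId      = 1023                                   # numeric face id from the video index
-$newName     = [uri]::EscapeDataString("Jane Doe")
-
-$uri = "https://api.videoindexer.ai/$location/Accounts/$accountId/Videos/$videoId/Index/Faces/$faceId" +
-       "?newName=$newName&accessToken=$accessToken"
-
-Invoke-RestMethod -Method Put -Uri $uri
-```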
-
-Names are unique for Person models, so if you give two different faces in the same Person model the same `name` parameter value, Azure AI Video Indexer views the faces as the same person and converges them once you reindex your video.
-
-## Next steps
-
-[Customize Person model using the Azure AI Video Indexer website](customize-person-model-with-website.md)
azure-video-indexer Customize Person Model With Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-person-model-with-website.md
- Title: Customize a Person model with Azure AI Video Indexer website
-description: Learn how to customize a Person model with the Azure AI Video Indexer website.
- Previously updated : 05/31/2022----
-# Customize a Person model with the Azure AI Video Indexer website
---
-Azure AI Video Indexer supports celebrity recognition for video content. The celebrity recognition feature covers approximately one million faces based on commonly requested data sources such as IMDB, Wikipedia, and top LinkedIn influencers. For a detailed overview, see [Customize a Person model in Azure AI Video Indexer](customize-person-model-overview.md).
-
-You can use the Azure AI Video Indexer website to edit faces that were detected in a video, as described in this article. You can also use the API, as described in [Customize a Person model using APIs](customize-person-model-with-api.md).
-
-## Central management of Person models in your account
-
-1. To view, edit, and delete the Person models in your account, browse to the Azure AI Video Indexer website and sign in.
-1. Select the content model customization button on the left of the page.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/content-model-customization/content-model-customization.png" alt-text="Customize content model":::
-1. Select the People tab.
-
- You'll see the Default Person model in your account. The Default Person model holds any faces you may have edited or changed in the insights of your videos for which you didn't specify a custom Person model during indexing.
-
- If you created other Person models, they'll also be listed on this page.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/customize-face-model/content-model-customization-people-tab.png" alt-text="Customize people":::
-
-## Create a new Person model
-
-1. Select the **+ Add model** button on the right.
-1. Enter the name of the model and select the check button to save the new model. You can now add new people and faces to the new Person model.
-1. Select the list menu button and choose **+ Add person**.
-
- > [!div class="mx-imgBorder"]
-    > :::image type="content" source="./media/customize-face-model/add-new-person.png" alt-text="Add a person":::
-
-## Add a new person to a Person model
-
-> [!NOTE]
-> Azure AI Video Indexer allows you to add multiple people with the same name in a Person model. However, it's recommended you give unique names to each person in your model for usability and clarity.
-
-1. To add a new face to a Person model, select the list menu button next to the Person model that you want to add the face to.
-1. Select **+ Add person** from the menu.
-
- A pop-up will prompt you to fill out the Person's details. Type in the name of the person and select the check button.
-
-    You can then choose the face images from your file explorer or drag and drop them. Azure AI Video Indexer accepts all standard image file types (for example, JPG and PNG).
-
- Azure AI Video Indexer can detect occurrences of this person in the future videos that you index and the current videos that you had already indexed, using the Person model to which you added this new face. Recognition of the person in your current videos might take some time to take effect, as this is a batch process.
-
-## Rename a Person model
-
-You can rename any Person model in your account including the Default Person model. Even if you rename your default Person model, it will still serve as the Default person model in your account.
-
-1. Select the list menu button next to the Person model that you want to rename.
-1. Select **Rename** from the menu.
-1. Select the current name of the model and type in your new name.
-1. Select the check button for your model to be renamed.
-
-## Delete a Person model
-
-You can delete any Person model that you created in your account. However, you can't delete your Default person model.
-
-1. Select **Delete** from the menu.
-
- A pop-up will show up and notify you that this action will delete the Person model and all of the people and files that it contains. This action can't be undone.
-1. If you're sure, select delete again.
-
-> [!NOTE]
-> For existing videos that were indexed using this (now deleted) Person model, you can't update the names of the faces that appear in the video. You'll be able to edit the names of faces in these videos only after you reindex them using another Person model. If you reindex without specifying a Person model, the default model will be used.
-
-## Manage existing people in a Person model
-
-To look at the contents of any of your Person models, select the arrow next to the name of the Person model. Then you can view all of the people in that particular Person model. If you select the list menu button next to each of the people, you see manage, rename, and delete options.
-
-![Screenshot shows a contextual menu with options to Manage, Rename, and Delete.](./media/customize-face-model/manage-people.png)
-
-### Rename a person
-
-1. To rename a person in your Person model, select the list menu button and choose **Rename** from the list menu.
-1. Select the current name of the person and type in your new name.
-1. Select the check button, and the person will be renamed.
-
-### Delete a person
-
-1. To delete a person from your Person model, select the list menu button and choose **Delete** from the list menu.
-1. A pop-up tells you that this action will delete the person and that this action can't be undone.
-1. Select **Delete** again and this will remove the person from the Person model.
-
-### Check if a person already exists
-
-You can use the search to check if a person already exists in the model.
-
-### Manage a person
-
-If you select **Manage**, you see the **Person's details** window with all the faces that this Person model is being trained from. These faces come from occurrences of that person in videos that use this Person model or from images that you've manually uploaded.
-
-> [!TIP]
-> You can get to the **Person's details** window by clicking on the person's name or by clicking **Manage**, as shown above.
-
-#### Add a face
-
-You can add more faces to the person by selecting **Add images**.
-
-#### Delete a face
-
-Select the image you wish to delete and click **Delete**.
-
-#### Rename and delete a person
-
-You can use the manage pane to rename the person and to delete the person from the Person model.
-
-## Use a Person model to index a video
-
-You can use a Person model to index your new video by assigning the Person model during the upload of the video.
-
-To use your Person model on a new video, do the following steps:
-
-1. Select the **Upload** button on the right of the page.
-1. Drop your video file or browse for your file.
-1. Select the **Advanced options** arrow.
-1. Select the drop-down and select the Person model that you created.
-1. Select the **Upload** option at the bottom of the page, and your new video will be indexed using your Person model.
-
-If you don't specify a Person model during the upload, Azure AI Video Indexer will index the video using the Default Person model in your account.
-
-## Use a Person model to reindex a video
-
-To use a Person model to reindex a video in your collection, go to your account videos on the Azure AI Video Indexer home page, and hover over the name of the video that you want to reindex.
-
-You see options to edit, delete, and reindex your video.
-
-1. Select the option to reindex your video.
-
- ![Screenshot shows Account videos and the option to reindex your video.](./media/customize-face-model/reindex.png)
-
- You can now select the Person model to reindex your video with.
-1. Select the drop-down and select the Person model that you want to use.
-1. Select the **Reindex** button and your video will be reindexed using your Person model.
-
-Any new edits that you make to the faces detected and recognized in the video that you just reindexed will be saved in the Person model that you used to reindex the video.
-
-## Managing people in your videos
-
-You can manage the faces that are detected and people that are recognized in the videos that you index by editing and deleting faces.
-
-Deleting a face removes a specific face from the insights of the video.
-
-Editing a face renames a face that's detected and possibly recognized in your video. When you edit a face in your video, that name is saved as a person entry in the Person model that was assigned to the video during upload and indexing.
-
-If you don't assign a Person model to the video during upload, your edit is saved in your account's Default person model.
-
-### Edit a face
-
-> [!NOTE]
-> If a Person model has two or more different people with the same name, you won't be able to tag that name within the videos that use that Person model. You'll only be able to make changes to people that share that name in the People tab of the content model customization page in Azure AI Video Indexer. For this reason, it's recommended that you give unique names to each person in your Person model.
-
-1. Browse to the Azure AI Video Indexer website and sign in.
-1. Search for a video you want to view and edit in your account.
-1. To edit a face in your video, go to the Insights tab and select the pencil icon on the top-right corner of the window.
-
- ![Screenshot shows a video with an unknown face to select.](./media/customize-face-model/edit-face.png)
-
-1. Select any of the detected faces and change their names from "Unknown #X" (or the name that was previously assigned to the face).
-1. After typing in the new name, select the check icon next to the new name. This action saves the new name and recognizes and names all occurrences of this face in your other current videos and in the future videos that you upload. Recognition of the face in your other current videos might take some time to take effect as this is a batch process.
-
-If you name a face with the name of an existing person in the Person model that the video is using, the detected face images from this video of that person will merge with what already exists in the model. If you name a face with a new name, a new Person entry is created in the Person model that the video is using.
-
-### Delete a face
-
-To delete a detected face in your video, go to the Insights pane and select the pencil icon in the top-right corner of the pane. Select the **Delete** option underneath the name of the face. This action removes the detected face from the video. The person's face will still be detected in the other videos in which it appears, but you can delete the face from those videos as well after they've been indexed.
-
-The person, if they had been named, will also continue to exist in the Person model that was used to index the video from which you deleted the face unless you specifically delete the person from the Person model.
-
-## Optimize the ability of your model to recognize a person
-
-To optimize your model's ability to recognize the person, upload as many different images as possible, from different angles. To get optimal results, use high resolution images.
-
-## Next steps
-
-[Customize Person model using APIs](customize-person-model-with-api.md)
azure-video-indexer Customize Speech Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-speech-model-overview.md
- Title: Customize a speech model in Azure AI Video Indexer
-description: This article gives an overview of what a speech model is in Azure AI Video Indexer.
- Previously updated : 03/06/2023----
-# Customize a speech model
---
-Through Azure AI Video Indexer integration with [Azure AI Speech services](../ai-services/speech-service/captioning-concepts.md), a Universal Language Model is utilized as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pretrained with dialects and phonetics representing various common domains. The base model works well in most speech recognition scenarios.
-
-However, sometimes the base model's transcription doesn't accurately handle some content. In these situations, a customized speech model can be used to improve recognition of domain-specific vocabulary or pronunciation that is specific to your content by providing text data to train the model. Through the process of creating and adapting speech customization models, your content can be properly transcribed. There's no additional charge for using Video Indexer's speech customization.
-
-## When to use a customized speech model?
-
-If your content contains industry-specific terminology, or if you notice inaccuracies when reviewing Video Indexer transcription results, you can create and train a custom speech model to recognize the terms and improve the transcription quality. It may only be worthwhile to create a custom model if the relevant words and names are expected to appear repeatedly in the content you plan to index. Training a model is sometimes an iterative process, and you might find that after the initial training, results could still use improvement and would benefit from additional training. See the [How to improve your custom models](#how-to-improve-your-custom-models) section for guidance.
-
-However, if you notice a few words or names transcribed incorrectly in the transcript, a custom speech model might not be needed, especially if the words or names aren't expected to be commonly used in content you plan on indexing in the future. You can just edit and correct the transcript in the Video Indexer website (see [View and update transcriptions in Azure AI Video Indexer website](edit-transcript-lines-portal.md)) and don't have to address it through a custom speech model.
-
-For a list of languages that support custom models and pronunciation, see the Customization and Pronunciation columns of the language support table in [Language support in Azure AI Video Indexer](language-support.md).
-
-## Train datasets
-
-When indexing a video, you can use a customized speech model to improve the transcription. Models are trained by loading them with [datasets](../ai-services/speech-service/how-to-custom-speech-test-and-train.md) that can include plain text data and pronunciation data.
-
-Text used to test and train a custom model should include samples from a diverse set of content and scenarios that you want your model to recognize. Consider the following factors when creating and training your datasets:
-- Include text that covers the kinds of verbal statements that your users make when they're interacting with your model. For example, if your content is primarily related to a sport, train the model with content containing terminology and subject matter related to the sport.
-- Include all speech variances that you want your model to recognize. Many factors can vary speech, including accents, dialects, and language-mixing.
-- Only include data that is relevant to content you're planning to transcribe. Including other data can harm recognition quality overall.
-
-### Dataset types
-
-There are two dataset types that you can use for customization. To help determine which dataset to use to address your problems, refer to the following table:
-
-|Use case|Data type|
-|||
-|Improve recognition accuracy on industry-specific vocabulary and grammar, such as medical terminology or IT jargon. |Plain text|
-|Define the phonetic and displayed form of a word or term that has nonstandard pronunciation, such as product names or acronyms. |Pronunciation data |
-
-### Plain-text data for training
-
-A dataset including plain text sentences of related text can be used to improve the recognition of domain-specific words and phrases. Related text sentences can reduce substitution errors related to misrecognition of common words and domain-specific words by showing them in context. Domain-specific words can be uncommon or made-up words, but their pronunciation must be straightforward to be recognized.
-
-### Best practices for plain text datasets
-- Provide domain-related sentences in a single text file. Instead of using full sentences, you can upload a list of words. However, while this adds them to the vocabulary, it doesn't teach the system how the words are ordinarily used. By providing full or partial utterances (sentences or phrases of things that users are likely to say), the language model can learn the new words and how they're used. The custom language model is good not only for adding new words to the system, but also for adjusting the likelihood of known words for your application. Providing full utterances helps the system learn better.
-- Use text data that's close to the expected spoken utterances. Utterances don't need to be complete or grammatically correct, but they must accurately reflect the spoken input that you expect the model to recognize.
-- Try to have each sentence or keyword on a separate line.
-- To increase the weight of a term such as product names, add several sentences that include the term.
-- For common phrases that are used in your content, providing many examples is useful because it tells the system to listen for these terms.
-- Avoid including uncommon symbols (~, # @ % &) as they get discarded. The sentences in which they appear also get discarded.
-- Avoid providing overly large inputs, such as hundreds of thousands of sentences, because doing so dilutes the effect of boosting.
-
-Use this table to ensure that your plain text dataset file is formatted correctly:
-
-|Property|Value|
-|||
-|Text encoding |UTF-8 BOM|
-|Number of utterances per line |1 |
-|Maximum file size |200 MB |
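-
-For example, the following PowerShell sketch writes a small plain text dataset file that meets these requirements, with one utterance per line and UTF-8 BOM encoding. The file name and sentences are examples only.
-
-```powershell
-# Minimal sketch: write a plain text dataset file, one utterance per line, UTF-8 with BOM.
-$sentences = @(
-    "the pitcher threw a curveball in the bottom of the ninth",
-    "the coach called a timeout before the penalty kick"
-)
-
-# In PowerShell 7+, utf8BOM writes the byte order mark explicitly;
-# in Windows PowerShell 5.1, use -Encoding UTF8, which also emits a BOM.
-$sentences | Out-File -FilePath .\sports-terms.txt -Encoding utf8BOM
-```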
-
-Try to follow these guidelines in your plain text files:
-- Avoid repeating characters, words, or groups of words more than three times, such as "yeah yeah yeah yeah", as the service might drop lines with too many repetitions.
-- Don't use special characters or UTF-8 characters above U+00A1.
-- URIs are rejected.
-- For some languages such as Japanese or Korean, importing large amounts of text data can take a long time or can time out. Consider dividing the dataset into multiple text files with up to 20,000 lines in each.
-
-## Pronunciation data for training
-
-You can add to your custom speech model a custom pronunciation dataset to improve recognition of mispronounced words, phrases, or names.
-
-Pronunciation datasets need to include the spoken form of a word or phrase as well as the recognized displayed form. The spoken form is the phonetic sequence spelled out, such as "Triple A". It can be composed of letters, words, syllables, or a combination of all three. The recognized displayed form is how you would like the word or phrase to appear in the transcription. This table includes some examples:
-
-|Recognized displayed form |Spoken form |
-|||
-|3CPO |three c p o |
-|CNTK |c n t k |
-|AAA |Triple A |
-
-You provide pronunciation datasets in a single text file. Include the spoken utterance and a custom pronunciation for each. Each row in the file should begin with the recognized form, then a tab character, and then the space-delimited phonetic sequence.
-
-```
-3CPO three c p o
-CNTK c n t k
-IEEE i triple e
-```
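-
-As a small illustration, the following PowerShell sketch builds such a file, inserting a literal tab between the recognized display form and the spoken form. The entries and file name are examples only.
-
-```powershell
-# Minimal sketch: each line is "<recognized display form><TAB><space-delimited spoken form>".
-$entries = @(
-    "3CPO`tthree c p o",
-    "CNTK`tc n t k",
-    "IEEE`ti triple e"
-)
-
-# UTF-8 with BOM, matching the pronunciation dataset requirements listed later in this article.
-$entries | Out-File -FilePath .\pronunciation.txt -Encoding utf8BOM
-```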
-
-Consider the following when creating and training pronunciation datasets:
-
-It's not recommended to use custom pronunciation files to alter the pronunciation of common words.
-
-If there are a few variations of how a word or name is incorrectly transcribed, consider using some or all of them when training the pronunciation dataset. For example, if Robert is mentioned five times in the video and is transcribed as Robort, Ropert, and robbers, you can include all of the variations in the file, as in the following example. Be cautious when training with actual words like robbers: if robbers is genuinely mentioned in the video, it's transcribed as Robert.
-
-`Robert Roport`
-`Robert Ropert`
-`Robert Robbers`
-
-A pronunciation model isn't meant to address acronyms. For example, if you want Doctor to be transcribed as Dr., this can't be achieved through a pronunciation model.
-
-Refer to the following table to ensure that your pronunciation dataset files are valid and correctly formatted.
-
-|Property |Value |
-|||
-|Text encoding |UTF-8 BOM (ANSI is also supported for English) |
-|Number of pronunciations per line |1 |
-|Maximum file size |1 MB (1 KB for free tier) |
-
-## How to improve your custom models
-
-Training a pronunciation model can be an iterative process, as you might gain more knowledge on the pronunciation of the subject after initial training and evaluation of your model's results. Since existing models can't be edited or modified, training a model iteratively requires the creation and uploading of datasets with additional information as well as training new custom models based on the new datasets. You would then reindex the media files with the new custom speech model.
-
-Example:
-
-Let's say you plan on indexing sports content and anticipate transcript accuracy issues with specific sports terminology as well as in the names of players and coaches. Before indexing, you've created a speech model with a plain text dataset with content containing relevant sports terminology and a pronunciation dataset with some of the player and coaches' names. You index a few videos using the custom speech model and when reviewing the generated transcript, find that while the terminology is transcribed correctly, many names aren't. You can take the following steps to improve performance in the future:
-
-1. Review the transcript and note all the incorrectly transcribed names. They could fall into two groups:
-
-    - Group A: Names that aren't in the pronunciation file.
-    - Group B: Names that are in the pronunciation file but are still incorrectly transcribed.
-2. Create a new dataset file. Either download the pronunciation dataset file or modify your locally saved original. For group A, add the new names to the file with how they were incorrectly transcribed (Michael Mikel). For group B, add additional lines with each line having the correct name and a unique example of how it was incorrectly transcribed. For example:
-
-    `Stephen Steven`
-    `Stephen Steafan`
-    `Stephen Steevan`
-3. Upload this file as a new dataset file.
-4. Create a new speech model and add the original plain text dataset and the new pronunciation dataset file.
-5. Reindex the video with the new speech model.
-6. If needed, repeat steps 1-5 until the results are satisfactory.
-
-## Next steps
-
-To get started with speech customization, see:
-- [Customize a speech model using the API](customize-speech-model-with-api.md)
-- [Customize a speech model using the website](customize-speech-model-with-website.md)
azure-video-indexer Customize Speech Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-speech-model-with-api.md
- Title: Customize a speech model with the Azure AI Video Indexer API
-description: Learn how to customize a speech model with the Azure AI Video Indexer API.
- Previously updated : 03/06/2023----
-# Customize a speech model with the API
---
-Azure AI Video Indexer lets you create custom speech models to customize speech recognition by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to, or by aligning word or name pronunciation with how it should be written.
-
-For a detailed overview and best practices for custom speech models, see [Customize a speech model with Azure AI Video Indexer](customize-speech-model-overview.md).
-
-You can use the Azure AI Video Indexer APIs to create and edit custom speech models in your account. You can also use the website, as described in [Customize speech model using the Azure AI Video Indexer website](customize-speech-model-with-website.md).
-
-The following are descriptions of some of the parameters:
-
-|Name|Type|Description|
-||||
-|`displayName` |string|The desired name of the dataset/model.|
-|`locale` |string|The language code of the dataset/model. For full list, see [Language support](language-support.md).|
-|`kind` |integer|0 for a plain text dataset, 1 for a pronunciation dataset.|
-|`description` |string|Optional description of the dataset/model.|
-|`contentUrl` |uri |URL of source file used in creation of dataset.|
-|`customProperties`|object|Optional properties of dataset/model.|
-
-## Create a speech dataset
-
-The [create speech dataset](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Speech-Dataset) API creates a dataset for training a speech model. You upload a file that is used to create a dataset with this call. The content of a dataset can't be modified after it's created.
-To upload a file to a dataset, you must update parameters in the Body, including a URL to the text file to be uploaded. The description and custom properties fields are optional. The following is a sample of the body:
-
-```json
-{
- "displayName": "Pronunciation Dataset",
- "locale": "en-US",
- "kind": "Pronunciation",
- "description": "This is a pronunciation dataset.",
- "contentUrl": "https://contoso.com/location",
- "customProperties": {
- "tag": "Pronunciation Dataset Example"
- }
-}
-```
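-
-For illustration, the following PowerShell sketch sends a body like the one above with `Invoke-RestMethod`. The endpoint route and query parameters are assumptions; take the authoritative URL and parameters from the create speech dataset operation page linked above.
-
-```powershell
-# Illustrative sketch only: create a speech dataset with a direct REST call.
-# Route and parameters are assumptions; confirm them on the Create-Speech-Dataset operation page.
-$location    = "trial"
-$accountId   = "<your-account-id>"
-$accessToken = "<your-access-token>"
-
-$body = @{
-    displayName      = "Pronunciation Dataset"
-    locale           = "en-US"
-    kind             = "Pronunciation"
-    description      = "This is a pronunciation dataset."
-    contentUrl       = "https://contoso.com/location"
-    customProperties = @{ tag = "Pronunciation Dataset Example" }
-} | ConvertTo-Json
-
-$uri = "https://api.videoindexer.ai/$location/Accounts/$accountId/Customization/Speech/datasets" +
-       "?accessToken=$accessToken"
-
-Invoke-RestMethod -Method Post -Uri $uri -Body $body -ContentType "application/json"
-```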
-
-### Response
-
-The response provides metadata on the newly created dataset following the format of this example JSON output:
-
-```json
-{
- "id": "000000-0000-0000-0000-f58ac7002ae9",
- "properties": {
- "acceptedLineCount": 0,
- "rejectedLineCount": 0,
- "duration": null,
- "error": null
- },
- "displayName": "Contoso plain text",
- "description": "AVI dataset",
- "locale": "en-US",
- "kind": "Language",
- "status": "Waiting",
- "lastActionDateTime": "2023-02-28T13:24:27Z",
- "createdDateTime": "2023-02-28T13:24:27Z",
- "customProperties": null
-}
-```
-
-## Create a speech model
-
-The [create a speech model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Speech-Model) API creates and trains a custom speech model that could then be used to improve the transcription accuracy of your videos. It must contain at least one plain text dataset and can optionally have pronunciation datasets. Create it with all of the relevant dataset files as a model’s datasets can't be added or updated after its creation.
-
-When creating a speech model, you must update parameters in the Body, including a list of strings where the strings are the datasets the model will include. The description and custom properties fields are optional. The following is a sample of the body:
-
-```json
-{
- "displayName": "Contoso Speech Model",
- "locale": "en-US",
- "datasets": ["ff3d2bc4-ab5a-4522-b599-b3d5ba768c75", "87c8962d-1d3c-44e5-a2b2-c696fddb9bae"],
- "description": "Contoso ads example model",
- "customProperties": {
- "tag": "Example Model"
- }
-}
-```
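-
-As with datasets, you could send this body directly over REST. The following PowerShell sketch is illustrative only; the endpoint route is an assumption, and the dataset IDs must be real IDs from your account. Confirm the URL on the create a speech model operation page linked above.
-
-```powershell
-# Illustrative sketch only: create and train a speech model from existing dataset IDs.
-# Route is an assumption; confirm it on the Create-Speech-Model operation page.
-$location    = "trial"
-$accountId   = "<your-account-id>"
-$accessToken = "<your-access-token>"
-
-$body = @{
-    displayName = "Contoso Speech Model"
-    locale      = "en-US"
-    datasets    = @("<plain-text-dataset-id>", "<pronunciation-dataset-id>")
-    description = "Contoso ads example model"
-} | ConvertTo-Json
-
-$uri = "https://api.videoindexer.ai/$location/Accounts/$accountId/Customization/Speech/models" +
-       "?accessToken=$accessToken"
-
-Invoke-RestMethod -Method Post -Uri $uri -Body $body -ContentType "application/json"
-```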
-
-### Response
-
-The response provides metadata on the newly created model following the format of this example JSON output:
-
-```json
-{
- "id": "00000000-0000-0000-0000-85be4454cf",
- "properties": {
- "deprecationDates": {
- "adaptationDateTime": null,
- "transcriptionDateTime": "2025-04-15T00:00:00Z"
- },
- "error": null
- },
- "displayName": "Contoso speech model",
- "description": "Contoso speech model for video indexer",
- "locale": "en-US",
- "datasets": ["00000000-0000-0000-0000-f58ac7002ae9"],
- "status": "Processing",
- "lastActionDateTime": "2023-02-28T13:36:28Z",
- "createdDateTime": "2023-02-28T13:36:28Z",
- "customProperties": null
-}
-```
-
-## Get speech dataset
-
-The [get speech dataset](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Speech-Dataset) API returns information on the specified dataset.
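-
-A direct REST call might look like the following PowerShell sketch; the route and query parameters are assumptions to confirm on the get speech dataset operation page linked above.
-
-```powershell
-# Illustrative sketch only: get a single speech dataset by id.
-# Route is an assumption; confirm it on the Get-Speech-Dataset operation page.
-$location    = "trial"
-$accountId   = "<your-account-id>"
-$accessToken = "<your-access-token>"
-$datasetId   = "<dataset-id>"
-
-$uri = "https://api.videoindexer.ai/$location/Accounts/$accountId/Customization/Speech/datasets/$datasetId" +
-       "?accessToken=$accessToken"
-
-Invoke-RestMethod -Method Get -Uri $uri
-```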
-
-### Response
-
-The response provides metadata on the specified dataset following the format of this example JSON output:
-
-```json
-{
- "id": "00000000-0000-0000-0000-f58002ae9",
- "properties": {
- "acceptedLineCount": 41,
- "rejectedLineCount": 0,
- "duration": null,
- "error": null
- },
- "displayName": "Contoso plain text",
- "description": "AVI dataset",
- "locale": "en-US",
- "kind": "Language",
- "status": "Complete",
- "lastActionDateTime": "2023-02-28T13:24:43Z",
- "createdDateTime": "2023-02-28T13:24:27Z",
- "customProperties": null
-}
-```
-
-## Get speech datasets files
-
-The [get speech dataset files](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Speech-Dataset-Files) API returns the files and metadata of the specified dataset.
-
-### Response
-
-The response provides a URL with the dataset files and metadata following the format of this example JSON output:
-
-```json
-[{
- "datasetId": "00000000-0000-0000-0000-f58ac72a",
- "fileId": "00000000-0000-0000-0000-cb190769c",
- "name": "languagedata",
- "contentUrl": "",
- "kind": "LanguageData",
- "createdDateTime": "2023-02-28T13:24:43Z",
- "properties": {
- "size": 1517
- }
-}, {
- "datasetId": "00000000-0000-0000-0000-f58ac72",
- "fileId": "00000000-0000-0000-0000-2369192e",
- "name": "normalized.txt",
- "contentUrl": "",
- "kind": "LanguageData",
- "createdDateTime": "2023-02-28T13:24:43Z",
- "properties": {
- "size": 1517
- }
-}, {
- "datasetId": "00000000-0000-0000-0000-f58ac7",
- "fileId": "00000000-0000-0000-0000-05f1e306",
- "name": "report.json",
- "contentUrl": "",
- "kind": "DatasetReport",
- "createdDateTime": "2023-02-28T13:24:43Z",
- "properties": {
- "size": 78
- }
-}]
-```
-
-## Get the specified account datasets
-
-The [get speech datasets](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Speech-Datasets) API returns information on all of the specified account's datasets.
-
-### Response
-
-The response provides metadata on the datasets in the specified account following the format of this example JSON output:
-
-```json
-[{
- "id": "00000000-0000-0000-abf5-4dad0f",
- "properties": {
- "acceptedLineCount": 41,
- "rejectedLineCount": 0,
- "duration": null,
- "error": null
- },
- "displayName": "test",
- "description": "string",
- "locale": "en-US",
- "kind": "Language",
- "status": "Complete",
- "lastActionDateTime": "2023-02-27T08:42:02Z",
- "createdDateTime": "2023-02-27T08:41:39Z",
- "customProperties": null
-}]
-```
-
-## Get the specified speech model
-
-The [get speech model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Speech-Model) API returns information on the specified model.
-
-### Response
-
-The response provides metadata on the specified model following the format of this example JSON output:
-
-```json
-{
- "id": "00000000-0000-0000-0000-5685be445",
- "properties": {
- "deprecationDates": {
- "adaptationDateTime": null,
- "transcriptionDateTime": "2025-04-15T00:00:00Z"
- },
- "error": null
- },
- "displayName": "Contoso speech model",
- "description": "Contoso speech model for video indexer",
- "locale": "en-US",
- "datasets": ["00000000-0000-0000-0000-f58ac7002"],
- "status": "Complete",
- "lastActionDateTime": "2023-02-28T13:36:38Z",
- "createdDateTime": "2023-02-28T13:36:28Z",
- "customProperties": null
-}
-```
-
-## Get the specified account speech models
-
-The [get speech models](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Speech-Models) API returns information on all of the models in the specified account.
-
-### Response
-
-The response provides metadata on all of the speech models in the specified account following the format of this example JSON output:
-
-```json
-[{
- "id": "00000000-0000-0000-0000-5685be445",
- "properties": {
- "deprecationDates": {
- "adaptationDateTime": null,
- "transcriptionDateTime": "2025-04-15T00:00:00Z"
- },
- "error": null
- },
- "displayName": "Contoso speech model",
- "description": "Contoso speech model for video indexer",
- "locale": "en-US",
- "datasets": ["00000000-0000-0000-0000-f58ac7002a"],
- "status": "Complete",
- "lastActionDateTime": "2023-02-28T13:36:38Z",
- "createdDateTime": "2023-02-28T13:36:28Z",
- "customProperties": null
-}]
-```
-
-## Delete speech dataset
-
-The [delete speech dataset](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Speech-Dataset) API deletes the specified dataset. Any model that was trained with the deleted dataset continues to be available until the model is deleted. You cannot delete a dataset while it is in use for indexing or training.
-
-### Response
-
-There's no returned content when the dataset is deleted successfully.
-
-## Delete a speech model
-
-The [delete speech model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Speech-Model) API deletes the specified speech model. You cannot delete a model while it is in use for indexing or training.
-
-### Response
-
-There's no returned content when the speech model is deleted successfully.
-
-## Next steps
-
-[Customize a speech model using the website](customize-speech-model-with-website.md)
-
azure-video-indexer Customize Speech Model With Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-speech-model-with-website.md
- Title: Customize a speech model with Azure AI Video Indexer website
-description: Learn how to customize a speech model with the Azure AI Video Indexer website.
- Previously updated : 03/06/2023----
-# Customize a speech model in the website
--
-
-Azure AI Video Indexer lets you create custom speech models to customize speech recognition by uploading datasets that are used to create a speech model. This article goes through the steps to do so through the Video Indexer website. You can also use the API, as described in [Customize speech model using API](customize-speech-model-with-api.md).
-
-For a detailed overview and best practices for custom speech models, see [Customize a speech model with Azure AI Video Indexer](customize-speech-model-overview.md).
-
-## Create a dataset
-
-As all custom models must contain a dataset, we'll start with the process of how to create and manage datasets.
-
-1. Go to the [Azure AI Video Indexer website](https://www.videoindexer.ai/) and sign in.
-1. Select the Model customization button on the left of the page.
-1. Select the Speech (new) tab. Here you'll begin the process of uploading datasets that are used to train the speech models.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/customize-speech-model/speech-model.png" alt-text="Screenshot of uploading datasets which are used to train the speech models.":::
-1. Select Upload dataset.
-1. Select either Plain text or Pronunciation from the Dataset type dropdown menu. Every speech model must have a plain text dataset and can optionally have a pronunciation dataset. To learn more about each type, see [Customize a speech model with Azure AI Video Indexer](customize-speech-model-overview.md).
-1. Select Browse which will open the File Explorer. You can only use one file in each dataset. Choose the relevant text file.
-1. Select a Language for the model. Choose the language that is spoken in the media files you plan on indexing with this model.
-1. The Dataset name is pre-populated with the name of the file but you can modify the name.
-1. You can optionally add a description of the dataset. This could be helpful to distinguish each dataset if you expect to have multiple datasets.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/customize-speech-model/dataset-type.png" alt-text="Screenshot of multiple datasets.":::
-1. Once you're ready, select Upload. You'll then see a list of all of your datasets and their properties, including the type, language, status, number of lines, and creation date. Once the status is complete, the dataset can be used in the training and creation of new models.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/customize-speech-model/datasets.png" alt-text="Screenshot of a new model.":::
-
-## Review and update a dataset
-
-Once a Dataset has been uploaded, you might need to review it or perform any number of updates to it. This section covers how to view, download, troubleshoot, and delete a dataset.
-
-**View dataset**: You can view a dataset and its properties by either clicking on the dataset name or when hovering over the dataset or clicking on the ellipsis and selecting **View Dataset**.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/customize-speech-model/view-dataset.png" alt-text="Screenshot of how to view dataset.":::
-
-You'll then view the name, description, language and status of the dataset plus the following properties:
-
-**Number of lines**: indicates the number of lines successfully loaded out of the total number of lines in the file. If the entire file is loaded successfully the numbers will match (for example, 10 of 10 normalized). If the numbers don't match (for example, 7 of 10 normalized), this means that only some of the lines successfully loaded and the rest had errors. Common causes of errors are formatting issues with a line, such as not spacing a tab between each word in a pronunciation file. Reviewing the plain text and pronunciation data for training articles should be helpful in finding the issue. To troubleshoot the cause, review the error details, which are contained in the report. Select **View report** to view the error details regarding the lines that didn't load successfully (errorKind). This can also be viewed by selecting the **Report** tab.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/customize-speech-model/report-tab.png" alt-text="Screenshot of how to view by selecting report tab.":::
-
-**Dataset ID**: Each dataset has a unique GUID, which is needed when using the API for operations that reference the dataset.
-
-**Plain text (normalized)**: This contains the normalized text of the loaded dataset file. Normalized text is the recognized text in plain form without formatting.
-
-**Edit Details**: To edit a dataset's name or description, when hovering over the dataset, click on the ellipsis and then select Edit details. You're then able to edit the dataset name and description.
-
-> [!Note]
-> The data in a dataset can't be edited or updated once the dataset has been uploaded. If you need to edit or update the data in a dataset, download the dataset, perform the edits, save the file, and upload the new dataset file.
-
-**Download**: To download a dataset file, when hovering over the dataset, click on the ellipsis and then select Download. Alternatively, when viewing the dataset, you can select Download and then have the option of downloading the dataset file or the upload report in JSON form.
-
-**Delete**: To delete a dataset, when hovering over the dataset, click on the ellipsis and then select Delete.
-
-## Create a custom speech model
-
-Datasets are used in the creation and training of models. Once you have created a plain text dataset, you are now able to create and start using a custom speech model.
-
-Keep in mind the following when creating and using custom speech models:
-
-* A new model must include at least one plain text dataset and can have multiple plain text datasets.
-* It's optional to include a pronunciation dataset and no more than one can be included.
-* Once a model is created, you can't add additional datasets to it or perform any modifications to its datasets. If you need to add or modify datasets, create a new model.
-* If you have indexed a video using a custom speech model and then delete the model, the transcript is not impacted unless you perform a re-index.
-* If you delete a dataset that was used to train a custom model, the speech model continues to use that dataset (it was already trained with it) until the speech model is deleted.
-* If you delete a custom model, it has no impact on the transcription of videos that were already indexed using the model.
--
-**The following are instructions to create and manage custom speech models. There are two ways to train a model: through the Datasets tab and through the Models tab.**
-
-## Train a model through the Datasets tab
-
-1. When viewing the list of datasets, if you select a plain text dataset by clicking on the circle to the left of its name, the Train new model icon above the datasets turns from greyed out to blue and can be selected. Select Train new model.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/customize-speech-model/train-model.png" alt-text="Screenshot of how to train new model.":::
-1. In the Train a new model popup, enter a name for the model, a language, and optionally add a description. A model can only contain datasets of the same language.
-1. Select the Datasets tab and then select from the list of your datasets the datasets you would like to be included in the model. Once a model is created, datasets can't be added.
-1. Select Create and train.
-
-## Train a model through the Models tab
-
-1. Select the Models tab and then the Train new model icon. If no plain text datasets have been uploaded, the icon is greyed out. Select all the datasets that you want to be part of the model by clicking on the circle to the left of a plain text dataset's name.
-1. In the Train a new model pop-up, enter a name for the model, a language, and optionally add a description. A model can only contain datasets of the same language.
-1. Select the Datasets tab and then select from the list of your datasets the datasets you would like to be included in the model. Once a model is created, datasets can't be added.
-1. Select Create and train.
-
-## Model review and update
-
-Once a Model has been created, you might need to review its datasets, edit its name, or delete it.
-
-**View Model**: You can view a model and its properties by either clicking on the model's name or when hovering over the model, clicking on the ellipsis and then selecting View Model.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/customize-speech-model/view-model.png" alt-text="Screenshot of how to review and update a model.":::
-
-You'll then see in the Details tab the name, description, language and status of the model plus the following properties:
-
-**Model ID**: Each model has a unique GUID, which is needed when using the API for operations that reference the model.
-
-**Created on**: The date the model was created.
-
-**Edit Details**: To edit a model's name or description, when hovering over the model, click on the ellipsis and then select Edit details. You're then able to edit the model's name and description.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/customize-speech-model/create-model.png" alt-text="Screenshot of how to hover over the model.":::
-
-> [!Note]
-> Only the model's name and description can be edited. If you want to make any changes to its datasets or add datasets, a new model must be created.
-
-**Delete**: To delete a model, when hovering over the model, click on the ellipsis and then select Delete.
-
-**Included datasets**: Click on the Included datasets tab to view the model's datasets.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/customize-speech-model/included-datasets.png" alt-text="Screenshot of the included datasets tab.":::
-
-## How to use a custom language model when indexing a video
-
-A custom language model isn't used by default for indexing jobs and must be selected during the index upload process. To learn how to index a video, see the Upload and index videos with Azure AI Video Indexer article.
-
-During the upload process, you can select the source language of the video. In the Video source language drop-down menu, you'll see your custom model in the language list. The model is listed as the language of your Language model followed by the name that you gave it in parentheses. For example:
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/customize-speech-model/contoso-model.png" alt-text="Screenshot of indexing a video.":::
-
-Select the Upload option at the bottom of the page, and your new video will be indexed using your Language model. The same steps apply when you want to re-index a video with a custom model.
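-
-If you index through the [API](https://aka.ms/avam-dev-portal) instead of the website, the upload call can reference the custom model by the Model ID (GUID) shown in the model details. The following PowerShell sketch is illustrative only; the `linguisticModelId` parameter name and the way the access token is passed are assumptions to verify in the API portal.
-
-```powershell
-# Sketch: upload a video and reference a custom language model by its GUID.
-# Assumption: the upload operation accepts a linguisticModelId query parameter
-# and an accessToken obtained separately (verify both in the API portal).
-$location    = "trial"
-$accountId   = "<account-id>"
-$accessToken = "<access-token>"
-$modelId     = "<custom-language-model-guid>"
-$videoUrl    = [uri]::EscapeDataString("https://example.com/video.mp4")
-
-$uri = "https://api.videoindexer.ai/$location/Accounts/$accountId/Videos" +
-       "?name=MyVideo&videoUrl=$videoUrl&linguisticModelId=$modelId&accessToken=$accessToken"
-
-Invoke-RestMethod -Uri $uri -Method Post
-```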
-
-## Next steps
-
-[Customize a speech model using the API](customize-speech-model-with-api.md)
azure-video-indexer Deploy With Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-arm-template.md
- Title: Deploy Azure AI Video Indexer by using an ARM template
-description: Learn how to create an Azure AI Video Indexer account by using an Azure Resource Manager (ARM) template.
-- Previously updated : 05/23/2022----
-# Tutorial: Deploy Azure AI Video Indexer by using an ARM template
---
-In this tutorial, you'll create an Azure AI Video Indexer account by using the Azure Resource Manager template (ARM template, which is in preview). The resource will be deployed to your subscription and will create the Azure AI Video Indexer resource based on parameters defined in the *avam.template* file.
-
-> [!NOTE]
-> This sample is *not* for connecting an existing Azure AI Video Indexer classic account to a Resource Manager-based Azure AI Video Indexer account.
->
-> For full documentation on the Azure AI Video Indexer API, visit the [developer portal](https://aka.ms/avam-dev-portal). For the latest API version for *Microsoft.VideoIndexer*, see the [template reference](/azure/templates/microsoft.videoindexer/accounts?tabs=bicep).
-
-## Prerequisites
-
-You need an Azure Media Services account. You can create one for free through [Create a Media Services account](/azure/media-services/latest/account-create-how-to).
-
-## Deploy the sample
----
-### Option 1: Select the button for deploying to Azure, and fill in the missing parameters
-
-[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fmedia-services-video-indexer%2Fmaster%2FDeploy-Samples%2FArmTemplates%2Favam.template.json)
----
-### Option 2: Deploy by using a PowerShell script
-
-1. Open the [template file](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/Deploy-Samples/ArmTemplates/avam.template.json) and inspect its contents.
-2. Fill in the required parameters.
-3. Run the following PowerShell commands:
-
-    * Create a new resource group in the same location as your Azure AI Video Indexer account by using the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet.
-
- ```powershell
- New-AzResourceGroup -Name myResourceGroup -Location eastus
- ```
-
- * Deploy the template to the resource group by using the [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment) cmdlet.
-
- ```powershell
- New-AzResourceGroupDeployment -ResourceGroupName myResourceGroup -TemplateFile ./avam.template.json
- ```
-
-> [!NOTE]
-> If you want to work with Bicep format, see [Deploy by using Bicep](./deploy-with-bicep.md).
-
-## Parameters
-
-### name
-
-* Type: string
-* Description: The name of the new Azure AI Video Indexer account.
-* Required: true
-
-### location
-
-* Type: string
-* Description: The Azure location where the Azure AI Video Indexer account should be created.
-* Required: false
-
-> [!NOTE]
-> You need to deploy your Azure AI Video Indexer account in the same location (region) as the associated Azure Media Services resource.
-
-### mediaServiceAccountResourceId
-
-* Type: string
-* Description: The resource ID of the Azure Media Services resource.
-* Required: true
-
-### managedIdentityId
-
-> [!NOTE]
-> A user-assigned managed identity must have at least the Contributor role on the Media Services account before deployment. When using a system-assigned managed identity, the Contributor role should be assigned after deployment.
-
-* Type: string
-* Description: The resource ID of the managed identity that's used to grant access between Azure Media Services resource and the Azure AI Video Indexer account.
-* Required: true
-
-### tags
-
-* Type: object
-* Description: The array of objects that represents custom user tags on the Azure AI Video Indexer account.
-* Required: false
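-
-If you prefer to pass these parameters inline instead of editing the template file, you can hand them to `New-AzResourceGroupDeployment` as a hash table. This is only a sketch; the resource IDs, names, and tags are placeholders that you need to replace with your own values.
-
-```powershell
-# Sketch: deploy avam.template.json with inline parameters (placeholders to replace).
-$parameters = @{
-    name                          = "my-video-indexer-account"
-    mediaServiceAccountResourceId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Media/mediaservices/<ams-account>"
-    managedIdentityId             = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>"
-    tags                          = @{ environment = "test" }
-}
-
-New-AzResourceGroupDeployment -ResourceGroupName myResourceGroup `
-    -TemplateFile ./avam.template.json `
-    -TemplateParameterObject $parameters
-```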
-
-## Reference documentation
-
-If you're new to Azure AI Video Indexer, see:
-
-* [The Azure AI Video Indexer documentation](./index.yml)
-* [The Azure AI Video Indexer API developer portal](https://api-portal.videoindexer.ai/)
-
-After you complete this tutorial, head to other Azure AI Video Indexer samples described in [README.md](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/README.md).
-
-If you're new to template deployment, see:
-
-* [Azure Resource Manager documentation](../azure-resource-manager/index.yml)
-* [Deploy resources with ARM templates](../azure-resource-manager/templates/deploy-powershell.md)
-* [Deploy resources with Bicep and the Azure CLI](../azure-resource-manager/bicep/deploy-cli.md)
-
-## Next steps
-
-Connect a [classic paid Azure AI Video Indexer account to a Resource Manager-based account](connect-classic-account-to-arm.md).
azure-video-indexer Deploy With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-bicep.md
- Title: Deploy Azure AI Video Indexer by using Bicep
-description: Learn how to create an Azure AI Video Indexer account by using a Bicep file.
-- Previously updated : 06/06/2022----
-# Tutorial: Deploy Azure AI Video Indexer by using Bicep
--
-In this tutorial, you create an Azure AI Video Indexer account by using [Bicep](../azure-resource-manager/bicep/overview.md).
-
-> [!NOTE]
-> This sample is *not* for connecting an existing Azure AI Video Indexer classic account to an ARM-based Azure AI Video Indexer account.
-> For full documentation on Azure AI Video Indexer API, visit the [developer portal](https://aka.ms/avam-dev-portal) page.
-> For the latest API version for Microsoft.VideoIndexer, see the [template reference](/azure/templates/microsoft.videoindexer/accounts?tabs=bicep).
-
-## Prerequisites
-
-* An Azure Media Services (AMS) account. You can create one for free through the [Create AMS Account](/azure/media-services/latest/account-create-how-to).
-
-## Review the Bicep file
-
-One Azure resource is defined in the Bicep file:
-
-```bicep
-param location string = resourceGroup().location
-
-@description('The name of the AVAM resource')
-param accountName string
-
-@description('The managed identity Resource Id used to grant access to the Azure Media Service (AMS) account')
-param managedIdentityResourceId string
-
-@description('The media Service Account Id. The Account needs to be created prior to the creation of this template')
-param mediaServiceAccountResourceId string
-
-@description('The AVAM Template')
-resource avamAccount 'Microsoft.VideoIndexer/accounts@2022-08-01' = {
- name: accountName
- location: location
- identity:{
- type: 'UserAssigned'
- userAssignedIdentities : {
- '${managedIdentityResourceId}' : {}
- }
- }
-  properties: {
-    mediaServices: {
-      resourceId: mediaServiceAccountResourceId
-      userAssignedIdentity: managedIdentityResourceId
-    }
-  }
-}
-```
-
-Check [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-templates) for more updated Bicep samples.
-
-## Deploy the sample
-
-1. Save the Bicep file as main.bicep to your local computer.
-1. Deploy the Bicep file using either the Azure CLI or Azure PowerShell.
-
- # [CLI](#tab/CLI)
-
- ```azurecli
- az group create --name exampleRG --location eastus
- az deployment group create --resource-group exampleRG --template-file main.bicep --parameters accountName=<account-name> managedIdentityResourceId=<managed-identity> mediaServiceAccountResourceId=<media-service-account-resource-id>
- ```
-
- # [PowerShell](#tab/PowerShell)
-
- ```azurepowershell
- New-AzResourceGroup -Name exampleRG -Location eastus
- New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -accountName "<account-name>" -managedIdentityResourceId "<managed-identity>" -mediaServiceAccountResourceId "<media-service-account-resource-id>"
- ```
-
-
-
-    The location must be the same location as the existing Azure Media Services account. You need to provide values for the parameters:
-
- * Replace **\<account-name\>** with the name of the new Azure AI Video Indexer account.
-    * Replace **\<managed-identity\>** with the resource ID of the managed identity used to grant access between the Azure Media Services (AMS) account and the Azure AI Video Indexer account.
-    * Replace **\<media-service-account-resource-id\>** with the resource ID of the existing Azure Media Services account.
-
-## Reference documentation
-
-If you're new to Azure AI Video Indexer, see:
-
-* [The Azure AI Video Indexer documentation](./index.yml)
-* [The Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/)
-* After completing this tutorial, head to other Azure AI Video Indexer samples, described on [README.md](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/README.md)
-
-If you're new to Bicep deployment, see:
-
-* [Azure Resource Manager documentation](../azure-resource-manager/index.yml)
-* [Deploy Resources with Bicep and Azure PowerShell](../azure-resource-manager/bicep/deploy-powershell.md)
-* [Deploy Resources with Bicep and Azure CLI](../azure-resource-manager/bicep/deploy-cli.md)
-
-## Next steps
-
-[Connect an existing classic paid Azure AI Video Indexer account to ARM-based account](connect-classic-account-to-arm.md)
azure-video-indexer Detect Textual Logo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/detect-textual-logo.md
- Title: Detect textual logo with Azure AI Video Indexer
-description: This article gives an overview of Azure AI Video Indexer textual logo detection.
- Previously updated : 01/22/2023----
-# How to detect textual logo
---
-> [!NOTE]
-> The textual logo detection (preview) creation process is currently available through the API. The result can be viewed through the Azure AI Video Indexer [website](https://www.videoindexer.ai/).
-
-**Textual logo detection** insights are based on the OCR textual detection, which matches a specific predefined text.
-
-For example, if a user creates a textual logo "Microsoft", different appearances of the word 'Microsoft' will be detected as the 'Microsoft' logo. A logo can have different variations, and these variations can be associated with the main logo name. For example, under the 'Microsoft' logo a user might have the following variations: 'MS', 'MSFT', and so on.
-
-```json
-{
- "name": "Microsoft",
- "wikipediaSearchTerm": "Microsoft",
- "textVariations": [{
- "text": "Microsoft",
- "caseSensitive": false
- }, {
- "text": "MSFT",
- "caseSensitive": true
- }]
-}
-```
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/textual-logo-detection/microsoft-example.png" alt-text="Diagram of logo detection.":::
-
-## Prerequisite
-
-The Azure AI Video Indexer account must have (at the very least) the `contributor` role assigned to the resource.
-
-## How to use
-
-To use textual logo detection, follow the steps described in this article:
-
-1. Create a logo instance using the [Create logo](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Logo) API (with variations).
-
- * Save the logo ID.
-1. Create a logo group using the [Create Logo Group](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Logo-Group) API.
-
- * Associate the logo instance with the group when creating the new group (by pasting the ID in the logos array).
-1. Upload a video using the **Advanced video** or **Advanced video + audio** preset, and use the `logoGroupId` parameter to specify the logo group you would like to index the video with.
-
-## Create a logo instance
-
-Use the [Create logo](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Logo) API to create your logo. You can use the **try it** button.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/textual-logo-detection/logo-api.png" alt-text="Diagram of logo API.":::
-
-In this tutorial, we use the default example:
-
-Insert the following:
-
-* `Location`: The location of the Azure AI Video Indexer account.
-* `Account ID`: The ID of the Azure AI Video Indexer account.
-* `Access token`: The token, at least at a contributor level permission.
-
-The default body is:
-
-```json
-{
- "name": "Microsoft",
- "wikipediaSearchTerm": "Microsoft",
- "textVariations": [{
- "text": "Microsoft",
- "caseSensitive": false
- }, {
- "text": "MSFT",
- "caseSensitive": true
- }]
-}
-```
-
-|Key|Value|
-|||
-|Name|The name of the logo as it will appear in the Azure AI Video Indexer website.|
-|wikipediaSearchTerm|Used to create a description in the Video Indexer website.|
-|text|The text the model will compare to. Make sure to add the obvious name as part of the variations (for example, Microsoft).|
-|caseSensitive|Set to true or false according to the variation.|
-
-The response should return **201 Created**.
-
-```
-HTTP/1.1 201 Created
-
-content-type: application/json; charset=utf-8
-
-{
- "id": "id"
- "creationTime": "2023-01-15T13:08:14.9518235Z",
- "lastUpdateTime": "2023-01-15T13:08:14.9518235Z",
- "lastUpdatedBy": "Jhon Doe",
- "createdBy": "Jhon Doe",
- "name": "Microsoft",
- "wikipediaSearchTerm": "Microsoft",
- "textVariations": [{
- "text": "Microsoft",
- "caseSensitive": false,
- "creationTime": "2023-01-15T13:08:14.9518235Z",
- "createdBy": "Jhon Doe"
- }, {
- "text": "MSFT",
- "caseSensitive": true,
- "creationTime": "2023-01-15T13:08:14.9518235Z",
- "createdBy": "Jhon Doe"
- }]
-}
-```
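-
-If you'd rather script this step than use the **try it** button, the same call can be made with `Invoke-RestMethod`. The sketch below is illustrative: the route and the bearer-token header are assumptions to verify against the request shown in the API portal.
-
-```powershell
-# Sketch: create a logo via REST (verify the route and auth style in the API portal).
-$location    = "trial"
-$accountId   = "<account-id>"
-$accessToken = "<access-token>"   # at least contributor-level permission
-
-$body = @{
-    name                = "Microsoft"
-    wikipediaSearchTerm = "Microsoft"
-    textVariations      = @(
-        @{ text = "Microsoft"; caseSensitive = $false },
-        @{ text = "MSFT";      caseSensitive = $true }
-    )
-} | ConvertTo-Json -Depth 5
-
-$logo = Invoke-RestMethod -Method Post `
-    -Uri "https://api.videoindexer.ai/$location/Accounts/$accountId/Customization/Logos" `
-    -Headers @{ Authorization = "Bearer $accessToken" } `
-    -ContentType "application/json" -Body $body
-
-$logo.id   # save this ID for the logo group
-```
-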
-## Create a new textual logo group
-
-Use the [Create Logo Group](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Logo-Group) API to create a logo group. Use the **try it** button.
-
-Insert the following:
-
-* `Location`: The location of the Azure AI Video Indexer account.
-* `Account ID`: The ID of the Azure AI Video Indexer account.
-* `Access token`: The token, at least at a contributor level permission.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/textual-logo-detection/logo-group-api.png" alt-text="Diagram of logo group API.":::
-
-In the **Body** paste the logo ID from the previous step.
-
-```json
-{
- "logos": [{
- "logoId": "id"
- }],
- "name": "Technology",
- "description": "A group of logos of technology companies."
-}
-```
-
-* The default example has two logo IDs; we have created the first group with only one logo ID.
-
- The response should return **201 Created**.
-
- ```
- HTTP/1.1 201 Created
-
- content-type: application/json; charset=utf-8
-
- {
- "id": "id",
- "creationTime": "2023-01-15T14:41:11.4860104Z",
- "lastUpdateTime": "2023-01-15T14:41:11.4860104Z",
- "lastUpdatedBy": "Jhon Doe",
- "createdBy": "Jhon Doe",
- "logos": [{
- "logoId": " e9d609b4-d6a6-4943-86ff-557e724bd7c6"
- }],
- "name": "Technology",
- "description": "A group of logos of technology companies."
- }
- ```
-
-## Upload from URL
-
-Use the upload API call (a scripted sketch follows the parameter list):
-
-Specify the following:
-
-* `Location`: The location of the Azure AI Video Indexer account.
-* `Account`: The ID of the Azure AI Video Indexer account.
-* `Name`: The name of the media file you're indexing.
-* `Language`: `en-US`. For more information, see [Language support](language-support.md)
-* `IndexingPreset`: Select **Advanced Video/Audio+video**.
-* `Videourl`: The url.
-* `LogoGroupID`: GUID representing the logo group (you got it in the response when creating it).
-* `Access token`: The token, at least at a contributor level permission.
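-
-A scripted version of this upload call might look like the following sketch. The query parameter names mirror the list above, but verify them and the exact preset value in the API portal before relying on them.
-
-```powershell
-# Sketch: upload from a URL with a logo group (verify parameter names in the API portal).
-$location    = "trial"
-$accountId   = "<account-id>"
-$accessToken = "<access-token>"
-$logoGroupId = "<logo-group-guid>"
-$videoUrl    = [uri]::EscapeDataString("https://example.com/video.mp4")
-
-$uri = "https://api.videoindexer.ai/$location/Accounts/$accountId/Videos" +
-       "?name=LogoTest&language=en-US&indexingPreset=AdvancedVideo" +
-       "&videoUrl=$videoUrl&logoGroupId=$logoGroupId&accessToken=$accessToken"
-
-Invoke-RestMethod -Uri $uri -Method Post
-```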
-
-## Inspect the output
-
-Assuming the textual logo model has found a match, you'll be able to view the result in the [Azure AI Video Indexer website](https://www.videoindexer.ai/).
-
-### Insights
-
-A new section appears in the insights panel showing the number of custom logos that were detected. One representative thumbnail is displayed for the new logo.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/textual-logo-detection/logo-insight.png" alt-text="Diagram of logo insight.":::
-
-### Timeline
-
-When switching to the Timeline view, under **View**, mark the **Logos** checkbox. All detected thumbnails are displayed according to their time stamp.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/textual-logo-detection/logo-timeline.png" alt-text="Diagram of logo timeline.":::
-
-All logo instances that were recognized with a certainty above 80% are displayed; the extended list of detections, including low-certainty detections, is available in the [Artifacts](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Artifact-Download-Url) file.
-
-## Next steps
-
-### Adding a logo to an existing logo group
-
-In the first part of this article, we had one logo instance and associated it with the right logo group when the logo group was created. If all logo instances are created before the logo group, they can be associated with the group at creation time. However, if the group was already created, associate the new instance with the group by following these steps (a scripted sketch follows the list):
-
-1. Create the logo.
-
- 1. Copy the logo ID.
-1. [Get logo groups](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Logo-Groups).
-
- 1. Copy the logo group ID of the right group.
-1. [Get logo group](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Logo-Group).
-
-    1. From the response, copy the list of logo IDs:
-
- Logo list sample:
-
- ```json
- "logos": [{
- "logoId": "id"
- }],
- ```
-1. [Update logo group](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Logo-Group).
-
- 1. Logo group ID is the output received at step 2.
-    1. In the 'Body' of the request, paste the existing list of logos from step 3.
- 1. Then add to the list the logo ID from step 1.
-1. Validate the response of the [Update logo group](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Logo-Groups) request, making sure the list contains the previous IDs and the new one.
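-
-Scripted, steps 2 through 5 amount to reading the group's current logo list, appending the new logo ID, and sending the result back. A minimal sketch, assuming the logo group routes and bearer-token auth shown in the API portal, plus the variables from the earlier sketches:
-
-```powershell
-# Sketch: append a logo to an existing logo group (verify routes and auth in the API portal).
-$base    = "https://api.videoindexer.ai/$location/Accounts/$accountId/Customization/LogoGroups"
-$headers = @{ Authorization = "Bearer $accessToken" }
-
-# Step 3: get the group and its current logo list.
-$group = Invoke-RestMethod -Uri "$base/$logoGroupId" -Headers $headers
-
-# Step 4: keep the existing IDs and add the new one ($newLogoId from step 1).
-$logos = @($group.logos) + @(@{ logoId = $newLogoId })
-
-# Step 5: update the group with the combined list.
-$body = @{ name = $group.name; description = $group.description; logos = $logos } | ConvertTo-Json -Depth 5
-Invoke-RestMethod -Uri "$base/$logoGroupId" -Method Put -Headers $headers -ContentType "application/json" -Body $body
-```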
-
-### Additional information and limitations
-
-* A logo group can contain up to 50 logos.
-* One logo can be linked to more than one group.
-* Use the [Update logo group](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Logo-Groups) to add the new logo to an existing group.
azure-video-indexer Detected Clothing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/detected-clothing.md
- Title: Enable detected clothing feature
-description: Azure AI Video Indexer detects clothing associated with the person wearing it in the video and provides information such as the type of clothing detected and the timestamp of the appearance (start, end). The API returns the detection confidence level.
- Previously updated : 08/07/2023----
-# Enable detected clothing feature
--
-Azure AI Video Indexer detects clothing associated with the person wearing it in the video and provides information such as the type of clothing detected and the timestamp of the appearance (start, end). The API returns the detection confidence level. The clothing types that are detected are long pants, short pants, long sleeves, short sleeves, and skirt or dress.
-
-Two examples where this feature could be useful:
-
-- Improve efficiency when creating raw data for content creators, like video advertising, news, or sport games (for example, find people wearing a red shirt in a video archive).
-- Post-event analysis: detect and track a person's movement to better analyze an accident or crime post-event (for example, explosion, bank robbery, incident).
-
-The newly added clothing detection feature is available when indexing your file by choosing the **Advanced option** -> **Advanced video** or **Advanced video + audio** preset (under Video + audio indexing). Standard indexing won't include this new advanced model.
-
-
-When you choose to see **Insights** of your video on the [Azure AI Video Indexer](https://www.videoindexer.ai/) website, the detected clothing can be viewed from the **Observed People** tracking insight. When choosing a thumbnail of a person, the detected clothing becomes available.
-
-
-If you want to view the detected clothing in the Timeline of your video on the Azure AI Video Indexer website, go to **View** -> **Show Insights** and select the **All** option, or **View** -> **Custom View** and select **Observed People**.
-
-
-Searching for specific clothing to return all the observed people wearing it is possible by using the search bar of either the **Insights** or the **Timeline** of your video on the Azure AI Video Indexer website.
-
-The following JSON response illustrates what Azure AI Video Indexer returns when tracking observed people having detected clothing associated:
-
-```json
-"observedPeople": [
- {
- "id": 1,
- "thumbnailId": "68bab0f2-f084-4c2b-859b-a951ed03c209",
- "clothing": [
- {
- "id": 1,
- "type": "sleeve",
- "properties": {
- "length": "short"
- }
- },
- {
- "id": 2,
- "type": "pants",
- "properties": {
- "length": "long"
- }
- }
- ],
- "instances": [
- {
- "adjustedStart": "0:00:05.5055",
- "adjustedEnd": "0:00:09.9766333",
- "start": "0:00:05.5055",
- "end": "0:00:09.9766333"
- }
- ]
- },
- {
- "id": 2,
- "thumbnailId": "449bf52d-06bf-43ab-9f6b-e438cde4f217",
- "clothing": [
- {
- "id": 1,
- "type": "sleeve",
- "properties": {
- "length": "short"
- }
- },
- {
- "id": 2,
- "type": "pants",
- "properties": {
- "length": "long"
- }
- }
- ],
- "instances": [
- {
- "adjustedStart": "0:00:07.2072",
- "adjustedEnd": "0:00:10.5105",
- "start": "0:00:07.2072",
- "end": "0:00:10.5105"
- }
- ]
- },
-]
-```
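-
-Because the clothing data is plain JSON, you can also filter it locally after downloading the insights. A minimal sketch, assuming the insights were saved to insights.json and follow the standard index layout (`videos[0].insights.observedPeople`), that lists people wearing short sleeves:
-
-```powershell
-# Sketch: list observed people wearing short sleeves from a downloaded insights.json.
-$insights = Get-Content ./insights.json -Raw | ConvertFrom-Json
-
-foreach ($person in $insights.videos[0].insights.observedPeople) {
-    $shortSleeve = $person.clothing |
-        Where-Object { $_.type -eq "sleeve" -and $_.properties.length -eq "short" }
-    if ($shortSleeve) {
-        "Person $($person.id) appears at $($person.instances[0].start)"
-    }
-}
-```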
-
-## Limitations and assumptions
-
-As the detected clothing feature uses observed people tracking, the tracking quality is important. For tracking considerations and limitations, see [Considerations and limitations when choosing a use case](observed-matched-people.md#considerations-and-limitations-when-choosing-a-use-case).
-- As clothing detection is dependent on the visibility of the person's body, the accuracy is higher if a person is fully visible.
-- There may be errors when a person is without clothing.
-- In this scenario or others of poor visibility, results such as long pants and skirt or dress may be given.
-
-## Next steps
-
-[Track observed people in a video](observed-people-tracking.md)
azure-video-indexer Digital Patterns Color Bars https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/digital-patterns-color-bars.md
- Title: Enable and view digital patterns with color bars
-description: Learn about how to enable and view digital patterns with color bars.
- Previously updated : 09/20/2022----
-# Enable and view digital patterns with color bars
--
-This article shows how to enable and view digital patterns with color bars (preview).
-
-You can view the names of the specific digital patterns. <!-- They are searchable by the color bar type (Color Bar/Test card) in the insights. -->The timeline includes the following types:
-- Color bars
-- Test cards
-
-This insight is most useful to customers involved in the movie post-production process.
-
-## View post-production insights
-
-In order to set the indexing process to include the slate metadata, select the **Video + audio indexing** -> **Advanced** presets.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/slate-detection-process/advanced-setting.png" alt-text="This image shows the advanced setting in order to view post-production clapperboards insights.":::
-
-After the file has been uploaded and indexed, if you want to view the timeline of the insight, select the **Post-production** checkmark from the list of insights.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/slate-detection-process/post-production-checkmark.png" alt-text="This image shows the post-production checkmark needed to view clapperboards.":::
-
-### View digital patterns insights
-
-#### View the insight
-
-To see the instances on the website, select **Insights** and scroll to **Labels**.
-The insight shows under **Labels** in the **Insight** tab.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/slate-detection-process/insights-color-bars.png" alt-text="This image shows the color bars under labels.":::
-
-#### View the timeline
-
-If you checked the **Post-production** insight, you can find the color bars instance and timeline under the **Timeline** tab.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/slate-detection-process/timeline-color-bars.png" alt-text="This image shows the color bars under timeline.":::
-
-#### View JSON
-
-To display the JSON file:
-
-1. Select Download and then Insights (JSON).
-1. Copy the `framePatterns` element, under `insights`, and paste it into an online JSON viewer.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/slate-detection-process/color-bar-json.png" alt-text="This image shows the color bars json.":::
-
-The following table describes the fields found in the JSON (a parsing sketch follows the table):
-
-|Name|Description|
-|||
-|`id`|The digital pattern ID.|
-|`patternType`|The following types are supported: ColorBars, TestCards.|
-|`confidence`|The confidence level for color bar accuracy.|
-|`name`|The name of the element. For example, "SMPTE color bars".|
-|`displayName`| The friendly/display name.
-|`thumbnailId`|The ID of the thumbnail.|
-|`instances`|A list of time ranges where this element appeared.|
-
-## Limitations
-- There can be a mismatch if the input video is of low quality (for example, old analog recordings).
-- The digital patterns are identified only in the first 10 minutes and the last 10 minutes of the video.
-
-## Next steps
-
-* [Slate detection overview](slate-detection-insight.md)
-* [How to enable and view clapper board with extracted metadata](clapperboard-metadata.md)
-* [How to enable and view textless slate with matched scene](textless-slate-scene-matching.md)
azure-video-indexer Edit Speakers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/edit-speakers.md
- Title: Edit speakers in the Azure AI Video Indexer website
-description: The article demonstrates how to edit speakers with the Azure AI Video Indexer website.
- Previously updated : 11/01/2022----
-# Edit speakers with the Azure AI Video Indexer website
--
-Azure AI Video Indexer identifies each speaker in a video and attributes each transcribed line to a speaker. The speakers are given a unique identity such as `Speaker #1` and `Speaker #2`. To provide clarity and enrich the transcript quality, you may want to replace the assigned identity with each speaker's actual name. To edit speakers' names, use the edit actions as described in the article.
-
-The article demonstrates how to edit speakers with the [Azure AI Video Indexer website](https://www.videoindexer.ai/). The same editing operations are possible with the API. To use the API, call [update video index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Index).
-
-> [!NOTE]
-> The addition or editing of a speaker name is applied throughout the transcript of the video but is not applied to other videos in your Azure AI Video Indexer account.
-
-## Start editing
-
-1. Sign in to the [Azure AI Video Indexer website](https://www.videoindexer.ai/).
-2. Select a video.
-3. Select the **Timeline** tab.
-4. Choose to view speakers.
--
-## Add a new speaker
-
-This action allows adding new speakers that were not identified by Azure AI Video Indexer. To add a new speaker from the website for the selected video, do the following:
-
-1. Select the edit mode.
-
- :::image type="content" alt-text="Screenshot of how to edit speakers." source="./media/edit-speakers-website/edit.png":::
-1. Go to the speakers drop down menu above the transcript line you wish to assign a new speaker to.
-1. Select **Assign a new speaker**.
-
- :::image type="content" alt-text="Screenshot of how to add a new speaker." source="./media/edit-speakers-website/assign-new.png":::
-1. Add the name of the speaker you would like to assign.
-1. Press a checkmark to save.
-
-> [!NOTE]
-> Speaker names should be unique across the speakers in the current video.
-
-## Rename an existing speaker
-
-This action allows renaming an existing speaker that was identified by Azure AI Video Indexer. The update applies to all speakers identified by this name.
-
-To rename a speaker from the website for the selected video, do the following:
-
-1. Select the edit mode.
-1. Go to the transcript line where the speaker you wish to rename appears.
-1. Select **Rename selected speaker**.
-
- :::image type="content" alt-text="Screenshot of how to rename a speaker." source="./media/edit-speakers-website/rename.png":::
-
- This action will update speakers by this name.
-1. Press a checkmark to save.
-
-## Assign a speaker to a transcript line
-
-This action allows assigning a speaker to a specific transcript line with a wrong assignment. To assign a speaker to a transcript line from the website, do the following:
-
-1. Go to the transcript line you want to assign a different speaker to.
-1. Select a speaker from the speakers drop down menu above that you wish to assign.
-
- The update only applies to the currently selected transcript line.
-
-If the speaker you wish to assign doesn't appear on the list you can either **Assign a new speaker** or **Rename an existing speaker** as described above.
-
-## Limitations
-
-When adding a new speaker or renaming a speaker, the new name should be unique.
-
-## Next steps
-
-[Insert or remove transcript lines in the Azure AI Video Indexer website](edit-transcript-lines-portal.md)
azure-video-indexer Edit Transcript Lines Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/edit-transcript-lines-portal.md
- Title: View and update transcriptions in Azure AI Video Indexer website
-description: This article explains how to insert or remove a transcript line in the Azure AI Video Indexer website. It also shows how to view word-level information.
- Previously updated : 05/03/2022----
-# View and update transcriptions
--
-This article explains how to insert or remove a transcript line in the Azure AI Video Indexer website. It also shows how to view word-level information.
-
-## Insert or remove transcript lines in the Azure AI Video Indexer website
-
-This section explains how to insert or remove a transcript line in the [Azure AI Video Indexer website](https://www.videoindexer.ai/).
-
-### Add new line to the transcript timeline
-
-While in the edit mode, hover between two transcription lines. In the gap between the **ending time** of one **transcript line** and the beginning of the following transcript line, you should see the **add new transcription line** option.
--
-After clicking **add new transcription line**, you can add the new text and the time stamp for the new line. Enter the text, choose the time stamp, and select **save**. The default time stamp is the gap between the previous and next transcript line.
--
-If there isn't an option to add a new line, you can adjust the end/start time of the relevant transcript lines to fit a new line in your desired place.
-
-Choose an existing line in the transcript line, click the **three dots** icon, select edit and change the time stamp accordingly.
-
-> [!NOTE]
-> New lines will not appear as part of the **From transcript edits** in the **Content model customization** under languages.
->
-> While using the API, when adding a new line, **Speaker name** can be added using free text. For example, *Speaker 1* can now become *Adam*.
-
-### Edit existing line
-
-While in the edit mode, select the three dots icon. The editing options contain not just the text but also the time stamp, with millisecond accuracy.
-
-### Delete line
-
-Lines can now be deleted through the same three dots icon.
-
-### Consolidate two lines as one
-
-To consolidate two lines that you believe should appear as one:
-
-1. Go to line number 2, select edit.
-1. Copy the text
-1. Delete the line
-1. Go to line 1, edit, paste the text and save.
-
-## Examine word-level transcription information
-
-This section shows how to examine word-level transcription information based on sentences and phrases that Azure AI Video Indexer identified. Each phrase is broken into words, and each word has the following information associated with it:
-
-|Name|Description|Example|
-||||
-|Word|A word from a phrase.|"thanks"|
-|Confidence|How confident Azure AI Video Indexer is that the word is correct.|0.80127704|
-|Offset|The time offset from the beginning of the video to where the word starts.|PT0.86S|
-|Duration|The duration of the word.|PT0.28S|
-
-### Get and view the transcript
-
-1. Sign in on the [Azure AI Video Indexer website](https://www.videoindexer.ai).
-1. Select a video.
-1. In the top-right corner, press arrow down and select **Artifacts (ZIP)**.
-1. Download the artifacts.
-1. Unzip the downloaded file > browse to where the unzipped files are located > find and open `transcript.speechservices.json`.
-1. Format and view the json.
-1. Find `RecognizedPhrases` > `NBest` > `Words` and locate the information that interests you.
-
-```json
-"RecognizedPhrases": [
-{
- "RecognitionStatus": "Success",
- "Channel": 0,
- "Speaker": 1,
- "Offset": "PT0.86S",
- "Duration": "PT11.01S",
- "OffsetInTicks": 8600000,
- "DurationInTicks": 110100000,
- "NBest": [
- {
- "Confidence": 0.82356554,
- "Lexical": "thanks for joining ...",
- "ITN": "thanks for joining ...",
- "MaskedITN": "",
- "Display": "Thanks for joining ...",
- "Words": [
- {
- "Word": "thanks",
- "Confidence": 0.80127704,
- "Offset": "PT0.86S",
- "Duration": "PT0.28S",
- "OffsetInTicks": 8600000,
- "DurationInTicks": 2800000
- },
- {
- "Word": "for",
- "Confidence": 0.93965703,
- "Offset": "PT1.15S",
- "Duration": "PT0.13S",
- "OffsetInTicks": 11500000,
- "DurationInTicks": 1300000
- },
- {
- "Word": "joining",
- "Confidence": 0.97060966,
- "Offset": "PT1.29S",
- "Duration": "PT0.31S",
- "OffsetInTicks": 12900000,
- "DurationInTicks": 3100000
- },
- {
-
-```
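-
-As an example of what you can do with this file, the following sketch flags words whose confidence falls below a threshold, which is a quick way to spot transcript lines worth reviewing. It assumes the file layout shown above.
-
-```powershell
-# Sketch: list low-confidence words from transcript.speechservices.json.
-$transcript = Get-Content ./transcript.speechservices.json -Raw | ConvertFrom-Json
-
-foreach ($phrase in $transcript.RecognizedPhrases) {
-    foreach ($word in $phrase.NBest[0].Words) {
-        if ($word.Confidence -lt 0.85) {
-            "{0} at {1} (confidence {2})" -f $word.Word, $word.Offset, $word.Confidence
-        }
-    }
-}
-```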
-
-## Next steps
-
-To update transcript lines and text by using the API, visit the [Azure AI Video Indexer API developer portal](https://aka.ms/avam-dev-portal).
azure-video-indexer Emotions Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/emotions-detection.md
- Title: Azure AI Video Indexer text-based emotion detection overview
-description: This article gives an overview of Azure AI Video Indexer text-based emotion detection.
- Previously updated : 08/02/2023-----
-# Text-based emotion detection
--
-Emotions detection is an Azure AI Video Indexer AI feature that automatically detects emotions in a video's transcript lines. Each sentence can be detected as:
-- *Anger*
-- *Fear*
-- *Joy*
-- *Sad*
-
-Or, none of the above if no other emotion was detected.
-
-The model works on text only (labeling emotions in video transcripts). This model doesn't infer the emotional state of people and may not perform well where input is ambiguous or unclear, such as sarcastic remarks. Thus, the model shouldn't be used for things like assessing employee performance or the emotional state of a person.
-
-## General principles
-
-There are many things you need to consider when deciding how to use and implement an AI-powered feature:
-- Will this feature perform well in my scenario? Before deploying emotions detection into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
-- Are we equipped to identify and respond to errors? AI-powered products and features aren't 100% accurate, so consider how you identify and respond to any errors that may occur.
-
-## View the insight
-
-When working on the website, the insights are displayed in the **Insights** tab. They can also be generated as a categorized list in a JSON file that includes the ID, type, and a list of instances where each emotion appeared, with their time and confidence.
-
-To display the instances in a JSON file, do the following:
-
-1. Select Download -> Insights (JSON).
-1. Copy the text and paste it into an online JSON viewer.
-
-```json
-"emotions": [
- {
- "id": 1,
- "type": "Sad",
- "instances": [
- {
- "confidence": 0.5518,
- "adjustedStart": "0:00:00",
- "adjustedEnd": "0:00:05.75",
- "start": "0:00:00",
- "end": "0:00:05.75"
- },
-
-```
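-
-Once downloaded, the `emotions` array is easy to summarize locally. A minimal sketch, assuming the insights were saved to insights.json and follow the standard index layout (`videos[0].insights.emotions`):
-
-```powershell
-# Sketch: count detected emotion instances per type from a downloaded insights.json.
-$insights = Get-Content ./insights.json -Raw | ConvertFrom-Json
-
-foreach ($emotion in $insights.videos[0].insights.emotions) {
-    "{0}: {1} instance(s)" -f $emotion.type, @($emotion.instances).Count
-}
-```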
-
-To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
-
-> [!NOTE]
-> Text-based emotion detection is language independent. However, if the transcript is not in English, it is first translated to English and only then is the model applied. This may reduce the accuracy of emotions detection for non-English languages.
-
-## Emotions detection components
-
-During the emotions detection procedure, the transcript of the video is processed, as follows:
-
-|Component |Definition |
-|||
-|Source language |The user uploads the source file for indexing. |
-|Transcription API |The audio file is sent to Azure AI services and the translated transcribed output is returned. If a language has been specified, it's processed. |
-|Emotions detection |Each sentence is sent to the emotions detection model. The model produces the confidence level of each emotion. If the confidence level exceeds a specific threshold, and there's no ambiguity between positive and negative emotions, the emotion is detected. In any other case, the sentence is labeled as neutral.|
-|Confidence level |The estimated confidence level of the detected emotions is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as an 0.82 score. |
-
-## Considerations and limitations for input data
-
-Here are some considerations to keep in mind when using emotions detection:
-- When uploading a file, always use high-quality audio and video content.
-
-When used responsibly and carefully, emotions detection is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
-- Always respect an individual's right to privacy, and only ingest media for lawful and justifiable purposes.
-- Don't purposely disclose inappropriate media showing young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual's personal freedom.
-- Commit to respecting and promoting human rights in the design and deployment of your analyzed media.
-- When using third-party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
-- Always seek legal advice when using media from unknown sources.
-- Always obtain appropriate legal and professional advice to ensure that your uploaded media is secured and has adequate controls to preserve the integrity of your content and to prevent unauthorized access.
-- Provide a feedback channel that allows users and individuals to report issues with the service.
-- Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing media containing people.
-- Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision-making.
-- Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
-
-## Transparency Notes
-
-### General
-
-Review [Transparency Note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
-
-### Emotion detection specific
-
-Introduction: This model is designed to help detect emotions in the transcript of a video. However, it isn't suitable for making assessments about an individual's emotional state, their ability, or their overall performance.
-
-Use cases: This emotion detection model is intended to help determine the sentiment behind sentences in the video's transcript. However, it only works on the text itself, and may not perform well for sarcastic input or in cases where input may be ambiguous or unclear.
-
-Information requirements: To increase the accuracy of this model, it is recommended that input data be in a clear and unambiguous format. Users should also note that this model does not have context about input data, which can impact its accuracy.
-
-Limitations: This model can produce both false positives and false negatives. To reduce the likelihood of either, users are advised to follow best practices for input data and preprocessing, and to interpret outputs in the context of other relevant information. It's important to note that the system does not have any context of the input data.
-
-Interpretation: The outputs of this model should not be used to make assessments about an individual's emotional state or other human characteristics. This model is supported in English and may not function properly with non-English inputs. Non-English inputs are translated to English before entering the model and therefore may produce less accurate results.
-
-### Intended use cases
-- Content Creators and Video Editors - Content creators and video editors can use the system to analyze the emotions expressed in the text transcripts of their videos. This helps them gain insights into the emotional tone of their content, allowing them to fine-tune the narrative, adjust pacing, or ensure the intended emotional impact on the audience.
-- Media Analysts and Researchers - Media analysts and researchers can employ the system to analyze the emotional content of a large volume of video transcripts quickly. They can use the emotional timeline generated by the system to identify trends, patterns, or emotional responses in specific topics or areas of interest.
-- Marketing and Advertising Professionals - Marketing and advertising professionals can utilize the system to assess the emotional reception of their campaigns or video advertisements. Understanding the emotions evoked by their content helps them tailor messages more effectively and gauge the success of their campaigns.
-- Video Consumers and Viewers - End-users, such as viewers or consumers of video content, can benefit from the system by understanding the emotional context of videos without having to watch them entirely. This is particularly useful for users who want to decide if a video is worth watching or for those with limited time to spare.
-- Entertainment Industry Professionals - Professionals in the entertainment industry, such as movie producers or directors, can utilize the system to gauge the emotional impact of their film scripts or storylines, aiding in script refinement and audience engagement.
-
-### Considerations when choosing other use cases
-- The model should not be used to evaluate employee performance or to monitor individuals.
-- The model should not be used for making assessments about a person, their emotional state, or their ability.
-- The results of the model can be inaccurate, as this is an AI system, and should be treated with caution.
-- The confidence of the model in its prediction should also be taken into account.
-- Non-English videos will produce less accurate results.
-
-## Next steps
-
-### Learn More about Responsible AI
-- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
-- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
-- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
-- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
-
-### Contact us
-
-`visupport@microsoft.com`
-
-## Azure AI Video Indexer insights
-
-View some other Azure Video Insights:
-- [Audio effects detection](audio-effects-detection.md)
-- [Face detection](face-detection.md)
-- [OCR](ocr.md)
-- [Keywords extraction](keywords.md)
-- [Transcription, Translation & Language identification](transcription-translation-lid.md)
-- [Named entities](named-entities.md)
-- [Observed people tracking & matched persons](observed-matched-people.md)
-- [Topics inference](topics-inference.md)
azure-video-indexer Face Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/face-detection.md
- Title: Face detection overview
-description: Get an overview of face detection in Azure AI Video Indexer.
- Previously updated : 04/17/2023-----
-# Face detection
--
-Face detection, a feature of Azure AI Video Indexer, automatically detects faces in a media file, and then aggregates instances of similar faces into groups. The celebrities recognition model then runs to recognize celebrities.
-
-The celebrities recognition model covers approximately 1 million faces and is based on commonly requested data sources. Faces that Video Indexer doesn't recognize as celebrities are still detected but are left unnamed. You can build your own custom [person model](/azure/azure-video-indexer/customize-person-model-overview) to train Video Indexer to recognize faces that aren't recognized by default.
-
-Face detection insights are generated as a categorized list in a JSON file that includes a thumbnail and either a name or an ID for each face. Selecting a face's thumbnail displays information like the name of the person (if they were recognized), the percentage of the video that the person appears, and the person's biography, if they're a celebrity. You can also scroll between instances in the video where the person appears.
-
-> [!IMPORTANT]
-> To support Microsoft Responsible AI principles, access to face identification, customization, and celebrity recognition features is limited and based on eligibility and usage criteria. Face identification, customization, and celebrity recognition features are available to Microsoft managed customers and partners. To apply for access, use the [facial recognition intake form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu).
-
-## Prerequisites
-
-Review [Transparency Note for Azure AI Video Indexer](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context).
-
-## General principles
-
-This article discusses face detection and key considerations for using this technology responsibly. You need to consider many important factors when you decide how to use and implement an AI-powered feature, including:
-- Will this feature perform well in your scenario? Before you deploy face detection in your scenario, test how it performs by using real-life data. Make sure that it can deliver the accuracy you need.
-- Are you equipped to identify and respond to errors? AI-powered products and features aren't 100 percent accurate, so consider how you'll identify and respond to any errors that occur.
-
-## Key terms
-
-| Term | Definition |
-|||
-| insight | The information and knowledge that you derive from processing and analyzing video and audio files. The insight can include detected objects, people, faces, keyframes, and translations or transcriptions. |
-| face recognition  | Analyzing images to identify the faces that appear in the images. This process is implemented via the Azure AI Face API. |
-| template | Enrolled images of people are converted to templates, which are then used for facial recognition. Machine-interpretable features are extracted from one or more images of an individual to create that individual's template. The enrollment or probe images aren't stored by the Face API, and the original images can't be reconstructed based on a template. Template quality is a key determinant for accuracy in your results. |
-| enrollment | The process of enrolling images of individuals for template creation so that they can be recognized. When a person is enrolled to a verification system that's used for authentication, their template is also associated with a primary identifier that's used to determine which template to compare against the probe template. High-quality images and images that represent natural variations in how a person looks (for instance, wearing glasses and not wearing glasses) generate high-quality enrollment templates. |
-| deep search | The ability to retrieve only relevant video and audio files from a video library by searching for specific terms within the extracted insights. |
-
-## View insights
-
-To see face detection instances on the Azure AI Video Indexer website:
-
-1. When you upload the media file, in the **Upload and index** dialog, select **Advanced settings**.
-1. On the left menu, select **People models**. Select a model to apply to the media file.
-1. After the file is uploaded and indexed, go to **Insights** and scroll to **People**.
-
-To see face detection insights in a JSON file:
-
-1. On the Azure AI Video Indexer website, open the uploaded video.
-1. Select **Download** > **Insights (JSON)**.
-1. Under `insights`, copy the `faces` element and paste it into your JSON viewer.
-
- ```json
- "faces": [
- {
- "id": 1785,
- "name": "Emily Tran",
- "confidence": 0.7855,
- "description": null,
- "thumbnailId": "fd2720f7-b029-4e01-af44-3baf4720c531",
- "knownPersonId": "92b25b4c-944f-4063-8ad4-f73492e42e6f",
- "title": null,
- "imageUrl": null,
- "thumbnails": [
- {
- "id": "4d182b8c-2adf-48a2-a352-785e9fcd1fcf",
- "fileName": "FaceInstanceThumbnail_4d182b8c-2adf-48a2-a352-785e9fcd1fcf.jpg",
- "instances": [
- {
- "adjustedStart": "0:00:00",
- "adjustedEnd": "0:00:00.033",
- "start": "0:00:00",
- "end": "0:00:00.033"
- }
- ]
- },
- {
- "id": "feff177b-dabf-4f03-acaf-3e5052c8be57",
- "fileName": "FaceInstanceThumbnail_feff177b-dabf-4f03-acaf-3e5052c8be57.jpg",
- "instances": [
- {
- "adjustedStart": "0:00:05",
- "adjustedEnd": "0:00:05.033",
- "start": "0:00:05",
- "end": "0:00:05.033"
- }
- ]
- },
- ]
- }
- ]
- ```
-
-To download the JSON file via the API, go to the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
-
-> [!IMPORTANT]
-> When you review face detections in the UI, you might not see all faces that appear in the video. We expose only face groups that have a confidence of more than 0.5, and the face must appear for a minimum of 4 seconds or 10 percent of the value of `video_duration`. Only when these conditions are met do we show the face in the UI and in the *Insights.json* file. You can always retrieve all face instances from the face artifact file by using the API: `https://api.videoindexer.ai/{location}/Accounts/{accountId}/Videos/{videoId}/ArtifactUrl[?Faces][&accessToken]`.
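-
-The artifact call in the note above returns a download URL for the full faces artifact. A minimal sketch of using it from PowerShell, assuming query-string access-token auth; the `type=Faces` query parameter and the output file name are assumptions based on the note's `[?Faces]` shorthand.
-
-```powershell
-# Sketch: get the faces artifact download URL and fetch the artifact.
-$location    = "trial"
-$accountId   = "<account-id>"
-$videoId     = "<video-id>"
-$accessToken = "<access-token>"
-
-$artifactUrl = Invoke-RestMethod -Uri ("https://api.videoindexer.ai/$location/Accounts/$accountId/Videos/$videoId/ArtifactUrl" +
-    "?type=Faces&accessToken=$accessToken")
-
-Invoke-WebRequest -Uri $artifactUrl -OutFile ./faces-artifact.json
-```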
-
-## Face detection components
-
-The following table describes how images in a media file are processed during the face detection procedure:
-
-| Component | Definition |
-|||
-| source file | The user uploads the source file for indexing. |
-| detection and aggregation | The face detector identifies the faces in each frame. The faces are then aggregated and grouped. |
-| recognition | The celebrities model processes the aggregated groups to recognize celebrities. If you've created your own people model, it also processes groups to recognize other people. If people aren't recognized, they're labeled Unknown1, Unknown2, and so on. |
-| confidence value | Where applicable for well-known faces or for faces that are identified in the customizable list, the estimated confidence level of each label is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82 percent certainty is represented as an 0.82 score. |
-
-## Example use cases
-
-The following list describes examples of common use cases for face detection:
-- Summarize where an actor appears in a movie or reuse footage by deep searching specific faces in organizational archives for insight about a specific celebrity.
-- Get improved efficiency when you create feature stories at a news agency or sports agency. Examples include deep searching a celebrity or a football player in organizational archives.
-- Use faces that appear in a video to create promos, trailers, or highlights. Video Indexer can assist by adding keyframes, scene markers, time stamps, and labeling so that content editors invest less time reviewing numerous files.
-
-## Considerations for choosing a use case
-
-Face detection is a valuable tool for many industries when it's used responsibly and carefully. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend that you follow these use guidelines:
-- Carefully consider the accuracy of the results. To promote more accurate detection, check the quality of the video. Low-quality video might affect the insights that are presented.
-- Carefully review results if you use face detection for law enforcement. People might not be detected if they're small, sitting, crouching, or obstructed by objects or other people. To ensure fair and high-quality decisions, combine face detection-based automation with human oversight.
-- Don't use face detection for decisions that might have serious, adverse impacts. Decisions that are based on incorrect output can have serious, adverse impacts. It's advisable to include human review of decisions that have the potential for serious impacts on individuals.
-- Always respect an individual's right to privacy, and ingest videos only for lawful and justifiable purposes.
-- Don't purposely disclose inappropriate content about young children, family members of celebrities, or other content that might be detrimental to or pose a threat to an individual's personal freedom.
-- Commit to respecting and promoting human rights in the design and deployment of your analyzed media.
-- If you use third-party materials, be aware of any existing copyrights or required permissions before you distribute content that's derived from them.
-- Always seek legal advice if you use content from an unknown source.
-- Always obtain appropriate legal and professional advice to ensure that your uploaded videos are secured and that they have adequate controls to preserve content integrity and prevent unauthorized access.
-- Provide a feedback channel that allows users and individuals to report issues they might experience with the service.
-- Be aware of any applicable laws or regulations that exist in your area about processing, analyzing, and sharing media that features people.
-- Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision making.
-- Fully examine and review the potential of any AI model that you're using to understand its capabilities and limitations.
-
-## Related content
-
-Learn more about Responsible AI:
-- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
-- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
-- [Microsoft Azure Learn training courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
-- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
-
-Azure AI Video Indexer insights:
-- [Audio effects detection](audio-effects-detection.md)
-- [OCR](ocr.md)
-- [Keywords extraction](keywords.md)
-- [Transcription, translation, and language identification](transcription-translation-lid.md)
-- [Labels identification](labels-identification.md)
-- [Named entities](named-entities.md)
-- [Observed people tracking and matched persons](observed-matched-people.md)
-- [Topics inference](topics-inference.md)
azure-video-indexer Face Redaction With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/face-redaction-with-api.md
- Title: Redact faces by using Azure AI Video Indexer API
-description: Learn how to use the Azure AI Video Indexer face redaction feature by using API.
- Previously updated : 08/11/2023----
-# Redact faces by using Azure AI Video Indexer API
--
-You can use Azure AI Video Indexer to detect and identify faces in video. To modify your video to blur (redact) faces of specific individuals, you can use the API.
-
-A few minutes of footage that contain multiple faces can take hours to redact manually, but by using presets in Video Indexer API, the face redaction process requires just a few simple steps.
-
-This article shows you how to redact faces by using an API. Video Indexer API includes a **Face Redaction** preset that offers scalable face detection and redaction (blurring) in the cloud. The article demonstrates each step of how to redact faces by using the API in detail.
-
-The following video shows how to redact a video by using Azure AI Video Indexer API.
-
-> [!VIDEO https://www.microsoft.com/videoplayer/embed/RW16UBo]
-
-## Compliance, privacy, and security
-
-As an important [reminder](limited-access-features.md), you must comply with all applicable laws in your use of analytics or insights that you derive by using Video Indexer.
-
-Face service access is limited based on eligibility and usage criteria to support the Microsoft Responsible AI principles. Face service is available only to Microsoft managed customers and partners. Use the [Face Recognition intake form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu) to apply for access. For more information, see the [Face limited access page](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext).
-
-## Face redaction terminology and hierarchy
-
-Face redaction in Video Indexer relies on the output of existing Video Indexer face detection results that we provide in our Video Standard and Advanced Analysis presets.
-
-To redact a video, you must first upload a video to Video Indexer and complete an analysis by using the **Standard** or **Advanced** video presets. You can do this by using the [Azure Video Indexer website](https://www.videoindexer.ai/media/library) or [API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video). You can then use face redaction API to reference this video by using the `videoId` value. We create a new video in which the indicated faces are redacted. Both the video analysis and face redaction are separate billable jobs. For more information, see our [pricing page](https://azure.microsoft.com/pricing/details/video-indexer/).
-
-## Types of blurring
-
-You can choose from different types of blurring in face redaction. To select a type, use a name or representative number for the `blurringKind` parameter in the request body:
-
-|blurringKind number | blurringKind name | Example |
-||||
-|0| MediumBlur|:::image type="content" source="./media/face-redaction-with-api/medium-blur.png" alt-text="Photo of the Azure AI Video Indexer medium blur.":::|
-|1| HighBlur|:::image type="content" source="./media/face-redaction-with-api/high-blur.png" alt-text="Photo of the Azure AI Video Indexer high blur.":::|
-|2| LowBlur|:::image type="content" source="./media/face-redaction-with-api/low-blur.png" alt-text="Photo of the Azure AI Video Indexer low blur.":::|
-|3| BoundingBox|:::image type="content" source="./media/face-redaction-with-api/bounding-boxes.png" alt-text="Photo of Azure AI Video Indexer bounding boxes.":::|
-|4| Black|:::image type="content" source="./media/face-redaction-with-api/black-boxes.png" alt-text="Photo of Azure AI Video Indexer black boxes kind.":::|
-
-You can specify the kind of blurring in the request body by using the `blurringKind` parameter.
-
-Here's an example:
-
-```json
-{
- "faces": {
- "blurringKind": "HighBlur"
- }
-}
-```
-
-Or, use a number that represents the type of blurring that's described in the preceding table:
-
-```json
-{
- "faces": {
- "blurringKind": 1
- }
-}
-```
-
-## Filters
-
-You can apply filters to set which face IDs to blur. You can specify the IDs of the faces in a comma-separated array in the body of the JSON file. Use the `scope` parameter to exclude or include these faces for redaction. By specifying IDs, you can either redact all faces *except* the IDs that you indicate or redact *only* those IDs. See examples in the next sections.
-
-### Exclude scope
-
-In the following example, to redact all faces except face IDs 1001 and 1016, use the `Exclude` scope:
-
-```json
-{
- "faces": {
- "blurringKind": "HighBlur",
- "filter": {
- "ids": [1001, 1016],
- "scope": "Exclude"
- }
- }
-}
-```
-
-### Include scope
-
-In the following example, to redact only face IDs 1001 and 1016, use the `Include` scope:
-
-```json
-{
- "faces": {
- "blurringKind": "HighBlur",
- "filter": {
- "ids": [1001, 1016],
- "scope": "Include"
- }
- }
-}
-```
-
-### Redact all faces
-
-To redact all faces, remove the scope filter:
-
-```json
-{
- "faces": {
- "blurringKind": "HighBlur",
- }
-}
-```
-
-To retrieve a face ID, you can go to the indexed video and retrieve the [artifact file](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Artifact-Download-Url). The artifact contains a *faces.json* file and a thumbnail .zip file that has all the faces that were detected in the video. You can match the face to the ID and decide which face IDs to redact.
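
For illustration, here's a minimal Python sketch that lists the face IDs and names from a downloaded faces JSON file so you can decide which IDs to put in the `ids` filter. The `faces`, `id`, and `name` field names are assumptions here; verify them against the file you actually download from the artifact .zip.

```python
import json

# Hypothetical file name: the faces JSON extracted from the downloaded artifact .zip.
with open("faces.json", encoding="utf-8") as f:
    data = json.load(f)

# The "faces"/"id"/"name" field names are assumptions; check your own file.
for face in data.get("faces", []):
    # Print each detected face ID and its (possibly unknown) name so you can
    # choose which IDs to pass in the redaction filter's "ids" array.
    print(face.get("id"), face.get("name"))
```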
-
-## Create a redaction job
-
-To create a redaction job, you can invoke the following API call:
-
-```http
-POST https://api.videoindexer.ai/{location}/Accounts/{accountId}/Videos/{videoId}/redact[?name][&priority][&privacy][&externalId][&streamingPreset][&callbackUrl][&accessToken]
-```
-
-The following values are required:
-
-| Name | Value | Description |
-||||
-|`Accountid` |`{accountId}`| The ID of your Video Indexer account. |
-| `Location` |`{location}`| The Azure region where your Video Indexer account is located. For example, westus. |
-|`AccessToken` |`{token}`| The token that has Account Contributor rights generated through the [Azure Resource Manager](/rest/api/videoindexer/stable/generate/access-token?tabs=HTTP) REST API. |
-| `Videoid` |`{videoId}`| The video ID of the source video to redact. You can retrieve the video ID by using the [List Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=List-Videos) API. |
-| `Name` |`{name}`|The name of the new, redacted video. |
-
-Here's an example of a request:
-
-```http
-https://api.videoindexer.ai/westeurope/Accounts/{id}/Videos/{id}/redact?priority=Low&name=testredaction&privacy=Private&streamingPreset=Default
-```
-
-You can specify the token as an authorization header that has a key value type of `bearertoken:{token}`, or you can provide it as a query parameter by using `?token={token}`.
-
-You also need to add a request body in JSON format with the redaction job options to apply. Here's an example:
-
-```json
-{
- "faces": {
- "blurringKind": "HighBlur"
- }
-}
-```
-
-When the request is successful, you receive the response `HTTP 202 ACCEPTED`.
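
As a rough illustration, the following Python sketch submits a redaction job with the request body shown above. It uses the `requests` library, passes the access token through the `accessToken` query parameter from the endpoint template, and uses placeholder values for the location, account ID, video ID, and token.

```python
import requests

# Placeholder values - replace with your own account details and token.
location = "westeurope"
account_id = "<your-account-id>"
video_id = "<your-video-id>"
access_token = "<your-access-token>"  # Account Contributor token generated through ARM

url = (
    f"https://api.videoindexer.ai/{location}/Accounts/{account_id}"
    f"/Videos/{video_id}/redact"
)
params = {
    "name": "testredaction",
    "priority": "Low",
    "privacy": "Private",
    "streamingPreset": "Default",
    "accessToken": access_token,
}
body = {"faces": {"blurringKind": "HighBlur"}}

response = requests.post(url, params=params, json=body)
print(response.status_code)               # expect 202 Accepted
print(response.headers.get("Location"))   # URL for polling the job status
```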
-
-## Monitor job status
-
-In the response of the job creation request, you receive an HTTP header `Location` that has a URL to the job. You can use the same token to make a GET request to this URL to see the status of the redaction job.
-
-Here's an example URL:
-
-```http
-https://api.videoindexer.ai/westeurope/Accounts/<id>/Jobs/<id>
-```
-
-Here's an example response:
-
-```json
-{
- "creationTime": "2023-05-11T11:22:57.6114155Z",
- "lastUpdateTime": "2023-05-11T11:23:01.7993563Z",
- "progress": 20,
- "jobType": "Redaction",
- "state": "Processing"
-}
-```
-
-If you call the same URL when the redaction job is completed, in the `Location` header, you get a storage shared access signature (SAS) URL to the redacted video. For example:
-
-```http
-https://api.videoindexer.ai/westeurope/Accounts/<id>/Videos/<id>/SourceFile/DownloadUrl
-```
-
-This URL redirects to the .mp4 file that's stored in the Azure Storage account.
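
Continuing the sketch above, you could poll the job URL returned in the `Location` header until the job finishes, and then follow the final `Location` header to the redacted video's download URL. This is illustrative only; the job URL and token come from the previous request, and any job state value other than `Processing` is an assumption.

```python
import time
import requests

job_url = "<job-url-from-location-header>"  # placeholder from the create-job response
access_token = "<your-access-token>"        # same token as before

while True:
    job = requests.get(job_url, params={"accessToken": access_token})
    status = job.json()
    print("Job state:", status.get("state"), "progress:", status.get("progress"))
    if status.get("state") != "Processing":
        break
    time.sleep(30)  # wait before polling again

# When the job has completed, the response's Location header points to a
# download URL (a storage SAS URL) for the redacted .mp4 file.
print(job.headers.get("Location"))
```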
-
-## FAQs
-
-| Question | Answer |
-|||
-| Can I upload a video and redact in one operation? | No. You need to first upload and analyze a video by using Video Indexer API. Then, reference the indexed video in your redaction job. |
-| Can I use the [Azure AI Video Indexer website](https://www.videoindexer.ai/) to redact a video? | No. Currently you can use only the API to create a redaction job.|
-| Can I play back the redacted video by using the Video Indexer [website](https://www.videoindexer.ai/)?| Yes. The redacted video is visible on the Video Indexer website like any other indexed video, but it doesn't contain any insights. |
-| How do I delete a redacted video? | You can use the [Delete Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Video) API and provide the `Videoid` value for the redacted video. |
-| Do I need to pass facial identification gating to use face redaction? | Unless you represent a police department in the United States, no. Even if you're gated, we continue to offer face detection. We don't offer face identification if you're gated. However, you can redact all faces in a video by using only face detection. |
-| Will face redaction overwrite my original video? | No. The face redaction job creates a new video output file. |
-| Not all faces are properly redacted. What can I do? | Redaction relies on the initial face detection and tracking output of the analysis pipeline. Although we detect all faces most of the time, there are circumstances in which we can't detect a face. Factors like face angle, the number of frames the face is present, and the quality of the source video affect the quality of face redaction. For more information, see [Face insights](face-detection.md). |
-| Can I redact objects other than faces? | No. Currently, we offer only face redaction. If you have a need to redact other objects, you can provide feedback about our product in the [Azure User Voice](https://feedback.azure.com/d365community/forum/8952b9e3-e03b-ec11-8c62-00224825aadf) channel. |
-| How long is an SAS URL valid to download the redacted video? |<!--The SAS URL is valid for xxxx. --> To download the redacted video after the SAS URL expires, you need to call the initial job status URL. It's best to keep these `Jobstatus` URLs in a database in your back end for future reference. |
-
-## Error codes
-
-The following sections describe errors that might occur when you use face redaction.
-
-### Response: 404 Not Found
-
-The account wasn't found or the video wasn't found.
-
-#### Response headers
-
-| Name | Required | Type | Description |
-| - | - | - | - |
-| `x-ms-request-id` | false | string | A globally unique identifier (GUID) for the request is assigned by the server for instrumentation purposes. The server makes sure that all logs that are associated with handling the request can be linked to the server request ID. A client can provide this request ID in a support ticket so that support engineers can find the logs that are linked to this specific request. The server makes sure that the request ID is unique for each job. |
-
-#### Response body
-
-| Name | Required | Type |
-| - | - | - |
-| `ErrorType` | false | `ErrorType` |
-| `Message` | false | string |
-
-#### Default JSON
-
-```json
-{
- "ErrorType": "GENERAL",
- "Message": "string"
-}
-```
-
-### Response: 400 Bad Request
-
-Invalid input or can't redact the video because its original upload failed. Upload the video again.
-
-#### Response headers
-
-| Name | Required | Type | Description |
-| - | - | - | - |
-| `x-ms-request-id` | false | string | A GUID for the request is assigned by the server for instrumentation purposes. The server makes sure that all logs that are associated with handling the request can be linked to the server request ID. A client can provide this request ID in a support ticket so that support engineers can find the logs that are linked to this specific request. The server makes sure that the request ID is unique for each job. |
-
-#### Response body
-
-| Name | Required | Type |
-| - | - | - |
-| `ErrorType` | false | `ErrorType` |
-| `Message` | false | string |
-
-#### Default JSON
-
-```json
-{
- "ErrorType": "GENERAL",
- "Message": "string"
-}
-```
-
-### Response: 409 Conflict
-
-The video is already being indexed.
-
-#### Response headers
-
-| Name | Required | Type | Description |
-| - | - | - | - |
-| `x-ms-request-id` | false | string | A GUID for the request is assigned by the server for instrumentation purposes. The server makes sure that all logs that are associated with handling the request can be linked to the server request ID. A client can provide this request ID in a support ticket so that support engineers can find the logs that are linked to this specific request. The server makes sure that the request ID is unique for each job.|
-
-#### Response body
-
-| Name | Required | Type |
-| - | - | - |
-| `ErrorType` | false | `ErrorType` |
-| `Message` | false | string |
-
-#### Default JSON
-
-```json
-{
- "ErrorType": "GENERAL",
- "Message": "string"
-}
-```
-
-### Response: 401 Unauthorized
-
-The access token isn't authorized to access the account.
-
-#### Response headers
-
-| Name | Required | Type | Description |
-| - | - | - | - |
-| `x-ms-request-id` | false | string | A GUID for the request is assigned by the server for instrumentation purposes. The server makes sure that all logs that are associated with handling the request can be linked to the server request ID. A client can provide this request ID in a support ticket so that support engineers can find the logs that are linked to this specific request. The server makes sure that the request ID is unique for each job. |
-
-#### Response body
-
-| Name | Required | Type |
-| - | - | - |
-| `ErrorType` | false | `ErrorType` |
-| `Message` | false | string |
-
-#### Default JSON
-
-```json
-{
- "ErrorType": "USER_NOT_ALLOWED",
- "Message": "Access token is not authorized to access account 'SampleAccountId'."
-}
-```
-
-### Response: 500 Internal Server Error
-
-An error occurred on the server.
-
-#### Response headers
-
-| Name | Required | Type | Description |
-| - | - | - | - |
-| `x-ms-request-id` | false | string | A GUID for the request is assigned by the server for instrumentation purposes. The server makes sure that all logs that are associated with handling the request can be linked to the server request ID. A client can provide this request ID in a support ticket so that support engineers can find the logs that are linked to this specific request. The server makes sure that the request ID is unique for each job. |
-
-#### Response body
-
-| Name | Required | Type |
-| - | - | - |
-| `ErrorType` | false | `ErrorType` |
-| `Message` | false | string |
-
-#### Default JSON
-
-```json
-{
- "ErrorType": "GENERAL",
- "Message": "There was an error."
-}
-```
-
-### Response: 429 Too many requests
-
-Too many requests were sent. Use the `Retry-After` response header to decide when to send the next request.
-
-#### Response headers
-
-| Name | Required | Type | Description |
-| - | - | - | - |
-| `Retry-After` | false | integer | A non-negative decimal integer that indicates the number of seconds to delay after the response is received. |
-
-### Response: 504 Gateway Timeout
-
-The server didn't respond to the gateway within the expected time.
-
-#### Response headers
-
-| Name | Required | Type | Description |
-| - | - | - | - |
-| `x-ms-request-id` | false | string | A GUID for the request is assigned by the server for instrumentation purposes. The server makes sure that all logs that are associated with handling the request can be linked to the server request ID. A client can provide this request ID in a support ticket so that support engineers can find the logs that are linked to this specific request. The server makes sure that the request ID is unique for each job. |
-
-#### Default JSON
-
-```json
-{
- "ErrorType": "SERVER_TIMEOUT",
- "Message": "Server did not respond to gateway within expected time"
-}
-```
-
-## Next steps
- Learn more about [Video Indexer](https://azure.microsoft.com/pricing/details/video-indexer/).
- See [Azure pricing](https://azure.microsoft.com/pricing/) for encoding, streaming, and storage billed by Azure service providers.
azure-video-indexer Import Content From Trial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/import-content-from-trial.md
- Title: Import your content from the trial account
-description: Learn how to import your content from the trial account.
- Previously updated : 12/19/2022-----
-# Import content from your trial account to a regular account
--
-If you would like to transition from the Video Indexer trial account experience to that of a regular paid account, Video Indexer allows you, at no cost, to import the content from your trial account to your new regular account.
-
-When might you want to switch from a trial to a regular account?
-
-* If you have used up the free trial minutes and want to continue indexing.
-* You are ready to start using Video Indexer for production workloads.
-* You want an experience which doesn't have minute, support, or SLA limitations.
-
-## Create a new ARM account for the import
-
-* First, create the regular account. It must already be created and available before you perform the import. Azure AI Video Indexer accounts are Azure Resource Manager (ARM) based, and account creation can be performed through the Azure portal (see [Create an account with the Azure portal](create-account-portal.md)) or the API (see [Create accounts with API](/rest/api/videoindexer/stable/accounts)).
-* The target ARM-based account has to be an empty account that has not yet been used to index any media files.
-* Import from trial can be performed only once per trial account.
-
-## Import your data
-
-To import your data, follow the steps:
-
- 1. Go to the [Azure AI Video Indexer website](https://aka.ms/vi-portal-link)
- 2. Select your trial account and go to the **Account settings** page.
- 3. Click the **Import content to an ARM-based account**.
- 4. From the dropdown menu choose the ARM-based account you wish to import the data to.
-
- * If the account ID isn't showing, you can copy and paste the account ID from the Azure portal or from the list of accounts under the User account blade at the top right of the Azure AI Video Indexer Portal.
-
- 5. Click **Import content**
-
- :::image type="content" alt-text="Screenshot that shows how to import your data." source="./media/create-account/import-to-arm-account.png":::
-
-All media, as well as your customized content model, will be copied from the trial account into the new ARM-based account.
--
azure-video-indexer Indexing Configuration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/indexing-configuration-guide.md
- Title: Indexing configuration guide
-description: This article explains the configuration options of indexing process with Azure AI Video Indexer.
- Previously updated : 04/27/2023----
-# The indexing configuration guide
--
-It's important to understand the configuration options to index efficiently while ensuring you meet your indexing objectives. When indexing videos, users can use the default settings or adjust many of the settings. Azure AI Video Indexer allows you to choose between a range of language, indexing, custom models, and streaming settings that have implications on the insights generated, cost, and performance.
-
-This article explains each of the options and the impact of each option to enable informed decisions when indexing. The article discusses the [Azure AI Video Indexer website](https://www.videoindexer.ai/) experience but the same options apply when submitting jobs through the API (see the [API guide](video-indexer-use-apis.md)). When indexing large volumes, follow the [at-scale guide](considerations-when-use-at-scale.md).
-
-The initial upload screen presents options to define the video name, source language, and privacy settings.
--
-All the other setting options appear if you select Advanced options.
--
-## Default settings
-
-By default, Azure AI Video Indexer is configured to a **Video source language** of English, **Privacy** of private, **Standard** audio and video setting, and **Streaming quality** of single bitrate.
-
-> [!TIP]
-> This topic describes each indexing option in detail.
-
-Below are a few examples of when using the default setting might not be a good fit:
- If you need insights such as observed people or matched person, which are only available through Advanced Video.
- If you're only using Azure AI Video Indexer for transcription and translation, indexing of both audio and video isn't required; **Basic** for audio should suffice.
- If you're consuming Azure AI Video Indexer insights but have no need to generate a new media file, streaming isn't necessary and **No streaming** should be selected to avoid the encoding job and its associated cost.
- If a video is primarily in a language that isn't English.
-### Video source language
-
-If you're aware of the language spoken in the video, select the language from the video source language list. If you're unsure of the language of the video, choose **Auto-detect single language**. When uploading and indexing your video, Azure AI Video Indexer will use language identification (LID) to detect the video's language and generate transcription and insights with the detected language.
-
-If the video may contain multiple languages and you aren't sure which ones, select **Auto-detect multi-language**. In this case, multi-language (MLID) detection will be applied when uploading and indexing your video.
-
-While auto-detect is a great option when the language in your videos varies, there are two points to consider when using LID or MLID:
- LID/MLID don't support all the languages supported by Azure AI Video Indexer.
- The transcription is of a higher quality when you pre-select the video's appropriate language.
-Learn more about [language support and supported languages](language-support.md).
-
-### Privacy
-
-This option allows you to determine if the insights should only be accessible to users in your Azure AI Video Indexer account or to anyone with a link.
-
-### Indexing options
-
-When indexing a video with the default settings, be aware that each of the audio and video indexing options may be priced differently. See [Azure AI Video Indexer pricing](https://azure.microsoft.com/pricing/details/video-indexer/) for details.
-
-Below are the indexing type options with details of their insights provided. To modify the indexing type, select **Advanced settings**.
-
-|Audio only|Video only |Audio & Video |
-||||
-|Basic |||
-|Standard| Standard |Standard |
-|Advanced |Advanced|Advanced |
-
-## Advanced settings
-
-### Audio only
- **Basic**: Indexes and extracts insights by using audio only (ignoring video) and provides the following insights: transcription, translation, formatting of output captions and subtitles (closed captions).
- **Standard**: Indexes and extracts insights by using audio only (ignoring video) and provides the following insights: transcription, translation, formatting of output captions and subtitles (closed captions), automatic language detection, emotions, keywords, named entities (brands, locations, people), sentiments, speakers, topic extraction, and textual content moderation.
- **Advanced**: Indexes and extracts insights by using audio only (ignoring video) and provides the following insights: transcription, translation, formatting of output captions and subtitles (closed captions), automatic language detection, audio event detection, emotions, keywords, named entities (brands, locations, people), sentiments, speakers, topic extraction, and textual content moderation.
-### Video only
- **Standard**: Indexes and extracts insights by using video only (ignoring audio) and provides the following insights: labels (OCR), named entities (OCR - brands, locations, people), OCR, people, scenes (keyframes and shots), black frames, visual content moderation, and topic extraction (OCR).
- **Advanced**: Indexes and extracts insights by using video only (ignoring audio) and provides the following insights: labels (OCR), matched person (preview), named entities (OCR - brands, locations, people), OCR, observed people (preview), people, scenes (keyframes and shots), clapperboard detection, digital pattern detection, featured clothing insight, textless slate detection, textual logo detection, black frames, visual content moderation, and topic extraction (OCR).
-### Audio and Video
- **Standard**: Indexes and extracts insights by using audio and video and provides the following insights: transcription, translation, formatting of output captions and subtitles (closed captions), automatic language detection, emotions, keywords, named entities (brands, locations, people), OCR, scenes (keyframes and shots), black frames, visual content moderation, people, sentiments, speakers, topic extraction, and textual content moderation.
- **Advanced**: Indexes and extracts insights by using audio and video and provides the following insights: transcription, translation, formatting of output captions and subtitles (closed captions), automatic language detection, textual content moderation, audio event detection, emotions, keywords, matched person, named entities (brands, locations, people), OCR, observed people (preview), people, clapperboard detection, digital pattern detection, featured clothing insight, textless slate detection, sentiments, speakers, scenes (keyframes and shots), textual logo detection, black frames, visual content moderation, and topic extraction.
-### Streaming quality options
-
-When indexing a video, you can decide if encoding of the file should occur which will enable streaming. The sequence is as follows:
-
-Upload > Encode (optional) > Index & Analysis > Publish for streaming (optional)
-
-Encoding and streaming operations are performed by and billed by Azure Media Services. There are two additional operations associated with the creation of a streaming video:
- The creation of a Streaming Endpoint.
- Egress traffic – the volume depends on the number of video playbacks, video playback length, and the video quality (bitrate).
-
-There are several aspects that influence the total cost of the encoding job. The first is whether the encoding uses single or adaptive bitrate streaming, which creates either a single output or multiple encoding quality outputs. Each output is billed separately and depends on the source quality of the video you uploaded to Azure AI Video Indexer.
-
-For Media Services encoding pricing details, see [pricing](https://azure.microsoft.com/pricing/details/media-services/#pricing).
-
-When indexing a video, default streaming settings are applied. Below are the streaming type options that can be modified if you select **Advanced** settings and go to **Streaming quality**.
-
-|Single bitrate|Adaptive bitrate| No streaming |
-||||
- **Single bitrate**: With Single Bitrate, the standard Media Services encoder cost will apply for the output. If the video height is greater than or equal to 720p HD, Azure AI Video Indexer encodes it with a resolution of 1280 x 720. Otherwise, it's encoded as 640 x 468. The default setting is content-aware encoding.
- **Adaptive bitrate**: With Adaptive Bitrate, if you upload a video in 720p HD single bitrate to Azure AI Video Indexer and select Adaptive Bitrate, the encoder will use the [AdaptiveStreaming](/rest/api/media/transforms/create-or-update?tabs=HTTP#encodernamedpreset) preset. An output of 720p HD (no output exceeding 720p HD is created) and several lower quality outputs are created (for playback on smaller screens/low bandwidth environments). Each output will use the Media Encoder Standard base price and apply a multiplier for each output. The multiplier is 2x for HD, 1x for non-HD, and 0.25 for audio, and billing is per minute of the input video.

  **Example**: If you index a video in the US East region that is 40 minutes in length and is 720p HD and have selected the streaming option of Adaptive Bitrate, 3 outputs will be created: 1 HD (multiplied by 2), 1 SD (multiplied by 1), and 1 audio track (multiplied by 0.25). This totals (2 + 1 + 0.25) * 40 = 130 billable output minutes. (The same calculation is shown in the sketch after this list.)

  Output minutes (standard encoder): 130 x $0.015/minute = $1.95.
- **No streaming**: Insights are generated but no streaming operation is performed and the video isn't available on the Azure AI Video Indexer website. When No streaming is selected, you aren't billed for encoding.
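
To make the multiplier math in the example above easy to reuse, here's a small Python sketch that reproduces the calculation. The multipliers come from the example; the $0.015/minute figure is only the example rate quoted above, so check the Media Services pricing page for current rates.

```python
# Multipliers from the example: 2x per HD output, 1x per SD output, 0.25x per audio track.
MULTIPLIERS = {"hd": 2.0, "sd": 1.0, "audio": 0.25}

def billable_output_minutes(input_minutes, hd_outputs=1, sd_outputs=1, audio_tracks=1):
    """Return the billable output minutes for one encoded video."""
    total_multiplier = (
        hd_outputs * MULTIPLIERS["hd"]
        + sd_outputs * MULTIPLIERS["sd"]
        + audio_tracks * MULTIPLIERS["audio"]
    )
    return total_multiplier * input_minutes

minutes = billable_output_minutes(input_minutes=40)  # 130.0 output minutes
print(minutes, minutes * 0.015)                       # 130.0 1.95 (example rate only)
```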
-### Customizing content models
-
-Azure AI Video Indexer allows you to customize some of its models to adapt them to your specific use case. These models include brands, language, and person. If you have customized models, this section enables you to configure whether one of the created models should be used for the indexing.
-
-## Next steps
-
-Learn more about [language support and supported languages](language-support.md).
azure-video-indexer Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/insights-overview.md
- Title: Azure AI Video Indexer insights overview
-description: This article gives a brief overview of Azure AI Video Indexer insights.
- Previously updated : 08/02/2023----
-# Azure AI Video Indexer insights
--
-When a video is indexed, Azure AI Video Indexer analyzes the video and audio content by running 30+ AI models, generating rich insights. Insights contain an aggregated view of the data: transcripts, optical character recognition elements (OCRs), faces, topics, emotions, and so on. Once the video is indexed and analyzed, Azure AI Video Indexer produces JSON output that contains details of the video insights. For example, each insight type includes instances of time ranges that show when the insight appears in the video.
-
-Read details about the following insights here:
- [Audio effects detection](audio-effects-detection-overview.md)
- [Text-based emotion detection](emotions-detection.md)
- [Faces detection](face-detection.md)
- [OCR](ocr.md)
- [Keywords extraction](keywords.md)
- [Transcription, translation, language](transcription-translation-lid.md)
- [Labels identification](labels-identification.md)
- [Named entities](named-entities.md)
- [Observed people tracking & matched faces](observed-matched-people.md)
- [Topics inference](topics-inference.md)
-For information about features and other insights, see:
- [Azure AI Video Indexer overview](video-indexer-overview.md)
- [Transparency note](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
-Once you [set up](video-indexer-get-started.md) an Azure AI Video Indexer account (see [account types](accounts-overview.md)) and [upload a video](upload-index-videos.md), you can view insights as described below.
-
-## Get the insights using the website
-
-To visually examine the video's insights, press the **Play** button on the video on the [Azure AI Video Indexer](https://www.videoindexer.ai/) website.
-
-![Screenshot of the Insights tab in Azure AI Video Indexer.](./media/video-indexer-output-json/video-indexer-summarized-insights.png)
-
-To get insights produced on the website or the Azure portal:
-
-1. Browse to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website and sign in.
-1. Find a video whose output you want to examine.
-1. Press **Play**.
-1. Choose the **Insights** tab.
-2. Select which insights you want to view (under the **View** drop-down, on the right-top corner).
-3. Go to the **Timeline** tab to see timestamped transcript lines.
-4. Select **Download** > **Insights (JSON)** to get the insights output file.
-5. If you want to download artifacts, beware of the following:
-
- [!INCLUDE [artifacts](./includes/artifacts.md)]
-
-## Get insights produced by the API
-
-When indexing with an API and the response status is OK, you get a detailed JSON output as the response content. When calling the [Get Video Index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index) API, we recommend passing `&includeSummarizedInsights=false`.
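
For example, a minimal Python sketch of that call might look like the following. The URL shape mirrors the other Video Indexer operations shown in this article, the placeholders are yours to fill in, and the `videos[0].insights` path is an assumption based on the insights JSON structure described elsewhere in this documentation.

```python
import requests

location = "<location>"       # for example, an Azure region such as westeurope
account_id = "<account-id>"
video_id = "<video-id>"
access_token = "<access-token>"

url = (
    f"https://api.videoindexer.ai/{location}/Accounts/{account_id}"
    f"/Videos/{video_id}/Index"
)
params = {
    "accessToken": access_token,
    # Recommended by this article to keep the response smaller:
    "includeSummarizedInsights": "false",
}

index = requests.get(url, params=params).json()
# Assumed path: the per-video insights live under videos[0].insights.
print(index["videos"][0]["insights"].keys())
```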
--
-This API returns a URL only with a link to the specific resource type you request. An additional GET request must be made to this URL for the specific artifact. The file types for each artifact type vary depending on the artifact.
--
-## Examine the Azure AI Video Indexer output
-
-For more information, see [Examine the Azure AI Video Indexer output]( video-indexer-output-json-v2.md).
-
-## Next steps
-
-[View and edit video insights](video-indexer-view-edit.md).
azure-video-indexer Keywords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/keywords.md
- Title: Azure AI Video Indexer keywords extraction overview
-description: An introduction to Azure AI Video Indexer keywords extraction component responsibly.
- Previously updated : 06/15/2022-----
-# Keywords extraction
--
-Keywords extraction is an Azure AI Video Indexer AI feature that automatically detects insights on the different keywords discussed in media files. Keywords extraction can extract insights in both single language and multi-language media files. The total number of extracted keywords and their categories are listed in the Insights tab, where clicking a Keyword and then clicking Play Previous or Play Next jumps to the keyword in the media file.
-
-## Prerequisites
-
-Review [Transparency Note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
-
-## General principles
-
-This article discusses Keywords and the key considerations for making use of this technology responsibly. There are many things you need to consider when deciding how to use and implement an AI-powered feature:
- Will this feature perform well in my scenario? Before deploying Keywords Extraction into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
- Are we equipped to identify and respond to errors? AI-powered products and features won't be 100% accurate, so consider how you'll identify and respond to any errors that may occur.
-## View the insight
-
-When working on the website the insights are displayed in the **Insights** tab. They can also be generated in a categorized list in a JSON file which includes the keyword's ID and text, together with each keyword's specific start and end time and confidence score.
-
-To display the instances in a JSON file, do the following:
-
-1. Click Download and then Insights (JSON).
-1. Copy the text and paste it into your Online JSON Viewer.
-
- ```json
- "keywords": [
- {
- "id": 1,
- "text": "office insider",
- "confidence": 1,
- "language": "en-US",
- "instances": [
- {
- "adjustedStart": "0:00:00",
- "adjustedEnd": "0:00:05.75",
- "start": "0:00:00",
- "end": "0:00:05.75"
- },
- {
- "adjustedStart": "0:01:21.82",
- "adjustedEnd": "0:01:24.7",
- "start": "0:01:21.82",
- "end": "0:01:24.7"
- },
- {
- "adjustedStart": "0:01:31.32",
- "adjustedEnd": "0:01:32.76",
- "start": "0:01:31.32",
- "end": "0:01:32.76"
- },
- {
- "adjustedStart": "0:01:35.8",
- "adjustedEnd": "0:01:37.84",
- "start": "0:01:35.8",
- "end": "0:01:37.84"
- }
- ]
- },
- {
- "id": 2,
- "text": "insider tip",
- "confidence": 0.9975,
- "language": "en-US",
- "instances": [
- {
- "adjustedStart": "0:01:14.91",
- "adjustedEnd": "0:01:19.51",
- "start": "0:01:14.91",
- "end": "0:01:19.51"
- }
- ]
- },
-
- ```
-
-To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
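
If you prefer to inspect the downloaded JSON programmatically rather than in a viewer, a short Python sketch like this one prints each keyword with its confidence and time ranges, using the structure shown above. The top-level shape of your downloaded file may differ slightly, so adjust the path to the `keywords` array accordingly.

```python
import json

# The Insights (JSON) file downloaded from the website or API.
with open("insights.json", encoding="utf-8") as f:
    data = json.load(f)

# Adjust this lookup if your file nests "keywords" under another object.
for keyword in data.get("keywords", []):
    print(f'{keyword["text"]} (confidence {keyword["confidence"]})')
    for instance in keyword["instances"]:
        print(f'  {instance["start"]} -> {instance["end"]}')
```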
-
-> [!NOTE]
-> Keywords extraction is language independent.
-
-## Keywords components
-
-During the Keywords procedure, audio and images in a media file are processed, as follows:
-
-|Component|Definition|
-|||
-|Source language | The user uploads the source file for indexing. |
-|Transcription API |The audio file is sent to Azure AI services and the translated transcribed output is returned. If a language has been specified it is processed.|
-|OCR of video |Images in a media file are processed using the Azure AI Vision Read API to extract text, its location, and other insights. |
-|Keywords extraction |An extraction algorithm processes the transcribed audio. The results are then combined with the insights detected in the video during the OCR process. The keywords, and where they appear in the media, are then detected and identified. |
-|Confidence level| The estimated confidence level of each keyword is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty will be represented as an 0.82 score.|
-
-## Example use cases
- Personalization of keywords to match customer interests, for example websites about England posting promotions about English movies or festivals.
- Deep-searching archives for insights on specific keywords to create feature stories about companies, personas or technologies, for example by a news agency.
-## Considerations and limitations when choosing a use case
-
-Below are some considerations to keep in mind when using keywords extraction:
- When uploading a file, always use high-quality video content. The recommended maximum frame size is HD and frame rate is 30 FPS. A frame should contain no more than 10 people. When outputting frames from videos to AI models, only send around 2 or 3 frames per second. Processing 10 or more frames might delay the AI result.
- When uploading a file, always use high-quality audio and video content. At least 1 minute of spontaneous conversational speech is required to perform analysis. Audio effects are detected in non-speech segments only. The minimal duration of a non-speech section is 2 seconds. Voice commands and singing aren't supported.
-When used responsibly and carefully, keywords extraction is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
- Always respect an individual's right to privacy, and only ingest media for lawful and justifiable purposes.
- Don't purposely disclose inappropriate media showing young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual's personal freedom.
- Commit to respecting and promoting human rights in the design and deployment of your analyzed media.
- When using 3rd party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
- Always seek legal advice when using media from unknown sources.
- Always obtain appropriate legal and professional advice to ensure that your uploaded media is secured and have adequate controls to preserve the integrity of your content and to prevent unauthorized access.
- Provide a feedback channel that allows users and individuals to report issues with the service.
- Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing media containing people.
- Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision-making.
- Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
-## Next steps
-
-### Learn More about Responsible AI
- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
-### Contact us
-
-`visupport@microsoft.com`
-
-## Azure AI Video Indexer insights
- [Audio effects detection](audio-effects-detection.md)
- [Face detection](face-detection.md)
- [OCR](ocr.md)
- [Transcription, Translation & Language identification](transcription-translation-lid.md)
- [Labels identification](labels-identification.md)
- [Named entities](named-entities.md)
- [Observed people tracking & matched persons](observed-matched-people.md)
- [Topics inference](topics-inference.md)
azure-video-indexer Labels Identification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/labels-identification.md
- Title: Azure AI Video Indexer labels identification overview
-description: This article gives an overview of an Azure AI Video Indexer labels identification.
- Previously updated : 06/15/2022-----
-# Labels identification
--
-Labels identification is an Azure AI Video Indexer AI feature that identifies visual objects like sunglasses or actions like swimming that appear in the video footage of a media file. There are many labels identification categories and once extracted, labels identification instances are displayed in the Insights tab and can be translated into over 50 languages. Clicking a label opens the instance in the media file; select Play Previous or Play Next to see more instances.
-
-## Prerequisites
-
-Review [Transparency Note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
-
-## General principles
-
-This article discusses labels identification and the key considerations for making use of this technology responsibly. There are many things you need to consider when deciding how to use and implement an AI-powered feature:
- Does this feature perform well in my scenario? Before deploying labels identification into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
- Are we equipped to identify and respond to errors? AI-powered products and features won't be 100% accurate, so consider how you'll identify and respond to any errors that may occur.
-## View the insight
-
-When working on the website the instances are displayed in the Insights tab. They can also be generated in a categorized list in a JSON file that includes the label's ID, category, and instances, together with each label's specific start and end times and confidence score, as follows:
-
-To display labels identification insights in a JSON file, do the following:
-
-1. Click Download and then Insights (JSON).
-1. Copy the text, paste it into your JSON Viewer.
-
- ```json
- "labels": [
- {
- "id": 1,
- "name": "human face",
- "language": "en-US",
- "instances": [
- {
- "confidence": 0.9987,
- "adjustedStart": "0:00:00",
- "adjustedEnd": "0:00:25.6",
- "start": "0:00:00",
- "end": "0:00:25.6"
- },
- {
- "confidence": 0.9989,
- "adjustedStart": "0:01:21.067",
- "adjustedEnd": "0:01:41.334",
- "start": "0:01:21.067",
- "end": "0:01:41.334"
- }
- ]
- },
- {
- "id": 2,
- "name": "person",
- "referenceId": "person",
- "language": "en-US",
- "instances": [
- {
- "confidence": 0.9959,
- "adjustedStart": "0:00:00",
- "adjustedEnd": "0:00:26.667",
- "start": "0:00:00",
- "end": "0:00:26.667"
- },
- {
- "confidence": 0.9974,
- "adjustedStart": "0:01:21.067",
- "adjustedEnd": "0:01:41.334",
- "start": "0:01:21.067",
- "end": "0:01:41.334"
- }
- ]
- },
- ```
-
-To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
-
-## Labels components
-
-During the Labels procedure, objects in a media file are processed, as follows:
-
-|Component|Definition|
-|||
-|Source |The user uploads the source file for indexing. |
-|Tagging| Images are tagged and labeled. For example, door, chair, woman, headphones, jeans. |
-|Filtering and aggregation |Tags are filtered according to their confidence level and aggregated according to their category.|
-|Confidence level| The estimated confidence level of each label is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as an 0.82 score.|
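
As a rough illustration of how the confidence score can be used downstream, this Python sketch keeps only label instances above a chosen threshold from a downloaded insights JSON file. The `labels` structure follows the example shown earlier in this article, and the 0.8 threshold is arbitrary.

```python
import json

THRESHOLD = 0.8  # arbitrary cutoff chosen for this example

with open("insights.json", encoding="utf-8") as f:
    data = json.load(f)

# Keep only instances whose per-instance confidence meets the threshold.
for label in data.get("labels", []):
    confident = [i for i in label["instances"] if i["confidence"] >= THRESHOLD]
    if confident:
        print(label["name"], [(i["start"], i["end"]) for i in confident])
```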
-
-## Example use cases
- Extracting labels from frames for contextual advertising or branding. For example, placing an ad for beer following footage on a beach.
- Creating a verbal description of footage to enhance accessibility for the visually impaired, for example a background storyteller in movies.
- Deep searching media archives for insights on specific objects to create feature stories for the news.
- Using relevant labels to create content for trailers, highlights reels, social media or new clips.
-## Considerations when choosing a use case
- Carefully consider the accuracy of the results. To promote more accurate detections, check the quality of the video; low-quality video might impact the detected insights.
- Carefully consider, when using labels for law enforcement, that labels potentially can't detect parts of the video. To ensure fair and high-quality decisions, combine labels with human oversight.
- Don't use labels identification for decisions that may have serious adverse impacts. Machine learning models can result in undetected or incorrect classification output. Decisions based on incorrect output could have serious adverse impacts. Additionally, it's advisable to include human review of decisions that have the potential for serious impacts on individuals.
-When used responsibly and carefully, Azure AI Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
- Always respect an individual's right to privacy, and only ingest videos for lawful and justifiable purposes.
- Don't purposely disclose inappropriate content about young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual's personal freedom.
- Commit to respecting and promoting human rights in the design and deployment of your analyzed media.
- When using 3rd party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
- Always seek legal advice when using content from unknown sources.
- Always obtain appropriate legal and professional advice to ensure that your uploaded videos are secured and have adequate controls to preserve the integrity of your content and to prevent unauthorized access.
- Provide a feedback channel that allows users and individuals to report issues with the service.
- Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing media containing people.
- Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision-making.
- Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
-## Learn more about labels identification
- [Transparency note](/legal/cognitive-services/computer-vision/imageanalysis-transparency-note)
- [Use cases](/legal/cognitive-services/computer-vision/imageanalysis-transparency-note#use-cases)
- [Capabilities and limitations](/legal/cognitive-services/computer-vision/imageanalysis-transparency-note#system-performance-and-limitations-for-image-analysis)
- [Evaluation of image analysis](/legal/cognitive-services/computer-vision/imageanalysis-transparency-note#evaluation-of-image-analysis)
- [Data, privacy and security](/legal/cognitive-services/computer-vision/ocr-data-privacy-security)
-## Next steps
-
-### Learn More about Responsible AI
- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
-### Contact us
-
-`visupport@microsoft.com`
-
-## Azure AI Video Indexer insights
- [Audio effects detection](audio-effects-detection.md)
- [Face detection](face-detection.md)
- [OCR](ocr.md)
- [Keywords extraction](keywords.md)
- [Transcription, Translation & Language identification](transcription-translation-lid.md)
- [Named entities](named-entities.md)
- [Observed people tracking & matched persons](observed-matched-people.md)
- [Topics inference](topics-inference.md)
azure-video-indexer Language Identification Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-identification-model.md
- Title: Use Azure AI Video Indexer to auto identify spoken languages
-description: This article describes how the Azure AI Video Indexer language identification model is used to automatically identifying the spoken language in a video.
- Previously updated : 08/28/2023----
-# Automatically identify the spoken language with language identification model
--
-Azure AI Video Indexer supports automatic language identification (LID), which is the process of automatically identifying the spoken language from audio content. The media file is transcribed in the dominant identified language.
-
-See the list of languages supported by Azure AI Video Indexer in [supported languages](language-support.md).
-
-Make sure to review the [Guidelines and limitations](#guidelines-and-limitations) section.
-
-## Choosing auto language identification on indexing
-
-When indexing or [reindexing](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) a video using the API, choose the `auto detect` option in the `sourceLanguage` parameter.
-
-When using the portal, go to your **Account videos** on the [Azure AI Video Indexer](https://www.videoindexer.ai/) home page and hover over the name of the video that you want to reindex. In the right-bottom corner, select the **Re-index** button. In the **Re-index video** dialog, choose *Auto detect* from the **Video source language** drop-down box.
--
-## Model output
-
-Azure AI Video Indexer transcribes the video according to the most likely language if the confidence for that language is `> 0.6`. If the language can't be identified with confidence, it assumes the spoken language is English.
-
-The model's dominant language is available in the insights JSON as the `sourceLanguage` attribute (under root/videos/insights). A corresponding confidence score is also available under the `sourceLanguageConfidence` attribute.
-
-```json
-"insights": {
- "version": "1.0.0.0",
- "duration": "0:05:30.902",
- "sourceLanguage": "fr-FR",
- "language": "fr-FR",
- "transcript": [...],
- . . .
- "sourceLanguageConfidence": 0.8563
- }
-```
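
Because Video Indexer falls back to English when the confidence isn't high enough (0.6 or lower), you may want to flag low-confidence results for manual review. Here's a small Python sketch of that check, based on the attributes shown above; the `videos[0].insights` path follows the root/videos/insights location mentioned earlier.

```python
import json

with open("insights.json", encoding="utf-8") as f:
    index = json.load(f)

insights = index["videos"][0]["insights"]   # sourceLanguage lives under root/videos/insights
language = insights["sourceLanguage"]
confidence = insights.get("sourceLanguageConfidence", 0)

if confidence <= 0.6:
    # At or below this confidence the service assumes English; consider re-indexing
    # with an explicitly selected source language.
    print(f"Low LID confidence ({confidence}); detected '{language}', verify manually.")
else:
    print(f"Detected language: {language} (confidence {confidence})")
```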
-
-## Guidelines and limitations
-
-Automatic language identification (LID) supports the following languages:
-
 See the list of languages supported by Azure AI Video Indexer in [supported languages](language-support.md).
- If the audio contains languages other than the [supported list](language-support.md), the result is unexpected.
- If Azure AI Video Indexer can't identify the language with a high enough confidence (greater than 0.6), the fallback language is English.
- Currently, there isn't support for files with mixed language audio. If the audio contains mixed languages, the result is unexpected.
- Low-quality audio may affect the model results.
- The model requires at least one minute of speech in the audio.
- The model is designed to recognize spontaneous conversational speech (not voice commands, singing, and so on).
-## Next steps
- [Overview](video-indexer-overview.md)
- [Automatically identify and transcribe multi-language content](multi-language-identification-transcription.md)
azure-video-indexer Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-support.md
- Title: Language support in Azure AI Video Indexer
-description: This article provides a comprehensive list of language support by service features in Azure AI Video Indexer.
- Previously updated : 03/10/2023-----
-# Language support in Azure AI Video Indexer
--
-This article explains Video Indexer's language options and provides a list of language support for each one. It includes the language support for Video Indexer features, translation, language identification, customization, and the language settings of the Video Indexer website.
-
-## Supported languages per scenario
-
-This section explains the Video Indexer language options and has a table of the supported languages for each one.
-
-> [!IMPORTANT]
-> All of the languages listed support translation when indexing through the API.
-
-### Column explanations
- **Supported source language** – The language spoken in the media file supported for transcription, translation, and search.
- **Language identification** - Whether the language can be automatically detected by Video Indexer when language identification is used for indexing. To learn more, see [Use Azure AI Video Indexer to auto identify spoken languages](language-identification-model.md) and the **Language Identification** section.
- **Customization (language model)** - Whether the language can be used when customizing language models in Video Indexer. To learn more, see [Customize a language model in Azure AI Video Indexer](customize-language-model-overview.md).
- **Pronunciation (language model)** - Whether the language can be used to create a pronunciation dataset as part of a custom speech model. To learn more, see [Customize a speech model with Azure AI Video Indexer](customize-speech-model-overview.md).
- **Website Translation** – Whether the language is supported for translation when using the [Azure AI Video Indexer website](https://aka.ms/vi-portal-link). Select the translated language in the language drop-down menu.
- :::image type="content" source="media/language-support/website-translation.png" alt-text="Screenshot showing a menu with download, English and views as menu items. A tooltip is shown as mouseover on the English item and says Translation is set to English." lightbox="media/language-support/website-translation.png":::
-
- The following insights are translated:
-
- - Transcript
- - Keywords
- - Topics
- - Labels
- - Frame patterns (Only to Hebrew as of now)
-
- All other insights appear in English when using translation.
- **Website Language** - Whether the language can be selected for use on the [Azure AI Video Indexer website](https://aka.ms/vi-portal-link). Select the **Settings icon**, then select the language in the **Language settings** dropdown.
- :::image type="content" source="media/language-support/website-language.jpg" alt-text="Screenshot showing a menu with user settings show them all toggled to on." lightbox="media/language-support/website-language.jpg":::
-
| **Language** | **Code** | **Supported<br/>source language** | **Language<br/>identification** | **Customization<br/>(language model)** | **Pronunciation<br/>(language model)** | **Website<br/>Translation** | **Website<br/>Language** |
|||||||||
| Afrikaans | af-ZA | | | | | ✔ | |
| Arabic (Israel) | ar-IL | ✔ | | ✔ | | | |
| Arabic (Iraq) | ar-IQ | ✔ | ✔ | | | | |
| Arabic (Jordan) | ar-JO | ✔ | ✔ | ✔ | | | |
| Arabic (Kuwait) | ar-KW | ✔ | ✔ | ✔ | | | |
| Arabic (Lebanon) | ar-LB | ✔ | | ✔ | | | |
| Arabic (Oman) | ar-OM | ✔ | ✔ | ✔ | | | |
| Arabic (Palestinian Authority) | ar-PS | ✔ | | ✔ | | | |
| Arabic (Qatar) | ar-QA | ✔ | ✔ | ✔ | | | |
| Arabic (Saudi Arabia) | ar-SA | ✔ | ✔ | ✔ | | | |
| Arabic (United Arab Emirates) | ar-AE | ✔ | ✔ | ✔ | | | |
| Arabic Egypt | ar-EG | ✔ | ✔ | ✔ | | ✔ | |
| Arabic Modern Standard (Bahrain) | ar-BH | ✔ | ✔ | ✔ | | | |
| Arabic Syrian Arab Republic | ar-SY | ✔ | ✔ | ✔ | | | |
| Armenian | hy-AM | ✔ | | | | | |
| Bangla | bn-BD | | | | | ✔ | |
| Bosnian | bs-Latn | | | | | ✔ | |
| Bulgarian | bg-BG | ✔ | ✔ | | | ✔ | |
| Catalan | ca-ES | ✔ | ✔ | ✔ | ✔ | ✔ | |
| Chinese (Cantonese Traditional) | zh-HK | ✔ | ✔ | ✔ | ✔ | ✔ | |
| Chinese (Simplified) | zh-Hans | ✔ | ✔ | | | ✔ | ✔ |
| Chinese (Simplified) | zh-CK | ✔ | ✔ | | | ✔ | ✔ |
| Chinese (Traditional) | zh-Hant | | | | | ✔ | |
| Croatian | hr-HR | ✔ | ✔ | | ✔ | ✔ | |
| Czech | cs-CZ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Danish | da-DK | ✔ | ✔ | ✔ | ✔ | ✔ | |
| Dutch | nl-NL | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| English Australia | en-AU | ✔ | ✔ | ✔ | ✔ | ✔ | |
| English United Kingdom | en-GB | ✔ | ✔ | ✔ | ✔ | ✔ | |
| English United States | en-US | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Estonian | et-EE | ✔ | ✔ | ✔ | ✔ | ✔ | |
| Fijian | en-FJ | | | | | ✔ | |
| Filipino | fil-PH | | | | | ✔ | |
| Finnish | fi-FI | ✔ | ✔ | ✔ | ✔ | ✔ | |
| French | fr-FR | ✔ | ✔ | ✔ | ✔ | ✔ | |
| French (Canada) | fr-CA | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| German | de-DE | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Greek | el-GR | ✔ | ✔ | | | ✔ | |
| Gujarati | gu-IN | ✔ | ✔ | | | ✔ | |
| Haitian | fr-HT | | | | | ✔ | |
| Hebrew | he-IL | ✔ | ✔ | ✔ | | ✔ | |
| Hindi | hi-IN | ✔ | ✔ | ✔ | | ✔ | ✔ |
| Hungarian | hu-HU | | ✔ | ✔ | ✔ | ✔ | ✔ |
| Icelandic | is-IS | ✔ | | | | | |
| Indonesian | id-ID | | | ✔ | ✔ | ✔ | |
| Irish | ga-IE | ✔ | ✔ | ✔ | ✔ | | |
| Italian | it-IT | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Japanese | ja-JP | ✔ | ✔ | ✔ | | ✔ | ✔ |
| Kannada | kn-IN | ✔ | ✔ | | | | |
| Kiswahili | sw-KE | | | | | ✔ | |
| Korean | ko-KR | ✔ | ✔ | ✔ | | ✔ | ✔ |
| Latvian | lv-LV | ✔ | ✔ | ✔ | ✔ | ✔ | |
| Lithuanian | lt-LT | | | ✔ | ✔ | ✔ | |
| Malagasy | mg-MG | | | | | ✔ | |
| Malay | ms-MY | ✔ | | | | ✔ | |
| Malayalam | ml-IN | ✔ | ✔ | | | | |
| Maltese | mt-MT | | | | | ✔ | |
| Norwegian | nb-NO | ✔ | ✔ | ✔ | | ✔ | |
| Persian | fa-IR | ✔ | | ✔ | | ✔ | |
| Polish | pl-PL | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Portuguese | pt-BR | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Portuguese (Portugal) | pt-PT | ✔ | ✔ | ✔ | ✔ | ✔ | |
| Romanian | ro-RO | ✔ | ✔ | ✔ | ✔ | ✔ | |
| Russian | ru-RU | ✔ | ✔ | ✔ | | ✔ | ✔ |
| Samoan | en-WS | | | | | | |
| Serbian (Cyrillic) | sr-Cyrl-RS | | | | | ✔ | |
| Serbian (Latin) | sr-Latn-RS | | | | | ✔ | |
| Slovak | sk-SK | ✔ | ✔ | ✔ | ✔ | ✔ | |
| Slovenian | sl-SI | ✔ | ✔ | ✔ | ✔ | ✔ | |
| Spanish | es-ES | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Spanish (Mexico) | es-MX | ✔ | ✔ | ✔ | ✔ | ✔ | |
| Swedish | sv-SE | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Tamil | ta-IN | ✔ | ✔ | | | ✔ | |
| Telugu | te-IN | ✔ | ✔ | | | | |
| Thai | th-TH | ✔ | ✔ | ✔ | | ✔ | |
| Tongan | to-TO | | | | | ✔ | |
| Turkish | tr-TR | ✔ | ✔ | ✔ | | ✔ | ✔ |
| Ukrainian | uk-UA | ✔ | ✔ | | | ✔ | |
| Urdu | ur-PK | | | | | ✔ | |
| Vietnamese | vi-VN | ✔ | ✔ | | | ✔ | |
-
-## Get supported languages through the API
-
-Use the Get Supported Languages API call to pull a full list of supported languages per area. For more information, see [Get Supported Languages](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Supported-Languages).
-
-The API returns a list of supported languages with the following values:
-
-```json
-{
- "name": "Language",
- "languageCode": "Code",
- "isRightToLeft": true/false,
- "isSourceLanguage": true/false,
- "isAutoDetect": true/false
-}
-```
-
-- Supported source language:
-
- If `isSourceLanguage` is false, the language is supported for translation only.
- If `isSourceLanguage` is true, the language is supported as source for transcription, translation, and search.
-
-- Language identification (auto detection):
-
- If `isAutoDetect` is true, the language is supported for language identification (LID) and multi-language identification (MLID).
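-
-As a rough illustration, the following minimal sketch calls the Get Supported Languages operation and prints the languages that support automatic language identification. The endpoint path and the `Ocp-Apim-Subscription-Key` header are assumptions based on the API reference linked above; confirm them there before use.
-
-```python
-# Hedged sketch: list languages that support automatic language identification (LID/MLID).
-# Assumptions: the endpoint path and header below match the Get Supported Languages operation;
-# replace API_KEY with your own subscription key from the developer portal.
-import requests
-
-API_KEY = "<your-api-key>"  # from https://api-portal.videoindexer.ai/profile
-
-response = requests.get(
-    "https://api.videoindexer.ai/SupportedLanguages",
-    headers={"Ocp-Apim-Subscription-Key": API_KEY},
-)
-response.raise_for_status()
-
-for language in response.json():
-    if language.get("isAutoDetect"):
-        print(f"{language['name']} ({language['languageCode']}) supports LID/MLID")
-```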
-
-## Language Identification
-
-When uploading a media file to Video Indexer, you can specify the media file's source language. If indexing a file through the Video Indexer website, this can be done by selecting a language during the file upload. If you're submitting the indexing job through the API, it's done by using the language parameter. The selected language is then used to generate the transcription of the file.
-
-If you aren't sure of the source language of the media file, or it may contain multiple languages, Video Indexer can detect the spoken languages. If you select either Auto-detect single language (LID) or multi-language (MLID) for the media file's source language, the detected language or languages are used to transcribe the media file. To learn more about LID and MLID, see [Automatically identify the spoken language with language identification model](language-identification-model.md) and [Automatically identify and transcribe multi-language content](multi-language-identification-transcription.md).
-
-There's a limit of 10 languages allowed for identification during the indexing of a media file for both LID and MLID. The following are the 9 *default* languages of Language identification (LID) and Multi-language identification (MLID):
-
-- German (de-DE)
-- English United States (en-US)
-- Spanish (es-ES)
-- French (fr-FR)
-- Italian (it-IT)
-- Japanese (ja-JP)
-- Portuguese (pt-BR)
-- Russian (ru-RU)
-- Chinese (Simplified) (zh-Hans)
-
-## How to change the list of default languages
-
-If you need to use languages for identification that aren't used by default, you can customize the list to any 10 languages that support customization through either the website or the API:
-
-### Use the website to change the list
-
-1. Select the **Language ID** tab under Model customization. The list of languages is specific to the Video Indexer account you're using and to the signed-in user. The default list of languages is saved per user, per device, and per browser. As a result, each user can configure their own default identified language list.
-1. Use **Add language** to search and add more languages. If 10 languages are already selected, you first must remove one of the existing detected languages before adding a new one.
-
- :::image type="content" source="media/language-support/default-language.png" alt-text="Screenshot showing a table showing all of the selected languages." lightbox="media/language-support/default-language.png":::
-
-### Use the API to change the list
-
-When you upload a file, the Video Indexer language model cross-references 9 languages by default. If there's a match, the model generates the transcription for the file with the detected language.
-
-Use the language parameter to specify `multi` (MLID) or `auto` (LID) parameters. Use the `customLanguages` parameter to specify up to 10 languages. (The parameter is used only when the language parameter is set to `multi` or `auto`.) To learn more about using the API, see [Use the Azure AI Video Indexer API](video-indexer-use-apis.md).
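-
-For illustration, here's a minimal sketch of an Upload Video call that sets `language` to `auto` and narrows identification with `customLanguages`. It assumes the classic endpoint shape (`/{location}/Accounts/{accountId}/Videos`) and placeholder values; check the API reference for the full parameter list.
-
-```python
-# Hedged sketch: upload a video with LID restricted to a custom list of candidate languages.
-# Endpoint shape and parameter names follow the Upload Video operation; angle-bracket
-# values are placeholders.
-import requests
-
-location = "<location>"          # for example, "trial" or an Azure region
-account_id = "<account-id>"
-access_token = "<account-access-token>"
-
-params = {
-    "name": "my-video",
-    "videoUrl": "<SAS-URL-to-the-media-file>",
-    "language": "auto",                            # "auto" = LID, "multi" = MLID
-    "customLanguages": "en-US,es-ES,fr-FR,de-DE",  # up to 10 languages
-    "accessToken": access_token,
-}
-
-response = requests.post(
-    f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Videos",
-    params=params,
-)
-response.raise_for_status()
-print(response.json()["id"])  # ID of the newly uploaded video
-```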
-
-## Next steps
-
-- [Overview](video-indexer-overview.md)
-- [Release notes](release-notes.md)
azure-video-indexer Limited Access Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/limited-access-features.md
- Title: Limited Access features of Azure AI Video Indexer
-description: This article talks about the limited access features of Azure AI Video Indexer.
- Previously updated : 06/17/2022----
-# Limited Access features of Azure AI Video Indexer
---
-Our vision is to empower developers and organizations to leverage AI to transform society in positive ways. We encourage responsible AI practices to protect the rights and safety of individuals. Microsoft facial recognition services are Limited Access in order to help prevent the misuse of the services in accordance with our [AI Principles](https://www.microsoft.com/ai/responsible-ai?SilentAuth=1&wa=wsignin1.0&activetab=pivot1%3aprimaryr6) and [facial recognition](https://blogs.microsoft.com/on-the-issues/2018/12/17/six-principles-to-guide-microsofts-facial-recognition-work/) principles. The Face Identify and Celebrity Recognition operations in Azure AI Video Indexer are Limited Access features that require registration.
-
-Since the announcement on June 11th, 2020, customers may not use, or allow use of, any Azure facial recognition service by or for a police department in the United States.
-
-## Application process
-
-Limited Access features of Azure AI Video Indexer are only available to customers managed by Microsoft, and only for use cases selected at the time of registration. Other Azure AI Video Indexer features do not require registration to use.
-
-Customers and partners who wish to use Limited Access features of Azure AI Video Indexer are required to [submit an intake form](https://aka.ms/facerecognition). Access is subject to Microsoft's sole discretion based on eligibility criteria and a vetting process. Microsoft may require customers and partners to reverify this information periodically.
-
-The Azure AI Video Indexer service is made available to customers and partners under the terms governing their subscription to Microsoft Azure Services (including the [Service Specific Terms](https://www.microsoft.com/licensing/terms/productoffering/MicrosoftAzure/MCA#ServiceSpecificTerms)). Please review these terms carefully as they contain important conditions and obligations governing your use of Azure AI Video Indexer.
-
-## Limited access features
--
-## Help and support
-
-FAQ about Limited Access can be found [here](https://aka.ms/limitedaccesscogservices).
-
-If you need help with Azure AI Video Indexer, find support [here](../ai-services/cognitive-services-support-options.md).
-
-[Report Abuse](https://msrc.microsoft.com/report/abuse) of Azure AI Video Indexer.
-
-## Next steps
-
-Learn more about the legal terms that apply to this service [here](https://azure.microsoft.com/support/legal/).
-
azure-video-indexer Logic Apps Connector Arm Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-arm-accounts.md
- Title: Logic Apps connector with ARM-based AVI accounts
-description: This article shows how to unlock new experiences and monetization opportunities Azure AI Video Indexer connectors with Logic App and Power Automate with AVI ARM accounts.
- Previously updated : 11/16/2022----
-# Logic Apps connector with ARM-based AVI accounts
--
-Azure AI Video Indexer (AVI) [REST API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) supports both server-to-server and client-to-server communication. The API enables you to integrate video and audio insights into your application logic.
-
-> [!TIP]
-> For the latest `api-version`, choose the latest stable version in [our REST documentation](/rest/api/videoindexer/stable/generate).
-
-To make the integration easier, we support [Logic Apps](https://azure.microsoft.com/services/logic-apps/) and [Power Automate](https://make.powerautomate.com/connectors/shared_videoindexer-v2/video-indexer-v2/) connectors that are compatible with the Azure AI Video Indexer API.
-
-You can use the connectors to set up custom workflows to effectively index and extract insights from a large amount of video and audio files, without writing a single line of code. Furthermore, using the connectors for the integration gives you better visibility on the health of your workflow and an easy way to debug it.
-
-> [!TIP]
-> If you are using a classic AVI account, see [Logic Apps connector with classic-based AVI accounts](logic-apps-connector-tutorial.md).
-
-## Get started with the Azure AI Video Indexer connectors
-
-To help you get started quickly with the Azure AI Video Indexer connectors, the example in this article creates Logic App flows. The Logic App and Power Automate capabilities and their editors are almost identical, thus the diagrams and explanations are applicable to both. The example in this article is based on the ARM AVI account. If you're working with a classic account, see [Logic App connectors with classic-based AVI accounts](logic-apps-connector-tutorial.md).
-
-The "upload and index your video automatically" scenario covered in this article is composed of two different flows that work together. The "two flow" approach is used to support async upload and indexing of larger files effectively.
-
-* The first flow is triggered when a blob is added or modified in an Azure Storage account. It uploads the new file to Azure AI Video Indexer with a callback URL to send a notification once the indexing operation completes.
-* The second flow is triggered based on the callback URL and saves the extracted insights back to a JSON file in Azure Storage.
-
-The logic apps that you create in this article contain one flow per app. The second section (**Create a new logic app of type consumption**) explains how to connect the two. The second flow stands alone and is triggered by the first one (see the section about the callback URL).
-
-## Prerequisites
-
-- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
-- Create an ARM-based [Azure AI Video Indexer account](create-account-portal.md).
-- Create an Azure Storage account. Keep note of the access key for your Storage account.
- Create two containers: one to store the media files and the second to store the insights generated by Azure AI Video Indexer. In this article, the containers are `videos` and `insights`.
-
-## Set up the file upload flow (the first flow)
-
-This section describes how to set up the first ("file upload") flow. The first flow is triggered when a blob is added or modified in an Azure Storage account. It uploads the new file to Azure AI Video Indexer with a callback URL to send a notification once the indexing operation completes.
-
-The following image shows the first flow:
-
-![Screenshot of the file upload flow.](./media/logic-apps-connector-arm-accounts/first-flow-high-level.png)
-
-1. Create the <a href="https://portal.azure.com/#create/Microsoft.LogicApp" target="_blank">Logic App</a>. We create a Logic App in the same region as the Azure Video Indexer region (recommended but not required). We call the logic app `UploadIndexVideosApp`.
-
- 1. Select **Consumption** for **Plan type**.
- 1. Press **Review + Create** -> **Create**.
- 1. Once the Logic App deployment is complete, in the Azure portal, search and navigate to the newly created Logic App.
- 1. Under the **Settings** section, on the left side's panel, select the **Identity** tab.
- 1. Under **System assigned**, change the **Status** from **Off** to **On** (the step is important for later on in this tutorial).
- 1. Press **Save** (on the top of the page).
- 1. Select the **Logic app designer** tab, in the pane on the left.
- 1. Pick a **Blank Logic App** flow.
- 1. Search for "blob" in the **Choose an Operation** blade.
- 1. In the **All** tab, choose the **Azure Blob Storage** component.
- 1. Under **Triggers**, select the **When a blob is added or modified (properties only) (V2)** trigger.
-1. Set the storage connection.
-
- After creating a **When a blob is added or modified (properties only) (V2)** trigger, the connection needs to be set to the following values:
-
- |Key | Value|
- |--|--|
- |Connection name | <*Name your connection*>. |
- |Authentication type | Access Key|
- |Azure Storage Account name| <*Storage account name where media files are going to be stored*>.|
- |Azure Storage Account Access Key| To get access key of your storage account: in the Azure portal -> my-storage -> under **Security + networking** -> **Access keys** -> copy one of the keys.|
-
- Select **Create**.
-
- ![Screenshot of the storage connection trigger.](./media/logic-apps-connector-arm-accounts/trigger.png)
-
- After setting the connection to the storage, it's required to specify the blob storage container that is being monitored for changes.
-
- |Key| Value|
- |--|--|
- |Storage account name | *Storage account name where media files are stored*|
- |Container| `/videos`|
-
- Select **Save** -> **+New step**
-
- ![Screenshot of the storage container trigger.](./media/logic-apps-connector-arm-accounts/storage-container-trigger.png)
-1. Create SAS URI by path action.
-
- 1. Select the **Action** tab.
- 1. Search for and select **Create SAS URI by path (V2)**.
-
- |Key| Value|
- |--|--|
- |Storage account name | <*The storage account name where media files are stored*>.|
- | Blob path| Under **Dynamic content**, select **List of Files Path**|
- | Group Policy Identifier| Leave the default value.|
- | Permissions| **Read** |
- | Shared Access protocol (appears after pressing **Add new parameter**)| **HttpsOnly**|
-
- Select **Save** (at the top of the page).
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/logic-apps-connector-arm-accounts/create-sas.png" alt-text="Screenshot of the create SAS URI by path logic." lightbox="./media/logic-apps-connector-arm-accounts/create-sas.png":::
-
- Select **+New Step**.
-1. <a name="access_token"></a>Generate an access token.
-
- > [!NOTE]
- > For details about the ARM API and the request/response examples, see [Generate an Azure AI Video Indexer access token](/rest/api/videoindexer/preview/generate/access-token).
- >
- > Press **Try it** to get the correct values for your account.
-
- Search and create an **HTTP** action.
-
- |Key| Value|Notes|
- |-|-|-|
- |Method | **POST**||
- | URI| [generateAccessToken](/rest/api/videoindexer/stable/generate/access-token?tabs=HTTP#generate-accesstoken-for-account-contributor). ||
- | Body|`{ "permissionType": "Contributor", "scope": "Account" }` |See the [REST doc example](/rest/api/videoindexer/preview/generate/access-token?tabs=HTTP#generate-accesstoken-for-account-contributor), make sure to delete the **POST** line.|
- | Add new parameter | **Authentication** ||
-
- ![Screenshot of the HTTP access token.](./media/logic-apps-connector-arm-accounts/http-with-param.png)
-
- After the **Authentication** parameter is added, fill the required parameters according to the table below:
-
- |Key| Value|
- |-|-|
- | Authentication type | **Managed identity** |
- | Managed identity | **System-assigned managed identity**|
- | Audience | `https://management.core.windows.net` |
-
- Select **Save**.
-
- > [!TIP]
- > Before moving to the next step, set up the right permission between the Logic app and the Azure AI Video Indexer account.
- >
- > Make sure you have followed the steps to enable the system-assigned managed identity of your Logic App.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/logic-apps-connector-arm-accounts/enable-system.png" alt-text="Screenshot of the how to enable the system assigned managed identity." lightbox="./media/logic-apps-connector-arm-accounts/enable-system.png":::
- 1. Set up system assigned managed identity for permission on Azure AI Video Indexer resource.
-
- In the Azure portal, go to your Azure AI Video Indexer resource/account.
-
- 1. On the left side blade, select **Access control**.
- 1. Select **Add** -> **Add role assignment** -> **Contributor** -> **Next** -> **User, group, or service principal** -> **+Select members**.
- 1. Under **Members**, search for the Logic Apps name you created (in this case, `UploadIndexVideosApp`).
- 1. Press **Select**.
- 1. Press **Review + assign**.
-1. Back in your Logic App, create an **Upload video and index** action.
-
- 1. Select **Video Indexer(V2)**.
- 1. From Video Indexer(V2), select **Upload Video and index**.
- 1. Set the connection to the Video Indexer account.
-
- |Key| Value|
- |-|-|
- | Connection name| <*Enter a name for the connection*>, in this case `aviconnection`.|
- | API key| This is your personal API key, which is available under **Profile** in the [developer portal](https://api-portal.videoindexer.ai/profile). Because this Logic App is for ARM accounts, the actual API key isn't needed; you can enter a dummy value such as 12345. |
-
- Select **Create**.
-
- 1. Fill **Upload video and index** action parameters.
-
- > [!TIP]
- > If the AVI Account ID isn't listed in the drop-down, enter it as a custom value.
-
- |Key| Value|
- |-|-|
- |Location| Location of the associated Azure AI Video Indexer account.|
- | Account ID| Account ID of the associated Azure AI Video Indexer account. You can find the **Account ID** on the **Overview** page of your account in the Azure portal, or on the **Account settings** tab, on the left of the [Azure AI Video Indexer website](https://www.videoindexer.ai/).|
- |Access Token| Use the `body('HTTP')['accessToken']` expression to extract the access token in the right format from the previous HTTP call.|
- | Video Name| Select **List of Files Name** from the dynamic content of **When a blob is added or modified** action. |
- |Video URL|Select **Web Url** from the dynamic content of **Create SAS URI by path** action.|
- | Body| Can be left as default.|
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/logic-apps-connector-arm-accounts/upload-and-index-expression.png" alt-text="Screenshot of the upload and index action." lightbox="./media/logic-apps-connector-arm-accounts/upload-and-index-expression.png":::
-
- Select **Save**.
-
-When the upload and indexing from the first flow completes, an HTTP request is sent to the callback URL, which triggers the second flow. The second flow then retrieves the insights generated by Azure AI Video Indexer. In this example, it stores the output of your indexing job in your Azure Storage account. However, it's up to you what you do with the output.
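-
-If you want to validate the access token request outside of the designer, the following is a minimal sketch of the same ARM `generateAccessToken` call made by the HTTP action above, using `DefaultAzureCredential` in place of the Logic App's managed identity. The resource path and `api-version` value are assumptions; confirm them against the REST reference linked in the access token step.
-
-```python
-# Hedged sketch of the ARM generateAccessToken call performed by the HTTP action above.
-# Assumptions: the Microsoft.VideoIndexer generateAccessToken path and the api-version
-# shown here; check the REST reference for the current values.
-import requests
-from azure.identity import DefaultAzureCredential
-
-subscription_id = "<subscription-id>"
-resource_group = "<resource-group>"
-account_name = "<video-indexer-account-name>"
-
-# Acquire an ARM token (the Logic App uses its system-assigned managed identity for this).
-credential = DefaultAzureCredential()
-arm_token = credential.get_token("https://management.azure.com/.default").token
-
-url = (
-    f"https://management.azure.com/subscriptions/{subscription_id}"
-    f"/resourceGroups/{resource_group}/providers/Microsoft.VideoIndexer"
-    f"/accounts/{account_name}/generateAccessToken"
-)
-response = requests.post(
-    url,
-    params={"api-version": "2022-08-01"},  # assumed version; use the latest stable one
-    headers={"Authorization": f"Bearer {arm_token}"},
-    json={"permissionType": "Contributor", "scope": "Account"},
-)
-response.raise_for_status()
-print(response.json()["accessToken"])
-```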
-
-## Create a new logic app of type consumption (the second flow)
-
-Create the second flow, Logic Apps of type consumption. The second flow is triggered based on the callback URL and saves the extracted insights back to a JSON file in Azure Storage.
-
-![Screenshot of the high level flow.](./media/logic-apps-connector-arm-accounts/second-flow-high-level.png)
-
-1. Set up the trigger
-
- Search for the **When an HTTP request is received**.
-
- ![Screenshot of the set up the trigger.](./media/logic-apps-connector-arm-accounts/serach-trigger.png)
-
- For the trigger, we'll see an HTTP POST URL field. The URL won't be generated until after you save your flow; however, you'll need the URL eventually.
-
- > [!TIP]
- > We will come back to the URL created in this step.
-1. Generate an access token.
-
- Follow all the steps from:
-
- 1. **Generate an access token**, as you did for the first flow ([as shown here](#access_token)).
- 1. Select **Save** -> **+ New step**.
-1. Get Video Indexer insights.
-
- 1. Search for "Video Indexer".
- 1. From **Video Indexer(V2)**, select the **Get Video Index** action.
-
- Set the connection name:
-
- |Key| Value|
- |-|-|
- |Connection name| <*A name for connection*>. For example, `aviconnection`.|
- | API key| This is your personal API key, which is available under **Profile** at the [developer portal](https://api-portal.videoindexer.ai/profile). For more information, see [Subscribe to the API](video-indexer-use-apis.md#subscribe-to-the-api).|
- 1. Select **Create**.
- 1. Fill out the required parameters according to the table:
-
- |Key| Value|
- |-|-|
- |Location| The Location of the Azure AI Video Indexer account.|
- | Account ID| The Video Indexer account ID can be copied from the resource/account **Overview** page in the Azure portal.|
- | Video ID\*| For Video ID, add dynamic content of type **Expression** and put in the following expression: **triggerOutputs()['queries']['id']**. |
- | Access Token| From the dynamic content, under the **Parse JSON** section select the **accessToken** that is the output of the parse JSON action. |
-
- \*This expression tells the connector to get the Video ID from the output of your trigger. In this case, the output of your trigger will be the output of **Upload video and index** in your first trigger.
-
- ![Screenshot of the upload and index a video action.](./media/logic-apps-connector-arm-accounts/get-video-index.png)
-
- Select **Save** -> **+ New step**.
-1. Create a blob and store the insights JSON.
-
- 1. Search for "Azure blob", from the group of actions.
- 1. Select **Create blob(V2)**.
- 1. Set the connection to the blob storage that will store the JSON insights files.
-
- |Key| Value|
- |-|-|
- | Connection name| <*Enter a connection name*>.|
- | Authentication type |Access Key|
- | Azure Storage Account name| <* The storage account name where insights will be stored*>. |
- | Azure Storage Account Access key| Go to Azure portal-> my-storage-> under **Security + networking** ->Access keys -> copy one of the keys. |
-
- ![Screenshot of the create blob action.](./media/logic-apps-connector-arm-accounts/storage-connection.png)
- 1. Select **Create**.
- 1. Set the folder in which insights will be stored.
-
- |Key| Value|
- |-|-|
- |Storage account name| <*Enter the storage account name that will contain the JSON output (in this tutorial, it's the same account as the source video)*>.|
- | Folder path | From the dropdown, select `/insights`.|
- | Blob name| From the dynamic content, under the **Get Video Index** section, select **Name** and append `_insights.json`. The insights file name will be the video name + `_insights.json`. |
- | Blob content| From the dynamic content, under the **Get Video Index** section, select the **Body**. |
-
- ![Screenshot of the store blob content action.](./media/logic-apps-connector-arm-accounts/create-blob.png)
- 1. Select **Save flow**.
-1. Update the callback URL to get notified when an index job is finished.
-
- Once the flow is saved, an HTTP POST URL is created in the trigger.
-
- 1. Copy the URL from the trigger.
-
- ![Screenshot of the save URL trigger.](./media/logic-apps-connector-arm-accounts/http-callback-url.png)
- 1. Go back to the first flow and paste the URL in the **Upload video and index** action for the **Callback URL parameter**.
-
-Make sure both flows are saved.
-
-## Next steps
-
-Try out your newly created Logic App or Power Automate solution by adding a video to your Azure blobs container, and go back a few minutes later to see that the insights appear in the destination folder.
azure-video-indexer Logic Apps Connector Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-tutorial.md
- Title: The Azure AI Video Indexer connectors with Logic App and Power Automate.
-description: This tutorial shows how to unlock new experiences and monetization opportunities Azure AI Video Indexer connectors with Logic App and Power Automate.
- Previously updated : 09/21/2020----
-# Use Azure AI Video Indexer with Logic App and Power Automate
--
-Azure AI Video Indexer [REST API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Video) supports both server-to-server and client-to-server communication and enables Azure AI Video Indexer users to integrate video and audio insights easily into their application logic, unlocking new experiences and monetization opportunities.
-
-To make the integration even easier, we support [Logic Apps](https://azure.microsoft.com/services/logic-apps/) and [Power Automate](https://make.powerautomate.com/connectors/shared_videoindexer-v2/video-indexer-v2/) connectors that are compatible with our API. You can use the connectors to set up custom workflows to effectively index and extract insights from a large amount of video and audio files, without writing a single line of code. Furthermore, using the connectors for your integration gives you better visibility on the health of your workflow and an easy way to debug it. 
-
-To help you get started quickly with the Azure AI Video Indexer connectors, we will do a walkthrough of an example Logic App and Power Automate solution you can set up. This tutorial shows how to set up flows using Logic Apps. However, the editors and capabilities are almost identical in both solutions, thus the diagrams and explanations are applicable to both Logic Apps and Power Automate.
-
-The "upload and index your video automatically" scenario covered in this tutorial is composed of two different flows that work together.
-* The first flow is triggered when a blob is added or modified in an Azure Storage account. It uploads the new file to Azure AI Video Indexer with a callback URL to send a notification once the indexing operation completes.
-* The second flow is triggered based on the callback URL and saves the extracted insights back to a JSON file in Azure Storage. This two flow approach is used to support async upload and indexing of larger files effectively.
-
-This tutorial is using Logic App to show how to:
-
-> [!div class="checklist"]
-> * Set up the file upload flow
-> * Set up the JSON extraction flow
--
-## Prerequisites
-
-* To begin with, you will need an Azure AI Video Indexer account along with [access to the APIs via API key](video-indexer-use-apis.md).
-* You will also need an Azure Storage account. Keep note of the access key for your Storage account. Create two containers: one to store videos and one to store the insights generated by Azure AI Video Indexer.
-* Next, you will need to open two separate flows on either Logic Apps or Power Automate (depending on which you are using).
-
-## Set up the first flow - file upload
-
-The first flow is triggered whenever a blob is added in your Azure Storage container. Once triggered, it will create a SAS URI that you can use to upload and index the video in Azure AI Video Indexer. In this section you will create the following flow.
-
-![File upload flow](./media/logic-apps-connector-tutorial/file-upload-flow.png)
-
-To set up the first flow, you will need to provide your Azure AI Video Indexer API Key and Azure Storage credentials.
-
-![Azure blob storage](./media/logic-apps-connector-tutorial/azure-blob-storage.png)
-
-![Connection name and API key](./media/logic-apps-connector-tutorial/connection-name-api-key.png)
-
-> [!TIP]
-> If you previously connected an Azure Storage account or Azure AI Video Indexer account to a Logic App, your connection details are stored and you will be connected automatically. <br/>You can edit the connection by clicking on **Change connection** at the bottom of an Azure Storage (the storage window) or Azure AI Video Indexer (the player window) action.
-
-Once you can connect to your Azure Storage and Azure AI Video Indexer accounts, find and choose the "When a blob is added or modified" trigger in **Logic Apps Designer**.
-
-Select the container that you will place your video files in.
-
-![Screenshot shows the When a blob is added or modified dialog box where you can select a container.](./media/logic-apps-connector-tutorial/container.png)
-
-Next, find and select the "Create SAS URI by path" action. In the dialog for the action, select List of Files Path from the Dynamic content options.
-
-Also, add a new "Shared Access Protocol" parameter. Choose HttpsOnly for the value of the parameter.
-
-![SAS uri by path](./media/logic-apps-connector-tutorial/sas-uri-by-path.jpg)
-
-Fill out [your account location](regions.md) and [account ID](./video-indexer-use-apis.md#operational-api-calls) to get the Azure AI Video Indexer account token.
-
-![Get account access token](./media/logic-apps-connector-tutorial/account-access-token.png)
-
-For "Upload video and index", fill out the required parameters and Video URL. Select "Add new parameter" and select Callback URL.
-
-![Upload and index](./media/logic-apps-connector-tutorial/upload-and-index.png)
-
-You will leave the callback URL empty for now. You'll add it only after finishing the second flow where the callback URL is created.
-
-You can use the default value for the other parameters or set them according to your needs.
-
-Click **Save**, and let's move on to configure the second flow, to extract the insights once the upload and indexing is completed.
-
-## Set up the second flow - JSON extraction
-
-The completion of the uploading and indexing from the first flow will send an HTTP request with the correct callback URL to trigger the second flow. Then, it will retrieve the insights generated by Azure AI Video Indexer. In this example, it will store the output of your indexing job in your Azure Storage. However, it is up to you what you do with the output.
-
-Create the second flow separate from the first one.
-
-![JSON extraction flow](./media/logic-apps-connector-tutorial/json-extraction-flow.png)
-
-To set up this flow, you will need to provide your Azure AI Video Indexer API Key and Azure Storage credentials again. You will need to update the same parameters as you did for the first flow.
-
-For your trigger, you will see an HTTP POST URL field. The URL won't be generated until after you save your flow; however, you will need the URL eventually. We will come back to this.
-
-Fill out [your account location](regions.md) and [account ID](./video-indexer-use-apis.md#operational-api-calls) to get the Azure AI Video Indexer account token.
-
-Go to the "Get Video Index" action and fill out the required parameters. For Video ID, put in the following expression: triggerOutputs()['queries']['id']
-
-![Azure AI Video Indexer action info](./media/logic-apps-connector-tutorial/video-indexer-action-info.jpg)
-
-This expression tells the connector to get the Video ID from the output of your trigger. In this case, the output of your trigger will be the output of "Upload video and index" in your first trigger.
-
-Go to the "Create blob" action and select the path to the folder in which you will save the insights. Set the name of the blob you are creating. For Blob content, put in the following expression: body('Get_Video_Index')
-
-![Create blob action](./media/logic-apps-connector-tutorial/create-blob-action.jpg)
-
-This expression takes the output of the "Get Video Index" action from this flow.
-
-Click **Save flow**.
-
-Once the flow is saved, an HTTP POST URL is created in the trigger. Copy the URL from the trigger.
-
-![Save URL trigger](./media/logic-apps-connector-tutorial/save-url-trigger.png)
-
-Now, go back to the first flow and paste the URL in the "Upload video and index" action for the Callback URL parameter.
-
-Make sure both flows are saved, and you're good to go!
-
-Try out your newly created Logic App or Power Automate solution by adding a video to your Azure blobs container, and go back a few minutes later to see that the insights appear in the destination folder.
-
-## Generate captions
-
-See the following blog for the steps that show [how to generate captions with Azure AI Video Indexer and Logic Apps](https://techcommunity.microsoft.com/t5/azure-media-services/generating-captions-with-video-indexer-and-logic-apps/ba-p/1672198).
-
-The article also shows how to index a video automatically by copying it to OneDrive and how to store the captions generated by Azure AI Video Indexer in OneDrive.
-
-## Clean up resources
-
-After you are done with this tutorial, feel free to keep this Logic App or Power Automate solution up and running if you need it. However, if you do not want to keep it running and do not want to be billed, turn off both of your flows if you're using Power Automate, or disable both flows if you're using Logic Apps.
-
-## Next steps
-
-This tutorial showed just one example of the Azure AI Video Indexer connectors. You can use the Azure AI Video Indexer connectors for any API call provided by Azure AI Video Indexer. For example: upload and retrieve insights, translate the results, get embeddable widgets and even customize your models. Additionally, you can choose to trigger those actions based on different sources like updates to file repositories or emails sent. You can then have the results update your relevant infrastructure or application, or generate any number of action items.
-
-> [!div class="nextstepaction"]
-> [Use the Azure AI Video Indexer API](video-indexer-use-apis.md)
-
-For additional resources, refer to [Azure AI Video Indexer](/connectors/videoindexer-v2/)
azure-video-indexer Manage Account Connected To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/manage-account-connected-to-azure.md
- Title: Repair the connection to Azure, check errors/warnings
-description: Learn how to manage an Azure AI Video Indexer account connected to Azure, repair the connection, and examine errors/warnings.
- Previously updated : 01/14/2021----
-# Repair the connection to Azure, examine errors/warnings
---
-This article demonstrates how to manage an Azure AI Video Indexer account that's connected to your Azure subscription and an Azure Media Services account.
-
-> [!NOTE]
-> You have to be the Azure AI Video Indexer account owner to do account configuration adjustments discussed in this topic.
-
-## Prerequisites
-
-Connect your Azure AI Video Indexer account to Azure, as described in [Connected to Azure](connect-to-azure.md).
-
-Make sure to follow [Prerequisites](connect-to-azure.md#prerequisites-for-connecting-to-azure) and review [Considerations](connect-to-azure.md#azure-media-services-considerations) in the article.
-
-## Examine account settings
-
-This section examines settings of your Azure AI Video Indexer account.
-
-To view settings:
-
-1. Click on the user icon in the top-right corner and select **Settings**.
-
- ![Settings in Azure AI Video Indexer](./media/manage-account-connected-to-azure/select-settings.png)
-
-2. On the **Settings** page, select the **Account** tab.
-
-If your Video Indexer account is connected to Azure, you see the following:
-
-* The name of the underlying Azure Media Services account.
-* The number of indexing jobs running and queued.
-* The number and type of allocated reserved units.
-
-If your account needs some adjustments, you'll see relevant errors and warnings about your account configuration on the **Settings** page. The messages contain links to exact places in Azure portal where you need to make changes. For more information, see the [errors and warnings](#errors-and-warnings) section that follows.
-
-## Repair the connection to Azure
-
-In the **Update connection to Azure Media Services** dialog of your [Azure AI Video Indexer](https://www.videoindexer.ai/) page, you're asked to provide values for the following settings:
-
-|Setting|Description|
-|--|--|
-|Azure subscription ID|The subscription ID can be retrieved from the Azure portal. Click on **All services** in the left panel and search for "subscriptions". Select **Subscriptions** and choose the desired ID from the list of your subscriptions.|
-|Azure Media Services resource group name|The name for the resource group in which you created the Media Services account.|
-|Application ID|The Microsoft Entra application ID (with permissions for the specified Media Services account) that you created for this Azure AI Video Indexer account. <br/><br/>To get the app ID, navigate to Azure portal. Under the Media Services account, choose your account and go to **API Access**. Select **Connect to Media Services API with service principal** -> **Microsoft Entra App**. Copy the relevant parameters.|
-|Application key|The Microsoft Entra application key associated with your Media Services account that you specified above. <br/><br/>To get the app key, navigate to Azure portal. Under the Media Services account, choose your account and go to **API Access**. Select **Connect to Media Services API with service principal** -> **Manage application** -> **Certificates & secrets**. Copy the relevant parameters.|
-
-## Errors and warnings
-
-If your account needs some adjustments, you see relevant errors and warnings about your account configuration on the **Settings** page. The messages contain links to exact places in Azure portal where you need to make changes. This section gives more details about the error and warning messages.
-
-* Event Grid
-
- You have to register the Event Grid resource provider using the Azure portal. In the [Azure portal](https://portal.azure.com/), go to **Subscriptions** > [subscription] > **ResourceProviders** > **Microsoft.EventGrid**. If not in the **Registered** state, select **Register**. It takes a couple of minutes to register.
-
-* Streaming endpoint
-
- Make sure the underlying Media Services account has the default **Streaming Endpoint** in a started state. Otherwise, you can't watch videos from this Media Services account or in Azure AI Video Indexer.
-
-* Media reserved units
-
- You must allocate Media Reserved Units on your Media Services resource in order to index videos. For optimal indexing performance, it's recommended to allocate at least 10 S3 Reserved Units. For pricing information, see the FAQ section of the [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/) page.
-
-## Next steps
-
-You can programmatically interact with your trial account or Azure AI Video Indexer accounts that are connected to Azure by following the instructions in: [Use APIs](video-indexer-use-apis.md).
-
-Use the same Microsoft Entra user you used when connecting to Azure.
azure-video-indexer Manage Multiple Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/manage-multiple-tenants.md
- Title: Manage multiple tenants with Azure AI Video Indexer - Azure
-description: This article suggests different integration options for managing multiple tenants with Azure AI Video Indexer.
- Previously updated : 05/15/2019----
-# Manage multiple tenants
--
-This article discusses different options for managing multiple tenants with Azure AI Video Indexer. Choose a method that is most suitable for your scenario:
-
-* Azure AI Video Indexer account per tenant
-* Single Azure AI Video Indexer account for all tenants
-* Azure subscription per tenant
-
-## Azure AI Video Indexer account per tenant
-
-When using this architecture, an Azure AI Video Indexer account is created for each tenant. The tenants have full isolation in the persistent and compute layer.
-
-![Azure AI Video Indexer account per tenant](./media/manage-multiple-tenants/video-indexer-account-per-tenant.png)
-
-### Considerations
-
-* Customers don't share storage accounts (unless manually configured by the customer).
-* Customers don't share compute (reserved units) and don't impact one another's processing job times.
-* You can easily remove a tenant from the system by deleting the Azure AI Video Indexer account.
-* There's no ability to share custom models between tenants.
-
- Make sure there's no business requirement to share custom models.
-* Harder to manage due to multiple Azure AI Video Indexer (and associated Media Services) accounts per tenant.
-
-> [!TIP]
-> Create an admin user for your system in [the Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/) and use the Authorization API to provide your tenants the relevant [account access token](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token).
-
-## Single Azure AI Video Indexer account for all users
-
-When using this architecture, the customer is responsible for tenant isolation. All tenants have to use a single Azure AI Video Indexer account with a single Azure Media Services account. When uploading, searching, or deleting content, the customer will need to filter the proper results for that tenant.
-
-![Single Azure AI Video Indexer account for all users](./media/manage-multiple-tenants/single-video-indexer-account-for-all-users.png)
-
-With this option, customization models (Person, Language, and Brands) can be shared or isolated between tenants by filtering the models by tenant.
-
-When [uploading videos](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video), you can specify a different partition attribute per tenant. This will allow isolation in the [search API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Search-Videos). By specifying the partition attribute in the search API you'll only get results of the specified partition.
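-
-As a hedged illustration (endpoint shapes follow the upload and search operations linked above; location, account ID, and token values are placeholders), the sketch below uploads a video into a per-tenant partition and then searches only within that partition:
-
-```python
-# Hedged sketch: per-tenant isolation with the partition attribute.
-# Endpoint shapes follow the Upload Video and Search Videos operations; placeholder values.
-import requests
-
-BASE = "https://api.videoindexer.ai"
-location, account_id = "<location>", "<account-id>"
-access_token = "<account-access-token>"
-tenant_partition = "tenant-contoso"  # one partition value per tenant
-
-# Upload a video into the tenant's partition.
-upload = requests.post(
-    f"{BASE}/{location}/Accounts/{account_id}/Videos",
-    params={
-        "name": "tenant-video",
-        "videoUrl": "<SAS-URL>",
-        "partition": tenant_partition,
-        "accessToken": access_token,
-    },
-)
-upload.raise_for_status()
-
-# Search returns only results from the specified partition.
-search = requests.get(
-    f"{BASE}/{location}/Accounts/{account_id}/Videos/Search",
-    params={"partition": tenant_partition, "accessToken": access_token},
-)
-search.raise_for_status()
-print([video["name"] for video in search.json().get("results", [])])
-```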
-
-### Considerations
-
-* Ability to share content and customization models between tenants.
-* One tenant impacts the performance of other tenants.
-* Customer needs to build a complex management layer on top of Azure AI Video Indexer.
-
-> [!TIP]
-> You can use the [priority](upload-index-videos.md) attribute to prioritize tenants' jobs.
-
-## Azure subscription per tenant
-
-When using this architecture, each tenant will have their own Azure subscription. For each user, you'll create a new Azure AI Video Indexer account in the tenant subscription.
-
-![Azure subscription per tenant](./media/manage-multiple-tenants/azure-subscription-per-tenant.png)
-
-### Considerations
-
-* This is the only option that enables billing separation.
-* This integration has more management overhead than Azure AI Video Indexer account per tenant. If billing isn't a requirement, it's recommended to use one of the other options described in this article.
-
-## Next steps
-
-[Overview](video-indexer-overview.md)
azure-video-indexer Matched Person https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/matched-person.md
- Title: Enable the matched person insight
-description: This topic explains how to use the matched person feature, which matches people detected in the video with the corresponding faces ("People" insight).
- Previously updated : 12/10/2021----
-# Enable the matched person insight
---
-Azure AI Video Indexer matches observed people that were detected in the video with the corresponding faces ("People" insight). To produce the matching, the bounding boxes for both the faces and the observed people are assigned spatially along the video. The API returns the confidence level of each match.
-
-The following are some scenarios that benefit from this feature:
-
-* Improve efficiency when creating raw data for content creators, like video advertising, news, or sport games (for example, find all appearances of a specific person in a video archive).
-* Post-event analysis: detect and track a specific person's movement to better analyze an accident or crime post-event (for example, explosion, bank robbery, incident).
-* Create a summary out of a long video, to include the parts where the specific person appears.
-
-The **Matched person** feature is available when indexing your file by choosing the
-**Advanced** -> **Video + audio indexing** preset.
-
-> [!NOTE]
-> Standard indexing does not include this advanced model.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/matched-person/index-matched-person-feature.png" alt-text="Advanced video or Advanced video + audio preset":::
-
-To view the Matched person on the [Azure AI Video Indexer](https://www.videoindexer.ai/) website, go to **View** -> **Show Insights** -> select the **All** option or **View** -> **Custom View** -> **Mapped Faces**.
-
-When you choose to see insights of your video on the [Azure AI Video Indexer](https://www.videoindexer.ai/) website, the matched person can be viewed from the **Observed People tracing** insight. When you choose a thumbnail of a person, the matched person becomes available.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/matched-person/from-observed-people.png" alt-text="View matched people from the Observed People insight":::
-
-If you would like to view people's detected clothing in the **Timeline** of your video on the [Video Indexer website](https://www.videoindexer.ai/), go to **View** -> **Show Insights** and select the **All** option or **View** -> **Custom View** -> **Observed People**.
-
-You can search for a specific person by name and return all of their appearances by using the search bar in the Insights view of your video on the Azure AI Video Indexer website.
-
-## JSON code sample
-
-The following JSON response illustrates what Azure AI Video Indexer returns when tracing observed people that have a matched person (face) associated:
-
-```json
-"observedPeople": [
- {
- "id": 1,
- "thumbnailId": "d09ad62e-e0a4-42e5-8ca9-9a640c686596",
- "clothing": [
- {
- "id": 1,
- "type": "sleeve",
- "properties": {
- "length": "short"
- }
- },
- {
- "id": 2,
- "type": "pants",
- "properties": {
- "length": "short"
- }
- }
- ],
- "matchingFace": {
- "id": 1310,
- "confidence": 0.3819
- },
- "instances": [
- {
- "adjustedStart": "0:00:34.8681666",
- "adjustedEnd": "0:00:36.0026333",
- "start": "0:00:34.8681666",
- "end": "0:00:36.0026333"
- },
- {
- "adjustedStart": "0:00:36.6699666",
- "adjustedEnd": "0:00:36.7367",
- "start": "0:00:36.6699666",
- "end": "0:00:36.7367"
- },
- {
- "adjustedStart": "0:00:37.2038333",
- "adjustedEnd": "0:00:39.6729666",
- "start": "0:00:37.2038333",
- "end": "0:00:39.6729666"
- }
- ]
- }
-]
-```
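-
-As a minimal sketch (assuming the fragment above is wrapped in a JSON object and saved to a hypothetical `observed_people.json` file), the following lists the observed people whose matched face meets a confidence threshold:
-
-```python
-# Hedged sketch: filter observed people whose matched face confidence passes a threshold.
-# Assumes the "observedPeople" array shown above was saved, inside an object, to
-# observed_people.json (a hypothetical file name).
-import json
-
-THRESHOLD = 0.3
-
-with open("observed_people.json") as f:
-    observed_people = json.load(f)["observedPeople"]
-
-for person in observed_people:
-    match = person.get("matchingFace")
-    if match and match["confidence"] >= THRESHOLD:
-        appearances = [(i["start"], i["end"]) for i in person["instances"]]
-        print(f"Observed person {person['id']} matches face {match['id']} "
-              f"(confidence {match['confidence']}): {appearances}")
-```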
-
-## Limitations and assumptions
-
-It's important to note the limitations of Mapped person, to avoid or mitigate the effects of mismatches between people or people who have no matches.
-
-**Precondition** for the matching is that the person appearing in the observed people was detected and can be found in the People insight.
-**Pose**: The tracks are optimized to handle observed people who most often appear front-facing.
-**Obstructions**: There is no match between faces and observed people where there are obstructions (people or faces overlapping each other).
-**Spatial allocation per frame**: There is no match where different people appear in the same spatial position relative to the frame within a short time.
-
-See the limitations of Observed people: [Trace observed people in a video](observed-people-tracing.md)
-
-## Next steps
-
-[Overview](video-indexer-overview.md)
azure-video-indexer Monitor Video Indexer Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer-data-reference.md
- Title: Monitoring Azure AI Video Indexer data reference
-description: Important reference material needed when you monitor Azure AI Video Indexer
--- Previously updated : 04/17/2023----
-# Monitor Azure AI Video Indexer data reference
--
-See [Monitoring Azure AI Video Indexer](monitor-video-indexer.md) for details on collecting and analyzing monitoring data for Azure AI Video Indexer.
-
-## Metrics
-
-Azure AI Video Indexer currently does not support any metrics monitoring.
-
-For more information, see a list of [all platform metrics supported in Azure Monitor](/azure/azure-monitor/platform/metrics-supported).
-
-## Metric dimensions
-
-Azure AI Video Indexer currently does not support any metrics monitoring.
-
-## Resource logs
-
-This section lists the types of resource logs you can collect for Azure AI Video Indexer.
-
-
-For reference, see a list of [all resource logs category types supported in Azure Monitor](/azure/azure-monitor/platform/resource-logs-schema).
-
-
-### Azure AI Video Indexer
-
-Resource Provider and Type: [Microsoft.VideoIndexer/accounts](/azure/azure-monitor/platform/resource-logs-categories#microsoftvideoindexeraccounts)
-
-| Category | Display Name | Additional information |
-|:---|:---|:---|
-| VIAudit | Azure AI Video Indexer Audit Logs | Logs are produced from both the [Azure AI Video Indexer website](https://www.videoindexer.ai/) and the REST API. |
-| IndexingLogs | Indexing Logs | Azure AI Video Indexer indexing logs that monitor all file uploads, indexing, and re-indexing jobs. |
-
-
-## Azure Monitor Logs tables
-
-This section refers to all of the Azure Monitor Logs Kusto tables relevant to Azure AI Video Indexer and available for query by Log Analytics.
-
-|Resource Type | Notes |
-|-|--|
-| [Azure AI Video Indexer](/azure/azure-monitor/reference/tables/tables-resourcetype#azure-video-indexer) | |
-
-### Azure AI Video Indexer
-
-| Table | Description | Additional information |
-|:---|:---|:---|
-| [VIAudit](/azure/azure-monitor/reference/tables/tables-resourcetype#azure-video-indexer) | Events produced using the Azure AI Video Indexer [website](https://aka.ms/VIportal) or the [REST API portal](https://aka.ms/vi-dev-portal). | |
-| VIIndexing | Events produced using the Azure AI Video Indexer [upload](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) and [re-index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) APIs. | |
-
-For a reference of all Azure Monitor Logs / Log Analytics tables, see the [Azure Monitor Log Table Reference](/azure/azure-monitor/reference/tables/tables-resourcetype).
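-
-You can also query these tables from the command line. The following Azure CLI sketch is only an illustration: the workspace GUID is a placeholder for the Log Analytics workspace that your diagnostic settings send data to, and the command may require the *log-analytics* CLI extension.
-
-```azurecli
-# Return the 10 most recent Video Indexer audit events from the workspace.
-az monitor log-analytics query \
-  --workspace "<log-analytics-workspace-guid>" \
-  --analytics-query "VIAudit | sort by TimeGenerated desc | take 10"
-```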
-
-
-## Activity log
-
-The following table lists the operations related to Azure AI Video Indexer that may be created in the Activity log.
-
-| Operation | Description |
-|:---|:---|
-|Generate_AccessToken | |
-|Accounts_Update | |
-|Write tags | |
-|Create or update resource diagnostic setting| |
-|Delete resource diagnostic setting| |
-
-
-For more information on the schema of Activity Log entries, see [Activity Log schema](../azure-monitor/essentials/activity-log-schema.md).
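-
-As a quick check from the command line, the following Azure CLI sketch lists recent activity log events for a Video Indexer account; the resource ID is a placeholder.
-
-```azurecli
-# List activity log events for a Video Indexer account from the last seven days.
-az monitor activity-log list \
-  --resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.VideoIndexer/accounts/<account-name>" \
-  --offset 7d \
-  --query "[].{operation:operationName.value, status:status.value, time:eventTimestamp}" \
-  --output table
-```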
-
-## Schemas
-
-The following schemas are in use by Azure AI Video Indexer.
-
-
-#### Audit schema
-
-```json
-{
- "time": "2022-03-22T10:59:39.5596929Z",
- "resourceId": "/SUBSCRIPTIONS/602a61eb-c111-43c0-8323-74825230a47d/RESOURCEGROUPS/VI-RESOURCEGROUP/PROVIDERS/MICROSOFT.VIDEOINDEXER/ACCOUNTS/VIDEOINDEXERACCOUNT",
- "operationName": "Get-Video-Thumbnail",
- "category": "Audit",
- "location": "westus2",
- "durationMs": "192",
- "resultSignature": "200",
- "resultType": "Success",
- "resultDescription": "Get Video Thumbnail",
- "correlationId": "33473fc3-bcbc-4d47-84cc-9fba2f3e9faa",
- "callerIpAddress": "46.*****",
- "operationVersion": "Operations",
- "identity": {
- "externalUserId": "4704F34286364F2*****",
- "upn": "alias@outlook.com",
- "claims": { "permission": "Reader", "scope": "Account" }
- },
- "properties": {
- "accountName": "videoIndexerAccoount",
- "accountId": "8878b584-d8a0-4752-908c-00d6e5597f55",
- "videoId": "1e2ddfdd77"
- }
- }
- ```
-
-#### Indexing schema
-
-```json
-{
- "time": "2022-09-28T09:41:08.6216252Z",
- "resourceId": "/SUBSCRIPTIONS/{SubscriptionId}/RESOURCEGROUPS/{ResourceGroup}/PROVIDERS/MICROSOFT.VIDEOINDEXER/ACCOUNTS/MY-VI-ACCOUNT",
- "operationName": "UploadStarted",
- "category": "IndexingLogs",
- "correlationId": "5cc9a3ea-126b-4f53-a4b5-24b1a5fb9736",
- "resultType": "Success",
- "location": "eastus",
- "operationVersion": "2.0",
- "durationMs": "0",
- "identity": {
- "upn": "my-email@microsoft.com",
- "claims": null
- },
- "properties": {
- "accountName": "my-vi-account",
- "accountId": "6961331d-16d3-413a-8f90-f86a5cabf3ef",
- "videoId": "46b91bc012",
- "indexing": {
- "Language": "en-US",
- "Privacy": "Private",
- "Partition": null,
- "PersonModelId": null,
- "LinguisticModelId": null,
- "AssetId": null,
- "IndexingPreset": "Default",
- "StreamingPreset": "Default",
- "Description": null,
- "Priority": null,
- "ExternalId": null,
- "Filename": "1 Second Video 1.mp4",
- "AnimationModelId": null,
- "BrandsCategories": null,
- "CustomLanguages": "en-US,ar-BH,hi-IN,es-MX",
- "ExcludedAIs": "Faces",
- "LogoGroupId": "ea9d154d-0845-456c-857e-1c9d5d925d95"
- }
- }
-}
- ```
-
-## Next steps
-
-- See [Monitoring Azure AI Video Indexer](monitor-video-indexer.md) for a description of monitoring Azure AI Video Indexer.
-- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
azure-video-indexer Monitor Video Indexer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer.md
- Title: Monitoring Azure AI Video Indexer
-description: Start here to learn how to monitor Azure AI Video Indexer
--- Previously updated : 12/19/2022----
-# Monitoring Azure AI Video Indexer
--
-When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
-
-This article describes the monitoring data generated by Azure AI Video Indexer. Azure AI Video Indexer uses [Azure Monitor](../azure-monitor/overview.md). If you are unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
-
-Some services in Azure have a special focused pre-built monitoring dashboard in the Azure portal that provides a starting point for monitoring your service. These special dashboards are called "insights".
-
-> [!NOTE]
-> The monitoring feature is not available for trial and classic accounts. To update to an ARM account, see [Connect a classic account to ARM](connect-classic-account-to-arm.md) or [Import content from a trial account](import-content-from-trial.md).
-
-## Monitoring data
-
-Azure AI Video Indexer collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data-from-azure-resources).
-
-See [Monitoring *Azure AI Video Indexer* data reference](monitor-video-indexer-data-reference.md) for detailed information on the metrics and logs metrics created by Azure AI Video Indexer.
-
-## Collection and routing
-
-Activity logs are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
-
-Resource Logs are not collected and stored until you create a diagnostic setting and route them to one or more locations.
-
-See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for *Azure AI Video Indexer* are listed in [Azure AI Video Indexer monitoring data reference](monitor-video-indexer-data-reference.md#resource-logs).
-
-| Category | Description |
-|:---|:---|
-|Audit | Read/Write operations |
-|Indexing Logs | Monitors the indexing process, from upload through indexing and re-indexing when needed. |
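-
-For example, the following Azure CLI sketch routes both log categories to a Log Analytics workspace. The resource IDs are placeholders, and the category names are assumptions based on the tables in this article, so list the exact names supported by your account first.
-
-```azurecli
-# Placeholders: resource IDs of the Video Indexer (ARM) account and the target Log Analytics workspace.
-VI_ACCOUNT_ID="/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.VideoIndexer/accounts/<account-name>"
-WORKSPACE_ID="/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
-# List the exact log category names the account supports.
-az monitor diagnostic-settings categories list --resource "$VI_ACCOUNT_ID"
-# Route both log categories to the workspace (replace the category names if the list above differs).
-az monitor diagnostic-settings create \
-  --name "vi-logs-to-workspace" \
-  --resource "$VI_ACCOUNT_ID" \
-  --workspace "$WORKSPACE_ID" \
-  --logs '[{"category":"Audit","enabled":true},{"category":"IndexingLogs","enabled":true}]'
-```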
---
-The metrics and logs you can collect are discussed in the following sections.
-
-## Analyzing metrics
-
-Currently Azure AI Video Indexer does not support monitoring of metrics.
-
-## Analyzing logs
-
-Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
-
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md). The schema for Azure AI Video Indexer resource logs is found in the [Azure AI Video Indexer data reference](monitor-video-indexer-data-reference.md#schemas).
-
-The [Activity log](../azure-monitor/essentials/activity-log.md) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
-
-For a list of the types of resource logs collected for Azure AI Video Indexer, see [Monitoring Azure AI Video Indexer data reference](monitor-video-indexer-data-reference.md#resource-logs).
-
-For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Monitoring Azure AI Video Indexer data reference](monitor-video-indexer-data-reference.md#azure-monitor-logs-tables).
-
-### Sample Kusto queries
-
-#### Audit related sample queries
-
-> [!IMPORTANT]
-> When you select **Logs** from the Azure AI Video Indexer account menu, Log Analytics is opened with the query scope set to the current Azure AI Video Indexer account. This means that log queries will only include data from that resource. If you want to run a query that includes data from other Azure AI Video Indexer accounts or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md) for details.
-
-Following are queries that you can use to help you monitor your Azure AI Video Indexer account.
-
-```kusto
-// Project failures summarized by operationName and Upn, aggregated in 30m windows.
-VIAudit
-| where Status == "Failure"
-| summarize count() by OperationName, bin(TimeGenerated, 30m), Upn
-| render timechart
-```
-
-```kusto
-// Project failures with detailed error message.
-VIAudit
-| where Status == "Failure"
-| parse Description with "ErrorType: " ErrorType ". Message: " ErrorMessage ". Trace" *
-| project TimeGenerated, OperationName, ErrorMessage, ErrorType, CorrelationId, _ResourceId
-```
-
-#### Indexing related sample queries
-
-```kusto
-// Display Video Indexer Account logs of all failed indexing operations.
-VIIndexing
-// | where AccountId == "<AccountId>" // to filter on a specific accountId, uncomment this line
-| where Status == "Failure"
-| summarize count() by bin(TimeGenerated, 1d)
-| render columnchart
-```
-
-```kusto
-// Video Indexer top 10 users by operations
-// Render timechart of top 10 users by operations, with an optional account id for filtering.
-// Trend of top 10 active Upn's
-VIIndexing
-// | where AccountId == "<AccountId>" // to filter on a specific accountId, uncomment this line
-| where OperationName in ("IndexingStarted", "ReindexingStarted")
-| summarize count() by Upn
-| top 10 by count_ desc
-| project Upn
-| join (VIIndexing
-| where TimeGenerated > ago(30d)
-| where OperationName in ("IndexingStarted", "ReindexingStarted")
-| summarize count() by Upn, bin(TimeGenerated,1d)) on Upn
-| project TimeGenerated, Upn, count_
-| render timechart
-```
-
-## Alerts
-
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks.
-
-The following table lists common and recommended alert rules for Azure AI Video Indexer.
-
-| Alert type | Condition | Description |
-|:---|:---|:---|
-| Log alert | Failed operation | Sends an alert when a video upload fails. |
-
-```kusto
-//All failed uploads, aggregated in one hour window.
-VIAudit
-| where OperationName == "Upload-Video" and Status == "Failure"
-| summarize count() by bin(TimeGenerated, 1h)
-```
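-
-To turn this query into an alert rule from the command line, a sketch with `az monitor scheduled-query create` follows. The resource IDs are placeholders, and the exact `--condition` syntax should be confirmed with `az monitor scheduled-query create --help` (the command may require the *scheduled-query* extension).
-
-```azurecli
-# Create a log alert that fires when any Upload-Video operation fails within the last hour.
-az monitor scheduled-query create \
-  --name "vi-upload-failures" \
-  --resource-group "<resource-group>" \
-  --scopes "<video-indexer-account-resource-id>" \
-  --condition "count 'FailedUploads' > 0" \
-  --condition-query FailedUploads="VIAudit | where OperationName == 'Upload-Video' and Status == 'Failure'" \
-  --window-size 1h \
-  --evaluation-frequency 1h \
-  --severity 2 \
-  --action-groups "<action-group-resource-id>"
-```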
-
-## Next steps
-- See [Monitoring Azure AI Video Indexer data reference](monitor-video-indexer-data-reference.md) for a reference of the metrics, logs, and other important values created by an Azure AI Video Indexer account.
-- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
azure-video-indexer Multi Language Identification Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/multi-language-identification-transcription.md
- Title: Automatically identify and transcribe multi-language content with Azure AI Video Indexer
-description: This topic demonstrates how to automatically identify and transcribe multi-language content with Azure AI Video Indexer.
- Previously updated : 09/01/2019----
-# Automatically identify and transcribe multi-language content
--
-Azure AI Video Indexer supports automatic language identification and transcription in multi-language content. This process involves automatically identifying the spoken language in different segments of the audio, sending each segment of the media file to be transcribed, and combining the transcriptions back into one unified transcription.
-
-## Choosing multilingual identification on indexing with portal
-
-You can choose **multi-language detection** when uploading and indexing your video. Alternatively, you can choose **multi-language detection** when re-indexing your video. The following steps describe how to reindex:
-
-1. Browse to the [Azure AI Video Indexer](https://vi.microsoft.com/) website and sign in.
-1. Go to the **Library** page and hover over the name of the video that you want to reindex.
-1. On the right-bottom corner, click the **Re-index video** button.
-1. In the **Re-index video** dialog, choose **multi-language detection** from the **Video source language** drop-down box.
-
- * When a video is indexed as multi-language, the insights page includes that option, and an additional insight type, **Spoken language**, appears, enabling the user to view which segment is transcribed in which language.
- * Translation to all languages is fully available from the multi-language transcript.
- * All other insights appear in the dominant detected language, that is, the language that appears most in the audio.
- * Closed captioning on the player is available in multi-language as well.
-
-![Portal experience](./media/multi-language-identification-transcription/portal-experience.png)
-
-## Choosing multilingual identification on indexing with API
-
-When indexing or [reindexing](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) a video using the API, choose the `multi-language detection` option in the `sourceLanguage` parameter.
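-
-For example, a re-index request might look like the following sketch. The endpoint path and the `multi` value for `sourceLanguage` are assumptions to verify in the [API developer portal](https://api-portal.videoindexer.ai/); replace the placeholders with your own values.
-
-```bash
-# Re-index an existing video with multi-language detection.
-curl -X PUT \
-  "https://api.videoindexer.ai/<location>/Accounts/<account-id>/Videos/<video-id>/ReIndex?sourceLanguage=multi&accessToken=<access-token>"
-```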
-
-### Model output
-
-The model retrieves all of the languages detected in the video in one list:
-
-```json
-"sourceLanguage": null,
-"sourceLanguages": [
- "es-ES",
- "en-US"
-],
-```
-
-Additionally, each instance in the transcription section includes the language in which it was transcribed:
-
-```json
-{
- "id": 136,
- "text": "I remember well when my youth Minister took me to hear Doctor King I was a teenager.",
- "confidence": 0.9343,
- "speakerId": 1,
- "language": "en-US",
- "instances": [
- {
- "adjustedStart": "0:21:10.42",
- "adjustedEnd": "0:21:17.48",
- "start": "0:21:10.42",
- "end": "0:21:17.48"
- }
- ]
-},
-```
-
-## Guidelines and limitations
-
-* Set of supported languages: English, French, German, Spanish.
-* Support for multi-language content with up to three supported languages.
-* If the audio contains languages other than the supported list above, the result is unexpected.
-* Minimal segment length to detect for each language is 15 seconds.
-* Language detection offset is 3 seconds on average.
-* Speech is expected to be continuous. Frequent alternations between languages may affect the model's performance.
-* Speech of non-native speakers may affect the model's performance (for example, when speakers use their native tongue and then switch to another language).
-* The model is designed to recognize a spontaneous conversational speech with reasonable audio acoustics (not voice commands, singing, etc.).
-* Project creation and editing is currently not available for multi-language videos.
-* Custom language models are not available when using multi-language detection.
-* Adding keywords is not supported.
-* When exporting closed caption files, the language indication doesn't appear.
-* The update transcript API doesn't support multi-language files.
-
-## Next steps
-
-[Azure AI Video Indexer overview](video-indexer-overview.md)
azure-video-indexer Named Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/named-entities.md
- Title: Azure AI Video Indexer named entities extraction overview
-description: An introduction to Azure AI Video Indexer named entities extraction component responsibly.
- Previously updated : 06/15/2022-----
-# Named entities extraction
--
-Named entities extraction is an Azure AI Video Indexer AI feature that uses natural language processing (NLP) to extract insights about the locations, people, and brands that appear in audio and images in media files. Named entities extraction is automatically used with transcription and OCR, and its insights are based on those extracted during these processes. The resulting insights are displayed in the **Insights** tab and are filtered into locations, people, and brands categories. Clicking a named entity displays its instances in the media file, along with a description of the entity and a Find on Bing link for recognizable entities.
-
-## Prerequisites
-
-Review [Transparency Note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
-
-## General principles
-
-This article discusses named entities and the key considerations for making use of this technology responsibly. There are many things you need to consider when deciding how to use and implement an AI-powered feature:
-- Will this feature perform well in my scenario? Before deploying named entities extraction into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
-- Are we equipped to identify and respond to errors? AI-powered products and features won't be 100% accurate, so consider how you'll identify and respond to any errors that may occur.
-
-## View the insight
-
-To see the insights in the website, do the following:
-
-1. Go to View and check Named Entities.
-1. Go to Insights and scroll to named entities.
-
-To display named entities extraction insights in a JSON file, do the following:
-
-1. Click Download and then Insights (JSON).
-2. Named entities are divided into three categories:
-
- * Brands
- * Location
- * People
-3. Copy the text and paste it into your JSON Viewer.
-
- ```json
- namedPeople: [
- {
- referenceId: "Satya_Nadella",
- referenceUrl: "https://en.wikipedia.org/wiki/Satya_Nadella",
- confidence: 1,
- description: "CEO of Microsoft Corporation",
- seenDuration: 33.2,
- id: 2,
- name: "Satya Nadella",
- appearances: [
- {
- startTime: "0:01:11.04",
- endTime: "0:01:17.36",
- startSeconds: 71,
- endSeconds: 77.4
- },
- {
- startTime: "0:01:31.83",
- endTime: "0:01:37.1303666",
- startSeconds: 91.8,
- endSeconds: 97.1
- },
- ```
-
-To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
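-
-As a sketch, the same insights can also be retrieved with a direct REST call to the Get Video Index operation; the path segments below are placeholders to replace with your own values.
-
-```bash
-# Download the full insights JSON, which includes the named entities sections shown above.
-curl "https://api.videoindexer.ai/<location>/Accounts/<account-id>/Videos/<video-id>/Index?accessToken=<access-token>" \
-  -o insights.json
-```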
-
-## Named entities extraction components
-
-During the named entities extraction procedure, the media file is processed, as follows:
-
-|Component|Definition|
-|---|---|
-|Source file | The user uploads the source file for indexing. |
-|Text extraction |- The audio file is sent to Speech Services API to extract the transcription.<br/>- Sampled frames are sent to the Azure AI Vision API to extract OCR. |
-|Analytics |The insights are then sent to the Text Analytics API to extract the entities, for example, Microsoft, Paris, or a person's name like Paul or Sarah. |
-|Processing and consolidation | The results are then processed. Where applicable, Wikipedia links are added and brands are identified via the Video Indexer built-in and customizable branding lists. |
-|Confidence value | The estimated confidence level of each named entity is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as a 0.82 score. |
-
-## Example use cases
-- Contextual advertising, for example, placing an ad for a pizza chain following footage on Italy.
-- Deep searching media archives for insights on people or locations to create feature stories for the news.
-- Creating a verbal description of footage via OCR processing to enhance accessibility for the visually impaired, for example, a background storyteller in movies.
-- Extracting insights on brand names.
-
-## Considerations and limitations when choosing a use case
-- Carefully consider the accuracy of the results. To promote more accurate detections, check the quality of the audio and images; low-quality audio and images might impact the detected insights.
-- Named entities only detect insights in audio and images. Logos in a brand name may not be detected.
-- Carefully consider that when used for law enforcement, named entities may not always detect parts of the audio. To ensure fair and high-quality decisions, combine named entities with human oversight.
-- Don't use named entities for decisions that may have serious adverse impacts. Machine learning models that extract text can result in undetected or incorrect text output. Decisions based on incorrect output could have serious adverse impacts. Additionally, it's advisable to include human review of decisions that have the potential for serious impacts on individuals.
-
-When used responsibly and carefully Azure AI Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
-- Always respect an individual's right to privacy, and only ingest videos for lawful and justifiable purposes.
-- Don't purposely disclose inappropriate content about young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual's personal freedom.
-- Commit to respecting and promoting human rights in the design and deployment of your analyzed media.
-- When using third-party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
-- Always seek legal advice when using content from unknown sources.
-- Always obtain appropriate legal and professional advice to ensure that your uploaded videos are secured and have adequate controls to preserve the integrity of your content and to prevent unauthorized access.
-- Provide a feedback channel that allows users and individuals to report issues with the service.
-- Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing media containing people.
-- Keep a human in the loop. Do not use any solution as a replacement for human oversight and decision-making.
-- Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
-
-## Next steps
-
-### Learn More about Responsible AI
-- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
-- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
-- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
-- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
-
-### Contact us
-
-`visupport@microsoft.com`
-
-## Azure AI Video Indexer insights
-- [Audio effects detection](audio-effects-detection.md)
-- [Face detection](face-detection.md)
-- [Keywords extraction](keywords.md)
-- [Transcription, translation & language identification](transcription-translation-lid.md)
-- [Labels identification](labels-identification.md)
-- [Observed people tracking & matched persons](observed-matched-people.md)
-- [Topics inference](topics-inference.md)
azure-video-indexer Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/network-security.md
- Title: How to enable network security
-description: This article gives an overview of the Azure AI Video Indexer network security options.
-- Previously updated : 12/19/2022----
-# NSG service tags for Azure AI Video Indexer
--
-Azure AI Video Indexer is a service hosted on Azure. In some cases, the service needs to interact with other services in order to index video files (for example, a Storage account), or when you orchestrate indexing jobs against the Azure AI Video Indexer API endpoint using your own service hosted on Azure (for example, AKS, Web Apps, Logic Apps, or Functions).
-
-> [!NOTE]
-> If you are already using the "AzureVideoAnalyzerForMedia" network service tag, you may experience issues with your network security group starting 9 January 2023. This is because we are moving to a new security tag label, "VideoIndexer". The mitigation is to remove the old "AzureVideoAnalyzerForMedia" tag from your configuration and deployment scripts and start using the "VideoIndexer" tag going forward.
-
-Use [Network Security Groups with Service Tags](../virtual-network/service-tags-overview.md) to limit access to your resources on a network level. A service tag represents a group of IP address prefixes from a given Azure service, in this case Azure AI Video Indexer. Microsoft manages the address prefixes grouped by the service tag and automatically updates the service tag as addresses change in our backend, minimizing the complexity of frequent updates to network security rules by the customer.
-
-> [!NOTE]
-> The NSG service tags feature is not available for trial and classic accounts. To update to an ARM account, see [Connect a classic account to ARM](connect-classic-account-to-arm.md) or [Import content from a trial account](import-content-from-trial.md).
-
-## Get started with service tags
-
-Currently we support the global service tag option for using service tags in your network security groups:
-
-**Use a single global VideoIndexer service tag**: This option opens your virtual network to all IP addresses that the Azure AI Video Indexer service uses across all regions where we offer our service. This method allows all IP addresses owned and used by Azure AI Video Indexer to reach your network resources behind the NSG.
-
-> [!NOTE]
-> Currently we do not support IPs allocated to our services in the Switzerland North Region. These will be added soon. If your account is located in this region you cannot use Service Tags in your NSG today since these IPs are not in the Service Tag list and will be rejected by the NSG rule.
-
-## Use a single global Azure AI Video Indexer service tag
-
-The easiest way to begin using service tags with your Azure AI Video Indexer account is to add the global tag `VideoIndexer` to an NSG rule.
-
-1. From the [Azure portal](https://portal.azure.com/), select your network security group.
-1. Under **Settings**, select **Inbound security rules**, and then select **+ Add**.
-1. From the **Source** drop-down list, select **Service Tag**.
-1. From the **Source service tag** drop-down list, select **VideoIndexer**.
--
-This tag contains the IP addresses of Azure AI Video Indexer services for all regions where available. The tag will ensure that your resource can communicate with the Azure AI Video Indexer services no matter where it's created.
-
-## Using Azure CLI
-
-You can also use Azure CLI to create a new NSG rule or update an existing one, adding the **VideoIndexer** service tag through the `--source-address-prefixes` parameter. For a full list of CLI commands and parameters, see [az network nsg](/cli/azure/network/nsg/rule?view=azure-cli-latest&preserve-view=true).
-
-Example of a security rule using service tags. For more details, visit https://aka.ms/servicetags
-
-`az network nsg rule create -g MyResourceGroup --nsg-name MyNsg -n MyNsgRuleWithTags --priority 400 --source-address-prefixes VideoIndexer --destination-address-prefixes '*' --destination-port-ranges '*' --direction Inbound --access Allow --protocol Tcp --description "Allow traffic from Video Indexer"`
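-
-If you already have a rule that uses the retired "AzureVideoAnalyzerForMedia" tag, a sketch of the corresponding update command follows; the resource group, NSG, and rule names are placeholders.
-
-```azurecli
-# Point an existing inbound rule at the VideoIndexer service tag.
-az network nsg rule update \
-  --resource-group MyResourceGroup \
-  --nsg-name MyNsg \
-  --name MyNsgRuleWithTags \
-  --source-address-prefixes VideoIndexer
-```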
-
-## Next steps
-
-[Disaster recovery](video-indexer-disaster-recovery.md)
azure-video-indexer Object Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/object-detection.md
- Title: Azure AI Video Indexer object detection overview
-description: An introduction to Azure AI Video Indexer object detection overview.
- Previously updated : 09/26/2023-----
-# Azure Video Indexer object detection
-
-Azure Video Indexer can detect objects in videos. The insight is part of all standard and advanced presets.
-
-## Prerequisites
-
-Review [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
-
-## JSON keys and definitions
-
-| **Key** | **Definition** |
-|---|---|
-| ID | Incremental number of IDs of the detected objects in the media file |
-| Type | Type of objects, for example, Car |
-| ThumbnailID | GUID representing a single detection of the object |
-| displayName | Name to be displayed in the VI portal experience |
-| WikiDataID | A unique identifier in the WikiData structure |
-| Instances | List of all instances that were tracked |
-| Confidence | A score between 0-1 indicating the object detection confidence |
-| adjustedStart | adjusted start time of the video when using the editor |
-| adjustedEnd | adjusted end time of the video when using the editor |
-| start | the time that the object appears in the frame |
-| end | the time that the object no longer appears in the frame |
-
-## JSON response
-
-Object detection is included in the insights that are the result of an [Upload](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) request.
-
-### Detected and tracked objects
-
-Detected and tracked objects appear under "detectedObjects" in the downloaded *insights.json* file. Every time a unique object is detected, it's given an ID. That object is also tracked, meaning that the model watches for the detected object to return to the frame. If it does, another instance is added to the instances for the object with different start and end times.
-
-In this example, the first car was detected and given an ID of 1 since it was also the first object detected. Then, a different car was detected and that car was given the ID of 23 since it was the 23rd object detected. Later, the first car appeared again and another instance was added to the JSON. Here is the resulting JSON:
-
-```json
-detectedObjects: [
- {
- id: 1,
- type: "Car",
- thumbnailId: "1c0b9fbb-6e05-42e3-96c1-abe2cd48t33",
- displayName: "car",
- wikiDataId: "Q1420",
- instances: [
- {
- confidence: 0.468,
- adjustedStart: "0:00:00",
- adjustedEnd: "0:00:02.44",
- start: "0:00:00",
- end: "0:00:02.44"
- },
- {
- confidence: 0.53,
- adjustedStart: "0:03:00",
- adjustedEnd: "0:00:03.55",
- start: "0:03:00",
- end: "0:00:03.55"
- }
- ]
- },
- {
- id: 23,
- type: "Car",
- thumbnailId: "1c0b9fbb-6e05-42e3-96c1-abe2cd48t34",
- displayName: "car",
- wikiDataId: "Q1420",
- instances: [
- {
- confidence: 0.427,
- adjustedStart: "0:00:00",
- adjustedEnd: "0:00:14.24",
- start: "0:00:00",
- end: "0:00:14.24"
- }
- ]
- }
-]
-```
-
-## Try object detection
-
-You can try out object detection with the web portal or with the API.
-
-## [Web Portal](#tab/webportal)
-
-Once you have uploaded a video, you can view the insights. On the insights tab, you can view the list of objects detected and their main instances.
-
-### Insights
-Select the **Insights** tab. The objects are in descending order of the number of appearances in the video.
--
-### Timeline
-Select the **Timeline** tab.
--
-Under the timeline tab, all object detection is displayed according to the time of appearance. When you hover over a specific detection, it shows the detection percentage of certainty.
-
-### Player
-
-The player automatically marks the detected object with a bounding box. The selected object from the insights pane is highlighted in blue, with the object's type and serial number also displayed.
-
-Filter the bounding boxes around objects by selecting the bounding box icon on the player.
--
-Then, select or deselect the detected objects checkboxes.
--
-Download the insights by selecting **Download** and then **Insights (JSON)**.
-
-## [API](#tab/api)
-
-When you use the [Upload](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) request with the standard or advanced video presets, object detection is included in the indexing.
-
-To examine object detection more thoroughly, use [Get Video Index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index).
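-
-As a sketch, you can also call the API directly and keep only the detected objects. The placeholders are assumptions to replace with your own values, and the `videos[0].insights.detectedObjects` path mirrors the insights structure shown for other Azure AI Video Indexer insights.
-
-```bash
-# Fetch the video index and extract only the detected objects (requires curl and jq).
-curl -s "https://api.videoindexer.ai/<location>/Accounts/<account-id>/Videos/<video-id>/Index?accessToken=<access-token>" \
-  | jq '.videos[0].insights.detectedObjects'
-```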
---
-## Supported objects
-
- :::column:::
- - airplane
- - apple
- - backpack
- - banana
- - baseball bat
- - baseball glove
- - bed
- - bicycle
- - bottle
- - bowl
- - broccoli
- - bus
- - cake
- :::column-end:::
- :::column:::
- - car
- - carrot
- - cell phone
- - chair
- - clock
- - computer mouse
- - couch
- - cup
- - dining table
- - donut
- - fire hydrant
- - fork
- - frisbee
- :::column-end:::
- :::column:::
- - handbag
- - hot dog
- - kite
- - knife
- - laptop
- - microwave
- - motorcycle
- - necktie
- - orange
- - oven
- - parking meter
- - pizza
- - potted plant
- :::column-end:::
- :::column:::
- - refrigerator
- - remote
- - sandwich
- - scissors
- - skateboard
- - skis
- - snowboard
- - spoon
- - sports ball
- - suitcase
- - surfboard
- - teddy bear
- - television
- :::column-end:::
- :::column:::
- - tennis racket
- - toaster
- - toilet
- - toothbrush
- - traffic light
- - train
- - umbrella
- - vase
- - wine glass
- :::column-end:::
-
-## Limitations
-- Up to 20 detections per frame for standard and advanced processing and 35 tracks per class.
-- The video area shouldn't exceed 1920 x 1080 pixels.
-- Object size shouldn't be greater than 90 percent of the frame.
-- A high frame rate (> 30 FPS) may result in slower indexing, with little added value to the quality of the detection and tracking.
-- Other factors that may affect the accuracy of the object detection include low light conditions, camera motion, and occlusion.
azure-video-indexer Observed Matched People https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/observed-matched-people.md
- Title: Azure AI Video Indexer observed people tracking & matched faces overview
-description: An introduction to Azure AI Video Indexer observed people tracking & matched faces component responsibly.
- Previously updated : 04/06/2023-----
-# Observed people tracking and matched faces
--
-> [!IMPORTANT]
-> Face identification, customization and celebrity recognition features access is limited based on eligibility and usage criteria in order to support our Responsible AI principles. Face identification, customization and celebrity recognition features are only available to Microsoft managed customers and partners. Use the [Face Recognition intake form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu) to apply for access.
-
-Observed people tracking and matched faces are Azure AI Video Indexer AI features that automatically detect and match people in media files. Observed people tracking and matched faces can be set to display insights on people, their clothing, and the exact timeframe of their appearance.
-
-The resulting insights are displayed in a categorized list in the Insights tab. The tab includes a thumbnail of each person and their ID. Clicking the thumbnail of a person displays the matched person (the corresponding face in the People insight). Insights are also generated in a categorized list in a JSON file that includes the thumbnail ID of the person, the percentage of time they appear in the file, a Wiki link (if they're a celebrity), and the confidence level.
-
-## Prerequisites
-
-Review [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
-
-## General principles
-
-This article discusses observed people tracking and matched faces and the key considerations for making use of this technology responsibly. There are many things you need to consider when deciding how to use and implement an AI-powered feature:
-- Will this feature perform well in my scenario? Before deploying observed people tracking and matched faces into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
-- Are we equipped to identify and respond to errors? AI-powered products and features will not be 100% accurate, so consider how you'll identify and respond to any errors that may occur.
-
-## View the insight
-
-When uploading the media file, go to Video + Audio Indexing and select Advanced.
-
-To display observed people tracking and matched faces insight on the website, do the following:
-
-1. After the file has been indexed, go to Insights and then scroll to observed people.
-
-To see the insights in a JSON file, do the following:
-
-1. Click Download and then Insights (JSON).
-1. Copy the `observedPeople` text and paste it into your JSON viewer.
-
- The following section shows observed people and clothing. For the person with id 4 (`"id": 4`) there's also a matching face.
-
- ```json
- "observedPeople": [
- {
- "id": 1,
- "thumbnailId": "4addcebf-6c51-42cd-b8e0-aedefc9d8f6b",
- "clothing": [
- {
- "id": 1,
- "type": "sleeve",
- "properties": {
- "length": "long"
- }
- },
- {
- "id": 2,
- "type": "pants",
- "properties": {
- "length": "long"
- }
- }
- ],
- "instances": [
- {
- "adjustedStart": "0:00:00.0667333",
- "adjustedEnd": "0:00:12.012",
- "start": "0:00:00.0667333",
- "end": "0:00:12.012"
- }
- ]
- },
- {
- "id": 2,
- "thumbnailId": "858903a7-254a-438e-92fd-69f8bdb2ac88",
- "clothing": [
- {
- "id": 1,
- "type": "sleeve",
- "properties": {
- "length": "short"
- }
- }
- ],
- "instances": [
- {
- "adjustedStart": "0:00:23.2565666",
- "adjustedEnd": "0:00:25.4921333",
- "start": "0:00:23.2565666",
- "end": "0:00:25.4921333"
- },
- {
- "adjustedStart": "0:00:25.8925333",
- "adjustedEnd": "0:00:25.9926333",
- "start": "0:00:25.8925333",
- "end": "0:00:25.9926333"
- },
- {
- "adjustedStart": "0:00:26.3930333",
- "adjustedEnd": "0:00:28.5618666",
- "start": "0:00:26.3930333",
- "end": "0:00:28.5618666"
- }
- ]
- },
- {
- "id": 3,
- "thumbnailId": "1406252d-e7f5-43dc-852d-853f652b39b6",
- "clothing": [
- {
- "id": 1,
- "type": "sleeve",
- "properties": {
- "length": "short"
- }
- },
- {
- "id": 2,
- "type": "pants",
- "properties": {
- "length": "long"
- }
- },
- {
- "id": 3,
- "type": "skirtAndDress"
- }
- ],
- "instances": [
- {
- "adjustedStart": "0:00:31.9652666",
- "adjustedEnd": "0:00:34.4010333",
- "start": "0:00:31.9652666",
- "end": "0:00:34.4010333"
- }
- ]
- },
- {
- "id": 4,
- "thumbnailId": "d09ad62e-e0a4-42e5-8ca9-9a640c686596",
- "clothing": [
- {
- "id": 1,
- "type": "sleeve",
- "properties": {
- "length": "short"
- }
- },
- {
- "id": 2,
- "type": "pants",
- "properties": {
- "length": "short"
- }
- }
- ],
- "matchingFace": {
- "id": 1310,
- "confidence": 0.3819
- },
- "instances": [
- {
- "adjustedStart": "0:00:34.8681666",
- "adjustedEnd": "0:00:36.0026333",
- "start": "0:00:34.8681666",
- "end": "0:00:36.0026333"
- },
- {
- "adjustedStart": "0:00:36.6699666",
- "adjustedEnd": "0:00:36.7367",
- "start": "0:00:36.6699666",
- "end": "0:00:36.7367"
- },
- {
- "adjustedStart": "0:00:37.2038333",
- "adjustedEnd": "0:00:39.6729666",
- "start": "0:00:37.2038333",
- "end": "0:00:39.6729666"
- }
- ]
- }
- ]
- ```
-
-To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
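-
-If you retrieve the index through the REST API instead, a small sketch like the following isolates only the observed people that were matched to a face; the endpoint placeholders are assumptions, and the JSON path follows the full index structure (`videos[0].insights.observedPeople`).
-
-```bash
-# Fetch the video index and keep only observed people that have a matched face (requires curl and jq).
-curl -s "https://api.videoindexer.ai/<location>/Accounts/<account-id>/Videos/<video-id>/Index?accessToken=<access-token>" \
-  | jq '.videos[0].insights.observedPeople[] | select(.matchingFace != null)'
-```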
-
-## Observed people tracking and matched faces components
-
-During the observed people tracking and matched faces procedure, images in a media file are processed, as follows:
-
-|Component|Definition|
-|---|---|
-|Source file | The user uploads the source file for indexing. |
-|Detection | The media file is tracked to detect observed people and their clothing. For example, shirt with long sleeves, dress or long pants. Note that to be detected, the full upper body of the person must appear in the media.|
-|Local grouping |The identified observed faces are filtered into local groups. If a person is detected more than once, additional observed faces instances are created for this person. |
-|Matching and Classification |The observed people instances are matched to faces. If there is a known celebrity, the observed person will be given their name. Any number of observed people instances can be matched to the same face. |
-|Confidence value| The estimated confidence level of each observed person is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as an 0.82 score.|
-
-## Example use cases
-- Tracking a person's movement, for example, in law enforcement for more efficiency when analyzing an accident or crime.
-- Improving efficiency by deep searching for matched people in organizational archives for insight on specific celebrities, for example, when creating promos and trailers.
-- Improved efficiency when creating feature stories, for example, searching for people wearing a red shirt in the archives of a football game at a news or sports agency.
-
-## Considerations and limitations when choosing a use case
-
-Below are some considerations to keep in mind when using observed people and matched faces.
-
-### Limitations of observed people tracking
-
-It's important to note the limitations of observed people tracking, to avoid or mitigate the effects of false negatives (missed detections) and limited detail.
-
-* People are generally not detected if they appear small (minimum person height is 100 pixels).
-* Maximum frame size is FHD (1920 x 1080).
-* Low quality video (for example, dark lighting conditions) may impact the detection results.
-* The recommended frame rate is at least 30 FPS.
-* Recommended video input should contain up to 10 people in a single frame. The feature could work with more people in a single frame, but the detection result retrieves up to 10 people in a frame with the highest detection confidence.
-* People with similar clothes (for example, people wearing uniforms, or players in sports games) could be detected as the same person with the same ID number.
-* Obstructions: there may be errors where there are obstructions (scene/self or obstructions by other people).
-* Pose: the tracks may be split due to different poses (back/front).
-
-### Other considerations
-
-When used responsibly and carefully, Azure AI Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
-- Always respect an individual's right to privacy, and only ingest videos for lawful and justifiable purposes.
-- Don't purposely disclose inappropriate media showing young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual's personal freedom.
-- Commit to respecting and promoting human rights in the design and deployment of your analyzed media.
-- When using third-party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
-- Always seek legal advice when using media from unknown sources.
-- Always obtain appropriate legal and professional advice to ensure that your uploaded videos are secured and have adequate controls to preserve the integrity of your content and to prevent unauthorized access.
-- Provide a feedback channel that allows users and individuals to report issues with the service.
-- Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing media containing people.
-- Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision-making.
-- Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
-
-## Next steps
-
-### Learn More about Responsible AI
-- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
-- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
-- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
-- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
-
-### Contact us
-
-`visupport@microsoft.com`
-
-## Azure AI Video Indexer insights
-- [Audio effects detection](audio-effects-detection.md)
-- [Face detection](face-detection.md)
-- [Keywords extraction](keywords.md)
-- [Transcription, translation & language identification](transcription-translation-lid.md)
-- [Labels identification](labels-identification.md)
-- [Named entities](named-entities.md)
-- [Topics inference](topics-inference.md)
azure-video-indexer Observed People Featured Clothing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/observed-people-featured-clothing.md
- Title: Enable featured clothing of an observed person
-description: When indexing a video using Azure AI Video Indexer advanced video settings, you can view the featured clothing of an observed person.
- Previously updated : 08/14/2023----
-# Enable featured clothing of an observed person
--
-When indexing a video using Azure AI Video Indexer advanced video settings, you can view the featured clothing of an observed person. The insight provides moments within the video where key people are prominently featured and clearly visible, including the coordinates of the people, timestamp, and the frame of the shot. This insight allows high-quality in-video contextual advertising, where relevant clothing ads are matched with the specific time within the video in which they're viewed.
-
-This article discusses how to view the featured clothing insight and how the featured clothing images are ranked.
-
-## View an intro video
-
-You can view the following short video that discusses how to view and use the featured clothing insight.
-
-> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE5b4JJ]
-
-## Viewing featured clothing
-
-The featured clothing insight is available when indexing your file by choosing the Advanced option -> Advanced video or Advanced video + audio preset (under Video + audio indexing). Standard indexing doesn't include this insight.
--
-The featured clothing images are ranked based on some of the following factors: key moments of the video, the duration the person appears, text-based emotions, and audio events. The insight provides the highest-ranking frame per scene, which enables you to produce contextual advertisements per scene throughout the video. The JSON file is ranked by the sequence of scenes in the video, with each scene having the top-rated frame as the result.
-
-> [!NOTE]
-> The featured clothing insight can only be viewed from the artifact file, and the insight is not in the Azure AI Video Indexer website.
-
-1. In the upper-right corner, select to download the artifact zip file: **Download** -> **Artifact (ZIP)**.
-1. Open `featuredclothing.zip`.
-
-The .zip file contains two objects:
-- `featuredclothing.map.json` - the file contains instances of each featured clothing, with the following properties:
-
- - `id` - ranking index (`"id": 1` is the most important clothing).
- - `confidence` - the score of the featured clothing.
- - `frameIndex` - the best frame of the clothing.
- - `timestamp` - corresponding to the frameIndex.
- - `opBoundingBox` - bounding box of the person.
- - `faceBoundingBox` - bounding box of the person's face, if detected.
- - `fileName` - where the best frame of the clothing is saved.
- - `sceneID` - the scene where the clothing appears.
-
- An example of the featured clothing with `"sceneID": 1`.
-
- ```json
- "instances": [
- {
- "confidence": 0.07,
- "faceBoundingBox": {},
- "fileName": "frame_100.jpg",
- "frameIndex": 100,
- "opBoundingBox": {
- "x": 0.09062,
- "y": 0.4,
- "width": 0.11302,
- "height": 0.59722
- },
- "timestamp": "0:00:04",
- "personName": "Observed Person #1",
- "sceneId": 1
- }
- ```
-- `featuredclothing.frames.map` - this folder contains images of the best frames that the featured clothing appeared in, corresponding to the `fileName` property in each instance in `featuredclothing.map.json`.
-
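-As a small sketch, after downloading the artifact you can inspect the ranked instances from the command line; `jq` is assumed to be installed, and the file names follow the description above.
-
-```bash
-# Unpack the artifact and pretty-print the featured clothing map to review the ranked instances.
-unzip featuredclothing.zip -d featuredclothing
-jq '.' featuredclothing/featuredclothing.map.json
-```
-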
-## Limitations and assumptions
-
-It's important to note the limitations of featured clothing to avoid or mitigate the effects of false detections of images with low quality or low relevancy.
-- A precondition for the featured clothing is that the person wearing the clothes can be found in the observed people insight.
-- If the face of a person wearing the featured clothing isn't detected, the results don't include the face's bounding box.
-- If a person in a video wears more than one outfit, the algorithm selects its best outfit as a single featured clothing image.
-- When posed, the tracks are optimized to handle observed people who most often appear from the front.
-- Wrong detections may occur when people are overlapping.
-- Frames containing blurred people are more prone to low-quality results.
-
-For more information, see the [limitations of observed people](observed-people-tracing.md#limitations-and-assumptions).
-
-## Next steps
-- [Trace observed people in a video](observed-people-tracing.md)
-- [People's detected clothing](detected-clothing.md)
azure-video-indexer Observed People Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/observed-people-tracking.md
- Title: Track observed people in a video
-description: This topic gives an overview of Track observed people in a video concept.
- Previously updated : 08/07/2023----
-# Track observed people in a video
--
-Azure AI Video Indexer detects observed people in videos and provides information such as the location of the person in the video frame and the exact timestamp (start, end) when a person appears. The API returns the bounding box coordinates (in pixels) for each person instance detected, including detection confidence.
-
-Some scenarios where this feature could be useful:
-
-* Post-event analysis: detect and track a person's movement to better analyze an accident or crime post-event (for example, explosion, bank robbery, incident).
-* Improve efficiency when creating raw data for content creators, like video advertising, news, or sport games (for example, find people wearing a red shirt in a video archive).
-* Create a summary out of a long video, like court evidence of a specific person's appearance in a video, using the same detected person's ID.
-* Learn and analyze trends over time, for example, how customers move across aisles in a shopping mall or how much time they spend in checkout lines.
-
-For example, if a video contains a person, the detect operation lists the person's appearances together with their coordinates in the video frames. You can use this functionality to determine the person's path in a video. It also lets you determine whether there are multiple instances of the same person in a video.
-
-The newly added **Observed people tracking** feature is available when indexing your file by choosing the **Advanced option** -> **Advanced video** or **Advanced video + audio** preset (under **Video + audio indexing**). Standard indexing will not include this new advanced model.
-
-
-When you choose to see **Insights** of your video on the [Video Indexer](https://www.videoindexer.ai/account/login) website, the Observed People Tracking will show up on the page with all detected people thumbnails. You can choose a thumbnail of a person and see where the person appears in the video player.
-
-The following JSON response illustrates what Video Indexer returns when tracking observed people:
-
-```json
- {
- ...
- "videos": [
- {
- ...
- "insights": {
- ...
- "observedPeople": [{
- "id": 1,
- "thumbnailId": "560f2cfb-90d0-4d6d-93cb-72bd1388e19d",
- "instances": [
- {
- "adjustedStart": "0:00:01.5682333",
- "adjustedEnd": "0:00:02.7027",
- "start": "0:00:01.5682333",
- "end": "0:00:02.7027"
- }
- ]
- },
- {
- "id": 2,
- "thumbnailId": "9c97ae13-558c-446b-9989-21ac27439da0",
- "instances": [
- {
- "adjustedStart": "0:00:16.7167",
- "adjustedEnd": "0:00:18.018",
- "start": "0:00:16.7167",
- "end": "0:00:18.018"
- }
- ]
-                }]
- }
- ...
- }
- ]
-}
-```
-
-## Limitations and assumptions
-
-For more information, see [Considerations and limitations when choosing a use case](observed-matched-people.md#considerations-and-limitations-when-choosing-a-use-case).
-
-## Next steps
-
-Review [overview](video-indexer-overview.md)
azure-video-indexer Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/ocr.md
- Title: Azure AI Video Indexer optical character recognition (OCR) overview -
-description: An introduction to Azure AI Video Indexer optical character recognition (OCR) component responsibly.
-- Previously updated : 06/15/2022-----
-# Optical character recognition (OCR)
--
-Optical character recognition (OCR) is an Azure AI Video Indexer AI feature that extracts text from images like pictures, street signs and products in media files to create insights.
-
-OCR currently extracts insights from printed and handwritten text in over 50 languages, including from an image with text in multiple languages. For more information, see [OCR supported languages](../ai-services/computer-vision/language-support.md#optical-character-recognition-ocr).
-
-## Prerequisites
-
-Review [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
-
-## General principles
-
-This article discusses optical character recognition (OCR) and the key considerations for making use of this technology responsibly. There are many things you need to consider when deciding how to use and implement an AI-powered feature:
-
-- Will this feature perform well in my scenario? Before deploying OCR into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
-- Are we equipped to identify and respond to errors? AI-powered products and features won't be 100% accurate, so consider how you'll identify and respond to any errors that may occur.
-
-## View the insight
-
-When working on the website, the insights are displayed in the **Timeline** tab. They can also be generated in a categorized list in a JSON file that includes the ID, transcribed text, duration, and confidence score.
-
-To see the instances on the website, do the following:
-
-1. Go to View and check OCR.
-1. Select Timeline to display the extracted text.
-
-Insights can also be generated in a categorized list in a JSON file that includes the ID, language, and text, together with each instance's confidence score.
-
-To see the insights in a JSON file, do the following:
-
-1. Select Download -> Insight (JSON).
-1. Copy the `ocr` element, under `insights`, and paste it into your online JSON viewer.
-
- ```json
- "ocr": [
- {
- "id": 1,
- "text": "2017 Ruler",
- "confidence": 0.4365,
- "left": 901,
- "top": 3,
- "width": 80,
- "height": 23,
- "angle": 0,
- "language": "en-US",
- "instances": [
- {
- "adjustedStart": "0:00:45.5",
- "adjustedEnd": "0:00:46",
- "start": "0:00:45.5",
- "end": "0:00:46"
- },
- {
- "adjustedStart": "0:00:55",
- "adjustedEnd": "0:00:55.5",
- "start": "0:00:55",
- "end": "0:00:55.5"
- }
- ]
- },
- {
- "id": 2,
- "text": "2017 Ruler postppu - PowerPoint",
- "confidence": 0.4712,
- "left": 899,
- "top": 4,
- "width": 262,
- "height": 48,
- "angle": 0,
- "language": "en-US",
- "instances": [
- {
- "adjustedStart": "0:00:44.5",
- "adjustedEnd": "0:00:45",
- "start": "0:00:44.5",
- "end": "0:00:45"
- }
- ]
- },
- ```
-
-To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
-
-## OCR components
-
-During the OCR procedure, text images in a media file are processed, as follows:
-
-|Component|Definition|
-|---|---|
-|Source file| The user uploads the source file for indexing.|
-|Read model |Images are detected in the media file and text is then extracted and analyzed by Azure AI services. |
-|Get read results model |The output of the extracted text is displayed in a JSON file.|
-|Confidence value| The estimated confidence level of each word is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as a 0.82 score.|
-
-For more information, see [OCR technology](../ai-services/computer-vision/overview-ocr.md).
-
-## Example use cases
-
-- Deep searching media footage for images with signposts, street names, or car license plates, for example, in law enforcement.
-- Extracting text from images in media files and then translating it into multiple languages in labels for accessibility, for example in media or entertainment.
-- Detecting brand names in images and tagging them for translation purposes, for example in advertising and branding.
-- Extracting text in images that is then automatically tagged and categorized for accessibility and future usage, for example to generate content at a news agency.
-- Extracting text in warnings in online instructions and then translating the text to comply with local standards, for example, e-learning instructions for using equipment.
-
-## Considerations and limitations when choosing a use case
-
-- Carefully consider the accuracy of the results. To promote more accurate detections, check the quality of the image; low-quality images might impact the detected insights.
-- Carefully consider, when using OCR for law enforcement, that OCR can potentially misread or not detect parts of the text. To ensure fair and high-quality decisions, combine OCR-based automation with human oversight.
-- When extracting handwritten text, avoid using the OCR results of signatures that are hard to read for both humans and machines. A better way to use OCR is to use it for detecting the presence of a signature for further analysis.
-- Don't use OCR for decisions that may have serious adverse impacts. Machine learning models that extract text can result in undetected or incorrect text output. Decisions based on incorrect output could have serious adverse impacts. Additionally, it's advisable to include human review of decisions that have the potential for serious impacts on individuals.
-
-When used responsibly and carefully, Azure AI Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:  
-
-- Always respect an individual's right to privacy, and only ingest videos for lawful and justifiable purposes.
-- Don't purposely disclose inappropriate content about young children or family members of celebrities, or other content that may be detrimental or pose a threat to an individual's personal freedom.
-- Commit to respecting and promoting human rights in the design and deployment of your analyzed media.
-- When using third-party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
-- Always seek legal advice when using content from unknown sources.
-- Always obtain appropriate legal and professional advice to ensure that your uploaded videos are secured and have adequate controls to preserve the integrity of your content and to prevent unauthorized access.
-- Provide a feedback channel that allows users and individuals to report issues with the service.
-- Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing media containing people.
-- Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision-making.
-- Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
-
-## Learn more about OCR
-
-- [Azure AI services documentation](/azure/ai-services/computer-vision/overview-ocr)
-- [Transparency note](/legal/cognitive-services/computer-vision/ocr-transparency-note)
-- [Use cases](/legal/cognitive-services/computer-vision/ocr-transparency-note#example-use-cases)
-- [Capabilities and limitations](/legal/cognitive-services/computer-vision/ocr-characteristics-and-limitations)
-- [Guidance for integration and responsible use with OCR technology](/legal/cognitive-services/computer-vision/ocr-guidance-integration-responsible-use)
-- [Data, privacy and security](/legal/cognitive-services/computer-vision/ocr-data-privacy-security)
-- [Meter: WER](/legal/cognitive-services/computer-vision/ocr-characteristics-and-limitations#word-level-accuracy-measure)
-
-## Next steps
-
-### Learn More about Responsible AI
-
-- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
-- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
-- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
-- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
-
-### Contact us
-
-`visupport@microsoft.com`
-
-## Azure AI Video Indexer insights
-
-- [Audio effects detection](audio-effects-detection.md)
-- [Face detection](face-detection.md)
-- [Keywords extraction](keywords.md)
-- [Transcription, translation & language identification](transcription-translation-lid.md)
-- [Labels identification](labels-identification.md)
-- [Named entities](named-entities.md)
-- [Observed people tracking & matched faces](observed-matched-people.md)
-- [Topics inference](topics-inference.md)
azure-video-indexer Odrv Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/odrv-download.md
- Title: Index videos stored on OneDrive - Azure AI Video Indexer
-description: Learn how to index videos stored on OneDrive by using Azure AI Video Indexer.
- Previously updated : 12/17/2021----
-# Index your videos stored on OneDrive
--
-This article shows how to index videos stored on OneDrive by using the Azure AI Video Indexer website.
-
-## Supported file formats
-
-For a list of file formats that you can use with Azure AI Video Indexer, see [Standard Encoder formats and codecs](/azure/media-services/latest/encode-media-encoder-standard-formats-reference).
-
-## Index a video by using the website
-
-1. Sign into the [Azure AI Video Indexer](https://www.videoindexer.ai/) website, and then select **Upload**.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/video-indexer-get-started/video-indexer-upload.png" alt-text="Screenshot that shows the Upload button.":::
-
-1. Select the **enter a file URL** button.
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/video-indexer-get-started/avam-enter-file-url.png" alt-text="Screenshot that shows the enter file URL button.":::
-
-1. Next, go to your video or audio file located on your OneDrive using a web browser. Select the file you want to index, and then select **Embed** at the top.
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/video-indexer-get-started/avam-odrv-embed.png" alt-text="Screenshot that shows the embed code button.":::
-
-1. On the right, select **Generate** to generate an embed URL.
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/video-indexer-get-started/avam-odrv-embed-generate.png" alt-text="Screenshot that shows the embed code generate button.":::
-
-1. Copy the embed code and extract only the URL part including the key. For example:
-
- `https://onedrive.live.com/embed?cid=5BC591B7C713B04F&resid=5DC518B6B713C40F%2110126&authkey=HnsodidN_50oA3lLfk`
-
-   Replace **embed** with **download**. You now have a URL that looks like the following (a short sketch of this string replacement appears after these steps):
-
- `https://onedrive.live.com/download?cid=5BC591B7C713B04F&resid=5DC518B6B713C40F%2110126&authkey=HnsodidN_50oA3lLfk`
-
-1. Now enter this URL in the Azure AI Video Indexer website in the URL field.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/video-indexer-get-started/avam-odrv-url.png" alt-text="Screenshot that shows the onedrive url field.":::
-
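-The following C# sketch, with a placeholder embed URL, illustrates the string replacement described in the previous steps; it assumes the link follows the `https://onedrive.live.com/embed?...` format shown in the example above.
-
-```csharp
-using System;
-
-public static class OneDriveLinkHelper
-{
-    // Converts a OneDrive "embed" link into a direct "download" link by
-    // swapping the path segment, as described in the steps above.
-    public static string ToDownloadUrl(string embedUrl) =>
-        embedUrl.Replace("https://onedrive.live.com/embed?", "https://onedrive.live.com/download?");
-
-    public static void Main()
-    {
-        // Placeholder embed URL copied from the OneDrive embed code.
-        var embedUrl = "https://onedrive.live.com/embed?cid=5BC591B7C713B04F&resid=5DC518B6B713C40F%2110126&authkey=HnsodidN_50oA3lLfk";
-
-        // Prints: https://onedrive.live.com/download?cid=...&resid=...&authkey=...
-        Console.WriteLine(ToDownloadUrl(embedUrl));
-    }
-}
-```
-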
-After your video is downloaded from OneDrive, Azure AI Video Indexer starts indexing and analyzing the video.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/video-indexer-get-started/progress.png" alt-text="Screenshot that shows the progress of an upload.":::
-
-Once Azure AI Video Indexer is done analyzing, you will receive an email with a link to your indexed video. The email also includes a short description of what was found in your video (for example: people, topics, optical character recognition).
-
-## Upload and index a video by using the API
-
-You can use the [Upload Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) API to upload and index your videos based on a URL. The code sample that follows includes the commented-out code that shows how to upload the byte array.
-
-### Configurations and parameters
-
-This section describes some of the optional parameters and when to set them. For the most up-to-date info about parameters, see the [Azure AI Video Indexer API developer portal](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video).
-
-#### externalID
-
-Use this parameter to specify an ID that will be associated with the video. The ID can be used for integration with an external video content management (VCM) system. The videos that are in the Azure AI Video Indexer website can be searched via the specified external ID.
-
-#### callbackUrl
-
-Use this parameter to specify a callback URL.
--
-Azure AI Video Indexer returns any existing parameters provided in the original URL. The URL must be encoded.
-
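-As a minimal sketch (the callback endpoint is a hypothetical placeholder), the following C# shows one way to URL-encode a callback URL before adding it to the upload request:
-
-```csharp
-using System;
-using System.Web;
-
-class CallbackUrlExample
-{
-    static void Main()
-    {
-        // Hypothetical callback endpoint that has its own query parameters.
-        var callbackUrl = "https://example.com/video-indexer/callback?projectId=42";
-
-        // Encode the callback URL so its query string survives being nested
-        // inside the upload request's own query string.
-        var encodedCallbackUrl = HttpUtility.UrlEncode(callbackUrl);
-
-        Console.WriteLine(encodedCallbackUrl);
-    }
-}
-```
-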
-#### indexingPreset
-
-Use this parameter to define an AI bundle that you want to apply on your audio or video file. This parameter is used to configure the indexing process. You can specify the following values:
-
-- `AudioOnly`: Index and extract insights by using audio only (ignoring video).
-- `VideoOnly`: Index and extract insights by using video only (ignoring audio).
-- `Default`: Index and extract insights by using both audio and video.
-- `DefaultWithNoiseReduction`: Index and extract insights from both audio and video, while applying noise reduction algorithms on the audio stream.
-
-  The `DefaultWithNoiseReduction` value is now mapped to a default preset (deprecated).
-- `BasicAudio`: Index and extract insights by using audio only (ignoring video). Include only basic audio features (transcription, translation, formatting of output captions and subtitles).
-- `AdvancedAudio`: Index and extract insights by using audio only (ignoring video). Include advanced audio features (such as audio event detection) in addition to the standard audio analysis.
-- `AdvancedVideo`: Index and extract insights by using video only (ignoring audio). Include advanced video features (such as observed people tracing) in addition to the standard video analysis.
-- `AdvancedVideoAndAudio`: Index and extract insights by using both advanced audio and advanced video analysis.
-> [!NOTE]
-> The preceding advanced presets include models that are in public preview. When these models reach general availability, there might be implications for the price.
-
-Azure AI Video Indexer covers up to two tracks of audio. If the file has more audio tracks, they're treated as one track. If you want to index the tracks separately, you need to extract the relevant audio file and index it as `AudioOnly`.
-
-Price depends on the selected indexing option. For more information, see [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/).
-
-#### priority
-
-Azure AI Video Indexer indexes videos according to their priority. Use the `priority` parameter to specify the index priority. The following values are valid: `Low`, `Normal` (default), and `High`.
-
-This parameter is supported only for paid accounts.
-
-#### streamingPreset
-
-After your video is uploaded, Azure AI Video Indexer optionally encodes the video. It then proceeds to indexing and analyzing the video. When Azure AI Video Indexer is done analyzing, you get a notification with the video ID.
-
-When you're using the [Upload Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) or [Re-Index Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) API, one of the optional parameters is `streamingPreset`. If you set `streamingPreset` to `Default`, `SingleBitrate`, or `AdaptiveBitrate`, the encoding process is triggered.
-
-After the indexing and encoding jobs are done, the video is published so you can also stream your video. The streaming endpoint from which you want to stream the video must be in the **Running** state.
-
-For `SingleBitrate`, the standard encoder cost will apply for the output. If the video height is greater than or equal to 720, Azure AI Video Indexer encodes it as 1280 x 720. Otherwise, it's encoded as 640 x 468.
-The default setting is [content-aware encoding](/azure/media-services/latest/encode-content-aware-concept).
-
-If you only want to index your video and not encode it, set `streamingPreset` to `NoStreaming`.
-
-#### videoUrl
-
-This parameter specifies the URL of the video or audio file to be indexed. If the `videoUrl` parameter is not specified, Azure AI Video Indexer expects you to pass the file as multipart/form body content.
-
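-To tie the optional parameters together, the following is a minimal sketch of building an upload query string; the account details, access token, and video URL are placeholders, and the pattern mirrors the query-string helper used in the code samples that follow. Exact parameter names and casing are as documented in the API portal.
-
-```csharp
-using System;
-using System.Web;
-
-class UploadOptionsExample
-{
-    static void Main()
-    {
-        // Placeholder values; replace with your own account details and token.
-        var apiUrl = "https://api.videoindexer.ai";
-        var location = "trial";
-        var accountId = "<ACCOUNT_ID>";
-
-        // ParseQueryString returns a collection whose ToString() URL-encodes each value.
-        var queryParameters = HttpUtility.ParseQueryString(string.Empty);
-        queryParameters["accessToken"] = "<ACCOUNT_ACCESS_TOKEN>";
-        queryParameters["name"] = "video_name";
-        queryParameters["videoUrl"] = "<VIDEO_DOWNLOAD_URL>";    // for example, the OneDrive download URL
-        queryParameters["externalId"] = "vcm-12345";             // ID from your video content management system
-        queryParameters["indexingPreset"] = "AdvancedVideo";     // advanced video analysis
-        queryParameters["priority"] = "High";                    // paid accounts only
-        queryParameters["streamingPreset"] = "NoStreaming";      // index only, skip encoding
-
-        var uploadUri = $"{apiUrl}/{location}/Accounts/{accountId}/Videos?{queryParameters}";
-        Console.WriteLine(uploadUri);
-    }
-}
-```
-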
-### Code sample
-
-> [!NOTE]
-> The following sample is intended for Classic accounts only and isn't compatible with ARM accounts. For an updated sample for ARM, see [this ARM sample repo](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/API-Samples/C%23/ArmBased/Program.cs).
-
-The following C# code snippets demonstrate the usage of all the Azure AI Video Indexer APIs together.
-
-### [Classic account](#tab/With-classic-account/)
-
-After you copy the following code into your development platform, you'll need to provide two parameters:
-
-* API key (`apiKey`): Your personal API management subscription key. It allows you to get an access token in order to perform operations on your Azure AI Video Indexer account.
-
- To get your API key:
-
- 1. Go to the [Azure AI Video Indexer API developer portal](https://api-portal.videoindexer.ai/).
- 1. Sign in.
- 1. Go to **Products** > **Authorization** > **Authorization subscription**.
- 1. Copy the **Primary key** value.
-
-* Video URL (`videoUrl`): A URL of the video or audio file to be indexed. Here are the requirements:
-
- - The URL must point at a media file. (HTML pages are not supported.)
- - The file can be protected by an access token that's provided as part of the URI. The endpoint that serves the file must be secured with TLS 1.2 or later.
- - The URL must be encoded.
-
-The result of successfully running the code sample includes an insight widget URL and a player widget URL. They allow you to examine the insights and the uploaded video, respectively.
--
-```csharp
-public async Task Sample()
-{
- var apiUrl = "https://api.videoindexer.ai";
- var apiKey = "..."; // Replace with API key taken from https://aka.ms/viapi
-
- System.Net.ServicePointManager.SecurityProtocol =
- System.Net.ServicePointManager.SecurityProtocol | System.Net.SecurityProtocolType.Tls12;
-
- // Create the HTTP client
- var handler = new HttpClientHandler();
- handler.AllowAutoRedirect = false;
- var client = new HttpClient(handler);
- client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", apiKey);
-
- // Obtain account information and access token
- string queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"generateAccessTokens", "true"},
- {"allowEdit", "true"},
- });
- HttpResponseMessage result = await client.GetAsync($"{apiUrl}/auth/trial/Accounts?{queryParams}");
- var json = await result.Content.ReadAsStringAsync();
- var accounts = JsonConvert.DeserializeObject<AccountContractSlim[]>(json);
-
- // Take the relevant account. Here we simply take the first.
- // You can also get the account via accounts.First(account => account.Id == <GUID>);
- var accountInfo = accounts.First();
-
- // We'll use the access token from here on, so there's no need for the APIM key
- client.DefaultRequestHeaders.Remove("Ocp-Apim-Subscription-Key");
-
- // Upload a video
- var content = new MultipartFormDataContent();
- Console.WriteLine("Uploading...");
- // Get the video from URL
- var videoUrl = "VIDEO_URL"; // Replace with the video URL from OneDrive
-
- // As an alternative to specifying video URL, you can upload a file.
- // Remove the videoUrl parameter from the query parameters below and add the following lines:
- //FileStream video =File.OpenRead(Globals.VIDEOFILE_PATH);
- //byte[] buffer =new byte[video.Length];
- //video.Read(buffer, 0, buffer.Length);
- //content.Add(new ByteArrayContent(buffer));
-
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", accountInfo.AccessToken},
- {"name", "video_name"},
- {"description", "video_description"},
- {"privacy", "private"},
- {"partition", "partition"},
- {"videoUrl", videoUrl},
- });
- var uploadRequestResult = await client.PostAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos?{queryParams}", content);
- var uploadResult = await uploadRequestResult.Content.ReadAsStringAsync();
-
- // Get the video ID from the upload result
- string videoId = JsonConvert.DeserializeObject<dynamic>(uploadResult)["id"];
- Console.WriteLine("Uploaded");
- Console.WriteLine("Video ID:");
- Console.WriteLine(videoId);
-
- // Wait for the video index to finish
- while (true)
- {
- await Task.Delay(10000);
-
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", accountInfo.AccessToken},
- {"language", "English"},
- });
-
- var videoGetIndexRequestResult = await client.GetAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos/{videoId}/Index?{queryParams}");
- var videoGetIndexResult = await videoGetIndexRequestResult.Content.ReadAsStringAsync();
-
- string processingState = JsonConvert.DeserializeObject<dynamic>(videoGetIndexResult)["state"];
-
- Console.WriteLine("");
- Console.WriteLine("State:");
- Console.WriteLine(processingState);
-
- // Job is finished
- if (processingState != "Uploaded" && processingState != "Processing")
- {
- Console.WriteLine("");
- Console.WriteLine("Full JSON:");
- Console.WriteLine(videoGetIndexResult);
- break;
- }
- }
-
- // Search for the video
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", accountInfo.AccessToken},
- {"id", videoId},
- });
-
- var searchRequestResult = await client.GetAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos/Search?{queryParams}");
- var searchResult = await searchRequestResult.Content.ReadAsStringAsync();
- Console.WriteLine("");
- Console.WriteLine("Search:");
- Console.WriteLine(searchResult);
-
- // Generate video access token (used for get widget calls)
- client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", apiKey);
- var videoTokenRequestResult = await client.GetAsync($"{apiUrl}/auth/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos/{videoId}/AccessToken?allowEdit=true");
- var videoAccessToken = (await videoTokenRequestResult.Content.ReadAsStringAsync()).Replace("\"", "");
- client.DefaultRequestHeaders.Remove("Ocp-Apim-Subscription-Key");
-
- // Get insights widget URL
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", videoAccessToken},
- {"widgetType", "Keywords"},
- {"allowEdit", "true"},
- });
- var insightsWidgetRequestResult = await client.GetAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos/{videoId}/InsightsWidget?{queryParams}");
- var insightsWidgetLink = insightsWidgetRequestResult.Headers.Location;
- Console.WriteLine("Insights Widget url:");
- Console.WriteLine(insightsWidgetLink);
-
- // Get player widget URL
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", videoAccessToken},
- });
- var playerWidgetRequestResult = await client.GetAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos/{videoId}/PlayerWidget?{queryParams}");
- var playerWidgetLink = playerWidgetRequestResult.Headers.Location;
- Console.WriteLine("");
- Console.WriteLine("Player Widget url:");
- Console.WriteLine(playerWidgetLink);
- Console.WriteLine("\nPress Enter to exit...");
- String line = Console.ReadLine();
- if (line == "enter")
- {
- System.Environment.Exit(0);
- }
-
-}
-
-private string CreateQueryString(IDictionary<string, string> parameters)
-{
- var queryParameters = HttpUtility.ParseQueryString(string.Empty);
- foreach (var parameter in parameters)
- {
- queryParameters[parameter.Key] = parameter.Value;
- }
-
- return queryParameters.ToString();
-}
-
-public class AccountContractSlim
-{
- public Guid Id { get; set; }
- public string Name { get; set; }
- public string Location { get; set; }
- public string AccountType { get; set; }
- public string Url { get; set; }
- public string AccessToken { get; set; }
-}
-```
-
-### [Azure Resource Manager account](#tab/with-arm-account-account/)
-
-After you copy this C# project into your development platform, you need to take the following steps:
-
-1. Go to Program.cs and populate:
-
- - ```SubscriptionId``` with your subscription ID.
- - ```ResourceGroup``` with your resource group.
- - ```AccountName``` with your account name.
- - ```VideoUrl``` with your video URL.
-1. Make sure that .NET 6.0 is installed. If it isn't, [install it](https://dotnet.microsoft.com/download/dotnet/6.0).
-1. Make sure that the Azure CLI is installed. If it isn't, [install it](/cli/azure/install-azure-cli).
-1. Open your terminal and go to the *VideoIndexerArm* folder.
-1. Sign in to Azure: ```az login --use-device-code```.
-1. Build the project: ```dotnet build```.
-1. Run the project: ```dotnet run```.
-
-```xml
-<Project Sdk="Microsoft.NET.Sdk">
-
- <PropertyGroup>
- <OutputType>Exe</OutputType>
-    <TargetFramework>net6.0</TargetFramework>
- </PropertyGroup>
-
- <ItemGroup>
- <PackageReference Include="Azure.Identity" Version="1.4.1" />
- <PackageReference Include="Microsoft.Identity.Client" Version="4.36.2" />
- </ItemGroup>
-
-</Project>
-```
-
-```csharp
-using System;
-using System.Collections.Generic;
-using System.Net.Http;
-using System.Net.Http.Headers;
-using System.Text.Json;
-using System.Text.Json.Serialization;
-using System.Threading.Tasks;
-using System.Web;
-using Azure.Core;
-using Azure.Identity;
--
-namespace VideoIndexerArm
-{
- public class Program
- {
- private const string AzureResourceManager = "https://management.azure.com";
- private const string SubscriptionId = ""; // Your Azure subscription
- private const string ResourceGroup = ""; // Your resource group
- private const string AccountName = ""; // Your account name
- private const string VideoUrl = ""; // The video URL from OneDrive you want to index
-
- public static async Task Main(string[] args)
- {
- // Build Azure AI Video Indexer resource provider client that has access token through Azure Resource Manager
- var videoIndexerResourceProviderClient = await VideoIndexerResourceProviderClient.BuildVideoIndexerResourceProviderClient();
-
- // Get account details
- var account = await videoIndexerResourceProviderClient.GetAccount();
- var accountId = account.Properties.Id;
- var accountLocation = account.Location;
- Console.WriteLine($"account id: {accountId}");
- Console.WriteLine($"account location: {accountLocation}");
-
- // Get account-level access token for Azure AI Video Indexer
- var accessTokenRequest = new AccessTokenRequest
- {
- PermissionType = AccessTokenPermission.Contributor,
- Scope = ArmAccessTokenScope.Account
- };
-
- var accessToken = await videoIndexerResourceProviderClient.GetAccessToken(accessTokenRequest);
- var apiUrl = "https://api.videoindexer.ai";
- System.Net.ServicePointManager.SecurityProtocol = System.Net.ServicePointManager.SecurityProtocol | System.Net.SecurityProtocolType.Tls12;
--
- // Create the HTTP client
- var handler = new HttpClientHandler();
- handler.AllowAutoRedirect = false;
- var client = new HttpClient(handler);
-
- // Upload a video
- var content = new MultipartFormDataContent();
- Console.WriteLine("Uploading...");
- // Get the video from URL
-
- // As an alternative to specifying video URL, you can upload a file.
- // Remove the videoUrl parameter from the query parameters below and add the following lines:
- // FileStream video =File.OpenRead(Globals.VIDEOFILE_PATH);
- // byte[] buffer =new byte[video.Length];
- // video.Read(buffer, 0, buffer.Length);
- // content.Add(new ByteArrayContent(buffer));
-
- var queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", accessToken},
- {"name", "video sample"},
- {"description", "video_description"},
- {"privacy", "private"},
- {"partition", "partition"},
- {"videoUrl", VideoUrl},
- });
- var uploadRequestResult = await client.PostAsync($"{apiUrl}/{accountLocation}/Accounts/{accountId}/Videos?{queryParams}", content);
- var uploadResult = await uploadRequestResult.Content.ReadAsStringAsync();
-
- // Get the video ID from the upload result
- string videoId = JsonSerializer.Deserialize<Video>(uploadResult).Id;
- Console.WriteLine("Uploaded");
- Console.WriteLine("Video ID:");
- Console.WriteLine(videoId);
-
- // Wait for the video index to finish
- while (true)
- {
- await Task.Delay(10000);
-
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", accessToken},
- {"language", "English"},
- });
-
- var videoGetIndexRequestResult = await client.GetAsync($"{apiUrl}/{accountLocation}/Accounts/{accountId}/Videos/{videoId}/Index?{queryParams}");
- var videoGetIndexResult = await videoGetIndexRequestResult.Content.ReadAsStringAsync();
-
- string processingState = JsonSerializer.Deserialize<Video>(videoGetIndexResult).State;
-
- Console.WriteLine("");
- Console.WriteLine("State:");
- Console.WriteLine(processingState);
-
- // Job is finished
- if (processingState != "Uploaded" && processingState != "Processing")
- {
- Console.WriteLine("");
- Console.WriteLine("Full JSON:");
- Console.WriteLine(videoGetIndexResult);
- break;
- }
- }
-
- // Search for the video
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", accessToken},
- {"id", videoId},
- });
-
- var searchRequestResult = await client.GetAsync($"{apiUrl}/{accountLocation}/Accounts/{accountId}/Videos/Search?{queryParams}");
- var searchResult = await searchRequestResult.Content.ReadAsStringAsync();
- Console.WriteLine("");
- Console.WriteLine("Search:");
- Console.WriteLine(searchResult);
-
- // Get insights widget URL
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", accessToken},
- {"widgetType", "Keywords"},
- {"allowEdit", "true"},
- });
- var insightsWidgetRequestResult = await client.GetAsync($"{apiUrl}/{accountLocation}/Accounts/{accountId}/Videos/{videoId}/InsightsWidget?{queryParams}");
- var insightsWidgetLink = insightsWidgetRequestResult.Headers.Location;
- Console.WriteLine("Insights Widget url:");
- Console.WriteLine(insightsWidgetLink);
-
- // Get player widget URL
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", accessToken},
- });
- var playerWidgetRequestResult = await client.GetAsync($"{apiUrl}/{accountLocation}/Accounts/{accountId}/Videos/{videoId}/PlayerWidget?{queryParams}");
- var playerWidgetLink = playerWidgetRequestResult.Headers.Location;
- Console.WriteLine("");
- Console.WriteLine("Player Widget url:");
- Console.WriteLine(playerWidgetLink);
- Console.WriteLine("\nPress Enter to exit...");
- String line = Console.ReadLine();
- if (line == "enter")
- {
- System.Environment.Exit(0);
- }
-
- }
-
- static string CreateQueryString(IDictionary<string, string> parameters)
- {
- var queryParameters = HttpUtility.ParseQueryString(string.Empty);
- foreach (var parameter in parameters)
- {
- queryParameters[parameter.Key] = parameter.Value;
- }
-
- return queryParameters.ToString();
- }
-
- public class VideoIndexerResourceProviderClient
- {
- private readonly string armAaccessToken;
-
- async public static Task<VideoIndexerResourceProviderClient> BuildVideoIndexerResourceProviderClient()
- {
- var tokenRequestContext = new TokenRequestContext(new[] { $"{AzureResourceManager}/.default" });
- var tokenRequestResult = await new DefaultAzureCredential().GetTokenAsync(tokenRequestContext);
- return new VideoIndexerResourceProviderClient(tokenRequestResult.Token);
- }
- public VideoIndexerResourceProviderClient(string armAaccessToken)
- {
- this.armAaccessToken = armAaccessToken;
- }
-
- public async Task<string> GetAccessToken(AccessTokenRequest accessTokenRequest)
- {
- Console.WriteLine($"Getting access token. {JsonSerializer.Serialize(accessTokenRequest)}");
- // Set the generateAccessToken (from video indexer) HTTP request content
- var jsonRequestBody = JsonSerializer.Serialize(accessTokenRequest);
- var httpContent = new StringContent(jsonRequestBody, System.Text.Encoding.UTF8, "application/json");
-
- // Set request URI
- var requestUri = $"{AzureResourceManager}/subscriptions/{SubscriptionId}/resourcegroups/{ResourceGroup}/providers/Microsoft.VideoIndexer/accounts/{AccountName}/generateAccessToken?api-version=2021-08-16-preview";
-
- // Generate access token from video indexer
- var client = new HttpClient(new HttpClientHandler());
- client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", armAaccessToken);
- var result = await client.PostAsync(requestUri, httpContent);
- var jsonResponseBody = await result.Content.ReadAsStringAsync();
- return JsonSerializer.Deserialize<GenerateAccessTokenResponse>(jsonResponseBody).AccessToken;
- }
-
- public async Task<Account> GetAccount()
- {
-
- Console.WriteLine($"Getting account.");
- // Set request URI
- var requestUri = $"{AzureResourceManager}/subscriptions/{SubscriptionId}/resourcegroups/{ResourceGroup}/providers/Microsoft.VideoIndexer/accounts/{AccountName}/?api-version=2021-08-16-preview";
-
- // Get account
- var client = new HttpClient(new HttpClientHandler());
- client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", armAaccessToken);
- var result = await client.GetAsync(requestUri);
- var jsonResponseBody = await result.Content.ReadAsStringAsync();
- return JsonSerializer.Deserialize<Account>(jsonResponseBody);
- }
- }
-
- public class AccessTokenRequest
- {
- [JsonPropertyName("permissionType")]
- public AccessTokenPermission PermissionType { get; set; }
-
- [JsonPropertyName("scope")]
- public ArmAccessTokenScope Scope { get; set; }
-
- [JsonPropertyName("projectId")]
- public string ProjectId { get; set; }
-
- [JsonPropertyName("videoId")]
- public string VideoId { get; set; }
- }
-
- [JsonConverter(typeof(JsonStringEnumConverter))]
- public enum AccessTokenPermission
- {
- Reader,
- Contributor,
- MyAccessAdministrator,
- Owner,
- }
-
- [JsonConverter(typeof(JsonStringEnumConverter))]
- public enum ArmAccessTokenScope
- {
- Account,
- Project,
- Video
- }
-
- public class GenerateAccessTokenResponse
- {
- [JsonPropertyName("accessToken")]
- public string AccessToken { get; set; }
-
- }
- public class AccountProperties
- {
- [JsonPropertyName("accountId")]
- public string Id { get; set; }
- }
-
- public class Account
- {
- [JsonPropertyName("properties")]
- public AccountProperties Properties { get; set; }
-
- [JsonPropertyName("location")]
- public string Location { get; set; }
-
- }
-
- public class Video
- {
- [JsonPropertyName("id")]
- public string Id { get; set; }
-
- [JsonPropertyName("state")]
- public string State { get; set; }
- }
- }
-}
-
-```
-
-### Common errors
-
-The upload operation might return the following status codes:
-
-|Status code|ErrorType (in response body)|Description|
-|---|---|---|
-|409|VIDEO_INDEXING_IN_PROGRESS|The same video is already being processed in this account.|
-|400|VIDEO_ALREADY_FAILED|The same video failed to process in this account less than 2 hours ago. API clients should wait at least 2 hours before reuploading a video.|
-|429||Trial accounts are allowed 5 uploads per minute. Paid accounts are allowed 50 uploads per minute.|
-
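-As a hedged example of reacting to these status codes, the following sketch checks an upload response before polling the index state; it assumes `response` is the `HttpResponseMessage` returned by the upload POST in the code samples above.
-
-```csharp
-using System;
-using System.Net;
-using System.Net.Http;
-using System.Threading.Tasks;
-
-static class UploadErrorHandling
-{
-    // Returns true when the caller should wait and retry the upload later.
-    public static async Task<bool> ShouldRetryLaterAsync(HttpResponseMessage response)
-    {
-        if (response.IsSuccessStatusCode)
-        {
-            return false; // Upload accepted; continue polling the index state.
-        }
-
-        var body = await response.Content.ReadAsStringAsync();
-
-        switch (response.StatusCode)
-        {
-            case HttpStatusCode.Conflict:   // 409: the same video is already being processed
-            case (HttpStatusCode)429:       // 429: per-minute upload quota exceeded
-                Console.WriteLine($"Transient condition, retry later: {body}");
-                return true;
-            default:                        // 400 (for example, VIDEO_ALREADY_FAILED) or other errors
-                Console.WriteLine($"Upload failed: {body}");
-                return false;
-        }
-    }
-}
-```
-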
-## Uploading considerations and limitations
-
-- The name of a video must be no more than 80 characters.
-- When you're uploading a video based on the URL (preferred), the endpoint must be secured with TLS 1.2 or later.
-- The upload size with the URL option is limited to 30 GB.
-- The length of the request URL is limited to 6,144 characters. The length of the query string URL is limited to 4,096 characters.
-- The upload size with the byte array option is limited to 2 GB.
-- The byte array option times out after 30 minutes.
-- The URL provided in the `videoURL` parameter must be encoded.
-- Indexing Media Services assets has the same limitation as indexing from a URL.
-- Azure AI Video Indexer has a duration limit of 4 hours for a single file.
-- The URL must be accessible (for example, a public URL). If it's a private URL, the access token must be provided in the request.
-- The URL must point to a valid media file and not to a webpage, such as a link to the `www.youtube.com` page.
-- In a paid account, you can upload up to 50 movies per minute. In a trial account, you can upload up to 5 movies per minute.
-
-> [!Tip]
-> We recommend that you use .NET Framework version 4.6.2. or later, because older .NET Framework versions don't default to TLS 1.2.
->
-> If you must use an older .NET Framework version, add one line to your code before making the REST API call:
->
-> `System.Net.ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls | SecurityProtocolType.Tls11 | SecurityProtocolType.Tls12;`
-
-## Firewall
-
-For information about a storage account that's behind a firewall, see the [FAQ](faq.yml#can-a-storage-account-connected-to-the-media-services-account-be-behind-a-firewall).
-
-## Next steps
-
-[Examine the Azure AI Video Indexer output produced by an API](video-indexer-output-json-v2.md)
azure-video-indexer Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/regions.md
- Title: Regions in which Azure AI Video Indexer is available
-description: This article talks about Azure regions in which Azure AI Video Indexer is available.
- Previously updated : 09/14/2020----
-# Azure regions in which Azure AI Video Indexer exists
--
-Azure AI Video Indexer APIs contain a **location** parameter that you should set to the Azure region to which the call should be routed. This must be an [Azure region in which Azure AI Video Indexer is available](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services&regions=all).
-
-## Locations
-
-The `location` parameter must be given the Azure region code name as its value. If you are using Azure AI Video Indexer in preview mode, you should put `"trial"` as the value. `trial` is the default value for the `location` parameter. Otherwise, to get the code name of the Azure region that your account is in and that your call should be routed to, you can use the Azure portal or run an [Azure CLI](/cli/azure) command.
-
-### Azure portal
-
-1. Sign in to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website.
-1. Select **User accounts** from the top-right corner of the page.
-1. Find the location of your account in the top-right corner.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/location/location1.png" alt-text="Location":::
-
-### CLI command
-
-```azurecli-interactive
-az account list-locations
-```
-
-Once you run the line shown above, you get a list of all Azure regions. Navigate to the Azure region that has the *displayName* you are looking for, and use its *name* value for the **location** parameter.
-
-For example, for the Azure region West US 2 (displayed below), you will use "westus2" for the **location** parameter.
-
-```json
- {
- "displayName": "West US 2",
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/locations/westus2",
- "latitude": "47.233",
- "longitude": "-119.852",
- "name": "westus2",
- "subscriptionId": null
- }
-```
-
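-As an illustrative sketch (the account ID and access token are placeholders), the region code you find this way is the value that goes into the `location` segment of a Video Indexer API call:
-
-```csharp
-using System;
-
-class LocationExample
-{
-    static void Main()
-    {
-        // Use "trial" for preview (trial) accounts, or the Azure region code
-        // (for example, "westus2") for paid accounts.
-        var location = "westus2";
-        var accountId = "<ACCOUNT_ID>";      // placeholder
-        var accessToken = "<ACCESS_TOKEN>";  // placeholder
-
-        var listVideosUri =
-            $"https://api.videoindexer.ai/{location}/Accounts/{accountId}/Videos?accessToken={accessToken}";
-
-        Console.WriteLine(listVideosUri);
-    }
-}
-```
-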
-## Next steps
-
-- [Customize Language model using APIs](customize-language-model-with-api.md)
-- [Customize Brands model using APIs](customize-brands-model-with-api.md)
-- [Customize Person model using APIs](customize-person-model-with-api.md)
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
- Title: Azure AI Video Indexer release notes | Microsoft Docs
-description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure AI Video Indexer.
-- Previously updated : 09/27/2023----
-# Azure AI Video Indexer release notes
-
-Revisit this page to view the latest updates.
-
-To stay up-to-date with the most recent Azure AI Video Indexer developments, this article provides you with information about:
-
-* The latest releases
-* Known issues
-* Bug fixes
-* Deprecated functionality
-
-## September 2023
-
-### Changes related to AMS retirement
-As a result of the June 30, 2024 [retirement of Azure Media Services (AMS)](/azure/media-services/latest/azure-media-services-retirement), Video Indexer has announced a number of related retirements. They include the June 30, 2024 retirement of Video Indexer Classic accounts, API changes, and the end of support for adaptive bitrate. For full details, see [Changes related to Azure Media Services (AMS) retirement](https://aka.ms/vi-ams-related-changes).
-
-## July 2023
-
-### Redact faces with Azure Video Indexer API
-
-You can now redact faces with Azure Video Indexer API. For more information see [Redact faces with Azure Video Indexer API](face-redaction-with-api.md).
-
-### API request limit increase
-
-Video Indexer has increased the API request limit from 60 requests per minute to 120.
-
-## June 2023
-
-### FAQ - following the Azure Media Services retirement announcement
-
-For more information, see [AMS deprecation FAQ](ams-deprecation-faq.yml).
-
-## May 2023
-
-### API updates
-
-We're introducing a change in behavior that may require a change to your existing query logic. The change is in the **List** and **Search** APIs; the table that follows details the differences between the current and the new behavior. You may need to update your code to utilize the [new APIs](https://api-portal.videoindexer.ai/).
-
-|API |Current|New|The update|
-|---|---|---|---|
-|List Videos|• List all videos/projects according to the 'IsBase' boolean parameter. If 'IsBase' isn't defined, list both.<br/>• Returns videos in all states (In progress/Processed/Failed). |• The List Videos API returns only videos (with paging) in all states.<br/>• The List Projects API returns only projects (with paging).|• The List Videos API was divided into two new APIs: **List Videos** and **List Projects**.<br/>• The 'IsBase' parameter no longer has a meaning. |
-|Search Videos|• Search all videos/projects according to the 'IsBase' boolean parameter. If 'IsBase' isn't defined, search both.<br/>• Search videos in all states (In progress/Processed/Failed). |Search only processed videos.|• The Search Videos API searches only videos and not projects.<br/>• The 'IsBase' parameter no longer has a meaning.<br/>• The Search Videos API searches only Processed videos (not Failed/InProgress ones).|
-
-### Support for HTTP/2
-
-Added support for HTTP/2 for our [Data Plane API](https://api-portal.videoindexer.ai/). [HTTP/2](https://en.wikipedia.org/wiki/HTTP/2) offers several benefits over HTTP/1.1, which continues to be supported for backwards compatibility. The main benefits of HTTP/2 are increased performance, better reliability, and reduced system resource requirements compared to HTTP/1.1. With this change, we now support HTTP/2 for both the Video Indexer [Portal](https://videoindexer.ai/) and our Data Plane API. We advise you to update your code to take advantage of this change.
-
-### Topics insight improvements
-
-We now support all five levels of IPTC ontology.
-
-## April 2023
-
-### Resource Health support
-
-Azure AI Video Indexer is now integrated with Azure Resource Health enabling you to see the health and availability of each of your Azure AI Video Indexer resources. Azure Resource Health also helps with diagnosing and solving problems and you can set alerts to be notified whenever your resources are affected. For more information, see [Azure Resource Health overview](../service-health/resource-health-overview.md).
-
-### The animation character recognition model has been retired
-
-The **animation character recognition** model has been retired on March 1, 2023. For any related issues, [open a support ticket via the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
-
-### Excluding sensitive AI models
-
-Following the Microsoft Responsible AI agenda, Azure AI Video Indexer now allows you to exclude specific AI models when indexing media files. The list of sensitive AI models includes: face detection, observed people, emotions, labels identification.
-
-This feature is currently available through the API, and is available in all presets except the Advanced preset.
-
-### Observed people tracing improvements
-
-For more information, see [Considerations and limitations when choosing a use case](observed-matched-people.md#considerations-and-limitations-when-choosing-a-use-case).
-
-## March 2023
-
-### Support for storage behind firewall
-
-It's good practice to lock down storage accounts and disable public access to enhance or comply with enterprise security policy. Video Indexer can now access storage accounts that aren't publicly accessible by using the [Azure Trusted Service](/azure/storage/common/storage-network-security?tabs=azure-portal#trusted-access-based-on-a-managed-identity) exception with managed identities. You can read more about how to set it up in our [how-to](storage-behind-firewall.md).
-
-### New custom speech and pronunciation training
-
-Azure AI Video Indexer has added a new custom speech model experience. The experience includes the ability to use custom pronunciation datasets to improve recognition of mispronounced words, phrases, or names. The custom models can be used to improve the transcription quality of content with industry-specific terminology. To learn more, see [Customize speech model overview](customize-speech-model-overview.md).
-
-### Observed people quality improvements
-
-Observed people now supports people who are sitting, in addition to the existing support for people who are standing or walking. This improvement makes the observed people model more versatile and suitable for a wider range of use cases. We have also improved the model's re-identification and grouping algorithms by 50%. The model can now more accurately track and group people across multiple camera views.
-
-### Observed people indexing duration optimization
-
-We have optimized the memory usage of the observed people model, resulting in a 60% reduction in indexing duration when using the advanced video analysis preset. You can now process your video footage more efficiently and get results faster.
-
-## February 2023
-
-### Pricing
-
-On January 01, 2023 we introduced the Advanced Audio and Video SKU for Advanced presets. This was done in order to report the use of each preset, Basic, Standard & Advanced, with their ow