Updates from: 10/17/2023 01:13:54
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/error-codes.md
The following errors can be returned by the Azure Active Directory B2C service.
| `AADB2C99013` | The supplied grant_type [{0}] and token_type [{1}] combination is not supported. |
| `AADB2C99015` | Profile '{0}' in policy '{1}' in tenant '{2}' is missing all InputClaims required for resource owner password credential flow. | [Create a resource owner policy](add-ropc-policy.md#create-a-resource-owner-policy) |
|`AADB2C99002`| User doesn't exist. Please sign up before you can sign in. |
+| `AADB2C99027` | Policy '{0}' does not contain a AuthorizationTechnicalProfile with a corresponding ClientAssertionType. | [Client credentials flow](client-credentials-grant-flow.md) |
active-directory-b2c Saml Service Provider Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/saml-service-provider-options.md
Previously updated : 10/05/2021 Last updated : 10/16/2023
Replace the following values:
You can use a complete sample policy for testing with the SAML test app:
-1. Download the [SAML-SP-initiated login sample policy](https://github.com/azure-ad-b2c/saml-sp/tree/master/policy/SAML-SP-Initiated).
+1. Download the [SAML-SP-initiated login sample policy](https://github.com/azure-ad-b2c/saml-sp/tree/master/policy/SAML-IdP-Initiated-LocalAccounts).
1. Update `TenantId` to match your tenant name. This article uses the example *contoso.b2clogin.com*.
1. Keep the policy name *B2C_1A_signup_signin_saml*.
active-directory Concept Password Ban Bad Combined Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-password-ban-bad-combined-policy.md
Previously updated : 04/02/2023 Last updated : 10/16/2023
This topic explains details about the password policy criteria checked by Micros
A password policy is applied to all user and admin accounts that are created and managed directly in Microsoft Entra ID. You can [ban weak passwords](concept-password-ban-bad.md) and define parameters to [lock out an account](howto-password-smart-lockout.md) after repeated bad password attempts. Other password policy settings can't be modified.
-The Microsoft Entra password policy doesn't apply to user accounts synchronized from an on-premises AD DS environment using Microsoft Entra Connect unless you enable EnforceCloudPasswordPolicyForPasswordSyncedUsers.
+The Microsoft Entra password policy doesn't apply to user accounts synchronized from an on-premises AD DS environment using Microsoft Entra Connect unless you enable EnforceCloudPasswordPolicyForPasswordSyncedUsers. If EnforceCloudPasswordPolicyForPasswordSyncedUsers and password writeback are enabled, Microsoft Entra password expiration policy applies, but the on-premises password policy takes precedence for length, complexity, and so on.
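As a point of reference (a sketch, not part of the documented change), the sync feature named above is typically toggled through the tenant's directory synchronization settings. The Microsoft Graph call below assumes the `cloudPasswordPolicyForPasswordSyncedUsersEnabled` feature flag and a directory synchronization write permission; verify both against the onPremisesDirectorySynchronization reference before relying on it.

```shell
# Hypothetical sketch: enable EnforceCloudPasswordPolicyForPasswordSyncedUsers
# through Microsoft Graph. <SYNC_ID> comes from GET /directory/onPremisesSynchronization,
# and <ACCESS_TOKEN> is a token with a directory synchronization write permission.
# The feature-flag name below is an assumption to verify against the Graph reference.
curl --location --request PATCH \
  'https://graph.microsoft.com/v1.0/directory/onPremisesSynchronization/<SYNC_ID>' \
  --header 'Authorization: Bearer <ACCESS_TOKEN>' \
  --header 'Content-Type: application/json' \
  --data-raw '{ "features": { "cloudPasswordPolicyForPasswordSyncedUsersEnabled": true } }'
```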
The following Microsoft Entra password policy requirements apply for all passwords that are created, changed, or reset in Microsoft Entra ID. Requirements are applied during user provisioning, password change, and password reset flows. You can't change these settings except as noted.
active-directory How To Configure Aws Iam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-configure-aws-iam.md
Previously updated : 06/07/2023 Last updated : 10/16/2023
-# Configure AWS IAM Identity Center as an identity provider
+# Configure AWS IAM Identity Center as an identity provider (preview)
If you're an Amazon Web Services (AWS) customer who uses the AWS IAM Identity Center, you can configure the Identity Center as an identity provider in Permissions Management. Configuring your AWS IAM Identity Center information allows you to receive more accurate data for your identities in Permissions Management.
If you're an Amazon Web Services (AWS) customer who uses the AWS IAM Identity Ce
1. If the **Data Collectors** dashboard isn't displayed when Permissions Management launches, select **Settings** (gear icon), and then select the **Data Collectors** subtab.
-2. On the **Data Collectors** dashboard, select **AWS**, and then select **Create Configuration**. If a Data Collector already exists in your AWS account and you want to add AWS IAM integration, do the following:
+2. On the **Data Collectors** dashboard, select **AWS**, and then select **Create Configuration**. If a Data Collector already exists in your AWS account and you want to add AWS IAM integration, then:
    - Select the Data Collector for which you want to configure AWS IAM.
    - Click on the ellipsis next to the **Authorization Systems Status**.
    - Select **Integrate Identity Provider**.
If you're an Amazon Web Services (AWS) customer who uses the AWS IAM Identity Ce
    - Your **AWS Management Account Role**
5. Select **Launch Management Account Template**. The template opens in a new window.
-6. If the Management Account stack is created with the Cloud Formation Template as part of the previous onboarding steps, update the stack by running ``EnableSSO`` as true. This creates a new stack when running the Management Account Template.
+6. If the Management Account stack is created with the Cloud Formation Template as part of the previous onboarding steps, update the stack by setting ``EnableSSO`` to true. This update creates a new stack when running the Management Account Template.
-The template execution attaches the AWS managed policy ``AWSSSOReadOnly`` and the newly created custom policy ``SSOPolicy`` to the AWS IAM role that allows Microsoft Entra Permissions Management to collect organizational information. The following details are requested in the template. All fields are pre-populated, and you can edit the data as you need:
-- **Stack name** – This is the name of the AWS stack for creating the required AWS resources for Permissions Management to collect organizational information. The default value is ``mciem-org-<tenant-id>``.
+The template execution attaches the AWS managed policy ``AWSSSOReadOnly`` and the newly created custom policy ``SSOPolicy`` to the AWS IAM role that allows Microsoft Entra Permissions Management to collect organizational information. The following details are requested in the template. All fields are prepopulated, and you can edit the data as you need:
+- **Stack name** – The Stack name is the name of the AWS stack for creating the required AWS resources for Permissions Management to collect organizational information. The default value is ``mciem-org-<tenant-id>``.
- **CFT Parameters**
  - **OIDC Provider Role Name** – Name of the IAM Role OIDC Provider that can assume the role. The default value is the OIDC account role (as entered in Permissions Management).
- - **Org Account Role Name** - Name of the IAM Role. The default value is pre-populated with the Management account role name (as entered in Microsoft Entra PM).
+ - **Org Account Role Name** - Name of the IAM Role. The default value is prepopulated with the Management account role name (as entered in Microsoft Entra PM).
  - **true** – Enables AWS SSO. The default value is ``true`` when the template is launched from the Configure Identity Provider (IdP) page, otherwise the default is ``false``.
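If you'd rather update the existing stack from the command line than relaunch the template, a minimal sketch with the AWS CLI follows; it assumes the default stack name shown above and that the remaining template parameters should keep their current values.

```shell
# Sketch: flip EnableSSO to true on the existing Management Account stack.
# Assumes the default stack name; the other template parameters may need
# ParameterKey=<name>,UsePreviousValue=true entries, and the capability flag
# depends on what the template actually creates in your account.
aws cloudformation update-stack \
  --stack-name "mciem-org-<tenant-id>" \
  --use-previous-template \
  --parameters ParameterKey=EnableSSO,ParameterValue=true \
  --capabilities CAPABILITY_NAMED_IAM
```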
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md
The following client apps support this setting. This list isn't exhaustive and i
- Adobe Acrobat Reader mobile app
- iAnnotate for Office 365
- Microsoft Cortana
+- Microsoft Dynamics 365 for Phones
+- Microsoft Dynamics 365 Sales
- Microsoft Edge
- Microsoft Excel
- Microsoft Power Automate
The following client apps support this setting. This list isn't exhaustive and i
- Microsoft To Do
- Microsoft Word
- Microsoft Whiteboard Services
-- Microsoft Field Service (Dynamics 365)
- MultiLine for Intune
- Nine Mail - Email and Calendar
- Notate for Intune
active-directory Location Condition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/location-condition.md
Previously updated : 10/06/2023 Last updated : 10/16/2023
Multiple Conditional Access policies may prompt users for their GPS location bef
> [!IMPORTANT]
> Users may receive prompts every hour letting them know that Microsoft Entra ID is checking their location in the Authenticator app. The preview should only be used to protect very sensitive apps where this behavior is acceptable or where access needs to be restricted to a specific country/region.
+#### Deny requests with modified location
+Users can modify the location reported by iOS and Android devices. As a result, Microsoft Authenticator is updating its security baseline for location-based Conditional Access policies. Authenticator will deny authentications where the user appears to be using a location different from the actual GPS location of the mobile device where Authenticator is installed.
+
+In the November 2023 release of Authenticator, users who modify the location of their device will get a denial message in Authenticator when they try location-based authentication. Beginning January 2024, users who run older Authenticator versions will be blocked from location-based authentication:
+
+- Authenticator version 6.2309.6329 or earlier on Android
+- Authenticator version 6.7.16 or earlier on iOS
+
+To find which users run older versions of Authenticator, use [Microsoft Graph APIs](/graph/api/resources/microsoftauthenticatorauthenticationmethod#properties).
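As a rough illustration (not from the article), the linked Graph resource can be queried per user to read the registered Authenticator app version; the `phoneAppVersion` property and the permission named below are assumptions to confirm against that reference.

```shell
# Sketch: list a user's Microsoft Authenticator registrations and check the
# reported app version against the cut-off versions listed above.
# <ACCESS_TOKEN> needs an authentication-methods read permission (for example
# UserAuthenticationMethod.Read.All); <USER_ID> is an object ID or UPN.
curl --location --request GET \
  'https://graph.microsoft.com/v1.0/users/<USER_ID>/authentication/microsoftAuthenticatorMethods' \
  --header 'Authorization: Bearer <ACCESS_TOKEN>'
# Each returned method carries displayName, deviceTag, and phoneAppVersion;
# flag versions at or below 6.2309.6329 (Android) or 6.7.16 (iOS).
```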
#### Include unknown countries/regions
active-directory Supported Accounts Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/supported-accounts-validation.md
See the following table for the validation differences of various properties for
| Application ID URI (`identifierURIs`) | Must be unique in the tenant <br><br> `urn://` schemes are supported <br><br> Wildcards aren't supported <br><br> Query strings and fragments are supported <br><br> Maximum length of 255 characters <br><br> No limit\* on number of identifierURIs | Must be globally unique <br><br> `urn://` schemes are supported <br><br> Wildcards aren't supported <br><br> Query strings and fragments are supported <br><br> Maximum length of 255 characters <br><br> No limit\* on number of identifierURIs | Must be globally unique <br><br> `urn://` schemes aren't supported <br><br> Wildcards, fragments, and query strings aren't supported <br><br> Maximum length of 120 characters <br><br> Maximum of 50 identifierURIs |
| National clouds | Supported | Supported | Not supported |
| Certificates (`keyCredentials`) | Symmetric signing key | Symmetric signing key | Encryption and asymmetric signing key |
-| Client secrets (`passwordCredentials`) | No limit\* | No limit\* | If liveSDK is enabled: Maximum of two client secrets |
+| Client secrets (`passwordCredentials`) | No limit\* | No limit\* | Maximum of two client secrets |
| Redirect URIs (`replyURLs`) | See [Redirect URI/reply URL restrictions and limitations](reply-url.md) for more info. | | |
| API permissions (`requiredResourceAccess`) | No more than 50 total APIs (resource apps), with no more than 10 APIs from other tenants. No more than 400 permissions total across all APIs. | No more than 50 total APIs (resource apps), with no more than 10 APIs from other tenants. No more than 400 permissions total across all APIs. | No more than 50 total APIs (resource apps), with no more than 10 APIs from other tenants. No more than 200 permissions total across all APIs. Maximum of 30 permissions per resource (for example, Microsoft Graph). |
| Scopes defined by this API (`oauth2Permissions`) | Maximum scope name length of 120 characters <br><br> No limit\* on the number of scopes defined | Maximum scope name length of 120 characters <br><br> No limit\* on the number of scopes defined | Maximum scope name length of 40 characters <br><br> Maximum of 100 scopes defined |
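For orientation (a hedged sketch, not from the article), these limits correspond to properties you set when registering an application, for example through the Microsoft Graph `applications` endpoint; the names and URIs below are illustrative placeholders only.

```shell
# Sketch: create a single-tenant app registration and set properties the table
# above constrains (signInAudience, identifierUris, redirect URIs). Placeholder
# values; verify the payload shape against the Microsoft Graph applications
# reference before relying on it.
curl --location --request POST 'https://graph.microsoft.com/v1.0/applications' \
  --header 'Authorization: Bearer <ACCESS_TOKEN>' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "displayName": "contoso-sample-app",
    "signInAudience": "AzureADMyOrg",
    "identifierUris": ["https://contoso.onmicrosoft.com/contoso-sample-app"],
    "web": { "redirectUris": ["https://localhost/signin-oidc"] }
  }'
```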
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
When [managing licenses in the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Entra ID P1_USGOV_GCCHIGH | AAD_PREMIUM_USGOV_GCCHIGH | de597797-22fb-4d65-a9fe-b7dbe8893914 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Microsoft Entra ID P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d) |
| Microsoft Entra ID P2 | AAD_PREMIUM_P2 | 84a661c4-e949-4bd2-a560-ed7766fcaf2b | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0) | Microsoft Entra ID P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Microsoft Entra ID P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>CLOUD APP SECURITY DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0) |
| Azure Information Protection Plan 1 | RIGHTSMANAGEMENT | c52ea49f-fe5d-4e95-93ba-1de91d380f89 | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3) | AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Microsoft Entra RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90) |
+| Azure Information Protection Plan 1 | RIGHTSMANAGEMENT_CE | a0e6a48f-b056-4037-af70-b9ac53504551 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90) |
| Azure Information Protection Premium P1 for Government | RIGHTSMANAGEMENT_CE_GOV | 78362de1-6942-4bb8-83a1-a32aa67e6e2c | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>RMS_S_PREMIUM_GOV (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Azure Information Protection Premium P1 for GCC (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>Azure Rights Management (6a76346d-5d6e-4051-9fe3-ed3f312b5597) |
| Azure Information Protection Premium P1_USGOV_GCCHIGH | RIGHTSMANAGEMENT_CE_USGOV_GCCHIGH | c57afa2a-d468-46c4-9a90-f86cb1b3c54a | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90) |
| Business Apps (free) | SMB_APPS | 90d8b3f8-712e-4f7b-aa1e-62e7ae6cbe96 | DYN365BC_MS_INVOICING (39b5c996-467e-4e60-bd62-46066f572726)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2) | Microsoft Invoicing (39b5c996-467e-4e60-bd62-46066f572726)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2) |
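To check which of these SKUs a tenant actually holds, a hedged sketch using the Microsoft Graph `subscribedSkus` endpoint follows; the `skuId` and `skuPartNumber` values it returns map to the GUID and string ID columns above, and the permission named in the comment is an assumption to verify.

```shell
# Sketch: list the SKUs purchased by the tenant together with their service plans.
# skuId / skuPartNumber in the response correspond to the GUID and string ID
# columns in the table above. <ACCESS_TOKEN> is assumed to carry a directory
# read permission such as Organization.Read.All.
curl --location --request GET 'https://graph.microsoft.com/v1.0/subscribedSkus' \
  --header 'Authorization: Bearer <ACCESS_TOKEN>'
```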
active-directory Howto Identity Protection Remediate Unblock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-remediate-unblock.md
Organizations who have enabled [password hash synchronization](../hybrid/connect
This configuration provides organizations two new capabilities:

-- Risky hybrid users can self-remediate without administrators intervention. When a password is changed on-premises, user risk is now automatically remediated within Entra ID Protection, bringing the user to a safe state.
+- Risky hybrid users can self-remediate without administrator intervention. When a password is changed on-premises, user risk is now automatically remediated within Entra ID Protection, resetting the current user risk state.
- Organizations can proactively deploy [user risk policies that require password changes](howto-identity-protection-configure-risk-policies.md#user-risk-policy-in-conditional-access) to confidently protect their hybrid users. This option strengthens your organization's security posture and simplifies security management by ensuring that user risks are promptly addressed, even in complex hybrid environments.

:::image type="content" source="media/howto-identity-protection-remediate-unblock/allow-on-premises-password-reset-user-risk.png" alt-text="Screenshot showing the location of the Allow on-premises password change to reset user risk checkbox." lightbox="media/howto-identity-protection-remediate-unblock/allow-on-premises-password-reset-user-risk.png":::
active-directory Concept Diagnostic Settings Logs Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-diagnostic-settings-logs-options.md
Previously updated : 10/02/2023 Last updated : 10/16/2023
The `EnrichedOffice365AuditLogs` logs are associated with the enriched logs you
### Microsoft Graph activity logs
-The `MicrosoftGraphActivityLogs` is associated with a feature that's still in preview, but may be visible in the Microsoft Entra admin center. These logs provide administrators full visibility into all HTTP requests accessing your tenant's resources through the Microsoft Graph API. You can use these logs to identify activities that a compromised user account conducted in your tenant or to investigate problematic or unexpected behaviors for client applications, such as extreme call volumes. Route these logs to the same Log Analytics workspace with `SignInLogs` to cross-reference details of token requests for sign-in logs.
-
-The feature is currently in private preview. For more information, see [Access Microsoft Graph activity logs (preview)](/graph/microsoft-graph-activity-logs-overview).
+The `MicrosoftGraphActivityLogs` provide administrators full visibility into all HTTP requests accessing your tenant's resources through the Microsoft Graph API. You can use these logs to identify activities that a compromised user account conducted in your tenant or to investigate problematic or unexpected behaviors for client applications, such as extreme call volumes. Route these logs to the same Log Analytics workspace with `SignInLogs` to cross-reference details of token requests for sign-in logs. For more information, see [Access Microsoft Graph activity logs (preview)](/graph/microsoft-graph-activity-logs-overview).
### Network access traffic logs
active-directory Arborxr Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/arborxr-tutorial.md
+
+ Title: Microsoft Entra SSO integration with ArborXR
+description: Learn how to configure single sign-on between Microsoft Entra ID and ArborXR.
++++++++ Last updated : 10/03/2023++++
+# Microsoft Entra SSO integration with ArborXR
+
+In this tutorial, you'll learn how to integrate ArborXR with Microsoft Entra ID. When you integrate ArborXR with Microsoft Entra ID, you can:
+
+* Control in Microsoft Entra ID who has access to ArborXR.
+* Enable your users to be automatically signed-in to ArborXR with their Microsoft Entra accounts.
+* Manage your accounts in one central location.
+
+## Prerequisites
+
+To integrate Microsoft Entra ID with ArborXR, you need:
+
+* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* ArborXR single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Microsoft Entra SSO in a test environment.
+
+* ArborXR supports **SP** initiated SSO.
+
+## Add ArborXR from the gallery
+
+To configure the integration of ArborXR into Microsoft Entra ID, you need to add ArborXR from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**.
+1. In the **Add from the gallery** section, type **ArborXR** in the search box.
+1. Select **ArborXR** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Microsoft Entra SSO for ArborXR
+
+Configure and test Microsoft Entra SSO with ArborXR using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in ArborXR.
+
+To configure and test Microsoft Entra SSO with ArborXR, perform the following steps:
+
+1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature.
+ 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon.
+ 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on.
+1. **[Configure ArborXR SSO](#configure-arborxr-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create ArborXR test user](#create-arborxr-test-user)** - to have a counterpart of B.Simon in ArborXR that is linked to the Microsoft Entra ID representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Microsoft Entra SSO
+
+Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **ArborXR** > **Single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `https://api.xrdm.app/auth/realms/<INSTANCE>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://api.xrdm.app/auth/realms/<INSTANCE>/broker/SAML2/endpoint`
+
+ c. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://api.xrdm.app/auth/realms/<INSTANCE>`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [ArborXR support team](mailto:support@arborxr.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+### Create a Microsoft Entra ID test user
+
+In this section, you'll create a test user in the Microsoft Entra admin center called B.Simon.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator).
+1. Browse to **Identity** > **Users** > **All users**.
+1. Select **New user** > **Create new user**, at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Display name** field, enter `B.Simon`.
+ 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Select **Review + create**.
+1. Select **Create**.
+
+### Assign the Microsoft Entra ID test user
+
+In this section, you'll enable B.Simon to use Microsoft Entra single sign-on by granting access to ArborXR.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **ArborXR**.
+1. In the app's overview page, select **Users and groups**.
+1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog.
+ 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+ 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+ 1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure ArborXR SSO
+
+1. Log in to ArborXR company site as an administrator.
+
+1. Go to **Settings** > **Single Sign-On** and click **SAML**.
+
+1. In the **Hosted IdP Metadata URL** textbox, paste the **App Federation Metadata Url**, which you have copied from the Microsoft Entra admin center.
+
+ ![Screenshot shows settings of the configuration.](./media/arborxr-tutorial/settings.png "Account")
+
+1. Click **Apply Changes**.
+
+### Create ArborXR test user
+
+1. In a different web browser window, sign into ArborXR website as an administrator.
+
+1. Navigate to **Settings** > **Users** and click **Add Users**.
+
+ ![Screenshot shows how to create users in application.](./media/arborxr-tutorial/create.png "Users")
+
+1. In the **Add Users** section, perform the following steps:
+
+ ![Screenshot shows how to create new users in the page.](./media/arborxr-tutorial/details.png "Creating Users")
+
+ 1. Select **Role** from the drop-down.
+
+ 1. Enter a valid email address in the **Invite via email** textbox.
+
+ 1. Click **Invite**.
+
+## Test SSO
+
+In this section, you test your Microsoft Entra single sign-on configuration with the following options.
+
+* Click on **Test this application** in Microsoft Entra admin center. This will redirect to ArborXR Sign-on URL where you can initiate the login flow.
+
+* Go to ArborXR Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the ArborXR tile in the My Apps, this will redirect to ArborXR Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
++
+## Next steps
+
+Once you configure ArborXR you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Webxt Recognition Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/webxt-recognition-tutorial.md
+
+ Title: Microsoft Entra SSO integration with WebXT Recognition
+description: Learn how to configure single sign-on between Microsoft Entra ID and WebXT Recognition.
++++++++ Last updated : 10/10/2023++++
+# Microsoft Entra SSO integration with WebXT Recognition
+
+In this tutorial, you'll learn how to integrate WebXT Recognition with Microsoft Entra ID. When you integrate WebXT Recognition with Microsoft Entra ID, you can:
+
+* Control in Microsoft Entra ID who has access to WebXT Recognition.
+* Enable your users to be automatically signed-in to WebXT Recognition with their Microsoft Entra accounts.
+* Manage your accounts in one central location.
+
+## Prerequisites
+
+To integrate Microsoft Entra ID with WebXT Recognition, you need:
+
+* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* WebXT Recognition single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Microsoft Entra SSO in a test environment.
+
+* WebXT Recognition supports **IDP** initiated SSO.
+
+## Add WebXT Recognition from the gallery
+
+To configure the integration of WebXT Recognition into Microsoft Entra ID, you need to add WebXT Recognition from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**.
+1. In the **Add from the gallery** section, type **WebXT Recognition** in the search box.
+1. Select **WebXT Recognition** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Microsoft Entra SSO for WebXT Recognition
+
+Configure and test Microsoft Entra SSO with WebXT Recognition using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in WebXT Recognition.
+
+To configure and test Microsoft Entra SSO with WebXT Recognition, perform the following steps:
+
+1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature.
+ 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon.
+ 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on.
+1. **[Configure WebXT Recognition SSO](#configure-webxt-recognition-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create WebXT Recognition test user](#create-webxt-recognition-test-user)** - to have a counterpart of B.Simon in WebXT Recognition that is linked to the Microsoft Entra ID representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Microsoft Entra SSO
+
+Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **WebXT Recognition** > **Single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a value using the following pattern:
+ `<webxt>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://webxtrecognition.<DOMAIN>.com/<INSTANCE>`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [WebXT Recognition support team](mailto:webxtrecognition@biworldwide.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center.
+
+1. The WebXT Recognition application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+
+1. In addition to the above, the WebXT Recognition application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also prepopulated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | employeeid | user.employeeid |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up WebXT Recognition** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration URLs.](common/copy-configuration-urls.png "Metadata")
+
+### Create a Microsoft Entra ID test user
+
+In this section, you'll create a test user in the Microsoft Entra admin center called B.Simon.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator).
+1. Browse to **Identity** > **Users** > **All users**.
+1. Select **New user** > **Create new user**, at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Display name** field, enter `B.Simon`.
+ 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Select **Review + create**.
+1. Select **Create**.
+
+### Assign the Microsoft Entra ID test user
+
+In this section, you'll enable B.Simon to use Microsoft Entra single sign-on by granting access to WebXT Recognition.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **WebXT Recognition**.
+1. In the app's overview page, select **Users and groups**.
+1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog.
+ 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+ 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+ 1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure WebXT Recognition SSO
+
+To configure single sign-on on the **WebXT Recognition** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Microsoft Entra admin center to the [WebXT Recognition support team](mailto:webxtrecognition@biworldwide.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create WebXT Recognition test user
+
+In this section, you create a user called B.Simon in WebXT Recognition. Work with [WebXT Recognition support team](mailto:webxtrecognition@biworldwide.com) to add the users in the WebXT Recognition platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Microsoft Entra single sign-on configuration with the following options.
+
+* Click on Test this application in Microsoft Entra admin center and you should be automatically signed in to the WebXT Recognition for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the WebXT Recognition tile in the My Apps, you should be automatically signed in to the WebXT Recognition for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure WebXT Recognition you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
ai-services Cognitive Services Limited Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-limited-access.md
The following services are Limited Access:
- [Speaker Recognition](/legal/cognitive-services/speech-service/speaker-recognition/limited-access-speaker-recognition?context=/azure/ai-services/speech-service/context/context): All features
- [Face API](/legal/cognitive-services/computer-vision/limited-access-identity?context=/azure/ai-services/computer-vision/context/context): Identify and Verify features, face ID property
- [Azure AI Vision](/legal/cognitive-services/computer-vision/limited-access?context=/azure/ai-services/computer-vision/context/context): Celebrity Recognition feature
-- [Azure AI Video Indexer](../azure-video-indexer/limited-access-features.md): Celebrity Recognition and Face Identify features
+- [Azure AI Video Indexer](/azure/azure-video-indexer/limited-access-features): Celebrity Recognition and Face Identify features
+- [Azure OpenAI](/legal/cognitive-services/openai/limited-access): Azure OpenAI Service, modified abuse monitoring, and modified content filters

Features of these services that aren't listed above are available without registration.
ai-services Harm Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/concepts/harm-categories.md
Classification can be multi-labeled. For example, when a text sample goes throug
Every harm category the service applies also comes with a severity level rating. The severity level is meant to indicate the severity of the consequences of showing the flagged content.
-| Severity Levels | Label |
+| 4 Severity Levels | 8 Severity Levels | Label |
| -- | -- | -- |
-|Severity Level 0 – Safe | Content may be related to violence, self-harm, sexual or hate categories but the terms are used in general, journalistic, scientific, medical, and similar professional contexts which are appropriate for most audiences. |
-|Severity Level 2 – Low | Content that expresses prejudiced, judgmental, or opinionated views, includes offensive use of language, stereotyping, use cases exploring a fictional world (e.g., gaming, literature) and depictions at low intensity. |
-|Severity Level 4 – Medium| Content that uses offensive, insulting, mocking, intimidating, or demeaning language towards specific identity groups, includes depictions of seeking and executing harmful instructions, fantasies, glorification, promotion of harm at medium intensity. |
-|Severity Level 6 – High | Content that displays explicit and severe harmful instructions, actions, damage, or abuse, includes endorsement, glorification, promotion of severe harmful acts, extreme or illegal forms of harm, radicalization, and non-consensual power exchange or abuse. |
+|Severity Level 0 – Safe | Severity Level 0 and 1 – Safe |Content might be related to violence, self-harm, sexual or hate categories but the terms are used in general, journalistic, scientific, medical, and similar professional contexts which are appropriate for most audiences. |
+|Severity Level 2 – Low | Severity Level 2 and 3 – Low |Content that expresses prejudiced, judgmental, or opinionated views, includes offensive use of language, stereotyping, use cases exploring a fictional world (e.g., gaming, literature) and depictions at low intensity. |
+|Severity Level 4 – Medium| Severity Level 4 and 5 – Medium |Content that uses offensive, insulting, mocking, intimidating, or demeaning language towards specific identity groups, includes depictions of seeking and executing harmful instructions, fantasies, glorification, promotion of harm at medium intensity. |
+|Severity Level 6 – High | Severity Level 6-7 – High |Content that displays explicit and severe harmful instructions, actions, damage, or abuse, includes endorsement, glorification, promotion of severe harmful acts, extreme or illegal forms of harm, radicalization, and non-consensual power exchange or abuse. |
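For context beyond the table (a sketch, not part of this change): the GA `text:analyze` request chooses between the two scales with an `outputType` field. The `EightSeverityLevels` value below mirrors the `FourSeverityLevels` value shown later in this digest and should be verified against the Content Safety API reference.

```shell
# Sketch: ask text:analyze for the 8-level severity scale. The endpoint, key
# placeholder, and api-version follow the blocklist how-to later in this digest;
# "EightSeverityLevels" is an assumed counterpart of "FourSeverityLevels".
curl --location --request POST '<endpoint>/contentsafety/text:analyze?api-version=2023-10-01' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
--header 'Content-Type: application/json' \
--data-raw '{
  "text": "Sample text to classify",
  "outputType": "EightSeverityLevels"
}'
```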
## Next steps
ai-services Migrate To General Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/how-to/migrate-to-general-availability.md
+
+ Title: Migrate from Content Safety public preview to GA
+description: Learn how to upgrade your app from the public preview version of Azure AI Content Safety to the GA version.
+++++ Last updated : 09/25/2023+++
+# Migrate from Content Safety public preview to GA
+
+This guide shows you how to upgrade your existing code from the public preview version of Azure AI Content Safety to the GA version.
+
+## REST API calls
+
+In all API calls, be sure to change the _api-version_ parameter in your code:
+
+| old | new |
+|--|--|
+| `api-version=2023-04-30-preview` | `api-version=2023-10-01` |
+
+Note the following REST endpoint name changes:
+
+| Public preview term | GA term |
+|-||
+| **addBlockItems** | **addOrUpdateBlocklistItems** |
+| **blockItems** | **blocklistItems** |
+| **removeBlockItems** | **removeBlocklistItems** |
++
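Putting the version and endpoint changes together, a migrated `addBlockItems` call would look roughly like the following sketch (placeholders follow the conventions used elsewhere in this digest):

```shell
# Sketch: the old addBlockItems request rewritten for GA. Only the api-version
# value and the action segment of the URL change; the body uses the renamed
# blocklistItems field covered in the next section.
curl --location --request POST \
  '<endpoint>/contentsafety/text/blocklists/<your_list_name>:addOrUpdateBlocklistItems?api-version=2023-10-01' \
  --header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \
  --header 'Content-Type: application/json' \
  --data-raw '{ "blocklistItems": [ { "description": "string", "text": "bleed" } ] }'
```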
+## JSON fields
+
+The following JSON fields have been renamed. Be sure to change them when you send data to a REST call:
+
+| Public preview Term | GA Term |
+|-|-|
+| `blockItems` | `blocklistItems` |
+| `BlockItemId` | `blocklistItemId` |
+| `blockItemIds` | `blocklistItemIds` |
+| `blocklistMatchResults` | `blocklistsMatch` |
+| `breakByBlocklists` | `haltOnBlocklistHit` |
++
+## Return formats
+
+Some of the JSON return formats have changed. See the following updated JSON return examples.
+
+The **text:analyze** API call with category analysis:
+
+```json
+{
+ "categoriesAnalysis": [
+ {
+ "category": "Hate",
+ "severity": 2
+ },
+ {
+ "category": "SelfHarm",
+ "severity": 0
+ },
+ {
+ "category": "Sexual",
+ "severity": 0
+ },
+ {
+ "category": "Violence",
+ "severity": 0
+ }
+ ]
+}
+```
+
+The **text:analyze** API call with a blocklist:
+```json
+{
+ "blocklistsMatch": [
+ {
+ "blocklistName": "string",
+ "blocklistItemId": "string",
+ "blocklistItemText": "bleed"
+ }
+ ],
+ "categoriesAnalysis": [
+ {
+ "category": "Hate",
+ "severity": 0
+ }
+ ]
+}
+```
+
+The **addOrUpdateBlocklistItems** API call:
+
+```json
+{
+    "blocklistItems": [
+ {
+ "blocklistItemId": "string",
+ "description": "string",
+ "text": "bleed"
+ }
+ ]
+}
+```
+
+The **blocklistItems** API call (list all blocklist items):
+```json
+{
+ "values": [
+ {
+ "blocklistItemId": "string",
+ "description": "string",
+      "text": "bleed"
+ }
+ ]
+}
+```
+
+The **blocklistItems** API call with an item ID (retrieve a single item):
+
+```json
+{
+ "blocklistItemId": "string",
+ "description": "string",
+ "text": "string"
+}
+```
++
+## Next steps
+
+- [Quickstart: Analyze text content](../quickstart-text.md)
ai-services Use Blocklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/how-to/use-blocklist.md
Title: "Use blocklists for text moderation"
-description: Learn how to customize text moderation in Content Safety by using your own list of blockItems.
+description: Learn how to customize text moderation in Content Safety by using your own list of blocklistItems.
keywords:
# Use a blocklist

> [!CAUTION]
-> The sample data in this guide may contain offensive content. User discretion is advised.
+> The sample data in this guide might contain offensive content. User discretion is advised.
-The default AI classifiers are sufficient for most content moderation needs. However, you may need to screen for items that are specific to your use case.
+The default AI classifiers are sufficient for most content moderation needs. However, you might need to screen for items that are specific to your use case.
## Prerequisites

* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource </a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select the subscription you entered on the application form, select a resource group, supported region, and supported pricing tier. Then select **Create**.
+* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource </a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select the subscription you entered on the application form, and select a resource group, supported region, and supported pricing tier. Then select **Create**.
* The resource takes a few minutes to deploy. After it finishes, select **Go to resource**. In the left pane, under **Resource Management**, select **Subscription Key and Endpoint**. The endpoint and either of the keys are used to call APIs.
* One of the following installed:
  * [cURL](https://curl.haxx.se/) for REST API calls.
Copy the cURL command below to a text editor and make the following changes:
```shell
-curl --location --request PATCH '<endpoint>/contentsafety/text/blocklists/<your_list_name>?api-version=2023-04-30-preview' \
+curl --location --request PATCH '<endpoint>/contentsafety/text/blocklists/<your_list_name>?api-version=2023-10-01' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \ --header 'Content-Type: application/json' \ --data-raw '{
else if (createResponse.Status == 200)
1. Optionally replace `<description>` with a custom description.
1. Run the code.
-#### [Python](#tab/python)
+#### [Python](#tab/python)
Create a new Python script and open it in your preferred editor or IDE. Paste in the following code. ```python
except HttpResponseError as e:
-### Add blockItems to the list
+### Add blocklistItems to the list
> [!NOTE] >
-> There is a maximum limit of **10,000 terms** in total across all lists. You can add at most 100 blockItems in one request.
+> There is a maximum limit of **10,000 terms** in total across all lists. You can add at most 100 blocklistItems in one request.
#### [REST API](#tab/rest)
Copy the cURL command below to a text editor and make the following changes:
1. Replace `<enter_your_key_here>` with your key.
1. Replace `<your_list_name>` (in the URL) with the name you used in the list creation step.
1. Optionally replace the value of the `"description"` field with a custom description.
-1. Replace the value of the `"text"` field with the item you'd like to add to your blocklist. The maximum length of a blockItem is 128 characters.
+1. Replace the value of the `"text"` field with the item you'd like to add to your blocklist. The maximum length of a blocklistItem is 128 characters.
```shell
-curl --location --request POST '<endpoint>/contentsafety/text/blocklists/<your_list_name>:addBlockItems?api-version=2023-04-30-preview' \
+curl --location --request POST '<endpoint>/contentsafety/text/blocklists/<your_list_name>:addOrUpdateBlocklistItems?api-version=2023-10-01' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \ --header 'Content-Type: application/json' \data-raw '"blockItems": [{
+--data-raw '"blocklistItems": [{
"description": "string", "text": "bleed" }]' ``` > [!TIP]
-> You can add multiple blockItems in one API call. Make the request body a JSON array of data groups:
+> You can add multiple blocklistItems in one API call. Make the request body a JSON array of data groups:
> > ```json > [{
The response code should be `200`.
```console {
- "blockItemId": "string",
+"blocklistItems:"[
+ {
+ "blocklistItemId": "string",
"description": "string", "text": "bleed"
+ }
+ ]
}
```
#### [C#](#tab/csharp)
-
Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code.

```csharp
except HttpResponseError as e:
>
> There will be some delay after you add or edit a blockItem before it takes effect on text analysis, usually **not more than five minutes**.
++
### Analyze text with a blocklist
Copy the cURL command below to a text editor and make the following changes:
1. Optionally change the value of the `"text"` field to whatever text you want to analyze.

```shell
-curl --location --request POST '<endpoint>/contentsafety/text:analyze?api-version=2023-04-30-preview&' \
+curl --location --request POST '<endpoint>/contentsafety/text:analyze?api-version=2023-10-01&' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \ --header 'Content-Type: application/json' \ --data-raw '{
curl --location --request POST '<endpoint>/contentsafety/text:analyze?api-versio
"Violence" ], "blocklistNames":["<your_list_name>"],
- "breakByBlocklists": true
+ "haltOnBlocklistHit": false,
+ "outputType": "FourSeverityLevels"
}' ```
The JSON response will contain a `"blocklistMatchResults"` that indicates any ma
```json {
- "blocklistMatchResults": [
+ "blocklistsMatch": [
{ "blocklistName": "string",
- "blockItemID": "string",
- "blockItemText": "bleed",
- "offset": "28",
- "length": "5"
+ "blocklistItemId": "string",
+ "blocklistItemText": "bleed"
+ }
+ ],
+ "categoriesAnalysis": [
+ {
+ "category": "Hate",
+ "severity": 0
} ] }
except HttpResponseError as e:
This section contains more operations to help you manage and use the blocklist feature.
-### List all blockItems in a list
+### List all blocklistItems in a list
#### [REST API](#tab/rest)
Copy the cURL command below to a text editor and make the following changes:
1. Replace `<your_list_name>` (in the request URL) with the name you used in the list creation step.

```shell
-curl --location --request GET '<endpoint>/contentsafety/text/blocklists/<your_list_name>/blockItems?api-version=2023-04-30-preview' \
+curl --location --request GET '<endpoint>/contentsafety/text/blocklists/<your_list_name>/blocklistItems?api-version=2023-10-01' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \ --header 'Content-Type: application/json' ```
The status code should be `200` and the response body should look like this:
{ "values": [ {
- "blockItemId": "string",
+ "blocklistItemId": "string",
"description": "string", "text": "bleed", }
Copy the cURL command below to a text editor and make the following changes:
```shell
-curl --location --request GET '<endpoint>/contentsafety/text/blocklists?api-version=2023-04-30-preview' \
+curl --location --request GET '<endpoint>/contentsafety/text/blocklists?api-version=2023-10-01' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \ --header 'Content-Type: application/json' ```
Run the script.
-### Get a blocklist by name
+
+### Get a blocklist by blocklistName
#### [REST API](#tab/rest)
Copy the cURL command below to a text editor and make the following changes:
1. Replace `<your_list_name>` (in the request URL) with the name you used in the list creation step.

```shell
-cURL --location '<endpoint>contentsafety/text/blocklists/<your_list_name>?api-version=2023-04-30-preview' \
+cURL --location '<endpoint>contentsafety/text/blocklists/<your_list_name>?api-version=2023-10-01' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \ --data '' ```
The status code should be `200`. The JSON response looks like this:
```json {
- "blocklistName": "string",
- "description": "string"
+ "blocklistName": "string",
+ "description": "string"
} ```
except HttpResponseError as e:
-### Get a blockItem by blockItem ID
+### Get a blocklistItem by blocklistName and blocklistItemId
#### [REST API](#tab/rest)
Copy the cURL command below to a text editor and make the following changes:
1. Replace `<endpoint>` with your endpoint URL.
1. Replace `<enter_your_key_here>` with your key.
1. Replace `<your_list_name>` (in the request URL) with the name you used in the list creation step.
-1. Replace `<your_item_id>` with the ID value for the blockItem. This is the value of the `"blockItemId"` field from the **Add blockItem** or **Get all blockItems** API calls.
+1. Replace `<your_item_id>` with the ID value for the blocklistItem. This is the value of the `"blocklistItemId"` field from the **Add blocklistItem** or **Get all blocklistItems** API calls.
```shell
-cURL --location '<endpoint>contentsafety/text/blocklists/<your_list_name>/blockitems/<your_item_id>?api-version=2023-04-30-preview' \
+cURL --location '<endpoint>contentsafety/text/blocklists/<your_list_name>/blocklistItems/<your_item_id>?api-version=2023-10-01' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \ --data '' ```
The status code should be `200`. The JSON response looks like this:
```json {
- "blockItemId": "string",
- "description": "string",
- "text": "string"
+ "blocklistItemId": "string",
+ "description": "string",
+ "text": "string"
} ```
except HttpResponseError as e:
-### Remove a blockItem from a list
++
+### Remove blocklistItems from a blocklist.
> [!NOTE] >
Copy the cURL command below to a text editor and make the following changes:
1. Replace `<endpoint>` with your endpoint URL.
1. Replace `<enter_your_key_here>` with your key.
1. Replace `<your_list_name>` (in the request URL) with the name you used in the list creation step.
-1. Replace `<item_id>` with the ID value for the blockItem. This is the value of the `"blockItemId"` field from the **Add blockItem** or **Get all blockItems** API calls.
+1. Replace `<item_id>` with the ID value for the blocklistItem. This is the value of the `"blocklistItemId"` field from the **Add blocklistItem** or **Get all blocklistItems** API calls.
```shell
-curl --location --request DELETE '<endpoint>/contentsafety/text/blocklists/<your_list_name>/removeBlockItems?api-version=2023-04-30-preview' \
+curl --location --request DELETE '<endpoint>/contentsafety/text/blocklists/<your_list_name>:removeBlocklistItems?api-version=2023-10-01' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \ --header 'Content-Type: application/json'data-raw '"blockItemIds":[
+--data-raw '"blocklistItemIds":[
"<item_id>" ]' ``` > [!TIP]
-> You can delete multiple blockItems in one API call. Make the request body an array of `blockItemId` values.
+> You can delete multiple blocklistItems in one API call. Make the request body an array of `blocklistItemId` values.
The response code should be `204`. #### [C#](#tab/csharp) + Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code. ```csharp
Replace `<block_item_text>` with your block item text.
+
### Delete a list and all of its contents

> [!NOTE]
Copy the cURL command below to a text editor and make the following changes:
1. Replace `<your_list_name>` (in the request URL) with the name you used in the list creation step. ```shell
-curl --location --request DELETE '<endpoint>/contentsafety/text/blocklists/<your_list_name>?api-version=2023-04-30-preview' \
+curl --location --request DELETE '<endpoint>/contentsafety/text/blocklists/<your_list_name>?api-version=2023-10-01' \
--header 'Ocp-Apim-Subscription-Key: <enter_your_key_here>' \ --header 'Content-Type: application/json' \ ```
except HttpResponseError as e:
+ ## Next steps See the API reference documentation to learn more about the APIs used in this guide.
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/overview.md
Title: What is Azure AI Content Safety? (preview)
+ Title: What is Azure AI Content Safety?
description: Learn how to use Content Safety to track, flag, assess, and filter inappropriate material in user-generated content.
#Customer intent: As a developer of content management software, I want to find out whether Azure AI Content Safety is the right solution for my moderation needs.
-# What is Azure AI Content Safety? (preview)
+# What is Azure AI Content Safety?
[!INCLUDE [Azure AI services rebrand](../includes/rebrand-note.md)]
There are different types of analysis available from this service. The following
[Azure AI Content Safety Studio](https://contentsafety.cognitive.azure.com) is an online tool designed to handle potentially offensive, risky, or undesirable content using cutting-edge content moderation ML models. It provides templates and customized workflows, enabling users to choose and build their own content moderation system. Users can upload their own content or try it out with provided sample content.
-Content Safety Studio not only contains the out-of-the-box AI models, but also includes Microsoft's built-in terms blocklists to flag profanities and stay up to date with new trends. You can also upload your own blocklists to enhance the coverage of harmful content that's specific to your use case.
+Content Safety Studio not only contains out-of-the-box AI models but also includes Microsoft's built-in terms blocklists to flag profanities and stay up to date with new trends. You can also upload your own blocklists to enhance the coverage of harmful content that's specific to your use case.
Studio also lets you set up a moderation workflow, where you can continuously monitor and improve content moderation performance. It can help you meet content requirements from all kinds of industries like gaming, media, education, E-commerce, and more. Businesses can easily connect their services to the Studio and have their content moderated in real time, whether user-generated or AI-generated.
-All of these capabilities are handled by the Studio and its backend; customers don't need to worry about model development. You can onboard your data for quick validation and monitor your KPIs accordingly, like technical metrics (latency, accuracy, recall), or business metrics (block rate, block volume, category proportions, language proportions and more). With simple operations and configurations, customers can test different solutions quickly and find the best fit, instead of spending time experimenting with custom models or doing moderation manually.
+All of these capabilities are handled by the Studio and its backend; customers don't need to worry about model development. You can onboard your data for quick validation and monitor your KPIs accordingly, like technical metrics (latency, accuracy, recall), or business metrics (block rate, block volume, category proportions, language proportions, and more). With simple operations and configurations, customers can test different solutions quickly and find the best fit, instead of spending time experimenting with custom models or doing moderation manually.
> [!div class="nextstepaction"] > [Content Safety Studio](https://contentsafety.cognitive.azure.com)
For enhanced security, you can use Microsoft Entra ID or Managed Identity (MI) t
### Encryption of data at rest
-Learn how Content Safety handles the [encryption and decryption of your data](./how-to/encrypt-data-at-rest.md). Customer-managed keys (CMK), also known as Bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
+Learn how Content Safety handles the [encryption and decryption of your data](./how-to/encrypt-data-at-rest.md). Customer-managed keys (CMK), also known as Bring Your Own Key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
## Pricing
-Currently, the public preview features are available in the **F0 and S0** pricing tier.
+Currently, Content Safety has an **F0 and S0** pricing tier.
## Service limits ### Language support
-Content Safety models have been specifically trained and tested on the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. However, the service can work in many other languages, but the quality may vary. In all cases, you should do your own testing to ensure that it works for your application.
+Content Safety models have been specifically trained and tested in the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. However, the service can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application.
For more information, see [Language support](/azure/ai-services/content-safety/language-support).
-### Region / location
+### Region/location
-To use the preview APIs, you must create your Azure AI Content Safety resource in a supported region. Currently, the public preview features are available in the following Azure regions:
+To use the Content Safety APIs, you must create your Azure AI Content Safety resource in a supported region. Currently, it's available in the following Azure regions:
+- Australia East
+- Canada East
+- Central US
- East US
+- East US 2
+- France Central
+- Japan East
+- North Central US
+- South Central US
+- Switzerland North
+- UK South
- West Europe
+- West US 2
Feel free to [contact us](mailto:acm-team@microsoft.com) if you need other regions for your business.
If you get stuck, [email us](mailto:acm-team@microsoft.com) or use the feedback
Follow a quickstart to get started using Content Safety in your application. > [!div class="nextstepaction"]
-> [Content Safety quickstart](./quickstart-text.md)
+> [Content Safety quickstart](./quickstart-text.md)
ai-services Quickstart Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-image.md
Get started with the Content Studio, REST API, or client SDKs to do basic image
::: zone-end ++++++ ## Clean up resources
ai-services Quickstart Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-text.md
Get started with the Content Safety Studio, REST API, or client SDKs to do basic
::: zone-end ++++++ ## Clean up resources
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/whats-new.md
# What's new in Content Safety
-Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with new features, enhancements, fixes, and documentation updates.
+Learn what's new in the service. These items might be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with new features, enhancements, fixes, and documentation updates.
+
+## October 2023
+
+### Content Safety is generally available (GA)
+
+The Azure AI Content Safety service is now generally available as a cloud service.
+- The service is available in many more Azure regions. See the [Overview](./overview.md) for a list.
+- The return formats of the Analyze APIs have changed. See the [Quickstarts](./quickstart-text.md) for the latest examples.
+- The names and return formats of several APIs have changed. See the [Migration guide](./how-to/migrate-to-general-availability.md) for a full list of breaking changes. Other guides and quickstarts now reflect the GA version.
+
+### Content Safety Java and JavaScript SDKs
+
+The Azure AI Content Safety service is now available through Java and JavaScript SDKs. The SDKs are available on [Maven](https://central.sonatype.com/artifact/com.azure/azure-ai-contentsafety) and [npm](https://www.npmjs.com/package/@azure-rest/ai-content-safety). Follow a [quickstart](./quickstart-text.md) to get started.
## July 2023 ### Content Safety C# SDK
-The Azure AI Content Safety service is now available through a C# SDK. The SDK is available on [NuGet](https://www.nuget.org/packages/Azure.AI.ContentSafety/). Follow the [quickstart](./quickstart-text.md) to get started.
+The Azure AI Content Safety service is now available through a C# SDK. The SDK is available on [NuGet](https://www.nuget.org/packages/Azure.AI.ContentSafety/). Follow a [quickstart](./quickstart-text.md) to get started.
## May 2023
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/overview.md
With Immersive Reader you can break words into syllables to improve readability
## How does Immersive Reader work?
-Immersive Reader is a standalone web application. When invoked using the Immersive Reader client library is displayed on top of your existing web application in an `iframe`. When your wep application calls the Immersive Reader service, you specify the content to show the reader. The Immersive Reader client library handles the creation and styling of the `iframe` and communication with the Immersive Reader backend service. The Immersive Reader service processes the content for parts of speech, text to speech, translation, and more.
+Immersive Reader is a standalone web application. When invoked by using the Immersive Reader client library, it's displayed on top of your existing web application in an `iframe`. When your web application calls the Immersive Reader service, you specify the content to show the reader. The Immersive Reader client library handles the creation and styling of the `iframe` and communication with the Immersive Reader backend service. The Immersive Reader service processes the content for parts of speech, text to speech, translation, and more.
## Get started with Immersive Reader
ai-services Azure Openai Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/azure-openai-integration.md
Last updated 08/02/2023
Custom Question Answering enables you to create a conversational layer on your data based on sophisticated Natural Language Processing (NLP) capabilities with enhanced relevance using a deep learning ranker, precise answers, and end-to-end region support. Most use cases for Custom Question Answering rely on finding appropriate answers for inputs by integrating it with chat bots, social media applications and speech-enabled desktop applications.
-AI runtimes however, are evolving due to the development of Large Language Models (LLMs), such as GPT-35-Turbo and GPT-4 offered by [Azure Open AI](../../../openai/overview.md) can address many chat-based use cases, which you may want to integrate with.
+AI runtimes, however, are evolving due to the development of Large Language Models (LLMs). Models such as GPT-35-Turbo and GPT-4 offered by [Azure OpenAI](../../../openai/overview.md) can address many chat-based use cases that you might want to integrate with.
At the same time, customers often require a custom answer authoring experience to achieve more granular control over the quality and content of question-answer pairs, and allow them to address content issues in production. Read this article to learn how to integrate Azure OpenAI On Your Data (Preview) with question-answer pairs from your Custom Question Answering project, using your project's underlying Azure Cognitive Search indexes. ## Prerequisites
-* An existing Azure Open AI resource. If you don't already have an Azure Open AI resource, then [create one and deploy a model](../../../openai/how-to/create-resource.md).
+* An existing Azure OpenAI resource. If you don't already have an Azure OpenAI resource, then [create one and deploy a model](../../../openai/how-to/create-resource.md).
* An Azure Language Service resource and Custom Question Answering project. If you don't have one already, then [create one](../quickstart/sdk.md). * Azure OpenAI requires registration and is currently only available to approved enterprise customers and partners. See [Limited access to Azure OpenAI Service](/legal/cognitive-services/openai/limited-access?context=/azure/ai-services/openai/context/context) for more information. You can apply for access to Azure OpenAI by completing the form at https://aka.ms/oai/access. Open an issue on this repo to contact us if you have an issue. * Be sure that you are assigned at least the [Cognitive Services OpenAI Contributor role](/azure/role-based-access-control/built-in-roles#cognitive-services-openai-contributor) for the Azure OpenAI resource.
At the same time, customers often require a custom answer authoring experience t
1. Select the **Azure Search** tab on the navigation menu to the left.
-1. Make a note of your Azure Search details, such as Azure Search resource name, subscription, and location. You will need this information when you connect your Azure Cognitive Search index to Azure Open AI.
+1. Make a note of your Azure Search details, such as Azure Search resource name, subscription, and location. You will need this information when you connect your Azure Cognitive Search index to Azure OpenAI.
:::image type="content" source="../media/question-answering/azure-search.png" alt-text="A screenshot showing the Azure search section for a Custom Question Answering project." lightbox="../media/question-answering/azure-search.png":::
At the same time, customers often require a custom answer authoring experience t
You can now start exploring Azure OpenAI capabilities with a no-code approach through the chat playground. It's simply a text box where you can submit a prompt to generate a completion. From this page, you can quickly iterate and experiment with the capabilities. You can also launch a [web app](../../..//openai/concepts/use-your-data.md#using-the-web-app) to chat with the model over the web. ## Next steps
-* [Using Azure OpenAI on your data](../../../openai/concepts/use-your-data.md)
+* [Using Azure OpenAI on your data](../../../openai/concepts/use-your-data.md)
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-support.md
The following table provides links to language support reference articles by sup
|![QnA Maker icon](medi) (retired) | Distill information into easy-to-navigate questions and answers. | |![Speech icon](medi)| Configure speech-to-text, text-to-speech, translation, and speaker recognition applications. | |![Translator icon](medi) | Translate more than 100 languages and dialects including those deemed at-risk and endangered. |
-|![Video Indexer icon](medi#guidelines-and-limitations) | Extract actionable insights from your videos. |
+|![Video Indexer icon](media/service-icons/video-indexer.svg)</br>[Video Indexer](/azure/azure-video-indexer/language-identification-model#guidelines-and-limitations) | Extract actionable insights from your videos. |
|![Vision icon](medi) | Analyze content in images and videos. | ## Language independent services
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
GPT-3.5 Turbo version 0301 is the first version of the model released. Version
| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | | | | - | -- | - | | `gpt-35-turbo`<sup>1</sup> (0301) | East US, France Central, South Central US, UK South, West Europe | N/A | 4,096 | Sep 2021 |
-| `gpt-35-turbo` (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, North Central US, Sweden Central, Switzerland North, UK South | N/A | 4,096 | Sep 2021 |
+| `gpt-35-turbo` (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, North Central US, Sweden Central, Switzerland North, UK South | North Central US, Sweden Central | 4,096 | Sep 2021 |
| `gpt-35-turbo-16k` (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, North Central US, Sweden Central, Switzerland North, UK South | N/A | 16,384 | Sep 2021 | | `gpt-35-turbo-instruct` (0914) | East US, Sweden Central | N/A | 4,097 | Sep 2021 |
These models can only be used with Embedding API requests.
| | | | | | | dalle2 | East US | N/A | 1000 | N/A |
+### Fine-tuning models (Preview)
+
+`babbage-002` and `davinci-002` aren't trained to follow instructions. Query these base models only as a point of reference against a fine-tuned version, to evaluate the progress of your training.
+
+`gpt-35-turbo-0613` - fine-tuning of this model is limited to a subset of regions, and isn't available in every region where the base model is available.
+
+| Model ID | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
+| | | | | |
+| `babbage-002` | North Central US, Sweden Central | 16,384 | Sep 2021 |
+| `davinci-002` | North Central US, Sweden Central | 16,384 | Sep 2021 |
+| `gpt-35-turbo` (0613) | North Central US, Sweden Central | 4,096 | Sep 2021 |
+ ### Whisper models (Preview) | Model ID | Base model Regions | Fine-Tuning Regions | Max Request (audio file size) | Training Data (up to) |
ai-services Fine Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/fine-tuning.md
Previously updated : 09/01/2023-- Last updated : 10/12/2023++ zone_pivot_groups: openai-fine-tuning keywords:
-# Customize a model with Azure OpenAI Service
+# Customize a model with fine-tuning (preview)
Azure OpenAI Service lets you tailor our models to your personal datasets by using a process known as *fine-tuning*. This customization step lets you get more out of the service by providing: -- Higher quality results than what you can get just from prompt design.-- The ability to train on more examples than can fit into a prompt.-- Lower-latency requests.
-
-A customized model improves on the few-shot learning approach by training the model's weights on your specific prompts and structure. The customized model lets you achieve better results on a wider number of tasks without needing to provide examples in your prompt. The result is less text sent and fewer tokens processed on every API call, saving cost and improving request latency.
-
+- Higher quality results than what you can get just from [prompt engineering](../concepts/prompt-engineering.md)
+- The ability to train on more examples than can fit into a model's max request context limit.
+- Lower-latency requests, particularly when using smaller models.
+A fine-tuned model improves on the few-shot learning approach by training the model's weights on your own data. A customized model lets you achieve better results on a wider number of tasks without needing to provide examples in your prompt. The result is less text sent and fewer tokens processed on every API call, potentially saving cost and improving request latency.
::: zone pivot="programming-language-studio"
ai-services Prepare Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/prepare-dataset.md
- Title: 'How to prepare a dataset for custom model training'-
-description: Learn how to prepare your dataset for fine-tuning
---- Previously updated : 06/24/2022--
-recommendations: false
-keywords:
--
-# Learn how to prepare your dataset for fine-tuning
-
-The first step of customizing your model is to prepare a high quality dataset. To do this you'll need a set of training examples composed of single input prompts and the associated desired output ('completion'). This format is notably different than using models during inference in the following ways:
--- Only provide a single prompt vs a few examples.-- You don't need to provide detailed instructions as part of the prompt.-- Each prompt should end with a fixed separator to inform the model when the prompt ends and the completion begins. A simple separator, which generally works well is `\n\n###\n\n`. The separator shouldn't appear elsewhere in any prompt.-- Each completion should start with a whitespace due to our tokenization, which tokenizes most words with a preceding whitespace.-- Each completion should end with a fixed stop sequence to inform the model when the completion ends. A stop sequence could be `\n`, `###`, or any other token that doesn't appear in any completion.-- For inference, you should format your prompts in the same way as you did when creating the training dataset, including the same separator. Also specify the same stop sequence to properly truncate the completion.-- The dataset cannot exceed 100 MB in total file size.-
-## Best practices
-
-Customization performs better with high-quality examples and the more you have, generally the better the model performs. We recommend that you provide at least a few hundred high-quality examples to achieve a model that performs better than using well-designed prompts with a base model. From there, performance tends to linearly increase with every doubling of the number of examples. Increasing the number of examples is usually the best and most reliable way of improving performance.
-
-If you're fine-tuning on a pre-existing dataset rather than writing prompts from scratch, be sure to manually review your data for offensive or inaccurate content if possible, or review as many random samples of the dataset as possible if it's large.
-
-## Specific guidelines
-
-Fine-tuning can solve various problems, and the optimal way to use it may depend on your specific use case. Below, we've listed the most common use cases for fine-tuning and corresponding guidelines.
-
-### Classification
-
-Classifiers are the easiest models to get started with. For classification problems we suggest using **ada**, which generally tends to perform only very slightly worse than more capable models once fine-tuned, while being significantly faster. In classification problems, each prompt in the dataset should be classified into one of the predefined classes. For this type of problem, we recommend:
--- Use a separator at the end of the prompt, for example, `\n\n###\n\n`. Remember to also append this separator when you eventually make requests to your model.-- Choose classes that map to a single token. At inference time, specify max_tokens=1 since you only need the first token for classification.-- Ensure that the prompt + completion doesn't exceed 2048 tokens, including the separator-- Aim for at least 100 examples per class-- To get class log probabilities, you can specify logprobs=5 (for five classes) when using your model-- Ensure that the dataset used for fine-tuning is very similar in structure and type of task as what the model will be used for-
-#### Case study: Is the model making untrue statements?
-
-Let's say you'd like to ensure that the text of the ads on your website mentions the correct product and company. In other words, you want to ensure the model isn't making things up. You may want to fine-tune a classifier which filters out incorrect ads.
-
-The dataset might look something like the following:
-
-```json
-{"prompt":"Company: BHFF insurance\nProduct: allround insurance\nAd:One stop shop for all your insurance needs!\nSupported:", "completion":" yes"}
-{"prompt":"Company: Loft conversion specialists\nProduct: -\nAd:Straight teeth in weeks!\nSupported:", "completion":" no"}
-```
-
-In the example above, we used a structured input containing the name of the company, the product, and the associated ad. As a separator we used `\nSupported:` which clearly separated the prompt from the completion. With a sufficient number of examples, the separator you choose doesn't make much of a difference (usually less than 0.4%) as long as it doesn't appear within the prompt or the completion.
-
-For this use case we fine-tuned an ada model since it is faster and cheaper, and the performance is comparable to larger models because it's a classification task.
-
-Now we can query our model by making a Completion request.
-
-```console
-curl https://YOUR_RESOURCE_NAME.openaiazure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15\ \
- -H 'Content-Type: application/json' \
- -H 'api-key: YOUR_API_KEY' \
- -d '{
- "prompt": "Company: Reliable accountants Ltd\nProduct: Personal Tax help\nAd:Best advice in town!\nSupported:",
- "max_tokens": 1
- }'
-```
-
-Which will return either `yes` or `no`.
-
-#### Case study: Sentiment analysis
-
-Let's say you'd like to get a degree to which a particular tweet is positive or negative. The dataset might look something like the following:
-
-```console
-{"prompt":"Overjoyed with the new iPhone! ->", "completion":" positive"}
-{"prompt":"@contoso_basketball disappoint for a third straight night. ->", "completion":" negative"}
-```
-
-Once the model is fine-tuned, you can get back the log probabilities for the first completion token by setting `logprobs=2` on the completion request. The higher the probability for positive class, the higher the relative sentiment.
-
-Now we can query our model by making a Completion request.
-
-```console
-curl https://YOUR_RESOURCE_NAME.openaiazure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15\ \
- -H 'Content-Type: application/json' \
- -H 'api-key: YOUR_API_KEY' \
- -d '{
- "prompt": "Excited to share my latest blog post! ->",
- "max_tokens": 1,
- "logprobs": 2
- }'
-```
-
-Which will return:
-
-```json
-{
- "object": "text_completion",
- "created": 1589498378,
- "model": "YOUR_FINE_TUNED_MODEL_NAME",
- "choices": [
- {
- "logprobs": {
- "text_offset": [
- 19
- ],
- "token_logprobs": [
- -0.03597255
- ],
- "tokens": [
- " positive"
- ],
- "top_logprobs": [
- {
- " negative": -4.9785037,
- " positive": -0.03597255
- }
- ]
- },
-
- "text": " positive",
- "index": 0,
- "finish_reason": "length"
- }
- ]
-}
-```
-
-#### Case study: Categorization for Email triage
-
-Let's say you'd like to categorize incoming email into one of a large number of predefined categories. For classification into a large number of categories, we recommend you convert those categories into numbers, which will work well with up to approximately 500 categories. We've observed that adding a space before the number sometimes slightly helps the performance, due to tokenization. You may want to structure your training data as follows:
-
-```json
-{
- "prompt":"Subject: <email_subject>\nFrom:<customer_name>\nDate:<date>\nContent:<email_body>\n\n###\n\n", "completion":" <numerical_category>"
-}
-```
-
-For example:
-
-```json
-{
- "prompt":"Subject: Update my address\nFrom:Joe Doe\nTo:support@ourcompany.com\nDate:2021-06-03\nContent:Hi,\nI would like to update my billing address to match my delivery address.\n\nPlease let me know once done.\n\nThanks,\nJoe\n\n###\n\n",
- "completion":" 4"
-}
-```
-
-In the example above we used an incoming email capped at 2043 tokens as input. (This allows for a four token separator and a one token completion, summing up to 2048.) As a separator we used `\n\n###\n\n` and we removed any occurrence of ### within the email.
-
-### Conditional generation
-
-Conditional generation is a problem where the content needs to be generated given some kind of input. This includes paraphrasing, summarizing, entity extraction, product description writing given specifications, chatbots and many others. For this type of problem we recommend:
--- Use a separator at the end of the prompt, for example, `\n\n###\n\n`. Remember to also append this separator when you eventually make requests to your model.-- Use an ending token at the end of the completion, for example, `END`.-- Remember to add the ending token as a stop sequence during inference, for example, `stop=[" END"]`.-- Aim for at least ~500 examples.-- Ensure that the prompt + completion doesn't exceed 2048 tokens, including the separator.-- Ensure the examples are of high quality and follow the same desired format.-- Ensure that the dataset used for fine-tuning is similar in structure and type of task as what the model will be used for.-- Using Lower learning rate and only 1-2 epochs tends to work better for these use cases.-
-#### Case study: Write an engaging ad based on a Wikipedia article
-
-This is a generative use case so you would want to ensure that the samples you provide are of the highest quality, as the fine-tuned model will try to imitate the style (and mistakes) of the given examples. A good starting point is around 500 examples. A sample dataset might look like this:
-
-```json
-{
- "prompt":"<Product Name>\n<Wikipedia description>\n\n###\n\n",
- "completion":" <engaging ad> END"
-}
-```
-
-For example:
-
-```json
-{
- "prompt":"Samsung Galaxy Feel\nThe Samsung Galaxy Feel is an Android smartphone developed by Samsung Electronics exclusively for the Japanese market. The phone was released in June 2017 and was sold by NTT Docomo. It runs on Android 7.0 (Nougat), has a 4.7 inch display, and a 3000 mAh battery.\nSoftware\nSamsung Galaxy Feel runs on Android 7.0 (Nougat), but can be later updated to Android 8.0 (Oreo).\nHardware\nSamsung Galaxy Feel has a 4.7 inch Super AMOLED HD display, 16 MP back facing and 5 MP front facing cameras. It has a 3000 mAh battery, a 1.6 GHz Octa-Core ARM Cortex-A53 CPU, and an ARM Mali-T830 MP1 700 MHz GPU. It comes with 32GB of internal storage, expandable to 256GB via microSD. Aside from its software and hardware specifications, Samsung also introduced a unique a hole in the phone's shell to accommodate the Japanese perceived penchant for personalizing their mobile phones. The Galaxy Feel's battery was also touted as a major selling point since the market favors handsets with longer battery life. The device is also waterproof and supports 1seg digital broadcasts using an antenna that is sold separately.\n\n###\n\n",
- "completion":"Looking for a smartphone that can do it all? Look no further than Samsung Galaxy Feel! With a slim and sleek design, our latest smartphone features high-quality picture and video capabilities, as well as an award winning battery life. END"
-}
-```
-
-Here we used a multiline separator, as Wikipedia articles contain multiple paragraphs and headings. We also used a simple end token, to ensure that the model knows when the completion should finish.
-
-#### Case study: Entity extraction
-
-This is similar to a language transformation task. To improve the performance, it's best to either sort different extracted entities alphabetically or in the same order as they appear in the original text. This helps the model to keep track of all the entities which need to be generated in order. The dataset could look as follows:
-
-```json
-{
- "prompt":"<any text, for example news article>\n\n###\n\n",
- "completion":" <list of entities, separated by a newline> END"
-}
-```
-
-For example:
-
-```json
-{
- "prompt":"Portugal will be removed from the UK's green travel list from Tuesday, amid rising coronavirus cases and concern over a \"Nepal mutation of the so-called Indian variant\". It will join the amber list, meaning holidaymakers should not visit and returnees must isolate for 10 days...\n\n###\n\n",
- "completion":" Portugal\nUK\nNepal mutation\nIndian variant END"
-}
-```
-
-A multi-line separator works best, as the text will likely contain multiple lines. Ideally there will be a high diversity of the types of input prompts (news articles, Wikipedia pages, tweets, legal documents), which reflect the likely texts which will be encountered when extracting entities.
-
-#### Case study: Customer support chatbot
-
-A chatbot will normally contain relevant context about the conversation (order details), summary of the conversation so far, and most recent messages. For this use case the same past conversation can generate multiple rows in the dataset, each time with a slightly different context, for every agent generation as a completion. This use case requires a few thousand examples, as it likely deals with different types of requests, and customer issues. To ensure the performance is of high quality, we recommend vetting the conversation samples to ensure the quality of agent messages. The summary can be generated with a separate text transformation fine tuned model. The dataset could look as follows:
-
-```json
-{"prompt":"Summary: <summary of the interaction so far>\n\nSpecific information:<for example order details in natural language>\n\n###\n\nCustomer: <message1>\nAgent: <response1>\nCustomer: <message2>\nAgent:", "completion":" <response2>\n"}
-{"prompt":"Summary: <summary of the interaction so far>\n\nSpecific information:<for example order details in natural language>\n\n###\n\nCustomer: <message1>\nAgent: <response1>\nCustomer: <message2>\nAgent: <response2>\nCustomer: <message3>\nAgent:", "completion":" <response3>\n"}
-```
-
-Here we purposefully separated different types of input information, but maintained Customer Agent dialog in the same format between a prompt and a completion. All the completions should only be by the agent, and we can use `\n` as a stop sequence when doing inference.
-
-#### Case study: Product description based on a technical list of properties
-
-Here it's important to convert the input data into a natural language, which will likely lead to superior performance. For example, the following format:
-
-```json
-{
- "prompt":"Item=handbag, Color=army_green, price=$99, size=S->",
- "completion":"This stylish small green handbag will add a unique touch to your look, without costing you a fortune."
-}
-```
-
-Won't work as well as:
-
-```json
-{
- "prompt":"Item is a handbag. Colour is army green. Price is midrange. Size is small.->",
- "completion":"This stylish small green handbag will add a unique touch to your look, without costing you a fortune."
-}
-```
-
-For high performance, ensure that the completions were based on the description provided. If external content is often consulted, then adding such content in an automated way would improve the performance. If the description is based on images, it may help to use an algorithm to extract a textual description of the image. Since completions are only one sentence long, we can use `.` as the stop sequence during inference.
-
-### Open ended generation
-
-For this type of problem we recommend:
--- Leave the prompt empty.-- No need for any separators.-- You'll normally want a large number of examples, at least a few thousand.-- Ensure the examples cover the intended domain or the desired tone of voice.-
-#### Case study: Maintaining company voice
-
-Many companies have a large amount of high quality content generated in a specific voice. Ideally all generations from our API should follow that voice for the different use cases. Here we can use the trick of leaving the prompt empty, and feeding in all the documents which are good examples of the company voice. A fine-tuned model can be used to solve many different use cases with similar prompts to the ones used for base models, but the outputs are going to follow the company voice much more closely than previously.
-
-```json
-{"prompt":"", "completion":" <company voice textual content>"}
-{"prompt":"", "completion":" <company voice textual content2>"}
-```
-
-A similar technique could be used for creating a virtual character with a particular personality, style of speech and topics the character talks about.
-
-Generative tasks have a potential to leak training data when requesting completions from the model, so extra care needs to be taken that this is addressed appropriately. For example personal or sensitive company information should be replaced by generic information or not be included into fine-tuning in the first place.
-
-## Next steps
-
-* Fine tune your model with our [How-to guide](fine-tuning.md)
-* Learn more about the [underlying models that power Azure OpenAI Service](../concepts/models.md)
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/overview.md
Previously updated : 09/15/2023 Last updated : 10/16/2023 recommendations: false keywords:
keywords:
# What is Azure OpenAI Service?
-Azure OpenAI Service provides REST API access to OpenAI's powerful language models including the GPT-4, GPT-35-Turbo, and Embeddings model series. In addition, the new GPT-4 and gpt-35-turbo model series have now reached general availability. These models can be easily adapted to your specific task including but not limited to content generation, summarization, semantic search, and natural language to code translation. Users can access the service through REST APIs, Python SDK, or our web-based interface in the Azure OpenAI Studio.
+Azure OpenAI Service provides REST API access to OpenAI's powerful language models including the GPT-4, GPT-3.5-Turbo, and Embeddings model series. In addition, the new GPT-4 and GPT-3.5-Turbo model series have now reached general availability. These models can be easily adapted to your specific task including but not limited to content generation, summarization, semantic search, and natural language to code translation. Users can access the service through REST APIs, Python SDK, or our web-based interface in the Azure OpenAI Studio.
### Features overview | Feature | Azure OpenAI | | | |
-| Models available | **GPT-4 series** <br>**GPT-35-Turbo series**<br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.|
-| Fine-tuning | Ada <br> Babbage <br> Curie <br> Cushman <br> Davinci <br>**Fine-tuning is currently unavailable to new customers**.|
+| Models available | **GPT-4 series** <br>**GPT-3.5-Turbo series**<br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.|
+| Fine-tuning (preview) | `GPT-3.5-Turbo` (0613) <br> `babbage-002` <br> `davinci-002` |
| Price | [Available here](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) |
-| Virtual network support & private link support | Yes, unless using [Azure OpenAI on your data](./concepts/use-your-data.md). |
+| Virtual network support & private link support | Yes, unless using [Azure OpenAI on your data](./concepts/use-your-data.md). |
| Managed Identity| Yes, via Microsoft Entra ID |
-| UI experience | **Azure portal** for account & resource management, <br> **Azure OpenAI Service Studio** for model exploration and fine tuning |
+| UI experience | **Azure portal** for account & resource management, <br> **Azure OpenAI Service Studio** for model exploration and fine-tuning |
| Model regional availability | [Model availability](./concepts/models.md) | | Content filtering | Prompts and completions are evaluated against our content policy with automated systems. High severity content will be filtered. |
ai-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quotas-limits.md
Previously updated : 10/06/2023 Last updated : 10/13/2023
The default quota for models varies by model and region. Default quota limits ar
<td>North Central US, Australia East, East US 2, Canada East, Japan East, UK South, Switzerland North</td> <td>350 K</td> </tr>
+<tr>
+ <td>Fine-tuning models (babbage-002, davinci-002, gpt-35-turbo-0613)</td>
+ <td>North Central US, Sweden Central</td>
+ <td>50 K</td>
+ </tr>
<tr> <td>all other models</td> <td>East US, South Central US, West Europe, France Central</td>
ai-services Fine Tune https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/tutorials/fine-tune.md
+
+ Title: Azure OpenAI Service fine-tuning gpt-3.5-turbo
+
+description: Learn how to use Azure OpenAI's latest fine-tuning capabilities with gpt-3.5-turbo
++++ Last updated : 10/16/2023++
+recommendations: false
+++
+# Azure OpenAI GPT 3.5 Turbo fine-tuning (preview) tutorial
+
+This tutorial walks you through fine-tuning a `gpt-35-turbo-0613` model.
+
+In this tutorial you learn how to:
+
+> [!div class="checklist"]
+> * Create sample fine-tuning datasets.
+> * Create environment variables for your resource endpoint and API key.
+> * Prepare your sample training and validation datasets for fine-tuning.
+> * Upload your training file and validation file for fine-tuning.
+> * Create a fine-tuning job for `gpt-35-turbo-0613`.
+> * Deploy a custom fine-tuned model.
+
+## Prerequisites
+
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true).
+- Access granted to Azure OpenAI in the desired Azure subscription. Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at https://aka.ms/oai/access.
+- Python 3.7.1 or later version
+- The following Python libraries: `json`, `requests`, `os`, `tiktoken`, `time`, `openai`, `numpy`.
+- The OpenAI Python library must be at least version `0.28.1` (see the quick version check after this list).
+- [Jupyter Notebooks](https://jupyter.org/)
+- An Azure OpenAI resource in a [region where `gpt-35-turbo-0613` fine-tuning is available](../concepts/models.md). If you don't have a resource the process of creating one is documented in our resource [deployment guide](../how-to/create-resource.md).
+- Necessary [Role-based access control permissions](../how-to/role-based-access-control.md). To perform all the actions described in this tutorial requires the equivalent of `Cognitive Services Contributor` + `Cognitive Services OpenAI Contributor` + `Cognitive Services Usages Reader` depending on how the permissions in your environment are defined.
+
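If you're not sure which version of the OpenAI Python library is installed, you can check it quickly from Python. This is an optional sanity check; the rest of the tutorial assumes the 0.28.x-style `openai.File` and `openai.FineTuningJob` interfaces shown later.

```python
# Optional check: print the installed OpenAI Python library version.
import openai

print(openai.__version__)  # expect 0.28.1 or later
```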
+> [!IMPORTANT]
+> We strongly recommend reviewing the [pricing information](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/#pricing) for fine-tuning prior to beginning this tutorial to make sure you are comfortable with the associated costs. In testing, this tutorial resulted in one training hour billed, in addition to the costs that are associated with fine-tuning inference, and the hourly hosting costs of having a fine-tuned model deployed. Once you have completed the tutorial, you should delete your fine-tuned model deployment; otherwise, you will continue to incur the hourly hosting cost.
+
+## Set up
+
+### Python libraries
+
+If you haven't already, you need to install the following libraries:
+
+```cmd
+pip install openai requests tiktoken numpy
+```
++
+### Environment variables
+
+# [Command Line](#tab/command-line)
+
+```CMD
+setx AZURE_OPENAI_API_KEY "REPLACE_WITH_YOUR_KEY_VALUE_HERE"
+```
+
+```CMD
+setx AZURE_OPENAI_ENDPOINT "REPLACE_WITH_YOUR_ENDPOINT_HERE"
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+[System.Environment]::SetEnvironmentVariable('AZURE_OPENAI_API_KEY', 'REPLACE_WITH_YOUR_KEY_VALUE_HERE', 'User')
+```
+
+```powershell
+[System.Environment]::SetEnvironmentVariable('AZURE_OPENAI_ENDPOINT', 'REPLACE_WITH_YOUR_ENDPOINT_HERE', 'User')
+```
+
+# [Bash](#tab/bash)
+
+```Bash
+echo export AZURE_OPENAI_API_KEY="REPLACE_WITH_YOUR_KEY_VALUE_HERE" >> /etc/environment
+echo export AZURE_OPENAI_ENDPOINT="REPLACE_WITH_YOUR_ENDPOINT_HERE" >> /etc/environment
+
+source /etc/environment
+```
+++
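To confirm the variables are visible to Python before you continue, you can run a quick optional check from a new shell session or a restarted notebook kernel so that the new values are picked up.

```python
# Optional sanity check: confirm both environment variables are set.
import os

for name in ("AZURE_OPENAI_API_KEY", "AZURE_OPENAI_ENDPOINT"):
    print(f"{name} is {'set' if os.getenv(name) else 'NOT set'}")
```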
+### Create a sample dataset
+
+Fine-tuning `gpt-35-turbo-0613` requires a specially formatted JSONL training file. OpenAI provides the following example in their documentation:
+
+```json
+{"messages": [{"role": "system", "content": "Marv is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the capital of France?"}, {"role": "assistant", "content": "Paris, as if everyone doesn't know that already."}]}
+{"messages": [{"role": "system", "content": "Marv is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who wrote 'Romeo and Juliet'?"}, {"role": "assistant", "content": "Oh, just some guy named William Shakespeare. Ever heard of him?"}]}
+{"messages": [{"role": "system", "content": "Marv is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "How far is the Moon from Earth?"}, {"role": "assistant", "content": "Around 384,400 kilometers. Give or take a few, like that really matters."}]}
+```
+
+For this example, we'll modify it slightly by changing it to:
+
+```json
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the capital of France?"}, {"role": "assistant", "content": "Paris, as if everyone doesn't know that already."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who wrote 'Romeo and Juliet'?"}, {"role": "assistant", "content": "Oh, just some guy named William Shakespeare. Ever heard of him?"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "How far is the Moon from Earth?"}, {"role": "assistant", "content": "Around 384,400 kilometers. Give or take a few, like that really matters."}]}
+```
+
+While these three examples are helpful for showing the general format, if you want to steer your custom fine-tuned model to respond in a similar way, you need more examples. Generally, you want **at least 50 high-quality examples** to start out. However, it's entirely possible to have a use case that requires thousands of high-quality training examples to be successful.
+
+In general, doubling the dataset size can lead to a linear increase in model quality. But keep in mind, low-quality examples can negatively impact performance. If you train the model on a large amount of internal data without first pruning the dataset for only the highest quality examples, you could end up with a model that performs much worse than expected.
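As a rough illustration of that kind of pruning, the following sketch drops malformed records and exact duplicates from a JSONL file before fine-tuning. The file names and checks are examples only, not an exhaustive quality filter.

```python
# Illustrative sketch only: drop malformed records and exact duplicates from a
# JSONL training file. Extend the checks to match your own quality criteria.
import json

total = 0
seen = set()
clean = []

with open("training_set.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        if not line.strip():
            continue
        total += 1
        record = json.loads(line)
        messages = record.get("messages", [])
        # Keep only records whose last message is a non-empty assistant reply.
        if not messages or messages[-1].get("role") != "assistant" or not messages[-1].get("content"):
            continue
        key = json.dumps(messages, sort_keys=True)
        if key in seen:
            continue  # exact duplicate
        seen.add(key)
        clean.append(record)

with open("training_set.cleaned.jsonl", "w", encoding="utf-8") as f:
    for record in clean:
        f.write(json.dumps(record) + "\n")

print(f"Kept {len(clean)} of {total} records")
```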
+
+You'll need to create two files: `training_set.jsonl` and `validation_set.jsonl`.
+
+Create the files in the same directory where you're running the Jupyter Notebook, and copy the contents of the following code blocks to the corresponding files:
+
+**`training_set.jsonl`**
+
+```json
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who discovered Antarctica?"}, {"role": "assistant", "content": "Some chaps named Fabian Gottlieb von Bellingshausen and Mikhail Lazarev, as if they don't teach that in every school!"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What is the biggest ocean?"}, {"role": "assistant", "content": "The Pacific Ocean. It's not like it's a small pond or anything."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What is the largest planet?"}, {"role": "assistant", "content": "It's called Jupiter, you may have heard of it...or not."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who painted the Mona Lisa?"}, {"role": "assistant", "content": "Oh, some obscure artist named Leonardo da Vinci. Not that he is famous or anything."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the boiling point of water?"}, {"role": "assistant", "content": "Exciting fact of the day: it's 100 degrees Celsius. Life-changing, isn't it?"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who wrote 'The Odyssey'?"}, {"role": "assistant", "content": "Oh, just some poet named Homer. Nothing special, really."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What is the largest mammal?"}, {"role": "assistant", "content": "Blue Whale. Quite a big fish isn't it? Oh wait, it's not a fish!"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the highest mountain peak?"}, {"role": "assistant", "content": "Everest. Brace yourself, it's not a walk in the park."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who invented the telephone?"}, {"role": "assistant", "content": "A guy named Alexander Graham Bell. Bet you didn't guess that one!"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What is the distance from Earth to the Sun?"}, {"role": "assistant", "content": "About 93 million miles. Just a quick drive, really."}]}
+```
+
+**`validation_set.jsonl`**
+
+```json
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the capital of Australia?"}, {"role": "assistant", "content": "It's Canberra, not Sydney. Shocking, I know!"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who wrote 'The Great Gatsby'?"}, {"role": "assistant", "content": "Some guy named F. Scott Fitzgerald. Ever heard of him?"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the deepest part of the ocean?"}, {"role": "assistant", "content": "Mariana Trench, makes your swimming pool seem a bit shallow, doesn't it?"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who composed the 'Moonlight Sonata'?"}, {"role": "assistant", "content": "Just a little-known guy named Beethoven. No big deal."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What are the colors of the rainbow?"}, {"role": "assistant", "content": "ROYGBIV. Red, orange, yellow, green, blue, indigo, violet. Not that we learned this in preschool or anything!"}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What is the smallest planet?"}, {"role": "assistant", "content": "Meet Mercury, the runt of the solar system family."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who wrote 'Pride and Prejudice'?"}, {"role": "assistant", "content": "Do the words Jane Austen ring a bell? No? Well, she wrote it."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the largest desert?"}, {"role": "assistant", "content": "Antarctica. Surprise, surprise! Deserts aren't just full of sand, you know."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What is the longest river?"}, {"role": "assistant", "content": "The Nile River. It's not like it's famous or anything."}]}
+{"messages": [{"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the capital of Germany?"}, {"role": "assistant", "content": "Berlin. Shocking news, right?"}]}
+```
+
+Now you need to run some preliminary checks on your training and validation files.
+
+```python
+import json
+
+# Load the training set
+with open('training_set.jsonl', 'r', encoding='utf-8') as f:
+ training_dataset = [json.loads(line) for line in f]
+
+# Training dataset stats
+print("Number of examples in training set:", len(training_dataset))
+print("First example in training set:")
+for message in training_dataset[0]["messages"]:
+ print(message)
+
+# Load the validation set
+with open('validation_set.jsonl', 'r', encoding='utf-8') as f:
+ validation_dataset = [json.loads(line) for line in f]
+
+# Validation dataset stats
+print("\nNumber of examples in validation set:", len(validation_dataset))
+print("First example in validation set:")
+for message in validation_dataset[0]["messages"]:
+ print(message)
+```
+
+**Output:**
+
+```output
+Number of examples in training set: 10
+First example in training set:
+{'role': 'system', 'content': 'Clippy is a factual chatbot that is also sarcastic.'}
+{'role': 'user', 'content': 'Who discovered Antarctica?'}
+{'role': 'assistant', 'content': "Some chaps named Fabian Gottlieb von Bellingshausen and Mikhail Lazarev, as if they don't teach that in every school!"}
+
+Number of examples in validation set: 10
+First example in validation set:
+{'role': 'system', 'content': 'Clippy is a factual chatbot that is also sarcastic.'}
+{'role': 'user', 'content': "What's the capital of Australia?"}
+{'role': 'assistant', 'content': "It's Canberra, not Sydney. Shocking, I know!"}
+```
+
+In this case we only have 10 training and 10 validation examples, so while this demonstrates the basic mechanics of fine-tuning a model, it's unlikely to be a large enough number of examples to produce a consistently noticeable impact.
+
+You can then run some additional code from OpenAI using the tiktoken library to validate the token counts. Individual examples need to remain under the `gpt-35-turbo-0613` model's input token limit of 4,096 tokens.
+
+```python
+import json
+import tiktoken
+import numpy as np
+from collections import defaultdict
+
+encoding = tiktoken.get_encoding("cl100k_base") # default encoding used by gpt-4, turbo, and text-embedding-ada-002 models
+
+def num_tokens_from_messages(messages, tokens_per_message=3, tokens_per_name=1):
+ num_tokens = 0
+ for message in messages:
+ num_tokens += tokens_per_message
+ for key, value in message.items():
+ num_tokens += len(encoding.encode(value))
+ if key == "name":
+ num_tokens += tokens_per_name
+ num_tokens += 3
+ return num_tokens
+
+def num_assistant_tokens_from_messages(messages):
+ num_tokens = 0
+ for message in messages:
+ if message["role"] == "assistant":
+ num_tokens += len(encoding.encode(message["content"]))
+ return num_tokens
+
+def print_distribution(values, name):
+ print(f"\n#### Distribution of {name}:")
+ print(f"min / max: {min(values)}, {max(values)}")
+ print(f"mean / median: {np.mean(values)}, {np.median(values)}")
+ print(f"p5 / p95: {np.quantile(values, 0.1)}, {np.quantile(values, 0.9)}")
+
+files = ['training_set.jsonl', 'validation_set.jsonl']
+
+for file in files:
+ print(f"Processing file: {file}")
+ with open(file, 'r', encoding='utf-8') as f:
+ dataset = [json.loads(line) for line in f]
+
+ total_tokens = []
+ assistant_tokens = []
+
+ for ex in dataset:
+ messages = ex.get("messages", [])
+ total_tokens.append(num_tokens_from_messages(messages))
+ assistant_tokens.append(num_assistant_tokens_from_messages(messages))
+
+ print_distribution(total_tokens, "total tokens")
+ print_distribution(assistant_tokens, "assistant tokens")
+ print('*' * 50)
+```
+
+**Output:**
+
+```output
+Processing file: training_set.jsonl
+
+#### Distribution of total tokens:
+min / max: 47, 57
+mean / median: 50.8, 50.0
+p10 / p90: 47.9, 55.2
+
+#### Distribution of assistant tokens:
+min / max: 13, 21
+mean / median: 16.3, 15.5
+p10 / p90: 13.0, 20.1
+**************************************************
+Processing file: validation_set.jsonl
+
+#### Distribution of total tokens:
+min / max: 43, 65
+mean / median: 51.4, 49.0
+p10 / p90: 45.7, 56.9
+
+#### Distribution of assistant tokens:
+min / max: 8, 29
+mean / median: 15.9, 13.5
+p10 / p90: 11.6, 20.9
+**************************************************
+```
+
+## Upload fine-tuning files
+
+```Python
+# Upload fine-tuning files
+import openai
+import os
+
+openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")
+openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
+openai.api_type = 'azure'
+openai.api_version = '2023-09-15-preview' # This API version or later is required to access fine-tuning for turbo/babbage-002/davinci-002
+
+training_file_name = 'training_set.jsonl'
+validation_file_name = 'validation_set.jsonl'
+
+# Upload the training and validation dataset files to Azure OpenAI with the SDK.
+
+training_response = openai.File.create(
+ file=open(training_file_name, "rb"), purpose="fine-tune", user_provided_filename="training_set.jsonl"
+)
+training_file_id = training_response["id"]
+
+validation_response = openai.File.create(
+ file=open(validation_file_name, "rb"), purpose="fine-tune", user_provided_filename="validation_set.jsonl"
+)
+validation_file_id = validation_response["id"]
+
+print("Training file ID:", training_file_id)
+print("Validation file ID:", validation_file_id)
+```
+
+**Output:**
+
+```output
+Training file ID: file-9ace76cb11f54fdd8358af27abf4a3ea
+Validation file ID: file-70a3f525ed774e78a77994d7a1698c4b
+```
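+
+Depending on how quickly the service ingests the uploaded files, the training job submission in the next section can fail if the files are still being processed. If you want to wait for ingestion to finish first, the following is a minimal sketch; it assumes the file object returned by the 0.28-style SDK exposes a `status` field, and the exact status values may vary:
+
+```python
+# Optionally wait for both files to finish processing before training.
+import time
+
+def wait_for_processing(file_id):
+    status = openai.File.retrieve(file_id)["status"]
+    while status not in ("processed", "error", "deleted"):
+        print(f"File {file_id} status: {status}")
+        time.sleep(5)
+        status = openai.File.retrieve(file_id)["status"]
+    print(f"File {file_id} finished with status: {status}")
+
+wait_for_processing(training_file_id)
+wait_for_processing(validation_file_id)
+```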
+
+## Begin fine-tuning
+
+Now that the fine-tuning files have been successfully uploaded you can submit your fine-tuning training job:
+
+```python
+response = openai.FineTuningJob.create(
+ training_file=training_file_id,
+ validation_file=validation_file_id,
+ model="gpt-35-turbo-0613",
+)
+
+job_id = response["id"]
+
+# You can use the job ID to monitor the status of the fine-tuning job.
+# The fine-tuning job will take some time to start and complete.
+
+print("Job ID:", response["id"])
+print("Status:", response["status"])
+print(response)
+```
+
+**Output:**
+
+```output
+Job ID: ftjob-40e78bc022034229a6e3a222c927651c
+Status: pending
+{
+ "hyperparameters": {
+ "n_epochs": 2
+ },
+ "status": "pending",
+ "model": "gpt-35-turbo-0613",
+ "training_file": "file-90ac5d43102f4d42a3477fd30053c758",
+ "validation_file": "file-e21aad7dddbc4ddc98ba35c790a016e5",
+ "id": "ftjob-40e78bc022034229a6e3a222c927651c",
+ "created_at": 1697156464,
+ "updated_at": 1697156464,
+ "object": "fine_tuning.job"
+}
+```
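+
+In the response above, the service defaulted to two epochs. If you want to control this yourself, the 0.28-style SDK forwards extra keyword arguments to the service, so a call along the following lines should work where the fine-tuning API version in use accepts a `hyperparameters` object. Treat it as a sketch under that assumption rather than a guaranteed contract:
+
+```python
+# Optional: request a specific number of epochs instead of the service default.
+response = openai.FineTuningJob.create(
+    training_file=training_file_id,
+    validation_file=validation_file_id,
+    model="gpt-35-turbo-0613",
+    hyperparameters={"n_epochs": 3},  # assumption: accepted by the API version configured earlier
+)
+```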
+
+To retrieve the training job ID, you can run:
+
+```python
+response = openai.FineTuningJob.retrieve(job_id)
+
+print("Job ID:", response["id"])
+print("Status:", response["status"])
+print(response)
+```
+
+**Output:**
+
+```output
+Fine-tuning model with job ID: ftjob-0f4191f0c59a4256b7a797a3d9eed219.
+```
+
+## Track training job status
+
+If you would like to poll the training job status until it's complete, you can run:
+
+```python
+# Track training status
+
+from IPython.display import clear_output
+import time
+
+start_time = time.time()
+
+# Get the status of our fine-tuning job.
+response = openai.FineTuningJob.retrieve(job_id)
+
+status = response["status"]
+
+# If the job isn't done yet, poll it every 10 seconds.
+while status not in ["succeeded", "failed"]:
+ time.sleep(10)
+
+ response = openai.FineTuningJob.retrieve(job_id)
+ print(response)
+ print("Elapsed time: {} minutes {} seconds".format(int((time.time() - start_time) // 60), int((time.time() - start_time) % 60)))
+ status = response["status"]
+ print(f'Status: {status}')
+ clear_output(wait=True)
+
+print(f'Fine-tuning job {job_id} finished with status: {status}')
+
+# List all fine-tuning jobs for this resource.
+print('Checking other fine-tune jobs for this resource.')
+response = openai.FineTuningJob.list()
+print(f'Found {len(response["data"])} fine-tune jobs.')
+```
+
+**Output:**
+
+```output
+{
+ "hyperparameters": {
+ "n_epochs": 2
+ },
+ "status": "running",
+ "model": "gpt-35-turbo-0613",
+ "training_file": "file-9ace76cb11f54fdd8358af27abf4a3ea",
+ "validation_file": "file-70a3f525ed774e78a77994d7a1698c4b",
+ "id": "ftjob-0f4191f0c59a4256b7a797a3d9eed219",
+ "created_at": 1695307968,
+ "updated_at": 1695310376,
+ "object": "fine_tuning.job"
+}
+Elapsed time: 40 minutes 45 seconds
+Status: running
+```
+
+It isn't unusual for training to take more than an hour to complete. Once training completes, the output message changes to:
+
+```output
+Fine-tuning job ftjob-b044a9d3cf9c4228b5d393567f693b83 finished with status: succeeded
+Checking other fine-tune jobs for this resource.
+Found 2 fine-tune jobs.
+```
+
+To get the full results, run the following:
+
+```python
+#Retrieve fine_tuned_model name
+
+response = openai.FineTuningJob.retrieve(job_id)
+
+print(response)
+fine_tuned_model = response["fine_tuned_model"]
+```
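+
+The `fine_tuned_model` property is only populated after the job succeeds, so a small guard before moving on to deployment can save a confusing error later. A minimal sketch:
+
+```python
+# Stop here if the job hasn't produced a model yet.
+if fine_tuned_model is None:
+    raise RuntimeError(f"Job {job_id} hasn't produced a model yet (status: {response['status']}).")
+
+print("Fine-tuned model:", fine_tuned_model)
+```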
+
+## Deploy fine-tuned model
+
+Unlike the previous Python SDK commands in this tutorial, since the introduction of the quota feature, model deployment must be done using the [REST API](/rest/api/cognitiveservices/accountmanagement/deployments/create-or-update?tabs=HTTP). The REST API requires separate authorization, a different API path, and a different API version.
+
+Alternatively, you can deploy your fine-tuned model using any of the other common deployment methods like [Azure OpenAI Studio](https://oai.azure.com/), or [Azure CLI](/cli/azure/cognitiveservices/account/deployment#az-cognitiveservices-account-deployment-create()).
+
+|variable | Definition|
+|--|--|
+| token | There are multiple ways to generate an authorization token. The easiest method for initial testing is to launch the Cloud Shell from the [Azure portal](https://portal.azure.com) and then run [`az account get-access-token`](/cli/azure/account#az-account-get-access-token()). You can use this token as your temporary authorization token for API testing. We recommend storing this token in a new environment variable. |
+| subscription | The subscription ID for the associated Azure OpenAI resource |
+| resource_group | The resource group name for your Azure OpenAI resource |
+| resource_name | The Azure OpenAI resource name |
+| model_deployment_name | The custom name for your new fine-tuned model deployment. This is the name that will be referenced in your code when making chat completion calls. |
+| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-35-turbo-0613.ft-b044a9d3cf9c4228b5d393567f693b83`. You need to add that value to the `deploy_data` JSON. |
++
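+
+If you'd rather not copy a token out of the Cloud Shell, you can also generate the Azure Resource Manager token programmatically. The following is a minimal sketch using the `azure-identity` package (an extra dependency that isn't used elsewhere in this tutorial); it assumes your local sign-in, for example an `az login` session, is allowed to issue ARM tokens:
+
+```python
+# Optional: obtain an ARM token with azure-identity instead of the Cloud Shell.
+from azure.identity import DefaultAzureCredential
+
+credential = DefaultAzureCredential()
+token = credential.get_token("https://management.azure.com/.default").token
+```
+
+If you go this route, skip the `TEMP_AUTH_TOKEN` lookup in the next code block.
+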
+```python
+import json
+import os
+import requests
+
+token = os.getenv("TEMP_AUTH_TOKEN")
+subscription = "<YOUR_SUBSCRIPTION_ID>"
+resource_group = "<YOUR_RESOURCE_GROUP_NAME>"
+resource_name = "<YOUR_AZURE_OPENAI_RESOURCE_NAME>"
+model_deployment_name = "<YOUR_CUSTOM_MODEL_DEPLOYMENT_NAME>"
+
+deploy_params = {'api-version': "2023-05-01"}
+deploy_headers = {'Authorization': 'Bearer {}'.format(token), 'Content-Type': 'application/json'}
+
+deploy_data = {
+ "sku": {"name": "standard", "capacity": 1},
+ "properties": {
+ "model": {
+ "format": "OpenAI",
+ "name": "<YOUR_FINE_TUNED_MODEL>", #retrieve this value from the previous call, it will look like gpt-35-turbo-0613.ft-b044a9d3cf9c4228b5d393567f693b83
+ "version": "1"
+ }
+ }
+}
+deploy_data = json.dumps(deploy_data)
+
+request_url = f'https://management.azure.com/subscriptions/{subscription}/resourceGroups/{resource_group}/providers/Microsoft.CognitiveServices/accounts/{resource_name}/deployments/{model_deployment_name}'
+
+print('Creating a new deployment...')
+
+r = requests.put(request_url, params=deploy_params, headers=deploy_headers, data=deploy_data)
+
+print(r)
+print(r.reason)
+print(r.json())
+```
+
+You can check on your deployment progress in the Azure OpenAI Studio:
++
+It isn't uncommon for this process to take some time to complete when deploying fine-tuned models.
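+
+If you'd rather check progress from code than from the Studio, the same management endpoint supports `GET` and the response includes a provisioning state. The following is a minimal sketch reusing the `request_url`, `deploy_params`, and `deploy_headers` values from the deployment call above (the exact response shape can vary by API version):
+
+```python
+# Poll the deployment's provisioning state through the management API.
+import time
+
+while True:
+    poll = requests.get(request_url, params=deploy_params, headers=deploy_headers)
+    state = poll.json().get("properties", {}).get("provisioningState")
+    print(f"Provisioning state: {state}")
+    if state in ("Succeeded", "Failed", "Canceled"):
+        break
+    time.sleep(30)
+```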
+
+## Use a deployed customized model
+
+After your fine-tuned model is deployed, you can use it like any other deployed model in either the [Chat Playground of Azure OpenAI Studio](https://oai.azure.com), or via the chat completion API. For example, you can send a chat completion call to your deployed model, as shown in the following Python example. You can continue to use the same parameters with your customized model, such as temperature and max_tokens, as you can with other deployed models.
+
+```python
+#Note: The openai-python library support for Azure OpenAI is in preview.
+import os
+import openai
+openai.api_type = "azure"
+openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
+openai.api_version = "2023-05-15"
+openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")
+
+response = openai.ChatCompletion.create(
+ engine="gpt-35-turbo-ft", # engine = "Custom deployment name you chose for your fine-tuning model"
+ messages=[
+ {"role": "system", "content": "You are a helpful assistant."},
+ {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
+ {"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},
+ {"role": "user", "content": "Do other Azure AI services support this too?"}
+ ]
+)
+
+print(response)
+print(response['choices'][0]['message']['content'])
+```
+
+## Delete deployment
+
+Unlike other types of Azure OpenAI models, fine-tuned/customized models have [an hourly hosting cost](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/#pricing) associated with them once they are deployed. It is **strongly recommended** that once you're done with this tutorial and have tested a few chat completion calls against your fine-tuned model, you **delete the model deployment**.
+
+Deleting the deployment won't affect the model itself, so you can re-deploy the fine-tuned model that you trained for this tutorial at any time.
+
+You can delete the deployment in [Azure OpenAI Studio](https://oai.azure.com/), via [REST API](/rest/api/cognitiveservices/accountmanagement/deployments/delete?tabs=HTTP), [Azure CLI](/cli/azure/cognitiveservices/account/deployment#az-cognitiveservices-account-deployment-delete()), or other supported deployment methods.
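+
+If you prefer to clean up from the same notebook, the management endpoint used for deployment also accepts `DELETE`. The following is a minimal sketch reusing the `request_url`, `deploy_params`, and `deploy_headers` variables from the deployment step; it removes only the deployment, not the fine-tuned model itself:
+
+```python
+# Delete the fine-tuned model deployment through the management API.
+r = requests.delete(request_url, params=deploy_params, headers=deploy_headers)
+print(r.status_code, r.reason)
+```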
+
+## Troubleshooting
+
+### How do I enable fine-tuning? Why is Create a custom model greyed out in Azure OpenAI Studio?
+
+To access fine-tuning, you need the **Cognitive Services OpenAI Contributor** role assigned. Even someone with high-level Service Administrator permissions still needs this role explicitly assigned in order to access fine-tuning. For more information, review the [role-based access control guidance](/azure/ai-services/openai/how-to/role-based-access-control#cognitive-services-openai-contributor).
+
+## Next steps
+
+- Learn more about [fine-tuning in Azure OpenAI](../how-to/fine-tuning.md)
+- Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md#fine-tuning-models-preview).
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
Previously updated : 09/20/2023 Last updated : 10/16/2023 recommendations: false keywords: # What's new in Azure OpenAI Service
+## October 2023
+
+### New fine-tuning models (preview)
+
+- `gpt-35-turbo-0613` is [now available for fine-tuning](./how-to/fine-tuning.md).
+
+- `babbage-002` and `davinci-002` are [now available for fine-tuning](./how-to/fine-tuning.md). These models replace the legacy ada, babbage, curie, and davinci base models that were previously available for fine-tuning.
+
+- Fine-tuning availability is limited to certain regions. Check the [models page](concepts/models.md#fine-tuning-models-preview) for the latest information on model availability in each region.
+
+- Fine-tuned models have different [quota limits](quotas-limits.md) than regular models.
+
+- [Tutorial: fine-tuning GPT-3.5-Turbo](./tutorials/fine-tune.md)
+ ## September 2023 ### GPT-4
-GPT-4 and GPT-4-32k are now available to all Azure OpenAI Service customers. Customers no longer need to apply for the waitlist to use GPT-4 and GPT-4-32k (the Limited Access registration requirements continue to apply for all Azure OpenAI models). Availability may vary by region. Check the [models page](concepts/models.md), for the latest information on model availability in each region.
+GPT-4 and GPT-4-32k are now available to all Azure OpenAI Service customers. Customers no longer need to apply for the waitlist to use GPT-4 and GPT-4-32k (the Limited Access registration requirements continue to apply for all Azure OpenAI models). Availability might vary by region. Check the [models page](concepts/models.md) for the latest information on model availability in each region.
### GPT-3.5 Turbo Instruct
ai-services Migrate To Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/migrate-to-openai.md
QnA Maker was designed to be a cloud-based Natural Language Processing (NLP) ser
## Prerequisites * A QnA Maker project.
-* An existing Azure Open AI resource. If you don't already have an Azure Open AI resource, then [create one and deploy a model](../../openai/how-to/create-resource.md).
+* An existing Azure OpenAI resource. If you don't already have an Azure OpenAI resource, then [create one and deploy a model](../../openai/how-to/create-resource.md).
* Azure OpenAI requires registration and is currently only available to approved enterprise customers and partners. See [Limited access to Azure OpenAI Service](/legal/cognitive-services/openai/limited-access?context=/azure/ai-services/openai/context/context) for more information. You can apply for access to Azure OpenAI by completing the form at https://aka.ms/oai/access. Open an issue on this repo to contact us if you have an issue. * Be sure that you are assigned at least the [Cognitive Services OpenAI Contributor role](/azure/role-based-access-control/built-in-roles#cognitive-services-openai-contributor) for the Azure OpenAI resource.
QnA Maker was designed to be a cloud-based Natural Language Processing (NLP) ser
:::image type="content" source="../media/openai/search-service.png" alt-text="A screenshot showing a QnA Maker project's search service in the Azure portal." lightbox="../media/openai/search-service.png":::
-1. Select the search service and open its **Overview** section. Note down the details, such as the Azure Search resource name, subscription, and location. You will need this information when you migrate to Azure Open AI.
+1. Select the search service and open its **Overview** section. Note down the details, such as the Azure Search resource name, subscription, and location. You will need this information when you migrate to Azure OpenAI.
:::image type="content" source="../media/openai/search-service-details.png" alt-text="A screenshot showing a QnA Maker project's search service details in the Azure portal." lightbox="../media/openai/search-service-details.png":::
QnA Maker was designed to be a cloud-based Natural Language Processing (NLP) ser
You can now start exploring Azure OpenAI capabilities with a no-code approach through the chat playground. It's simply a text box where you can submit a prompt to generate a completion. From this page, you can quickly iterate and experiment with the capabilities. You can also launch a [web app](../../openai/concepts/use-your-data.md#using-the-web-app) to chat with the model over the web. ## Next steps
-* [Using Azure OpenAI on your data](../../openai/concepts/use-your-data.md)
+* [Using Azure OpenAI on your data](../../openai/concepts/use-your-data.md)
ai-services Captioning Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/captioning-concepts.md
The following are aspects to consider when using captioning:
> [!TIP] > Try the [Speech Studio](https://aka.ms/speechstudio/captioning) and choose a sample video clip to see real-time or offline processed captioning results. >
-> Try the [Azure AI Video Indexer](../../azure-video-indexer/video-indexer-overview.md) as a demonstration of how you can get captions for videos that you upload.
+> Try the [Azure AI Video Indexer](/azure/azure-video-indexer/video-indexer-overview) as a demonstration of how you can get captions for videos that you upload.
Captioning can accompany real-time or pre-recorded speech. Whether you're showing captions in real-time or with a recording, you can use the [Speech SDK](speech-sdk.md) or [Speech CLI](spx-overview.md) to recognize speech and get transcriptions. You can also use the [Batch transcription API](batch-transcription.md) for pre-recorded video.
ai-services What Are Ai Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/what-are-ai-services.md
Select a service from the table below and learn how it can help you meet your de
| ![QnA Maker icon](media/service-icons/luis.svg) [QnA maker](./qnamaker/index.yml) (retired) | Distill information into easy-to-navigate questions and answers | | ![Speech icon](media/service-icons/speech.svg) [Speech](./speech-service/index.yml) | Speech to text, text to speech, translation and speaker recognition | | ![Translator icon](media/service-icons/translator.svg) [Translator](./translator/index.yml) | Translate more than 100 languages and dialects |
-| ![Video Indexer icon](media/service-icons/video-indexer.svg) [Video Indexer](../azure-video-indexer/index.yml) | Extract actionable insights from your videos |
+| ![Video Indexer icon](media/service-icons/video-indexer.svg) [Video Indexer](/azure/azure-video-indexer/) | Extract actionable insights from your videos |
| ![Vision icon](media/service-icons/vision.svg) [Vision](./computer-vision/index.yml) | Analyze content in images and videos | ## Pricing tiers and billing
aks Concepts Clusters Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-clusters-workloads.md
For associated best practices, see [Best practices for basic scheduler features
### Node pools
+> [!NOTE]
+> The Azure Linux node pool is now generally available (GA). To learn about the benefits and deployment steps, see the [Introduction to the Azure Linux Container Host for AKS][intro-azure-linux].
+ Nodes of the same configuration are grouped together into *node pools*. A Kubernetes cluster contains at least one node pool. The initial number of nodes and size are defined when you create an AKS cluster, which creates a *default node pool*. This default node pool in AKS contains the underlying VMs that run your agent nodes. > [!NOTE]
This article covers some of the core Kubernetes components and how they apply to
[aks-service-level-agreement]: faq.md#does-aks-offer-a-service-level-agreement [aks-tags]: use-tags.md [aks-support]: support-policies.md#user-customization-of-agent-nodes
+[intro-azure-linux]: ../azure-linux/intro-azure-linux.md
aks Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-security.md
description: Learn about security in Azure Kubernetes Service (AKS), including m
Previously updated : 02/28/2023 Last updated : 07/18/2023
This article introduces the core concepts that secure your applications in AKS.
## Build Security
-As the entry point for the Supply Chain, it's important to conduct static analysis of image builds before they're promoted down the pipeline, which includes vulnerability and compliance assessment. It's not about failing a build because it has a vulnerability, as that breaks development. It's about looking at the **Vendor Status** to segment based on vulnerabilities that are actionable by the development teams. Also use **Grace Periods** to allow developers time to remediate identified issues.
+Because image builds are the entry point for the supply chain, it's important to conduct static analysis of them before they're promoted down the pipeline. This includes vulnerability and compliance assessment. The goal isn't to fail a build because it has a vulnerability, as that breaks development. It's about looking at the **Vendor Status** to segment based on vulnerabilities that are actionable by the development teams. Also use **Grace Periods** to allow developers time to remediate identified issues.
## Registry Security
AKS nodes are Azure virtual machines (VMs) that you manage and maintain.
When an AKS cluster is created or scaled up, the nodes are automatically deployed with the latest OS security updates and configurations. > [!NOTE]
-> AKS clusters using:
-> * Kubernetes version 1.19 and greater for Linux node pools use `containerd` as its container runtime. Using `containerd` with Windows Server 2019 node pools is currently in preview. For more information, see [Add a Windows Server node pool with `containerd`][aks-add-np-containerd].
-> * Kubernetes prior to v1.19 for Linux node pools use Docker as its container runtime. For Windows Server 2019 node pools, Docker is the default container runtime.
+> AKS clusters running:
+> * Kubernetes version 1.19 and higher - Linux node pools use `containerd` as the container runtime. Windows Server 2019 node pools use `containerd` as the container runtime, which is currently in preview. For more information, see [Add a Windows Server node pool with `containerd`][aks-add-np-containerd].
+> * Kubernetes versions earlier than 1.19 - Linux node pools use Docker as the container runtime. Windows Server 2019 node pools use Docker as the default container runtime.
For more information about the security upgrade process for Linux and Windows worker nodes, see [Security patching nodes][aks-vulnerability-management-nodes].
Node authorization is a special-purpose authorization mode that specifically aut
### Node deployment
-Nodes are deployed into a private virtual network subnet with no public IP addresses assigned. SSH is enabled by default for troubleshooting and management purposes and is only accessible using the internal IP address.
+Nodes are deployed onto a private virtual network subnet, with no public IP addresses assigned. For troubleshooting and management purposes, SSH is enabled by default and only accessible using the internal IP address. Disabling SSH during cluster and node pool creation, or on an existing cluster or node pool, is in preview. See [Manage SSH access][manage-ssh-access] for more information.
### Node storage
For more information on core Kubernetes and AKS concepts, see:
- [Kubernetes / AKS scale][aks-concepts-scale] <!-- LINKS - External -->
-[kured]: https://github.com/kubereboot/kured
-[kubernetes-network-policies]: https://kubernetes.io/docs/concepts/services-networking/network-policies/
[secret-risks]: https://kubernetes.io/docs/concepts/configuration/secret/#risks [encryption-atrest]: ../security/fundamentals/encryption-atrest.md <!-- LINKS - Internal --> [microsoft-defender-for-containers]: ../defender-for-cloud/defender-for-containers-introduction.md
-[aks-daemonsets]: concepts-clusters-workloads.md#daemonsets
[aks-upgrade-cluster]: upgrade-cluster.md [aks-aad]: ./managed-azure-ad.md
-[aks-add-np-containerd]: /azure/aks/create-node-pools
+[aks-add-np-containerd]: create-node-pools.md
[aks-concepts-clusters-workloads]: concepts-clusters-workloads.md [aks-concepts-identity]: concepts-identity.md [aks-concepts-scale]: concepts-scale.md [aks-concepts-storage]: concepts-storage.md [aks-concepts-network]: concepts-network.md
-[aks-kured]: node-updates-kured.md
[aks-limit-egress-traffic]: limit-egress-traffic.md [cluster-isolation]: operator-best-practices-cluster-isolation.md [operator-best-practices-cluster-security]: operator-best-practices-cluster-security.md [developer-best-practices-pod-security]:developer-best-practices-pod-security.md
-[nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool
[authorized-ip-ranges]: api-server-authorized-ip-ranges.md [private-clusters]: private-clusters.md [network-policy]: use-network-policies.md
-[node-image-upgrade]: node-image-upgrade.md
[microsoft-vulnerability-management-aks]: concepts-vulnerability-management.md [aks-vulnerability-management-nodes]: concepts-vulnerability-management.md#worker-nodes
+[manage-ssh-access]: manage-ssh-node-access.md
aks Intro Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/intro-kubernetes.md
For more information, see [Confidential computing nodes on AKS][conf-com-node].
### Azure Linux nodes
+> [!NOTE]
+> The Azure Linux node pool is now generally available (GA). To learn about the benefits and deployment steps, see the [Introduction to the Azure Linux Container Host for AKS][intro-azure-linux].
+ The Azure Linux container host for AKS is an open-source Linux distribution created by Microsoft, and itΓÇÖs available as a container host on Azure Kubernetes Service (AKS). The Azure Linux container host for AKS provides reliability and consistency from cloud to edge across the AKS, AKS-HCI, and Arc products. You can deploy Azure Linux node pools in a new cluster, add Azure Linux node pools to your existing Ubuntu clusters, or migrate your Ubuntu nodes to Azure Linux nodes. For more information, see [Use the Azure Linux container host for AKS](use-azure-linux.md).
Learn more about deploying and managing AKS.
[azure-monitor-logs]: ../azure-monitor/logs/data-platform-logs.md [helm]: quickstart-helm.md [aks-best-practices]: best-practices.md
+[intro-azure-linux]: ../azure-linux/intro-azure-linux.md
aks Quick Kubernetes Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-cli.md
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
> [!NOTE] > If you plan to run the commands locally instead of in Azure Cloud Shell, make sure you run the commands with administrative privileges.
+> [!NOTE]
+> The Azure Linux node pool is now generally available (GA). To learn about the benefits and deployment steps, see the [Introduction to the Azure Linux Container Host for AKS][intro-azure-linux].
+ ## Create a resource group An [Azure resource group][azure-resource-group] is a logical group in which Azure resources are deployed and managed. When you create a resource group, you're prompted to specify a location. This location is the storage location of your resource group metadata and where your resources run in Azure if you don't specify another region during resource creation.
This quickstart is for introductory purposes. For guidance on creating full solu
[kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests [kubernetes-service]: ../concepts-network.md#services [aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?WT.mc_id=AKSDOCSPAGE
+[intro-azure-linux]: ../../azure-linux/intro-azure-linux.md
aks Quick Kubernetes Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-portal.md
This quickstart assumes a basic understanding of Kubernetes concepts. For more i
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - If you're unfamiliar with the Azure Cloud Shell, review [Overview of Azure Cloud Shell](../../cloud-shell/overview.md).- - The identity you're using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
+> [!NOTE]
+> The Azure Linux node pool is now generally available (GA). To learn about the benefits and deployment steps, see the [Introduction to the Azure Linux Container Host for AKS][intro-azure-linux].
+ ## Create an AKS cluster 1. Sign in to the [Azure portal](https://portal.azure.com).
To learn more about AKS by walking through a complete example, including buildin
[http-routing]: ../http-application-routing.md [preset-config]: ../quotas-skus-regions.md#cluster-configuration-presets-in-the-azure-portal [sp-delete]: ../kubernetes-service-principal.md#additional-considerations
+[intro-azure-linux]: ../../azure-linux/intro-azure-linux.md
aks Quick Kubernetes Deploy Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-terraform.md
In this article, you learn how to:
## Prerequisites - [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)- - **Kubernetes command-line tool (kubectl):** [Download kubectl](https://kubernetes.io/releases/download/).
+> [!NOTE]
+> The Azure Linux node pool is now generally available (GA). To learn about the benefits and deployment steps, see the [Introduction to the Azure Linux Container Host for AKS][intro-azure-linux].
+ ## Login to your Azure Account [!INCLUDE [authenticate-to-azure.md](~/azure-dev-docs-pr/articles/terraform/includes/authenticate-to-azure.md)]
Two [Kubernetes Services](/azure/aks/concepts-network#services) are created:
> [!div class="nextstepaction"] > [Learn more about using AKS](/azure/aks)+
+<!-- LINKS - Internal -->
+[intro-azure-linux]: ../../azure-linux/intro-azure-linux.md
aks Manage Ssh Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-ssh-node-access.md
+
+ Title: Manage SSH access on Azure Kubernetes Service cluster nodes
+
+description: Learn how to configure SSH on Azure Kubernetes Service (AKS) cluster nodes.
+ Last updated : 10/16/2023++
+# Manage SSH for secure access to Azure Kubernetes Service (AKS) nodes
+
+This article describes how to update the SSH key on your AKS clusters or node pools.
++
+## Before you begin
+
+* You need the Azure CLI version 2.46.0 or later installed and configured. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+* This feature supports Linux, Mariner, and CBLMariner node pools on existing clusters.
+
+## Update SSH public key on an existing AKS cluster
+
+Use the [az aks update][az-aks-update] command to update the SSH public key on your cluster. This operation updates the key on all node pools. You can either specify the key or a key file using the `--ssh-key-value` argument.
+
+> [!NOTE]
+> Updating of the SSH key is supported on Azure virtual machine scale sets with AKS clusters.
+
+|SSH parameter |Description |Default value |
+|--|--|--|
+|--ssh-key-value |Public key path or key contents to install on node VMs for SSH access. For example, `ssh-rsa AAAAB...snip...UcyupgH azureuser@linuxvm`.|`~\.ssh\id_rsa.pub` |
+|--no-ssh-key |Do not use or create a local SSH key. |False |
+
+The following are examples of this command:
+
+* To specify the new SSH public key value, include the `--ssh-key-value` argument:
+
+ ```azurecli
+ az aks update --name myAKSCluster --resource-group MyResourceGroup --ssh-key-value 'ssh-rsa AAAAB3Nza-xxx'
+ ```
+
+* To specify an SSH public key file, pass its path with the `--ssh-key-value` argument:
+
+ ```azurecli
+ az aks update --name myAKSCluster --resource-group MyResourceGroup --ssh-key-value ~/.ssh/id_rsa.pub
+ ```
+
+> [!IMPORTANT]
+> After you update the SSH key, AKS doesn't automatically reimage your node pool. You can perform a [reimage operation][node-image-upgrade] at any time. The updated SSH key takes effect only after the reimage completes.
+
+## Next steps
+
+To help troubleshoot any issues with SSH connectivity to your cluster's nodes, you can [view the kubelet logs][view-kubelet-logs] or [view the Kubernetes master node logs][view-master-logs].
+
+<!-- LINKS - external -->
+
+<!-- LINKS - internal -->
+[install-azure-cli]: /cli/azure/install-azure-cli
+[az-aks-update]: /cli/azure/aks#az-aks-update
+[view-kubelet-logs]: kubelet-logs.md
+[view-master-logs]: monitor-aks-reference.md#resource-logs
+[node-image-upgrade]: node-image-upgrade.md
aks Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-access.md
Title: Connect to Azure Kubernetes Service (AKS) cluster nodes description: Learn how to connect to Azure Kubernetes Service (AKS) cluster nodes for troubleshooting and maintenance tasks. Previously updated : 09/06/2023 Last updated : 10/04/2023 #Customer intent: As a cluster operator, I want to learn how to connect to virtual machines in an AKS cluster to perform maintenance or troubleshoot a problem.
# Connect to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting
-Throughout the lifecycle of your Azure Kubernetes Service (AKS) cluster, you might need to access an AKS node. This access could be for maintenance, log collection, or troubleshooting operations. You can securely authenticate against AKS Linux and Windows nodes using SSH, and you can also [connect to Windows Server nodes using remote desktop protocol (RDP)][aks-windows-rdp]. For security reasons, the AKS nodes aren't exposed to the internet. To connect to the AKS nodes, you use `kubectl debug` or the private IP address.
+Throughout the lifecycle of your Azure Kubernetes Service (AKS) cluster, you might need to access an AKS node. This access could be for maintenance, log collection, or troubleshooting operations. You can securely authenticate against AKS Linux and Windows nodes using SSH, and you can also [connect to Windows Server nodes using remote desktop protocol (RDP)][aks-windows-rdp]. For security reasons, the AKS nodes aren't exposed to the internet. To connect to the AKS nodes, you use `kubectl debug` or the private IP address.
This article shows you how to create a connection to an AKS node and update the SSH key on an existing AKS cluster. ## Before you begin
-This article assumes you have an SSH key. If not, you can create an SSH key using [macOS or Linux][ssh-nix] or [Windows][ssh-windows]. Make sure you save the key pair in an OpenSSH format, other formats like .ppk aren't supported.
+* You have an SSH key. If you don't, you can create an SSH key using [macOS or Linux][ssh-nix] or [Windows][ssh-windows]. Save the key pair in OpenSSH format; other formats like `.ppk` aren't supported.
-You also need the Azure CLI version 2.0.64 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+* The Azure CLI version 2.0.64 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
## Create an interactive shell connection to a Linux node
To create an interactive shell connection to a Linux node, use the `kubectl debu
```bash kubectl get nodes -o wide ```
-
+ The following example resembles output from the command:
-
+ ```output NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME aks-nodepool1-37663765-vmss000000 Ready agent 166m v1.25.6 10.224.0.33 <none> Ubuntu 22.04.2 LTS 5.15.0-1039-azure containerd://1.7.1+azure-1
To create an interactive shell connection to a Linux node, use the `kubectl debu
If you don't see a command prompt, try pressing enter. root@aks-nodepool1-37663765-vmss000000:/# ```
-
+ This privileged container gives access to the node.
-
+ > [!NOTE] > You can interact with the node session by running `chroot /host` from the privileged container.
kubectl delete pod node-debugger-aks-nodepool1-37663765-vmss000000-bkmmx
## Create the SSH connection to a Windows node
-At this time, you can't connect to a Windows Server node directly by using `kubectl debug`. Instead, you need to first connect to another node in the cluster, then connect to the Windows Server node from that node using SSH. Alternatively, you can [connect to Windows Server nodes using remote desktop protocol (RDP) connections][aks-windows-rdp] instead of using SSH.
+Currently, you can't connect to a Windows Server node directly by using `kubectl debug`. Instead, you need to first connect to another node in the cluster, and then connect to the Windows Server node from that node using SSH. Alternatively, you can [connect to Windows Server nodes using remote desktop protocol (RDP) connections][aks-windows-rdp] instead of using SSH.
To connect to another node in the cluster, use the `kubectl debug` command. For more information, see [Create an interactive shell connection to a Linux node][ssh-linux-kubectl-debug].
To create the SSH connection to the Windows Server node from another node, use t
### Create the SSH connection to a Windows node using a password
-If you didn't create your AKS cluster using the Azure CLI and the `--generate-ssh-keys` parameter, you'll use a password instead of an SSH key to create the SSH connection. To do this with Azure CLI, use the following steps. Replace `<nodeRG>` with a resource group name and `<vmssName>` with the scale set name in that resource group.
+If you didn't create your AKS cluster using the Azure CLI and the `--generate-ssh-keys` parameter, you'll use a password instead of an SSH key to create the SSH connection. To do this with Azure CLI, perform the following steps. Replace `<nodeRG>` with a resource group name and `<vmssName>` with the scale set name in that resource group.
1. Create a root user called `azureuser`.
When done, `exit` the SSH session, stop any port forwarding, and then `exit` the
kubectl delete pod node-debugger-aks-nodepool1-37663765-vmss000000-bkmmx ```
-## Update SSH public key on an existing AKS cluster (preview)
-
-### Prerequisites
-
-* Ensure the Azure CLI is installed and configured. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-* Ensure that the aks-preview extension version 0.5.111 or later. To learn how to install an Azure extension, see [How to install extensions][how-to-install-azure-extensions].
-
-> [!NOTE]
-> Updating of the SSH key is supported on Azure virtual machine scale sets with AKS clusters.
-
-Use the [az aks update][az-aks-update] command to update the SSH public key on the cluster. This operation updates the key on all node pools. You can either specify the key or a key file using the `--ssh-key-value` argument.
-
-```azurecli
-az aks update --name myAKSCluster --resource-group MyResourceGroup --ssh-key-value <new SSH key value or SSH key file>
-```
-
-The following examples demonstrate possible usage of this command:
-
-* You can specify the new SSH public key value for the `--ssh-key-value` argument:
-
- ```azurecli
- az aks update --name myAKSCluster --resource-group MyResourceGroup --ssh-key-value 'ssh-rsa AAAAB3Nza-xxx'
- ```
-
-* You specify an SSH public key file:
-
- ```azurecli
- az aks update --name myAKSCluster --resource-group MyResourceGroup --ssh-key-value ~/.ssh/id_rsa.pub
- ```
-
-> [!IMPORTANT]
-> After you update SSH key, AKS doesn't automatically reimage your node pool, you can choose anytime to perform [the reimage operation][node-image-upgrade]. Only after reimage is complete, does the update SSH key operation take effect.
-- ## Next steps
-If you need more troubleshooting data, you can [view the kubelet logs][view-kubelet-logs] or [view the Kubernetes master node logs][view-master-logs].
+* To help troubleshoot any issues with SSH connectivity to your cluster's nodes, you can [view the kubelet logs][view-kubelet-logs] or [view the Kubernetes master node logs][view-master-logs].
+* See [Manage SSH configuration][manage-ssh-node-access] to learn about managing the SSH key on an AKS cluster or node pools.
<!-- INTERNAL LINKS --> [view-kubelet-logs]: kubelet-logs.md
If you need more troubleshooting data, you can [view the kubelet logs][view-kube
[ssh-nix]: ../virtual-machines/linux/mac-create-ssh-keys.md [ssh-windows]: ../virtual-machines/linux/ssh-from-windows.md [ssh-linux-kubectl-debug]: #create-an-interactive-shell-connection-to-a-linux-node
-[az-aks-update]: /cli/azure/aks#az-aks-update
-[how-to-install-azure-extensions]: /cli/azure/azure-cli-extensions-overview#how-to-install-extensions
-[node-image-upgrade]:node-image-upgrade.md
+[manage-ssh-node-access]: manage-ssh-node-access.md
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/private-clusters.md
Private cluster is available in public regions, Azure Government, and Microsoft
* To use a custom DNS server, add the Azure public IP address 168.63.129.16 as the upstream DNS server in the custom DNS server, and make sure to add this public IP address as the *first* DNS server. For more information about the Azure IP address, see [What is IP address 168.63.129.16?][virtual-networks-168.63.129.16] * The cluster's DNS zone should be what you forward to 168.63.129.16. You can find more information on zone names in [Azure services DNS zone configuration][az-dns-zone].
+> [!NOTE]
+> The Azure Linux node pool is now generally available (GA). To learn about the benefits and deployment steps, see the [Introduction to the Azure Linux Container Host for AKS][intro-azure-linux].
+ ## Limitations * IP authorized ranges can't be applied to the private API server endpoint, they only apply to the public API server
For associated best practices, see [Best practices for network connectivity and
[az-network-private-dns-link-vnet-create]: /cli/azure/network/private-dns/link/vnet#az_network_private_dns_link_vnet_create [az-network-vnet-peering-create]: /cli/azure/network/vnet/peering#az_network_vnet_peering_create [az-network-vnet-peering-list]: /cli/azure/network/vnet/peering#az_network_vnet_peering_list
+[intro-azure-linux]: ../azure-linux/intro-azure-linux.md
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-cluster.md
Title: Upgrade an Azure Kubernetes Service (AKS) cluster
description: Learn how to upgrade an Azure Kubernetes Service (AKS) cluster to get the latest features and security updates. Previously updated : 09/14/2023 Last updated : 10/16/2023
Part of the AKS cluster lifecycle involves performing periodic upgrades to the l
For AKS clusters that use multiple node pools or Windows Server nodes, see [Upgrade a node pool in AKS][nodepool-upgrade]. To upgrade a specific node pool without performing a Kubernetes cluster upgrade, see [Upgrade a specific node pool][specific-nodepool].
+> [!NOTE]
+> The Azure Linux node pool is now generally available (GA). To learn about the benefits and deployment steps, see the [Introduction to the Azure Linux Container Host for AKS][intro-azure-linux].
+ ## Kubernetes version upgrades When you upgrade a supported AKS cluster, Kubernetes minor versions can't be skipped. You must perform all upgrades sequentially by major version number. For example, upgrades between *1.14.x* -> *1.15.x* or *1.15.x* -> *1.16.x* are allowed, however *1.14.x* -> *1.16.x* isn't allowed.
Skipping multiple versions can only be done when upgrading from an *unsupported
## Before you begin
-* If you're using Azure CLI, this article requires that you're running the Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
-* If you're using Azure PowerShell, this tutorial requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
-* Performing upgrade operations requires the `Microsoft.ContainerService/managedClusters/agentPools/write` RBAC role. For more on Azure RBAC roles, see the [Azure resource provider operations]
+* If you use the Azure CLI, you need Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+* If you use Azure PowerShell, you need Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
+* Performing upgrade operations requires the `Microsoft.ContainerService/managedClusters/agentPools/write` RBAC role. For more information, see [Create custom roles][azure-rbac-provider-operations].
> [!WARNING] > An AKS cluster upgrade triggers a cordon and drain of your nodes. If you have a low compute quota available, the upgrade may fail. For more information, see [increase quotas](../azure-portal/supportability/regional-quota-requests.md).
This article showed you how to upgrade an existing AKS cluster. To learn more ab
<!-- LINKS - internal --> [aks-tutorial-prepare-app]: ./tutorial-kubernetes-prepare-app.md
+[azure-rbac-provider-operations]: manage-azure-rbac.md#create-custom-roles-definitions
[azure-cli-install]: /cli/azure/install-azure-cli [azure-powershell-install]: /powershell/azure/install-az-ps [az-aks-get-upgrades]: /cli/azure/aks#az_aks_get_upgrades
This article showed you how to upgrade an existing AKS cluster. To learn more ab
[set-azakscluster]: /powershell/module/az.aks/set-azakscluster [az-aks-show]: /cli/azure/aks#az_aks_show [get-azakscluster]: /powershell/module/az.aks/get-azakscluster
-[az-extension-add]: /cli/azure/extension#az_extension_add
-[az-extension-update]: /cli/azure/extension#az_extension_update
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-provider-register]: /cli/azure/provider#az_provider_register
[nodepool-upgrade]: manage-node-pools.md#upgrade-a-single-node-pool
-[upgrade-cluster]: #upgrade-an-aks-cluster
[planned-maintenance]: planned-maintenance.md [aks-auto-upgrade]: auto-upgrade-cluster.md [release-tracker]: release-tracker.md
This article showed you how to upgrade an existing AKS cluster. To learn more ab
[k8s-deprecation]: https://kubernetes.io/blog/2022/11/18/upcoming-changes-in-kubernetes-1-26/#:~:text=A%20deprecated%20API%20is%20one%20that%20has%20been,point%20you%20must%20migrate%20to%20using%20the%20replacement [k8s-api]: https://kubernetes.io/docs/reference/using-api/api-concepts/ [container-insights]:/azure/azure-monitor/containers/container-insights-log-query#resource-logs
-[support-policy-user-customizations-agent-nodes]: support-policies.md#user-customization-of-agent-nodes
+[support-policy-user-customizations-agent-nodes]: support-policies.md#user-customization-of-agent-nodes
+[intro-azure-linux]: ../azure-linux/intro-azure-linux.md
aks Use Azure Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-linux.md
The Azure Linux container host on AKS uses a native AKS image that provides one
## How to use Azure Linux on AKS
+> [!NOTE]
+> The Azure Linux node pool is now generally available (GA). To learn about the benefits and deployment steps, see the [Introduction to the Azure Linux Container Host for AKS][azurelinuxdocumentation].
+ To get started using the Azure Linux container host for AKS, see: * [Creating a cluster with Azure Linux][azurelinux-cluster-config]
api-management Api Management Using With Internal Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-using-with-internal-vnet.md
This article explains how to set up VNet connectivity for your API Management in
> [!NOTE] > * None of the API Management endpoints are registered on the public DNS. The endpoints remain inaccessible until you [configure DNS](#dns-configuration) for the VNet.
-> * To use the self-hosted gateway in this mode, also enable private connectivity to the self-hosted gateway [configuration endpoint](self-hosted-gateway-overview.md#fqdn-dependencies). Currently, API Management doesn't enable configuring a custom domain name for the v2 endpoint.
+> * To use the self-hosted gateway in this mode, also enable private connectivity to the self-hosted gateway [configuration endpoint](self-hosted-gateway-overview.md#fqdn-dependencies).
Use API Management in internal mode to:
api-management How To Self Hosted Gateway On Kubernetes In Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-self-hosted-gateway-on-kubernetes-in-production.md
If you use custom domain names for the [API Management endpoints](self-hosted-ga
In this scenario, if the SSL certificate that's used by the Management endpoint isn't signed by a well-known CA certificate, you must make sure that the CA certificate is trusted by the pod of the self-hosted gateway. > [!NOTE]
-> With the self-hosted gateway v2, API Management provides a new configuration endpoint: `<apim-service-name>.configuration.azure-api.net`. Currently, API Management doesn't enable configuring a custom domain name for the v2 configuration endpoint. If you need custom hostname mapping for this endpoint, you may be able to configure an override in the container's local hosts file, for example, using a [`hostAliases`](https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/#adding-additional-entries-with-hostaliases) element in a Kubernetes container spec.
+> With the self-hosted gateway v2, API Management provides a new configuration endpoint: `<apim-service-name>.configuration.azure-api.net`. Custom hostnames are supported for this endpoint and can be used instead of the default hostname.
## DNS policy DNS name resolution plays a critical role in a self-hosted gateway's ability to connect to dependencies in Azure and dispatch API calls to backend services.
automation Automation Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-availability-zones.md
description: This article provides an overview of Azure availability zones and r
keywords: automation availability zones. Previously updated : 04/10/2023 Last updated : 10/16/2023
Automation accounts currently support the following regions:
- Australia East - Brazil South - Canada Central
+- Central India
- Central US - China North 3 - East Asia
Automation accounts currently support the following regions:
- East US 2 - France Central - Germany West Central
+- Israel Central
+- Italy North
- Japan East - Korea Central - North Europe - Norway East
+- Poland Central
- Qatar Central - South Africa North - South Central US - South East Asia - Sweden Central
+- USGov Virginia (Fairfax Private Cloud)
- UK South - West Europe - West US 2
automation Automation Runbook Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-types.md
Following are the limitations of Python runbooks
# [Python 3.8 (GA)](#tab/py38) - You must be familiar with Python scripting.
+- Source control integration isn't supported.
- For Python 3.8 modules, use wheel files targeting cp38-amd64. - To use third-party libraries, you must [import the packages](python-packages.md) into the Automation account. - Using **Start-AutomationRunbook** cmdlet in PowerShell/PowerShell Workflow to start a Python 3.8 runbook doesn't work. You can use **Start-AzAutomationRunbook** cmdlet from Az.Automation module or **Start-AzureRmAutomationRunbook** cmdlet from AzureRm.Automation module to work around this limitation. 
Following are the limitations of Python runbooks
# [Python 3.10 (preview)](#tab/py10) - For Python 3.10 (preview) modules, currently, only the wheel files targeting cp310 Linux OS are supported. [Learn more](./python-3-packages.md)
+- Source control integration isn't supported.
- Custom packages for Python 3.10 (preview) are only validated during job runtime. Job is expected to fail if the package is not compatible in the runtime or if required dependencies of packages aren't imported into automation account. - Currently, Python 3.10 (preview) runbooks are only supported from Azure portal. Rest API and PowerShell aren't supported.
azure-app-configuration Enable Dynamic Configuration Dotnet Core Push Refresh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-dotnet-core-push-refresh.md
Open *Program.cs* and update the file with the following code.
```csharp using Azure.Messaging.EventGrid;
-using Microsoft.Azure.ServiceBus;
+using Azure.Messaging.ServiceBus;
using Microsoft.Extensions.Configuration; using Microsoft.Extensions.Configuration.AzureAppConfiguration; using Microsoft.Extensions.Configuration.AzureAppConfiguration.Extensions;
namespace TestConsole
string serviceBusConnectionString = Environment.GetEnvironmentVariable(ServiceBusConnectionStringEnvVarName); string serviceBusTopic = Environment.GetEnvironmentVariable(ServiceBusTopicEnvVarName); string serviceBusSubscription = Environment.GetEnvironmentVariable(ServiceBusSubscriptionEnvVarName);
- SubscriptionClient serviceBusClient = new SubscriptionClient(serviceBusConnectionString, serviceBusTopic, serviceBusSubscription);
+ ServiceBusClient serviceBusClient = new ServiceBusClient(serviceBusConnectionString);
+ ServiceBusProcessor serviceBusProcessor = serviceBusClient.CreateProcessor(serviceBusTopic, serviceBusSubscription);
- serviceBusClient.RegisterMessageHandler(
- handler: (message, cancellationToken) =>
+ serviceBusProcessor.ProcessMessageAsync += (processMessageEventArgs) =>
{ // Build EventGridEvent from notification message
- EventGridEvent eventGridEvent = EventGridEvent.Parse(BinaryData.FromBytes(message.Body));
+ EventGridEvent eventGridEvent = EventGridEvent.Parse(BinaryData.FromBytes(processMessageEventArgs.Message.Body));
// Create PushNotification from eventGridEvent eventGridEvent.TryCreatePushNotification(out PushNotification pushNotification);
namespace TestConsole
_refresher.ProcessPushNotification(pushNotification); return Task.CompletedTask;
- },
- exceptionReceivedHandler: (exceptionargs) =>
+ };
+
+ serviceBusProcessor.ProcessErrorAsync += (exceptionargs) =>
{ Console.WriteLine($"{exceptionargs.Exception}"); return Task.CompletedTask;
- });
+ };
} } }
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md
Arc resource bridge supports the following Azure regions:
* West Europe * North Europe * UK South
+* UK West
+ * Sweden Central * Canada Central * Australia East
If an Arc resource bridge is unable to be upgraded to a supported version, you m
+
azure-arc Quick Start Connect Vcenter To Arc Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md
This account is used for the ongoing operation of Azure Arc-enabled VMware vSphe
### Workstation
-You need a Windows or Linux machine that can access both your vCenter Server instance and the internet, directly or through a proxy.
+You need a Windows or Linux machine that can access both your vCenter Server instance and the internet, directly or through a proxy. The workstation must also have outbound network connectivity to the ESXi host backing the datastore. This datastore connectivity is needed to upload the Arc resource bridge image to the datastore as part of onboarding.
## Prepare vCenter Server
azure-functions Functions Bindings Service Bus Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-output.md
When the parameter value is null when the function exits, Functions doesn't crea
Use the [Message](/dotnet/api/microsoft.azure.servicebus.message) type when sending messages with metadata. Parameters are defined as `return` type attributes. Use an `ICollector<T>` or `IAsyncCollector<T>` to write multiple messages. A message is created when you call the `Add` method. + When the parameter value is null when the function exits, Functions doesn't create a message. [!INCLUDE [functions-service-bus-account-attribute](../../includes/functions-service-bus-account-attribute.md)]
azure-functions Functions Bindings Service Bus Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-trigger.md
The following table explains the properties you can set using this trigger attri
|**Access**|Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.| |**IsBatched**| Messages are delivered in batches. Requires an array or collection type. | |**IsSessionsEnabled**|`true` if connecting to a [session-aware](../service-bus-messaging/message-sessions.md) queue or subscription. `false` otherwise, which is the default value.|
-|**AutoComplete**|`true` Whether the trigger should automatically call complete after processing, or if the function code will manually call complete.<br/><br/>If set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br/><br/>When set to `false`, you are responsible for calling [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) methods to complete, abandon, or deadletter the message. If an exception is thrown (and none of the `MessageReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed. |
+|**AutoComplete**|Whether the trigger should automatically call complete after processing, or if the function code manually calls complete. The default is `true`.<br/><br/>If set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br/><br/>When set to `false`, you are responsible for calling [ServiceBusReceiver](/dotnet/api/azure.messaging.servicebus.servicebusreceiver) methods to complete, abandon, or dead-letter the message, session, or batch. If an exception is thrown (and none of the `ServiceBusReceiver` methods are called), the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed. |
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
In [C# class libraries](functions-dotnet-class-library.md), the attribute's cons
Use the [Message](/dotnet/api/microsoft.azure.servicebus.message) type to receive messages with metadata. To learn more, see [Messages, payloads, and serialization](../service-bus-messaging/service-bus-messages-payloads.md). + In [C# class libraries](functions-dotnet-class-library.md), the attribute's constructor takes the name of the queue or the topic and subscription. [!INCLUDE [functions-service-bus-account-attribute](../../includes/functions-service-bus-account-attribute.md)]
The following parameter types are available for the queue or topic message:
* [BrokeredMessage](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) - Gives you the deserialized message with the [BrokeredMessage.GetBody\<T>()](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.getbody#Microsoft_ServiceBus_Messaging_BrokeredMessage_GetBody__1) method. * [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) - Used to receive and acknowledge messages from the message container, which is required when `autoComplete` is set to `false`. + In [C# class libraries](functions-dotnet-class-library.md), the attribute's constructor takes the name of the queue or the topic and subscription. In Azure Functions version 1.x, you can also specify the connection's access rights. If you don't specify access rights, the default is `Manage`. [!INCLUDE [functions-service-bus-account-attribute](../../includes/functions-service-bus-account-attribute.md)]
Poison message handling can't be controlled or configured in Azure Functions. Se
The Functions runtime receives a message in [PeekLock mode](../service-bus-messaging/service-bus-performance-improvements.md#receive-mode). It calls `Complete` on the message if the function finishes successfully, or calls `Abandon` if the function fails. If the function runs longer than the `PeekLock` timeout, the lock is automatically renewed as long as the function is running.
-The `maxAutoRenewDuration` is configurable in *host.json*, which maps to [OnMessageOptions.MaxAutoRenewDuration](/dotnet/api/microsoft.azure.servicebus.messagehandleroptions.maxautorenewduration). The default value of this setting is 5 minutes.
+The `maxAutoRenewDuration` is configurable in *host.json*, which maps to [ServiceBusProcessor.MaxAutoLockRenewalDuration](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.maxautolockrenewalduration). The default value of this setting is 5 minutes.
::: zone pivot="programming-language-csharp" ## Message metadata
These properties are members of the [ServiceBusReceivedMessage](/dotnet/api/azur
These properties are members of the [Message](/dotnet/api/microsoft.azure.servicebus.message) class. + |Property|Type|Description| |--|-|--| |`ContentType`|`string`|A content type identifier utilized by the sender and receiver for application-specific logic.|
These properties are members of the [Message](/dotnet/api/microsoft.azure.servic
These properties are members of the [BrokeredMessage](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) and [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) classes. + |Property|Type|Description| |--|-|--| |`ContentType`|`string`|A content type identifier utilized by the sender and receiver for application-specific logic.|
Functions version 1.x doesn't support isolated worker process. To use the isolat
- [Send Azure Service Bus messages from Azure Functions (Output binding)](./functions-bindings-service-bus-output.md)
-[BrokeredMessage]: /dotnet/api/microsoft.servicebus.messaging.brokeredmessage
[upgrade your application to Functions 4.x]: ./migrate-version-1-version-4.md
azure-functions Functions Bindings Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus.md
The Service Bus extension supports parameter types according to the table below.
Earlier versions of the extension exposed types from the now deprecated [Microsoft.Azure.ServiceBus] namespace. Newer types from [Azure.Messaging.ServiceBus] are exclusive to **Extension 5.x+**. + This version of the extension supports parameter types according to the table below. The Service Bus extension supports parameter types according to the table below.
The Service Bus extension supports parameter types according to the table below.
Functions 1.x exposed types from the deprecated [Microsoft.ServiceBus.Messaging] namespace. Newer types from [Azure.Messaging.ServiceBus] are exclusive to **Extension 5.x+**. To use these, you will need to [upgrade your application to Functions 4.x]. + # [Extension 5.x+](#tab/extensionv5/isolated-process) The isolated worker process supports parameter types according to the tables below. Support for binding to types from [Azure.Messaging.ServiceBus] is in preview. Current support does not yet include message settlement scenarios for triggers.
Functions version 1.x doesn't support isolated worker process. To use the isolat
[Microsoft.ServiceBus.Messaging]: /dotnet/api/microsoft.servicebus.messaging + [upgrade your application to Functions 4.x]: ./migrate-version-1-version-4.md :::zone-end
When you set the `isSessionsEnabled` property or attribute on [the trigger](func
|||| |**prefetchCount**|`0`|Gets or sets the number of messages that the message receiver can simultaneously request.| |**maxAutoRenewDuration**|`00:05:00`|The maximum duration within which the message lock will be renewed automatically.|
-|**autoComplete**|`true`|Whether the trigger should automatically call complete after processing, or if the function code manually calls complete.<br><br>Setting to `false` is only supported in C#.<br><br>When set to `true`, the trigger completes the message, session, or batch automatically when the function execution completes successfully, and abandons the message otherwise.<br><br>When set to `false`, you are responsible for calling [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) methods to complete, abandon, or deadletter the message, session, or batch. When an exception is thrown (and none of the `MessageReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed.<br><br>In non-C# functions, exceptions in the function results in the runtime calls `abandonAsync` in the background. If no exception occurs, then `completeAsync` is called in the background. |
+|**autoComplete**|`true`|Whether the trigger should automatically call complete after processing, or if the function code manually calls complete.<br><br>Setting to `false` is only supported in C#.<br><br>When set to `true`, the trigger completes the message, session, or batch automatically when the function execution completes successfully, and abandons the message otherwise.<br><br>When set to `false`, you are responsible for calling [ServiceBusReceiver](/dotnet/api/azure.messaging.servicebus.servicebusreceiver) methods to complete, abandon, or deadletter the message, session, or batch. When an exception is thrown (and none of the `ServiceBusReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed.<br><br>In non-C# functions, an exception in the function results in the runtime calling `abandonAsync` in the background. If no exception occurs, then `completeAsync` is called in the background. |
|**maxConcurrentCalls**|`16`|The maximum number of concurrent calls to the callback that the message pump should initiate per scaled instance. By default, the Functions runtime processes multiple messages concurrently.| |**maxConcurrentSessions**|`2000`|The maximum number of sessions that can be handled concurrently per scaled instance.| |**maxMessageCount**|`1000`| The maximum number of messages sent to the function when triggered. |
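For context, a minimal *host.json* sketch that sets several of these values might look like the following. This assumes the 2.x–4.x extension's `messageHandlerOptions`/`sessionHandlerOptions`/`batchOptions` layout; the values shown are illustrative.

```json
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "prefetchCount": 100,
      "messageHandlerOptions": {
        "autoComplete": true,
        "maxConcurrentCalls": 16,
        "maxAutoRenewDuration": "00:05:00"
      },
      "sessionHandlerOptions": {
        "maxConcurrentSessions": 2000
      },
      "batchOptions": {
        "maxMessageCount": 1000
      }
    }
  }
}
```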
azure-functions Functions Dotnet Class Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-class-library.md
You can't use `out` parameters in async functions. For output bindings, use the
A function can accept a [CancellationToken](/dotnet/api/system.threading.cancellationtoken) parameter, which enables the operating system to notify your code when the function is about to be terminated. You can use this notification to make sure the function doesn't terminate unexpectedly in a way that leaves data in an inconsistent state.
-Consider the case when you have a function that processes messages in batches. The following Azure Service Bus-triggered function processes an array of [Message](/dotnet/api/microsoft.azure.servicebus.message) objects, which represents a batch of incoming messages to be processed by a specific function invocation:
+Consider the case when you have a function that processes messages in batches. The following Azure Service Bus-triggered function processes an array of [ServiceBusReceivedMessage](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage) objects, which represents a batch of incoming messages to be processed by a specific function invocation:
```csharp
-using Microsoft.Azure.ServiceBus;
+using Azure.Messaging.ServiceBus;
using System.Threading; namespace ServiceBusCancellationToken
namespace ServiceBusCancellationToken
{ [FunctionName("servicebus")] public static void Run([ServiceBusTrigger("csharpguitar", Connection = "SB_CONN")]
- Message[] messages, CancellationToken cancellationToken, ILogger log)
+ ServiceBusReceivedMessage[] messages, CancellationToken cancellationToken, ILogger log)
{ try {
azure-functions Functions Host Json V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-host-json-v1.md
Configuration settings for [Host health monitor](https://github.com/Azure/azure-
|||| |enabled|true|Specifies whether the feature is enabled. | |healthCheckInterval|10 seconds|The time interval between the periodic background health checks. |
-|healthCheckWindow|2 minutes|A sliding time window used in conjunction with the `healthCheckThreshold` setting.|
+|healthCheckWindow|2 minutes|A sliding time window used with the `healthCheckThreshold` setting.|
|healthCheckThreshold|6|Maximum number of times the health check can fail before a host recycle is initiated.| |counterThreshold|0.80|The threshold at which a performance counter will be considered unhealthy.|
Configuration settings for [http triggers and bindings](functions-bindings-http-
|Property |Default | Description | ||||
-|dynamicThrottlesEnabled|false|When enabled, this setting causes the request processing pipeline to periodically check system performance counters like connections/threads/processes/memory/cpu/etc. and if any of those counters are over a built-in high threshold (80%), requests will be rejected with a 429 "Too Busy" response until the counter(s) return to normal levels.|
+|dynamicThrottlesEnabled|false|When enabled, this setting causes the request processing pipeline to periodically check system performance counters like connections/threads/processes/memory/cpu/etc. and if any of those counters are over a built-in high threshold (80%), requests are rejected with a 429 "Too Busy" response until the counter(s) return to normal levels.|
|maxConcurrentRequests|unbounded (`-1`)|The maximum number of HTTP functions that will be executed in parallel. This allows you to control concurrency, which can help manage resource utilization. For example, you might have an HTTP function that uses a lot of system resources (memory/cpu/sockets) such that it causes issues when concurrency is too high. Or you might have a function that makes outbound requests to a third party service, and those calls need to be rate limited. In these cases, applying a throttle here can help.|
-|maxOutstandingRequests|unbounded (`-1`)|The maximum number of outstanding requests that are held at any given time. This limit includes requests that are queued but have not started executing, as well as any in progress executions. Any incoming requests over this limit are rejected with a 429 "Too Busy" response. That allows callers to employ time-based retry strategies, and also helps you to control maximum request latencies. This only controls queuing that occurs within the script host execution path. Other queues such as the ASP.NET request queue will still be in effect and unaffected by this setting.|
+|maxOutstandingRequests|unbounded (`-1`)|The maximum number of outstanding requests that are held at any given time. This limit includes requests that are queued but have not started executing, and any in progress executions. Any incoming requests over this limit are rejected with a 429 "Too Busy" response. That allows callers to employ time-based retry strategies, and also helps you to control maximum request latencies. This only controls queuing that occurs within the script host execution path. Other queues such as the ASP.NET request queue will still be in effect and unaffected by this setting.|
|routePrefix|api|The route prefix that applies to all routes. Use an empty string to remove the default prefix. | ## id The unique ID for a job host. Can be a lower case GUID with dashes removed. Required when running locally. When running in Azure, we recommend that you not set an ID value. An ID is generated automatically in Azure when `id` is omitted.
-If you share a Storage account across multiple function apps, make sure that each function app has a different `id`. You can omit the `id` property or manually set each function app's `id` to a different value. The timer trigger uses a storage lock to ensure that there will be only one timer instance when a function app scales out to multiple instances. If two function apps share the same `id` and each uses a timer trigger, only one timer will run.
+If you share a Storage account across multiple function apps, make sure that each function app has a different `id`. You can omit the `id` property or manually set each function app's `id` to a different value. The timer trigger uses a storage lock to ensure that there will be only one timer instance when a function app scales out to multiple instances. If two function apps share the same `id` and each uses a timer trigger, only one timer runs.
```json {
Configuration setting for [Service Bus triggers and bindings](functions-bindings
|Property |Default | Description | |||| |maxConcurrentCalls|16|The maximum number of concurrent calls to the callback that the message pump should initiate. By default, the Functions runtime processes multiple messages concurrently. To direct the runtime to process only a single queue or topic message at a time, set `maxConcurrentCalls` to 1. |
-|prefetchCount|n/a|The default PrefetchCount that will be used by the underlying MessageReceiver.|
+|prefetchCount|n/a|The default PrefetchCount that will be used by the underlying ServiceBusReceiver.|
|autoRenewTimeout|00:05:00|The maximum duration within which the message lock will be renewed automatically.|
-|autoComplete|true|When true, the trigger will complete the message processing automatically on successful execution of the operation. When false, it is the responsibility of the function to complete the message before returning.|
+|autoComplete|true|When true, the trigger completes the message processing automatically on successful execution of the operation. When false, it is the responsibility of the function to complete the message before returning.|
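For reference, a minimal Functions 1.x *host.json* sketch with these Service Bus settings might look like the following (the values are illustrative):

```json
{
  "serviceBus": {
    "maxConcurrentCalls": 16,
    "prefetchCount": 100,
    "autoRenewTimeout": "00:05:00",
    "autoComplete": true
  }
}
```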
## singleton
Configuration settings for Singleton lock behavior. For more information, see [G
|lockPeriod|00:00:15|The period that function level locks are taken for. The locks auto-renew.| |listenerLockPeriod|00:01:00|The period that listener locks are taken for.| |listenerLockRecoveryPollingInterval|00:01:00|The time interval used for listener lock recovery if a listener lock couldn't be acquired on startup.|
-|lockAcquisitionTimeout|00:01:00|The maximum amount of time the runtime will try to acquire a lock.|
+|lockAcquisitionTimeout|00:01:00|The maximum amount of time the runtime tries to acquire a lock.|
|lockAcquisitionPollingInterval|n/a|The interval between lock acquisition attempts.| ## tracing
azure-functions Functions Reference Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-csharp.md
The following table explains the binding configuration properties that you set i
|**connection**| The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](./functions-bindings-service-bus-trigger.md#connections).| |**accessRights**| Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.| |**isSessionsEnabled**| `true` if connecting to a [session-aware](../service-bus-messaging/message-sessions.md) queue or subscription. `false` otherwise, which is the default value.|
-|**autoComplete**| `true` when the trigger should automatically call complete after processing, or if the function code will manually call complete.<br/><br/>Setting to `false` is only supported in C#.<br/><br/>If set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br/><br/>When set to `false`, you are responsible for calling [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) methods to complete, abandon, or deadletter the message. If an exception is thrown (and none of the `MessageReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed.<br/><br/>This property is available only in Azure Functions 2.x and higher. |
+|**autoComplete**| `true` when the trigger should automatically call complete after processing, or if the function code will manually call complete.<br/><br/>Setting to `false` is only supported in C#.<br/><br/>If set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br/><br/>When set to `false`, you are responsible for calling [ServiceBusReceiver](/dotnet/api/azure.messaging.servicebus.servicebusreceiver) methods to complete, abandon, or deadletter the message, session, or batch. When an exception is thrown (and none of the `ServiceBusReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed.<br/><br/>This property is available only in Azure Functions 2.x and higher. |
The following example shows a Service Bus trigger binding in a *function.json* file and a C# script function that uses the binding. The function reads message metadata and logs a Service Bus queue message.
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Spring Apps](../../spring-apps/index.yml) | &#x2705; | &#x2705; | | [Azure Stack Edge](../../databox-online/index.yml) (formerly Data Box Edge) **&ast;** | &#x2705; | &#x2705; | | [Azure Stack HCI](/azure-stack/hci/) | &#x2705; | &#x2705; |
-| [Azure Video Indexer](../../azure-video-indexer/index.yml) | &#x2705; | &#x2705; |
+| [Azure Video Indexer](/azure/azure-video-indexer/) | &#x2705; | &#x2705; |
| [Azure Virtual Desktop](../../virtual-desktop/index.yml) (formerly Windows Virtual Desktop) | &#x2705; | &#x2705; | | [Azure VMware Solution](../../azure-vmware/index.yml) | &#x2705; | &#x2705; | | [Backup](../../backup/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Stack Bridge](/azure-stack/operator/azure-stack-usage-reporting) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Stack Edge](../../databox-online/index.yml) (formerly Data Box Edge) **&ast;** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Stack HCI](/azure-stack/hci/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Video Indexer](../../azure-video-indexer/index.yml) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Video Indexer](/azure/azure-video-indexer/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Azure Virtual Desktop](../../virtual-desktop/index.yml) (formerly Windows Virtual Desktop) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Backup](../../backup/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Bastion](../../bastion/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
We strongly recommended to always update to the latest version, or opt in to the
## Version details | Release Date | Release notes | Windows | Linux | |:|:|:|:|
+| September 2023| **Windows** <ul><li>Fix issue with high CPU usage due to excessive Windows Event Logs subscription reset</li><li>Reduce fluentbit resource usage by limiting tracked files older than 3 days and limiting logging to errors only</li><li>Fix race-condition where resource_id is unavailable when agent is restarted</li><li>Fix race-condition when AMA vm-extension is provisioned involving disable command</li><li>Update MetricExtension version to 2.2023.721.1630</li><li>Update Troubleshooter to v1.5.14 </li></ul>|1.20.0| None |
| August 2023| **Windows** <ul><li>AMA: Allow prefixes in the tag names to handle regression</li><li>Updating package version for AzSecPack 4.28 release</li></ul>**Linux**<ul><li> Coming soon</li></ul>|1.19.0| Coming Soon | | July 2023| **Windows** <ul><li>Fix crash when Event Log subscription callback throws errors.</li><li>MetricExtension updated to 2.2023.609.2051</li></ul> |1.18.0|None| | June 2023| **Windows** <ul><li>Add new file path column to custom logs table</li><li>Config setting to disable custom IMDS endpoint in Tenant.json file</li><li>FluentBit binaries signed with Microsoft customer Code Sign cert</li><li>Minimize number of retries on calls to refresh tokens</li><li>Don't overwrite resource ID with empty string</li><li>AzSecPack updated to version 4.27</li><li>AzureProfiler and AzurePerfCollector updated to version 1.0.0.990</li><li>MetricsExtension updated to version 2.2023.513.10</li><li>Troubleshooter updated to version 1.5.0</li></ul>**Linux** <ul><li>Add new column CollectorHostName to syslog table to identify forwarder/collector machine</li><li>Link OpenSSL dynamically</li><li>**Fixes**<ul><li>Allow uploads soon after AMA start up</li><li>Run LocalSink GC on a dedicated thread to avoid thread pool scheduling issues</li><li>Fix upgrade restart of disabled services</li><li>Handle Linux Hardening where sudo on root is blocked</li><li>CEF processing fixes for noncompliant RFC 5424 logs</li><li>ASA tenant can fail to start up due to config-cache directory permissions</li><li>Fix auth proxy in AMA</li><li>Fix to remove null characters in agentlauncher.log after log rotation</li><li>Fix for authenticated proxy (1.27.3)</li><li>Fix regression in VM Insights (1.27.4)</ul></li></ul>|1.17.0 |1.27.4|
azure-monitor Asp Net Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md
Below is the currently supported list of dependency calls that are automatically
| [SqlClient](https://www.nuget.org/packages/System.Data.SqlClient) | .NET Core 1.0+, NuGet 4.3.0 | | [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient/1.1.2)| 1.1.0 - latest stable release. (See Note below.) | [Event Hubs Client SDK](https://www.nuget.org/packages/Microsoft.Azure.EventHubs) | 1.1.0 |
-| [ServiceBus Client SDK](https://www.nuget.org/packages/Microsoft.Azure.ServiceBus) | 3.0.0 |
+| [ServiceBus Client SDK](https://www.nuget.org/packages/Azure.Messaging.ServiceBus) | 7.0.0 |
| <b>Storage clients</b>| | | ADO.NET | 4.5+ |
azure-monitor Asp Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net.md
This section guides you through manually adding Application Insights to a templa
</ExcludeComponentCorrelationHttpHeadersOnDomains> <IncludeDiagnosticSourceActivities> <Add>Microsoft.Azure.EventHubs</Add>
- <Add>Microsoft.Azure.ServiceBus</Add>
+ <Add>Azure.Messaging.ServiceBus</Add>
</IncludeDiagnosticSourceActivities> </Add> <Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.PerformanceCollectorModule, Microsoft.AI.PerfCounterCollector">
azure-monitor Custom Operations Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-operations-tracking.md
The [W3C Trace Context](https://www.w3.org/TR/trace-context/) and [HTTP Protocol
For tracing information, see [Distributed tracing and correlation through Azure Service Bus messaging](../../service-bus-messaging/service-bus-end-to-end-tracing.md#distributed-tracing-and-correlation-through-service-bus-messaging).
-> [!IMPORTANT]
-> The WindowsAzure.ServiceBus and Microsoft.Azure.ServiceBus packages are deprecated.
- ### Azure Storage queue The following example shows how to track the [Azure Storage queue](/azure/storage/queues/storage-quickstart-queues-dotnet?tabs=passwordless%2Croles-azure-portal%2Cenvironment-variable-windows%2Csign-in-azure-cli) operations and correlate telemetry between the producer, the consumer, and Azure Storage.
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
This article shows you how to configure Azure Monitor Application Insights for J
## Connection string and role name
-Connection string and role name are the most common settings you need to get started:
-
-```json
-{
- "connectionString": "...",
- "role": {
- "name": "my cloud role name"
- }
-}
-```
-
-Connection string is required. Role name is important anytime you're sending data from different applications to the same Application Insights resource.
More information and configuration options are provided in the following sections.
azure-monitor Java Standalone Sampling Overrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-sampling-overrides.md
those are also collected for all '/login' requests.
## Span attributes available for sampling
-Span attribute names are based on the OpenTelemetry semantic conventions:
+Span attribute names are based on the [OpenTelemetry semantic conventions](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/README.md) (HTTP, Messaging, Database, RPC).
-* [HTTP](https://github.com/open-telemetry/semantic-conventions/blob/main/docs//http.md)
-* [Messaging](https://github.com/open-telemetry/semantic-conventions/blob/main/docs//messaging.md)
-* [Database](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/database/README.md)
-* [RPC](https://github.com/open-telemetry/semantic-conventions/blob/main/docs//rpc.md)
To see the exact set of attributes captured by Application Insights Java for your application, set the [self-diagnostics level to debug](./java-standalone-config.md#self-diagnostics), and look for debug messages starting
azure-monitor Java Standalone Telemetry Processors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors.md
This section lists some common span attributes that telemetry processors can use
| Attribute | Type | Description | ||||
-| `db.system` | string | Identifier for the database management system (DBMS) product being used. See [list of identifiers](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/database/README.md). |
+| `db.system` | string | Identifier for the database management system (DBMS) product being used. See [Semantic Conventions for database operations](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/README.md). |
| `db.connection_string` | string | Connection string used to connect to the database. It's recommended to remove embedded credentials.| | `db.user` | string | Username for accessing the database. | | `db.name` | string | String used to report the name of the database being accessed. For commands that switch the database, this string should be set to the target database, even if the command fails.|
azure-monitor Opentelemetry Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md
Use one of the following two ways to configure the connection string:
### [Java](#tab/java)
-For more information about Java, see the [Java supplemental documentation](java-standalone-config.md).
### [Node.js](#tab/nodejs)
You might want to update the [Cloud Role Name](app-map.md#understand-the-cloud-r
### [ASP.NET Core](#tab/aspnetcore)
-Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/README.md).
+Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [OpenTelemetry Semantic Conventions](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/README.md).
```csharp // Setting role name and role instance
app.Run();
### [.NET](#tab/net)
-Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/README.md).
+Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [OpenTelemetry Semantic Conventions](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/README.md).
```csharp // Setting role name and role instance
To set the cloud role instance, see [cloud role instance](java-standalone-config
### [Node.js](#tab/nodejs)
-Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/README.md).
+Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [OpenTelemetry Semantic Conventions](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/README.md).
```typescript // Import the useAzureMonitor function, the AzureMonitorOpenTelemetryOptions class, the Resource class, and the SemanticResourceAttributes class from the @azure/monitor-opentelemetry, @opentelemetry/resources, and @opentelemetry/semantic-conventions packages, respectively.
useAzureMonitor(options);
### [Python](#tab/python)
-Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/README.md).
+Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [OpenTelemetry Semantic Conventions](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/README.md).
Set Resource attributes using the `OTEL_RESOURCE_ATTRIBUTES` and/or `OTEL_SERVICE_NAME` environment variables. `OTEL_RESOURCE_ATTRIBUTES` takes series of comma-separated key-value pairs. For example, to set the Cloud Role Name to `my-namespace.my-helloworld-service` and set Cloud Role Instance to `my-instance`, you can set `OTEL_RESOURCE_ATTRIBUTES` and `OTEL_SERVICE_NAME` as such: ```
azure-monitor Container Insights Enable Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-aks.md
This article describes how to set up Container insights to monitor a managed Kub
If you're connecting an existing AKS cluster to a Log Analytics workspace in another subscription, the *Microsoft.ContainerService* resource provider must be registered in the subscription with the Log Analytics workspace. For more information, see [Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
+> [!NOTE]
+> When you enable Container Insights on clusters that use legacy authentication, a managed identity is created automatically. This identity isn't available if the cluster migrates to MSI authentication or if Container Insights is disabled, so don't use this managed identity for anything else.
+ ## New AKS cluster You can enable monitoring for an AKS cluster when it's created by using any of the following methods:
To enable [managed identity authentication](container-insights-onboard.md#authen
- `aksResourceId`: Use the values on the **AKS Overview** page for the AKS cluster. - `aksResourceLocation`: Use the values on the **AKS Overview** page for the AKS cluster. - `workspaceResourceId`: Use the resource ID of your Log Analytics workspace.
- - `resourceTagValues`: Match the existing tag values specified for the existing Container insights extension data collection rule (DCR) of the cluster and the name of the DCR. The name will be *MSCI-\<clusterName\>-\<clusterRegion\>* and this resource created in an AKS clusters resource group. If this is the first time onboarding, you can set the arbitrary tag values.
+ - `resourceTagValues`: Match the existing tag values specified for the existing Container insights extension data collection rule (DCR) of the cluster and the name of the DCR. The name is *MSCI-\<clusterRegion\>-\<clusterName\>*, and this resource is created in the AKS cluster's resource group. If this is the first time onboarding, you can set arbitrary tag values. A minimal parameter file sketch follows this list.
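A minimal sketch of a template parameter file that supplies these values follows. The subscription, resource group, cluster, workspace, and tag values are placeholders, not taken from the source:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "aksResourceId": {
      "value": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ContainerService/managedClusters/<cluster-name>"
    },
    "aksResourceLocation": {
      "value": "eastus"
    },
    "workspaceResourceId": {
      "value": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
    },
    "resourceTagValues": {
      "value": {
        "env": "dev"
      }
    }
  }
}
```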
To enable [managed identity authentication](container-insights-onboard.md#authentication):
azure-monitor Resource Logs Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-schema.md
The schema for resource logs varies depending on the resource and log category.
| Azure Storage | [Blobs](../../storage/blobs/monitor-blob-storage-reference.md#resource-logs-preview), [Files](../../storage/files/storage-files-monitoring-reference.md#resource-logs-preview), [Queues](../../storage/queues/monitor-queue-storage-reference.md#resource-logs-preview), [Tables](../../storage/tables/monitor-table-storage-reference.md#resource-logs-preview) | | Azure Stream Analytics |[Job logs](../../stream-analytics/stream-analytics-job-diagnostic-logs.md) | | Azure Traffic Manager | [Traffic Manager log schema](../../traffic-manager/traffic-manager-diagnostic-logs.md) |
-| Azure Video Indexer|[Monitor Azure Video Indexer data reference](../../azure-video-indexer/monitor-video-indexer-data-reference.md)|
+| Azure Video Indexer|[Monitor Azure Video Indexer data reference](/azure/azure-video-indexer/monitor-video-indexer-data-reference)|
| Azure Virtual Network | Schema not available | | Azure Web PubSub | [Monitoring Azure Web PubSub data reference](../../azure-web-pubsub/howto-monitor-data-reference.md) | | Virtual network gateways | [Logging for Virtual Network Gateways](../../vpn-gateway/troubleshoot-vpn-with-azure-diagnostics.md)|
azure-monitor Monitor Virtual Machine Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-data-collection.md
Last updated 01/05/2023 - # Monitor virtual machines with Azure Monitor: Collect data
The following samples use the `Perf` table with custom performance data. For inf
| `Perf | where Computer == "MyComputer" and CounterName startswith_cs "%" and InstanceName == "_Total" | summarize AggregatedValue = percentile(CounterValue, 70) by bin(TimeGenerated, 1h), CounterName` | Hourly 70 percentile of every % percent counter for a particular computer | | `Perf | where CounterName == "% Processor Time" and InstanceName == "_Total" and Computer == "MyComputer" | summarize ["min(CounterValue)"] = min(CounterValue), ["avg(CounterValue)"] = avg(CounterValue), ["percentile75(CounterValue)"] = percentile(CounterValue, 75), ["max(CounterValue)"] = max(CounterValue) by bin(TimeGenerated, 1h), Computer` |Hourly average, minimum, maximum, and 75-percentile CPU usage for a specific computer | | `Perf | where ObjectName == "MSSQL$INST2:Databases" and InstanceName == "master"` | All Performance data from the Database performance object for the master database from the named SQL Server instance INST2. |
+| `Perf | where TimeGenerated >ago(5m) | where ObjectName == "Process" and InstanceName != "_Total" and InstanceName != "Idle" | where CounterName == "% Processor Time" | summarize cpuVal=avg(CounterValue) by Computer,InstanceName | join (Perf| where TimeGenerated >ago(5m)| where ObjectName == "Process" and CounterName == "ID Process" | summarize arg_max(TimeGenerated,*) by ProcID=CounterValue ) on Computer,InstanceName | sort by TimeGenerated desc | summarize AvgCPU = avg(cpuVal) by InstanceName,ProcID` | Average CPU usage over the last 5 minutes for each process ID. |
+ ## Collect text logs Some applications write events to a text log stored on the virtual machine. Create a [custom table and DCR](../agents/data-collection-text-log.md) to collect this data. You define the location of the text log, its detailed configuration, and the schema of the custom table. There's a cost for the ingestion and retention of this data in the workspace.
The runbook can access any resources on the local machine to gather required dat
* [Analyze monitoring data collected for virtual machines](monitor-virtual-machine-analyze.md) * [Create alerts from collected data](monitor-virtual-machine-alerts.md)++
azure-netapp-files Azure Netapp Files Create Volumes Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
Before creating an SMB volume, you need to create an Active Directory connection
* <a name="continuous-availability"></a>If you want to enable Continuous Availability for the SMB volume, select **Enable Continuous Availability**. >[!IMPORTANT]
- >You should enable Continuous Availability for Citrix App Layering, SQL Server, and [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md). Using SMB Continuous Availability shares for workloads other than Citrix App Layering, SQL Server, and FSLogix user profile containers is *not* supported. This feature is currently supported on Windows SQL Server. Linux SQL Server is not currently supported. If you are using a non-administrator (domain) account to install SQL Server, ensure that the account has the required security privilege assigned. If the domain account does not have the required security privilege (`SeSecurityPrivilege`), and the privilege cannot be set at the domain level, you can grant the privilege to the account by using the **Security privilege users** field of Active Directory connections. See [Create an Active Directory connection](create-active-directory-connections.md#create-an-active-directory-connection).
+ >You should enable Continuous Availability for Citrix App Layering, SQL Server, [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md), and FSLogix ODFC containers. Using SMB Continuous Availability shares for workloads other than Citrix App Layering, SQL Server, FSLogix user profile containers, or FSLogix ODFC containers is *not* supported. This feature is currently supported on Windows SQL Server. Linux SQL Server is not currently supported. If you are using a non-administrator (domain) account to install SQL Server, ensure that the account has the required security privilege assigned. If the domain account does not have the required security privilege (`SeSecurityPrivilege`), and the privilege cannot be set at the domain level, you can grant the privilege to the account by using the **Security privilege users** field of Active Directory connections. See [Create an Active Directory connection](create-active-directory-connections.md#create-an-active-directory-connection).
**Custom applications are not supported with SMB Continuous Availability.**
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
The following diagram demonstrates how customer-managed keys work with Azure Net
* Customer-managed keys can only be configured on new volumes. You can't migrate existing volumes to customer-managed key encryption. * To create a volume using customer-managed keys, you must select the *Standard* network features. You can't use customer-managed key volumes with volume configured using Basic network features. Follow instructions in to [Set the Network Features option](configure-network-features.md#set-the-network-features-option) in the volume creation page. * For increased security, you can select the **Disable public access** option within the network settings of your key vault. When selecting this option, you must also select **Allow trusted Microsoft services to bypass this firewall** to permit the Azure NetApp Files service to access your encryption key.
-* MSI Automatic certificate renewal isn't currently supported. It is recommended to set up an Azure monitor alert for when the MSI certificate is going to expire.
+* Automatic managed service identity (MSI) certificate renewal isn't currently supported. We recommend setting up an Azure Monitor alert for when the MSI certificate is about to expire.
* The MSI certificate has a lifetime of 90 days. It becomes eligible for renewal after 46 days. **After 90 days, the certificate is no longer valid and the customer-managed key volumes under the NetApp account will go offline.** * To renew, you need to call the NetApp account operation `renewCredentials` if eligible for renewal. If it's not eligible, an error message communicates the date of eligibility. * Version 2.42 or later of the Azure CLI supports running the `renewCredentials` operation with the [az netappfiles account command](/cli/azure/netappfiles/account#az-netappfiles-account-renew-credentials). For example:
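As a hedged illustration only (not the article's own example; the exact parameter names may differ, so check `az netappfiles account renew-credentials --help`), the call might look like:

```azurecli
az netappfiles account renew-credentials \
    --account-name myNetAppAccount \
    --resource-group myResourceGroup
```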
azure-netapp-files Enable Continuous Availability Existing SMB https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/enable-continuous-availability-existing-SMB.md
You can enable the SMB Continuous Availability (CA) feature when you [create a n
> See the [**Enable Continuous Availability**](azure-netapp-files-create-volumes-smb.md#continuous-availability) option for additional details and considerations. >[!IMPORTANT]
-> You should enable Continuous Availability for [Citrix App Layering](https://docs.citrix.com/en-us/citrix-app-layering/4.html), SQL Server, and [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md). Using SMB Continuous Availability shares for any other workload is not supported. This feature is currently supported on Windows SQL Server. Linux SQL Server is not currently supported.
+> You should enable Continuous Availability for [Citrix App Layering](https://docs.citrix.com/en-us/citrix-app-layering/4.html), SQL Server, [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md), and FSLogix ODFC containers. Using SMB Continuous Availability shares for workloads other than Citrix App Layering, SQL Server, FSLogix user profile containers, or FSLogix ODFC containers is *not* supported. This feature is currently supported on Windows SQL Server. Linux SQL Server is not currently supported.
> If you are using a non-administrator (domain) account to install SQL Server, ensure that the account has the required security privilege assigned. If the domain account does not have the required security privilege (`SeSecurityPrivilege`), and the privilege cannot be set at the domain level, you can grant the privilege to the account by using the **Security privilege users** field of Active Directory connections. See [Create an Active Directory connection](create-active-directory-connections.md#create-an-active-directory-connection). >[!IMPORTANT]
azure-netapp-files Faq Application Resilience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-application-resilience.md
Azure NetApp Files might undergo occasional planned maintenance (for example, pl
Yes, certain SMB-based applications require SMB Transparent Failover. SMB Transparent Failover enables maintenance operations on the Azure NetApp Files service without interrupting connectivity to server applications storing and accessing data on SMB volumes. To support SMB Transparent Failover for specific applications, Azure NetApp Files now supports the [SMB Continuous Availability shares option](azure-netapp-files-create-volumes-smb.md#continuous-availability). Using SMB Continuous Availability is only supported for workloads on: * Citrix App Layering * [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md)
+* FSLogix ODFC containers
* Microsoft SQL Server (not Linux SQL Server) >[!CAUTION]
azure-netapp-files Troubleshoot Diagnose Solve Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-diagnose-solve-problems.md
+
+ Title: Troubleshoot Azure NetApp Files using diagnose and solve problems tool
+description: Describes how to use the Azure diagnose and solve problems tool to troubleshoot Azure NetApp Files issues.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 10/15/2023+++
+# Troubleshoot Azure NetApp Files using diagnose and solve problems tool
+
+You can use the Azure **diagnose and solve problems** tool to troubleshoot Azure NetApp Files issues.
+
+## Steps
+
+1. From the Azure portal, select **diagnose and solve problems** in the navigation pane.
+
+2. Choose a problem type for the issue you are experiencing, for example, **Capacity Pools**.
+ You can choose the problem type by selecting the corresponding tile on the diagnose and solve problems page or by using the search bar above the tiles.
+
+ The following screenshot shows an example of issue types that you can troubleshoot for Azure NetApp Files:
+
+ :::image type="content" source="../media/azure-netapp-files/troubleshoot-issue-types.png" alt-text="Screenshot that shows an example of issue types in diagnose and solve problems page." lightbox="../media/azure-netapp-files/troubleshoot-issue-types.png":::
+
+3. After specifying the problem type, select an option (problem subtype) from the pull-down menu to describe the specific problem you are experiencing. Then follow the on-screen directions to troubleshoot the problem.
+
+ :::image type="content" source="../media/azure-netapp-files/troubleshoot-diagnose-pull-down.png" alt-text="Screenshot that shows the pull-down menu for problem subtype selection." lightbox="../media/azure-netapp-files/troubleshoot-diagnose-pull-down.png":::
+
+ This page presents general guidelines and relevant resources for the problem subtype you select. In some situations, you might be prompted to fill out a questionnaire to trigger diagnostics. If issues are identified, the tool presents a diagnosis and possible solutions.
+
+ :::image type="content" source="../media/azure-netapp-files/troubleshoot-problem-subtype.png" alt-text="Screenshot that shows the capacity pool troubleshoot page." lightbox="../media/azure-netapp-files/troubleshoot-problem-subtype.png":::
+
+For more information about using this tool, see the [diagnose and solve problems tool for Azure App Service](../app-service/overview-diagnostics.md).
+
+## Next steps
+
+* [Troubleshoot capacity pool errors](troubleshoot-capacity-pools.md)
+* [Troubleshoot volume errors](troubleshoot-volumes.md)
+* [Troubleshoot application volume group errors](troubleshoot-application-volume-groups.md)
+* [Troubleshoot snapshot policy errors](troubleshoot-snapshot-policies.md)
+* [Troubleshoot cross-region replication errors](troubleshoot-cross-region-replication.md)
+* [Troubleshoot Resource Provider errors](azure-netapp-files-troubleshoot-resource-provider-errors.md)
+* [Troubleshoot user access on LDAP volumes](troubleshoot-user-access-ldap.md)
+* [Troubleshoot file locks](troubleshoot-file-locks.md)
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 09/07/2023 Last updated : 10/16/2023
Azure NetApp Files is updated regularly. This article provides a summary about t
## October 2023
+* [Troubleshoot Azure NetApp Files using diagnose and solve problems tool](troubleshoot-diagnose-solve-problems.md)
+
+ The **diagnose and solve problems** tool simplifies the troubleshooting process, making it easier to identify and resolve issues affecting your Azure NetApp Files deployment. The tool provides proactive troubleshooting, guided steps, and integration with Azure Support, helping you maintain a reliable, high-performance Azure NetApp Files storage environment.
+ * [Snapshot manageability enhancement: Identify parent snapshot](snapshots-restore-new-volume.md) You can now see the name of the snapshot used to create a new volume. In the Volume overview page, the **Originated from** field identifies the source snapshot used in volume creation. If the field is empty, no snapshot was used.
azure-portal Home https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/mobile-app/home.md
+
+ Title: Azure mobile app Home
+description: Azure mobile app Home surfaces the most essential information and the resources you use most often.
Last updated : 10/16/2023+++
+# Azure mobile app Home
+
+Azure mobile app **Home** surfaces the most essential information and the resources you use most often. It provides a convenient way to access and manage your Azure resources or your Microsoft Entra tenant from your mobile device.
+
+## Display cards
+
+Azure mobile app **Home** consists of customizable display cards that show information and let you quickly access frequently used resources and services. You can select and organize these cards depending on what's most important for you and how you want to use the app.
+
+Current card options include:
+
+- **Learn**: Explore the most popular Microsoft learn modules for Azure.
+- **Resource groups**: Quick access to all your resource groups.
+- **Microsoft Entra ID**: Quick access to Microsoft Entra ID management.
+- **Azure services**: Quick access to Virtual machines, Web Apps, SQL databases, and Application Insights.
+- **Latest alerts**: A list and chart view of the alerts fired in the last 24 hours and the option to see all.
+- **Service Health**: A current count of service issues, maintenance, health advisories, and security advisories.
+- **Cloud Shell**: Quick access to the Cloud Shell terminal.
+- **Recent resources**: A list of your four most recently viewed resources, with the option to see all.
+- **Favorites**: A list of the resources you have added to your favorites, and the option to see all.
++
+## Customize Azure mobile app Home
+
+You can customize the cards displayed on your Azure mobile app **Home** by selecting the :::image type="icon" source="media/edit-icon.png" border="false"::: **Edit** icon in the top right of **Home**. From there, you can select which cards you see by toggling the switch. You can also drag and drop the display cards in the list to reorder how they appear on your **Home**.
+
+For instance, you could rearrange the default order as follows:
++
+This would result in a **Home** similar to the following image:
++
+## Global search
+
+The global search button appears at the top left of **Home**. Select this button to search your Azure account for specific items. This includes:
+
+- Resources
+- Services
+- Resource groups
+- Subscriptions
+
+You can filter these results by subscription using the **Home** filtering option.
+
+## Filtering
+
+In the top right of **Home**, you'll see a filter option. When you select the filter icon, you can filter the results shown on **Home** by specific subscriptions. This includes results for:
+
+- Resource groups
+- Azure services
+- Latest alerts
+- Service health
+- Global search
+
+This filtering option is specific to **Home**, and doesn't filter for the other bottom navigation sections.
+
+## Next steps
+
+- Learn more about the [Azure mobile app](overview.md).
+- Download the Azure mobile app for free from the [Apple App Store](https://aka.ms/azureapp/ios/doc), [Google Play](https://aka.ms/azureapp/android/doc) or [Amazon App Store](https://aka.ms/azureapp/amazon/doc).
+
azure-portal Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/mobile-app/overview.md
+
+ Title: What is the Azure mobile app?
+description: The Azure mobile app is a tool that allows you to monitor and manage your Azure resources and services from your mobile device.
Last updated : 10/16/2023+++
+# What is the Azure mobile app?
+
+The Azure mobile app is a tool that allows you to monitor and manage your Azure resources and services from your mobile device. You can use the app to view the status, performance, and health of your resources, as well as perform common operations such as starting and stopping virtual machines, web apps, and databases. You can also access Azure Cloud Shell from the app and get push notifications and alerts about your resources. The Azure mobile app is available for iOS and Android devices, and you can download it for free from the [Apple App Store](https://aka.ms/azureapp/ios/doc), [Google Play](https://aka.ms/azureapp/android/doc) or [Amazon App Store](https://aka.ms/azureapp/amazon/doc).
+
+To use the app, you need an Azure account with the appropriate permissions to access your resources. The app supports multiple accounts, and you can switch between them easily. The app also supports Microsoft Entra ID authentication and multifactor authentication for enhanced security. The Azure mobile app is a convenient way to stay connected to your Azure resources and Entra tenant, and manage much more on the go.
+
+## Azure mobile app Home
+
+When you first open the Azure mobile app, **Home** shows an overview of your Azure account.
++
+View and customize display cards, including:
+
+- Microsoft Entra ID
+- Resource groups
+- Azure services
+- Latest alerts
+- Service Health
+- Cloud Shell
+- Recent resources
+- Favorites
+- Learn
+- Privileged Identity Management
+
+You can select which of these tiles appear on **Home** and rearrange them.
+
+For more information, see [Azure mobile app Home](home.md).
+
+## Hamburger menu
+
+The hamburger menu lets you select the environment, account, and directory you want to manage. The hamburger menu also houses several other settings and features, including:
+
+- Billing/Cost management
+- Settings
+- Help & feedback
+- Support requests
+- Privacy + Terms
+
+## Navigation
+
+The Azure mobile app provides several areas that allow you to navigate to different sections of the app. On the bottom navigation bar, you'll find **Home**, **Subscriptions**, **Resources**, and **Notifications**.
+
+On the top toolbar, you'll find the hamburger button to open the hamburger menu, the search magnifying glass to explore your services and resources, the edit button to change the layout of the Azure mobile app home, and the filter button to filter what content currently appears.
+
+## Download the Azure mobile app
+
+You can download the Azure mobile app today for free from the [Apple App Store](https://aka.ms/azureapp/ios/doc), [Google Play](https://aka.ms/azureapp/android/doc) or [Amazon App Store](https://aka.ms/azureapp/amazon/doc).
+
+## Next steps
+
+- Learn about [Azure mobile app **Home**](home.md) and how to customize it.
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/template-specs.md
Title: Create & deploy template specs in Bicep
description: Describes how to create template specs in Bicep and share them with other users in your organization. Previously updated : 10/13/2023 Last updated : 10/16/2023 # Azure Resource Manager template specs in Bicep
https://portal.azure.com/#create/Microsoft.Template/templateSpecVersionId/%2fsub
## Parameters
-Passing in parameters to template spec is exactly like passing parameters to a Bicep file. Add the parameter values either inline or in a parameter file.
+Passing parameters to a template spec is similar to passing parameters to a Bicep file. Add the parameter values either inline or in a parameter file.
+
+### Inline parameters
To pass a parameter inline, use:
az deployment group create \
-To create a local parameter file, use:
+### Parameter files
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "StorageAccountType": {
- "value": "Standard_GRS"
- }
- }
-}
-```
+- Use Bicep parameters file
-And, pass that parameter file with:
+ To create a Bicep parameter file, you must specify the `using` statement. Here is an example:
-# [PowerShell](#tab/azure-powershell)
+ ```bicep
+    using 'ts:<subscription-id>/<resource-group-name>/<template-spec-name>:<tag>'
-```azurepowershell
-New-AzResourceGroupDeployment `
- -TemplateSpecId $id `
- -ResourceGroupName demoRG `
- -TemplateParameterFile ./mainTemplate.parameters.json
-```
+ param StorageAccountType = 'Standard_GRS'
+ ```
-# [CLI](#tab/azure-cli)
+ For more information, see [Bicep parameters file](./parameter-files.md).
-```azurecli
-az deployment group create \
- --resource-group demoRG \
- --template-spec $id \
- --parameters "./mainTemplate.parameters.json"
-```
-
+   Pass the parameter file with:
+
+ # [PowerShell](#tab/azure-powershell)
+
+ Currently, you can't deploy a template spec with a [.bicepparam file](./parameter-files.md) by using Azure PowerShell.
+
+ # [CLI](#tab/azure-cli)
+
+ ```azurecli
+ az deployment group create \
+ --resource-group demoRG \
+ --parameters "./mainTemplate.bicepparam"
+ ```
+
+   Because of the `using` statement in the bicepparam file, you don't need to specify the `--template-spec` parameter.
+
+
++
+- Use JSON parameters file
++
+   The following is a sample JSON parameters file:
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "StorageAccountType": {
+ "value": "Standard_GRS"
+ }
+ }
+ }
+ ```
+
+ And, pass that parameter file with:
+
+ # [PowerShell](#tab/azure-powershell)
+
+ ```azurepowershell
+ New-AzResourceGroupDeployment `
+ -TemplateSpecId $id `
+ -ResourceGroupName demoRG `
+ -TemplateParameterFile ./mainTemplate.parameters.json
+ ```
+
+ # [CLI](#tab/azure-cli)
+
+ ```azurecli
+ az deployment group create \
+ --resource-group demoRG \
+ --template-spec $id \
+ --parameters "./mainTemplate.parameters.json"
+ ```
-Currently, you can't deploy a template spec with a [.bicepparam file](./parameter-files.md).
+
## Versioning
azure-video-indexer Accounts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/accounts-overview.md
- Title: Azure AI Video Indexer accounts
-description: This article gives an overview of Azure AI Video Indexer accounts and provides links to other articles for more details.
- Previously updated : 08/29/2023----
-# Azure AI Video Indexer account types
--
-This article gives an overview of Azure AI Video Indexer accounts types and provides links to other articles for more details.
-
-## Trial account
-
-When starting out with [Azure AI Video Indexer](https://www.videoindexer.ai/), click **start free** to kick off a quick and easy process of creating a trial account. No Azure subscription is required, and this is a great way to explore Azure AI Video Indexer and try it out with your content. Keep in mind that the trial Azure AI Video Indexer account has limitations on indexing minutes, support, and SLA.
-
-With a trial account, Azure AI Video Indexer provides up to 2,400 minutes of free indexing when using the [Azure AI Video Indexer](https://www.videoindexer.ai/) website or the Azure AI Video Indexer API (see [developer portal](https://api-portal.videoindexer.ai/)).
-
-The trial account option is not available on the Azure Government cloud. For other Azure Government limitations, see [Limitations of Azure AI Video Indexer on Azure Government](connect-to-azure.md#limitations-of-azure-ai-video-indexer-on-azure-government).
-
-## Paid (unlimited) account
-
-When you have used up the free trial minutes or are ready to start using Video Indexer for production workloads, you can create a regular paid account which doesn't have minute, support, or SLA limitations. Account creation can be performed through the Azure portal (see [Create an account with the Azure portal](create-account-portal.md)) or API (see [Create accounts with API](/rest/api/videoindexer/stable/accounts)).
-
-Azure AI Video Indexer unlimited accounts are Azure Resource Manager (ARM) based and unlike trial accounts, are created in your Azure subscription. Moving to an unlimited ARM based account unlocks many security and management capabilities, such as [RBAC user management](../role-based-access-control/overview.md), [Azure Monitor integration](../azure-monitor/overview.md), deployment through ARM templates, and much more.
-
-Billing is per indexed minute, with the per minute cost determined by the selected preset. For more information regarding pricing, see [Azure AI Video Indexer pricing](https://azure.microsoft.com/pricing/details/video-indexer/).
-
-## Create accounts
-
-* To create an ARM-based (paid) account with the Azure portal, see [Create accounts with the Azure portal](create-account-portal.md).
-* To create an account with an API, see [Create accounts](/rest/api/videoindexer/stable/accounts)
-
- > [!TIP]
- > Make sure you are signed in with the correct domain to the [Azure AI Video Indexer website](https://www.videoindexer.ai/). For details, see [Switch tenants](switch-tenants-portal.md).
-* [Upgrade a trial account to an ARM-based (paid) account and import your content for free](import-content-from-trial.md).
-
- ## Classic accounts
-
-Before ARM-based accounts were added to Azure AI Video Indexer, there was a "classic" account type (where the account management plane is built on API Management). The classic account type is still used by some users.
-
-* If you are using a classic (paid) account and are interested in moving to an ARM-based account, see [connect an existing classic Azure AI Video Indexer account to an ARM-based account](connect-classic-account-to-arm.md).
-
-For more information on the difference between regular unlimited accounts and classic accounts, see [Azure AI Video Indexer as an Azure resource](https://techcommunity.microsoft.com/t5/ai-applied-ai-blog/azure-video-indexer-is-now-available-as-an-azure-resource/ba-p/2912422).
-
-## Limited access features
--
-For more information, see [Azure AI Video Indexer limited access features](limited-access-features.md).
-
-## Next steps
-
-Make sure to review [Pricing](https://azure.microsoft.com/pricing/details/video-indexer/).
azure-video-indexer Add Contributor Role On The Media Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/add-contributor-role-on-the-media-service.md
- Title: Add Contributor role on the Media Services account
-description: This topic explains how to add contributor role on the Media Services account.
- Previously updated : 10/13/2021-----
-# Add contributor role to Media Services
--
-This article describes how to assign contributor role on the Media Services account.
-
-> [!NOTE]
-> If you are creating your Azure AI Video Indexer through the Azure portal UI, the selected Managed identity will be automatically assigned with a contributor permission on the selected Media Service account.
-
-## Prerequisites
-
-1. Azure Media Services (AMS)
-2. User-assigned managed identity
-
-> [!NOTE]
-> You need an Azure subscription with access to both the [Contributor][docs-role-contributor] role and the [User Access Administrator][docs-role-administrator] role to the Azure Media Services and the User-assigned managed identity. If you don't have the right permissions, ask your account administrator to grant you those permissions. The associated Azure Media Services must be in the same region as the Azure AI Video Indexer account.
-
-## Add Contributor role on the Media Services
-### [Azure portal](#tab/portal/)
-
-### Add Contributor role to Media Services using Azure portal
-
-1. Sign in at the [Azure portal](https://portal.azure.com/).
- * Using the search bar at the top, enter **Media Services**.
- * Find and select your Media Service resource.
-1. In the pane to the left, click **Access control (IAM)**.
- * Click **Add** > **Add role assignment**. If you don't have permissions to assign roles, the **Add role assignment** option will be disabled.
-1. In the Role list, select the [Contributor][docs-role-contributor] role and click **Next**.
-1. In **Assign access to**, select the *Managed identity* radio button.
-   * Click the **+Select members** button; the **Select managed identities** pane pops up.
-1. **Select** the following:
- * In the **Subscription**, the subscription where the managed identity is located.
- * In the **Managed identity**, select *User-assigned managed identity*.
- * In the **Select** section, search for the Managed identity you'd like to grant contributor permissions on the Media services resource.
-1. Once you have found the security principal, click to select it.
-1. To assign the role, click **Review + assign**
-
-## Next steps
-
-[Create a new Azure Resource Manager based account](create-account-portal.md)
-
-<!-- links -->
-[docs-role-contributor]: ../role-based-access-control/built-in-roles.md#contributor
-[docs-role-administrator]: ../role-based-access-control/built-in-roles.md#user-access-administrator
azure-video-indexer Audio Effects Detection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/audio-effects-detection-overview.md
- Title: Introduction to Azure AI Video Indexer audio effects detection-
-description: An introduction to Azure AI Video Indexer audio effects detection component responsibly.
- Previously updated : 06/15/2022-----
-# Audio effects detection
--
-Audio effects detection is an Azure AI Video Indexer feature that detects insights on various acoustic events and classifies them into acoustic categories. Audio effect detection can detect and classify different categories such as laughter, crowd reactions, alarms and/or sirens.
-
-When working on the website, the instances are displayed in the Insights tab. They can also be generated in a categorized list in a JSON file that includes the category ID, type, name, and instances per category together with the specific timeframes and confidence score.
-
-## Prerequisites
-
-Review [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
-
-## General principles
-
-This article discusses audio effects detection and the key considerations for making use of this technology responsibly. There are many things you need to consider when deciding how to use and implement an AI-powered feature:
-
-* Does this feature perform well in my scenario? Before deploying audio effects detection into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
-* Are we equipped to identify and respond to errors? AI-powered products and features won't be 100% accurate, so consider how you'll identify and respond to any errors that may occur.
-
-## View the insight
-
-To see the instances on the website, do the following:
-
-1. When uploading the media file, go to Video + Audio Indexing, or go to Audio Only or Video + Audio and select Advanced.
-1. After the file is uploaded and indexed, go to Insights and scroll to audio effects.
-
-To display the JSON file, do the following:
-
-1. Select Download -> Insights (JSON).
-1. Copy the `audioEffects` element, under `insights`, and paste it into your Online JSON viewer.
-
- ```json
- "audioEffects": [
- {
- "id": 1,
- "type": "Silence",
- "instances": [
- {
- "confidence": 0,
- "adjustedStart": "0:01:46.243",
- "adjustedEnd": "0:01:50.434",
- "start": "0:01:46.243",
- "end": "0:01:50.434"
- }
- ]
- },
- {
- "id": 2,
- "type": "Speech",
- "instances": [
- {
- "confidence": 0,
- "adjustedStart": "0:00:00",
- "adjustedEnd": "0:01:43.06",
- "start": "0:00:00",
- "end": "0:01:43.06"
- }
- ]
- }
- ],
- ```
-
-To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
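As a rough sketch of that API route (the placeholder values are assumptions, and the `videos[0].insights` layout is assumed from the standard index output; get the account details and access token from the developer portal):

```python
import requests

# Placeholder values; substitute your own from the developer portal.
LOCATION = "trial"          # assumption: your account's region, or "trial"
ACCOUNT_ID = "<account-id>"
VIDEO_ID = "<video-id>"
ACCESS_TOKEN = "<access-token>"

# Get Video Index returns the insights JSON, including the audioEffects element.
url = f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}/Videos/{VIDEO_ID}/Index"
response = requests.get(url, params={"accessToken": ACCESS_TOKEN})
response.raise_for_status()

insights = response.json()["videos"][0]["insights"]
for effect in insights.get("audioEffects", []):
    print(effect["type"], "-", len(effect.get("instances", [])), "instances")
```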
-
-## Audio effects detection components
-
-During the audio effects detection procedure, audio in a media file is processed, as follows:
-
-|Component|Definition|
-|||
-|Source file | The user uploads the source file for indexing. |
-|Segmentation| The audio is analyzed, nonspeech audio is identified and then split into short overlapping intervals. |
-|Classification| An AI process analyzes each segment and classifies its contents into event categories such as crowd reaction or laughter. A probability list is then created for each event category according to department-specific rules. |
-|Confidence level| The estimated confidence level of each audio effect is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as a 0.82 score.|
-
-## Example use cases
-- Companies with a large video archive can improve accessibility by offering more context for a hearing-impaired audience by transcription of nonspeech effects.
-- Improved efficiency when creating raw data for content creators. Important moments in promos and trailers such as laughter, crowd reactions, gunshots, or explosions can be identified, for example, in Media and Entertainment.
-- Detecting and classifying gunshots, explosions, and glass shattering in a smart-city system or in other public environments that include cameras and microphones to offer fast and accurate detection of violence incidents.
-
-## Considerations and limitations when choosing a use case
-- Avoid use of short or low-quality audio. Audio effects detection provides probabilistic and partial data on detected nonspeech audio events. For accuracy, audio effects detection requires at least 2 seconds of clear nonspeech audio. Voice commands or singing aren't supported.
-- Avoid use of audio with loud background music, or music with repetitive and/or linearly scanned frequency. Audio effects detection is designed for nonspeech audio only and therefore can't classify events in loud music. Music with repetitive and/or linearly scanned frequency may be incorrectly classified as an alarm or siren.
-- Carefully consider the methods of usage in law enforcement and similar institutions. To promote more accurate probabilistic data, carefully review the following:
-
- - Audio effects can be detected in nonspeech segments only.
- - The duration of a nonspeech section should be at least 2 seconds.
- - Low quality audio might impact the detection results.
- - Events in loud background music aren't classified.
- - Music with repetitive and/or linearly scanned frequency might be incorrectly classified as an alarm or siren.
- - Knocking on a door or slamming a door might be labeled as a gunshot or explosion.
- - Prolonged shouting or sounds of physical human effort might be incorrectly classified.
- - A group of people laughing might be classified as both laughter and crowd.
- - Natural and nonsynthetic gunshot and explosions sounds are supported.
-
-When used responsibly and carefully, Azure AI Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:  
-- Always respect an individual's right to privacy, and only ingest audio for lawful and justifiable purposes.
-- Don't purposely disclose inappropriate audio of young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual's personal freedom.
-- Commit to respecting and promoting human rights in the design and deployment of your analyzed audio.
-- When using third party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
-- Always seek legal advice when using audio from unknown sources.
-- Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing audio containing people.
-- Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision-making.
-- Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
-
-## Next steps
-- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
-- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
-- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
-- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
-
-### Contact us
-
-`visupport@microsoft.com`
-
-## Azure AI Video Indexer insights
-- [Face detection](face-detection.md)
-- [OCR](ocr.md)
-- [Keywords extraction](keywords.md)
-- [Transcription, translation & language identification](transcription-translation-lid.md)
-- [Labels identification](labels-identification.md)
-- [Named entities](named-entities.md)
-- [Observed people tracking & matched faces](observed-matched-people.md)
-- [Topics inference](topics-inference.md)
azure-video-indexer Audio Effects Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/audio-effects-detection.md
- Title: Enable audio effects detection
-description: Audio Effects Detection is one of Azure AI Video Indexer AI capabilities that detects various acoustics events and classifies them into different acoustic categories (for example, gunshot, screaming, crowd reaction and more).
- Previously updated : 05/24/2023----
-# Enable audio effects detection (preview)
--
-**Audio effects detection** is one of Azure AI Video Indexer AI capabilities that detects various acoustic events and classifies them into different acoustic categories (such as dog barking, crowd reactions, laughter and more).
-
-Some scenarios where this feature is useful:
-- Companies with a large set of video archives can easily improve accessibility with audio effects detection. The feature provides more context for persons who are hard of hearing, and enhances video transcription with non-speech effects.
-- In the Media & Entertainment domain, the detection feature can improve efficiency when creating raw data for content creators. Important moments in promos and trailers (such as laughter, crowd reactions, gunshot, or explosion) can be identified by using **audio effects detection**.
-- In the Public Safety & Justice domain, the feature can detect and classify gunshots, explosions, and glass shattering. It can be implemented in a smart-city system or in other public environments that include cameras and microphones to offer fast and accurate detection of violence incidents.
-
-## Supported audio categories
-
-**Audio effect detection** can detect and classify different categories. In the following table, you can find the different categories split into the different presets, divided into **Standard** and **Advanced**. For more information, see [pricing](https://azure.microsoft.com/pricing/details/video-indexer/).
-
-The following table shows which categories are supported depending on **Preset Name** (**Audio Only** / **Video + Audio** vs. **Advanced Audio** / **Advanced Video + Audio**). When you use **Advanced** indexing, categories appear in the **Insights** pane of the website.
-
-|Indexing type |Standard indexing| Advanced indexing|
-||||
-| Crowd Reactions || V|
-| Silence| V| V|
-| Gunshot or explosion ||V |
-| Breaking glass ||V|
-| Alarm or siren|| V |
-| Laughter|| V |
-| Dog || V|
-| Bell ringing|| V|
-| Bird|| V|
-| Car|| V|
-| Engine|| V|
-| Crying|| V|
-| Music playing|| V|
-| Screaming|| V|
-| Thunderstorm || V|
-
-## Result formats
-
-The audio effects are retrieved in the insights JSON that includes the category ID, type, and set of instances per category along with their specific timeframe and confidence score.
-
-```json
-"audioEffects": [{
-    "id": 0,
-    "type": "Gunshot or explosion",
-    "instances": [{
-        "confidence": 0.649,
-        "adjustedStart": "0:00:13.9",
-        "adjustedEnd": "0:00:14.7",
-        "start": "0:00:13.9",
-        "end": "0:00:14.7"
-    }, {
-        "confidence": 0.7706,
-        "adjustedStart": "0:01:54.3",
-        "adjustedEnd": "0:01:55",
-        "start": "0:01:54.3",
-        "end": "0:01:55"
-    }]
-}, {
-    "id": 1,
-    "type": "CrowdReactions",
-    "instances": [{
-        "confidence": 0.6816,
-        "adjustedStart": "0:00:47.9",
-        "adjustedEnd": "0:00:52.5",
-        "start": "0:00:47.9",
-        "end": "0:00:52.5"
-    }, {
-        "confidence": 0.7314,
-        "adjustedStart": "0:04:57.67",
-        "adjustedEnd": "0:05:01.57",
-        "start": "0:04:57.67",
-        "end": "0:05:01.57"
-    }]
-}],
-```
-
-## How to index audio effects
-
-In order to set the index process to include the detection of audio effects, select one of the **Advanced** presets under **Video + audio indexing** menu as can be seen below.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/audio-effects-detection/index-audio-effect.png" alt-text="Index Audio Effects image":::
-
-## Closed Caption
-
-When audio effects are retrieved in the closed caption files, they appear in square brackets with the following structure:
-
-|Type| Example|
-|||
-|SRT |00:00:00,000 00:00:03,671<br/>[Gunshot or explosion]|
-|VTT |00:00:00.000 00:00:03.671<br/>[Gunshot or explosion]|
-|TTML|Confidence: 0.9047 <br/> `<p begin="00:00:00.000" end="00:00:03.671">[Gunshot or explosion]</p>`|
-|TXT |[Gunshot or explosion]|
-|CSV |0.9047,00:00:00.000,00:00:03.671, [Gunshot or explosion]|
-
-Audio effects in closed caption files are retrieved with the following logic (see the sketch after this list):
-
-* `Silence` event type will not be added to the closed captions.
-* The minimum duration to show an event is 700 milliseconds.
-
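The following Python sketch (illustrative only, not part of the Azure AI Video Indexer API) shows how that logic could be applied when building captions from the insights JSON; the `audio_effects` input follows the `audioEffects` structure shown earlier in this article.

```python
from datetime import timedelta

def parse_offset(value: str) -> timedelta:
    """Parse an insights offset such as '0:00:13.9' into a timedelta."""
    hours, minutes, seconds = value.split(":")
    return timedelta(hours=int(hours), minutes=int(minutes), seconds=float(seconds))

def caption_worthy_effects(audio_effects, min_duration_ms=700):
    """Yield (start, end, label) tuples for effects that belong in closed captions.

    Mirrors the rules above: Silence is skipped, and instances shorter than
    min_duration_ms are dropped.
    """
    for effect in audio_effects:
        if effect["type"] == "Silence":
            continue
        for instance in effect["instances"]:
            start = parse_offset(instance["start"])
            end = parse_offset(instance["end"])
            if (end - start) >= timedelta(milliseconds=min_duration_ms):
                yield instance["start"], instance["end"], f'[{effect["type"]}]'
```
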
-## Adding audio effects in closed caption files
-
-Audio effects can be added to the closed caption files supported by Azure AI Video Indexer via the [Get video captions API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Captions) by setting the `includeAudioEffects` parameter to `true`, or via the Azure AI Video Indexer website by selecting **Download** -> **Closed Captions** -> **Include Audio Effects**.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/audio-effects-detection/close-caption.jpg" alt-text="Audio Effects in CC":::
-
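If you call the API rather than using the website, a minimal Python sketch follows; the endpoint shape and the `format` parameter are assumptions based on the Get video captions operation in the developer portal, and the placeholder values must be replaced with your own.

```python
import requests

# Placeholder values; substitute your own. The access token comes from the
# Get Access Token operations in the developer portal.
LOCATION = "trial"            # assumption: your account's region, or "trial"
ACCOUNT_ID = "<account-id>"
VIDEO_ID = "<video-id>"
ACCESS_TOKEN = "<access-token>"

url = f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}/Videos/{VIDEO_ID}/Captions"
params = {
    "format": "Vtt",               # assumption: output format parameter
    "includeAudioEffects": "true", # include [audio effect] lines in the captions
    "accessToken": ACCESS_TOKEN,
}

response = requests.get(url, params=params)
response.raise_for_status()
print(response.text)  # captions text, with audio effects in square brackets
```
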
-> [!NOTE]
-> When using [update transcript](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Transcript) from closed caption files or [update custom language model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Language-Model) from closed caption files, audio effects included in those files are ignored.
-
-## Limitations and assumptions
-
-* The audio effects are detected when present in non-speech segments only.
-* The model is optimized for cases where there is no loud background music.
-* Low quality audio may impact the detection results.
-* Minimal non-speech section duration is 2 seconds.
-* Music that is characterized with repetitive and/or linearly scanned frequency can be mistakenly classified as Alarm or siren.
-* The model is currently optimized for natural and non-synthetic gunshot and explosions sounds.
-* Door knocks and door slams can sometimes be mistakenly labeled as gunshot and explosions.
-* Prolonged shouting and human physical effort sounds can sometimes be mistakenly detected.
-* Group of people laughing can sometime be classified as both Laughter and Crowd reactions.
-
-## Next steps
-
-Review [overview](video-indexer-overview.md)
azure-video-indexer Azure Video Indexer Azure Media Services Retirement Announcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/azure-video-indexer-azure-media-services-retirement-announcement.md
- Title: Azure AI Video Indexer (AVI) changes related to Azure Media Service (AMS) retirement
-description: This article explains the upcoming changes to Azure AI Video Indexer (AVI) related to the retirement of Azure Media Services (AMS).
- Previously updated : 09/05/2023----
-# Changes related to Azure Media Service (AMS) retirement
-
-This article explains the upcoming changes to Azure AI Video Indexer (AVI) resulting from the [retirement of Azure Media Services (AMS)](/azure/media-services/latest/azure-media-services-retirement).
-
-Currently, AVI requires the creation of an AMS account. Additionally, AVI uses AMS for video encoding and streaming operations. The required changes will affect all AVI customers.
-
-To continue using AVI beyond June 30, 2024, all customers **must** make changes to their AVI accounts to remove the AMS dependency. Detailed guidance for converting AVI accounts will be provided in January 2024 when the new account type is released.
-
-## Pricing and billing
-
-Currently, AVI uses AMS for encoding and streaming for the AVI player. AMS charges you for both encoding and streaming. In the future, AVI will encode media and you'll be billed using the updated AVI accounts. Pricing details will be shared in January 2024. There will be no charge for the AVI video player.
-
-## AVI changes
-
-AVI will continue to offer the same insights, performance, and functionality. However, a few aspects of the service will change which fall under the following three categories:
-- Account changes
-- API changes
-- Product changes
-
-## Account changes
-
-AVI has three account types. All will be impacted by the AMS retirement. The account types are:
-- ARM-based accounts
-- Classic accounts
-- Trial accounts
-
-See [Azure AI Video Indexer account types](/azure/azure-video-indexer/accounts-overview) to understand more about AVI account types.
-
-### Azure Resource Manager (ARM)-based accounts
-
-**New accounts:** As of January 15, all newly created AVI accounts will be non-AMS dependent accounts. You'll no longer be able to create AMS-dependent accounts.
-
-**Existing accounts**: Existing accounts will continue to work through June 30, 2024. To continue using the account beyond June 30, customers must go through the process to convert their account to a non-AMS dependent account. If you don't convert your account to a non-AMS dependent account, you won't be able to access the account or use it beyond June 30.
-
-### Classic accounts
-- **New accounts:** As of January 15, all newly created AVI accounts will be non-AMS dependent accounts. You'll no longer be able to create Classic accounts.
-- **Existing accounts:** Existing classic accounts will continue to work through June 30, 2024. AVI will release an updated API version for the non-AMS dependent accounts that doesn't contain any AMS related parameters.
-
-To continue using the account beyond June 30, 2024, classic accounts will have to go through two steps:
-
-1. Connect the account as an ARM-based account. You can connect the accounts already. See [Azure AI Video Indexer accounts](accounts-overview.md) for instructions.
-1. Make the required changes to the AVI account to remove the AMS dependency. If this isn't done, you won't be able to access the account or use it beyond June 30, 2024.
-
-### Existing trial accounts
-- As of January 15, 2024, Video Indexer trial accounts will continue to work as usual. However, when using them through the APIs, customers must use the updated APIs.
-- AVI supports [importing content](import-content-from-trial.md) from a trial AVI account to a paid AVI account. This import option will be supported only until **January 15th, 2024**.
-
-## API changes
-
-**Between January 15 and June 30, 2024**, AVI will support both the existing data and control plane APIs and the updated APIs that exclude all AMS related parameters.
-
-New AVI accounts as well as existing AVI accounts that have completed the steps to remove all AMS dependencies will only use the updated APIs that will exclude all AMS related parameters.
-
-**On July 1, 2024**, code using APIs with AMS parameters will no longer be supported. This applies to both control plane and data plane operations.
-
-### Breaking API changes
-
-There will be breaking API changes. The following table describes the changes for your awareness, but actionable guidance will be provided when the changes have been released.
-
-| **Type** | **API Name** | **Change** |
-||||
-| **ARM** | Create<br/>Update<br/>Patch<br/>ListAccount | - The `mediaServices` Account property will be replaced with a `storageServices` Account property.<br/><br/> - The `Identity` property will change from an `Owner` managed identity to `Storage Blob Data Contributor` permissions on the storage resource. |
-| **ARM** | Get<br/>MoveAccount | The `mediaServices` Account property will be replaced with a `storageServices` Account property. |
-| **ARM** | GetClassicAccount<br/>ListClassicAccounts | API will no longer be supported. |
-| **Classic** | CreatePaidAccountManually | API will no longer be supported. |
-| **Classic** | UpdateAccountMediaServicesAsync | API will no longer be supported. |
-| **Data plane** | Upload | Upload will no longer accept the `assetId` parameter. |
-| **Data plane** | Upload<br/>ReIndex<br/>Redact | `AdaptiveBitrate` will no longer be supported for new uploads. |
-| **Data plane** | GetVideoIndex | `PublishedUrl` property will always be null. |
-| **Data plane** | GetVideoStreamingURL | The streaming URL will return references to AVI account endpoints rather than AMS account endpoints. |
-
-Full details of the API changes and alternatives will be provided when the updated APIs are released.
-
-## Product changes
-
-As of July 1, 2024, AVI won't use AMS for encoding or streaming. As a result, it will no longer support the following:
-- Encoding with adaptive bitrate will no longer be supported. Only single bitrate will be supported for new indexing jobs. Videos already encoded with adaptive bitrate will be playable in the AVI player.
-- Video Indexer [dynamic encryption](/azure/media-services/latest/drm-content-protection-concept) of media files will no longer be supported.
-- Media files created by non-AMS dependent accounts won't be playable by the [Azure Media Player](https://azure.microsoft.com/products/media-services/media-player).
-- Using a Cognitive Insights widget and playing the content with the Azure Media Player outlined [here](video-indexer-embed-widgets.md) will no longer be supported.
-
-## Timeline
-
-This graphic shows the timeline for the changes.
-
azure-video-indexer Clapperboard Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/clapperboard-metadata.md
- Title: Enable and view a clapper board with extracted metadata
-description: Learn about how to enable and view a clapper board with extracted metadata.
- Previously updated : 09/20/2022----
-# Enable and view a clapper board with extracted metadata (preview)
-
-A clapper board insight is used to detect clapper board instances and information written on each. For example, *head* or *tail* (the board is upside-down), *production*, *roll*, *scene*, *take*, *date*, etc. The [clapper board](https://en.wikipedia.org/wiki/Clapperboard)'s extracted metadata is most useful to customers involved in the movie post-production process.
-
-When the movie is being edited, a clapper board is removed from the scene; however, the information that was written on the clapper board is important. Azure AI Video Indexer extracts the data from clapper boards, preserves it, and presents the metadata.
-
-This article shows how to enable the post-production insight and view clapper board instances with extracted metadata.
-
-## View the insight
-
-### View post-production insights
-
-In order to set the indexing process to include the slate metadata, select the **Video + audio indexing** -> **Advanced** presets.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/slate-detection-process/advanced-setting.png" alt-text="This image shows the advanced setting in order to view post-production clapperboards insights.":::
-
-After the file has been uploaded and indexed, if you want to view the timeline of the insight, select the **Post-production** checkmark from the list of insights.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/slate-detection-process/post-production-checkmark.png" alt-text="This image shows the post-production checkmark needed to view clapperboards.":::
-
-### Clapper boards
-
-Clapper boards contain fields with titles (for example, *production*, *roll*, *scene*, *take*) and values (content) associated with each title.
-
-For example, take this clapper board:
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/slate-detection-process/clapperboard.png" alt-text="This image shows a clapperboard.":::
-
-In the following example, the board contains the following fields:
-
-|title|content|
-|||
-|camera|COD|
-|date|FILTER (in this case the board contains no date)|
-|director|John|
-|production|Prod name|
-|scene|1|
-|take|99|
-
-#### View the insight
--
-To see the instances on the website, select **Insights** and scroll to **Clapper boards**. You can hover over each clapper board, or unfold **Show/Hide clapper board info** and see the metadata:
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/slate-detection-process/clapperboard-metadata.png" alt-text="This image shows the clapperboard metadata.":::
-
-#### View the timeline
-
-If you checked the **Post-production** insight, you can also find the clapper board instance and its timeline (including time and field values) on the **Timeline** tab.
-
-#### View JSON
-
-To display the JSON file:
-
-1. Select Download and then Insights (JSON).
-1. Copy the `clapperboard` element, under `insights`, and paste it into your Online JSON Viewer.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/slate-detection-process/clapperboard-json.png" alt-text="This image shows the clapperboard metadata in json.":::
-
-The following table describes fields found in json:
-
-|Name|Description|
-|||
-|`id`|The clapper board ID.|
-|`thumbnailId`|The ID of the thumbnail.|
-|`isHeadSlate`|The value stands for head or tail (the board is upside-down) of the clapper board: `true` or `false`.|
-|`fields`|The fields found in the clapper board; also each field's name and value.|
-|`instances`|A list of time ranges where this element appeared.|
-
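As a rough illustration of this structure, the following Python sketch walks the `clapperboard` element of a downloaded insights JSON and prints each board's fields; the file name and the exact key names inside `fields` (`name`, `value`) are assumptions based on the description above.

```python
import json

# Assumption: 'insights.json' holds the object that contains the 'clapperboard'
# element described above (copied from Download -> Insights (JSON)).
with open("insights.json", encoding="utf-8") as f:
    insights = json.load(f)

for board in insights.get("clapperboard", []):
    head_or_tail = "head" if board.get("isHeadSlate") else "tail"
    print(f"Clapper board {board['id']} ({head_or_tail})")
    for field in board.get("fields", []):          # each field has a title and a value
        print(f"  {field.get('name')}: {field.get('value')}")
    for instance in board.get("instances", []):    # time ranges where the board appears
        print(f"  appears {instance.get('start')} - {instance.get('end')}")
```
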
-## Clapper board limitations
-
-The values may not always be correctly identified by the detection algorithm. Here are some limitations:
-- The titles of the fields appearing on the clapper board are optimized to identify the most popular fields appearing on top of clapper boards.
-- Handwritten text or digital digits may not be correctly identified by the fields detection algorithm.
-- The algorithm is optimized to identify field categories that appear horizontally.
-- The clapper board may not be detected if the frame is blurred or if the text written on it can't be identified by the human eye.
-- Empty field values may lead to wrong field categories.
-<!-- If a part of a clapper board is hidden a value with the highest confidence is shown. -->
-
-## Next steps
-
-* [Slate detection overview](slate-detection-insight.md)
-* [How to enable and view digital patterns with color bars](digital-patterns-color-bars.md).
-* [How to enable and view textless slate with matched scene](textless-slate-scene-matching.md).
azure-video-indexer Concepts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/concepts-overview.md
- Title: Azure AI Video Indexer terminology & concepts overview
-description: This article gives a brief overview of Azure AI Video Indexer terminology and concepts.
- Previously updated : 08/02/2023----
-# Azure AI Video Indexer terminology & concepts
--
-This article gives a brief overview of Azure AI Video Indexer terminology and concepts. Also, review [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
-
-## Artifact files
-
-If you plan to download artifact files, beware of the following warning:
-
-
-## Confidence scores
-
-The confidence score indicates the confidence in an insight. It's a number between 0.0 and 1.0. The higher the score the greater the confidence in the answer. For example:
-
-```json
-"transcript":[
-{
- "id":1,
- "text":"Well, good morning everyone and welcome to",
- "confidence":0.8839,
- "speakerId":1,
- "language":"en-US",
- "instances":[
- {
- "adjustedStart":"0:00:10.21",
- "adjustedEnd":"0:00:12.81",
- "start":"0:00:10.21",
- "end":"0:00:12.81"
- }
- ]
-},
-```
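For example, a minimal Python sketch that keeps only high-confidence transcript lines from this structure (the 0.8 threshold is an arbitrary illustration):

```python
# Minimal sketch: filter transcript lines by confidence score.
# 'transcript' follows the structure shown above; the 0.8 threshold is arbitrary.
transcript = [
    {
        "id": 1,
        "text": "Well, good morning everyone and welcome to",
        "confidence": 0.8839,
        "speakerId": 1,
        "language": "en-US",
    },
]

high_confidence = [line for line in transcript if line["confidence"] >= 0.8]
for line in high_confidence:
    print(f'Speaker {line["speakerId"]}: {line["text"]} ({line["confidence"]:.2f})')
```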
-
-## Content moderation
-
-Use textual and visual content moderation models to keep your users safe from inappropriate content and validate that the content you publish matches your organization's values. You can automatically block certain videos or alert your users about the content. For more information, see [Insights: visual and textual content moderation](video-indexer-output-json-v2.md#visualcontentmoderation).
-
-## Insights
-
-Insights contain an aggregated view of the data: faces, topics, text-based emotion detection. Azure AI Video Indexer analyzes the video and audio content by running 30+ AI models, generating rich insights.
-
-For detailed explanation of insights, see [Azure AI Video Indexer insights](insights-overview.md).
-
-## Keyframes
-
-Azure AI Video Indexer selects the frame(s) that best represent each shot. Keyframes are the representative frames selected from the entire video based on aesthetic properties (for example, contrast and stableness). For more information, see [Scenes, shots, and keyframes](scenes-shots-keyframes.md).
-
-## Time range vs. adjusted time range
-
-Time range is the time period in the original video. Adjusted time range is the time range relative to the current playlist. Since you can create a playlist from different lines of different videos, you can take a one-hour video and use just one line from it, for example, 10:00-10:15. In that case, you'll have a playlist with one line, where the time range is 10:00-10:15 but the adjusted time range is 00:00-00:15.
-
-## Widgets
-
-Azure AI Video Indexer supports embedding widgets in your apps. For more information, see [Embed Azure AI Video Indexer widgets in your apps](video-indexer-embed-widgets.md).
-
-## Next steps
-- [overview](video-indexer-overview.md)
-- Once you [set up](video-indexer-get-started.md), start using [insights](video-indexer-output-json-v2.md) and check out other **How to guides**.
azure-video-indexer Connect Classic Account To Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-classic-account-to-arm.md
- Title: Connect a classic Azure AI Video Indexer account to ARM
-description: This topic explains how to connect an existing classic paid Azure AI Video Indexer account to an ARM-based account
- Previously updated : 03/20/2023-----
-# Connect an existing classic paid Azure AI Video Indexer account to ARM-based account
--
-This article shows how to connect an existing classic paid Azure AI Video Indexer account to an Azure Resource Manager (ARM)-based (recommended) account. To create a new ARM-based account, see [create a new account](create-account-portal.md). To understand the Azure AI Video Indexer account types, review [account types](accounts-overview.md).
-
-In this article, we demonstrate options of connecting your **existing** Azure AI Video Indexer account to an [ARM][docs-arm-overview]-based account. You can also view the following video.
-
-> [!VIDEO https://www.microsoft.com/videoplayer/embed/RW10iby]
-
-## Prerequisites
-
-1. Unlimited paid Azure AI Video Indexer account (classic account).
-
- 1. To perform the connect to the ARM (Azure Resource Manager) action, you should have owner's permissions on the Azure AI Video Indexer classic account.
-1. Azure Subscription with Owner permissions or Contributor with Administrator Role assignment.
-
- 1. Same level of permission for the Azure Media Service associated with the existing Azure AI Video Indexer Classic account.
-1. User assigned managed identity (can be created along the flow).
-
-## Transition state
-
-Connecting a classic account to be ARM-based triggers a 30 days of a transition state. In the transition state, an existing account can be accessed by generating an access token using both:
-
-* Access token [generated through API Management](https://aka.ms/avam-dev-portal) (classic way)
-* Access token [generated through ARM](/rest/api/videoindexer/preview/generate/access-token)
-
-The transition state moves all account management functionality to be managed by ARM and will be handled by [Azure RBAC][docs-rbac-overview].
-
-The [invite users](restricted-viewer-role.md#share-the-account) feature in the [Azure AI Video Indexer website](https://www.videoindexer.ai/) gets disabled. The invited users on this account lose their access to the Azure AI Video Indexer account Media in the portal.
-However, this can be resolved by assigning the right role-assignment to these users through Azure RBAC, see [How to assign RBAC][docs-rbac-assignment].
-
-Only the account owner, who performed the connect action, is automatically assigned as the owner on the connected account. When [Azure policies][docs-governance-policy] are enforced, they override the settings on the account.
-
-If users are not added through Azure RBAC to the account after 30 days, they will lose access through API as well as the [Azure AI Video Indexer website](https://www.videoindexer.ai/).
-After the transition state ends, users will only be able to generate a valid access token through ARM, making Azure RBAC the exclusive way to manage role-based access control on the account.
-
-> [!NOTE]
-> If there are invited users you wish to remove access from, do it before connecting the account to ARM.
-
-Before the end of the 30 days of transition state, you can remove access from users through the [Azure AI Video Indexer website](https://www.videoindexer.ai/) account settings page.
-
-## Get started
-
-### Browse to the [Azure AI Video Indexer website](https://aka.ms/vi-portal-link)
-
-1. Sign in using your Microsoft Entra account.
-1. On the top right bar press *User account* to open the side pane account list.
-1. Select the Azure AI Video Indexer classic account you wish to connect to ARM (classic accounts will be tagged with a *classic tag*).
-1. Click **Settings**.
-
- :::image type="content" alt-text="Screenshot that shows the Azure AI Video Indexer website settings." source="./media/connect-classic-account-to-arm/classic-account-settings.png":::
-1. Click **Connect to an ARM-based account**.
-
- :::image type="content" alt-text="Screenshot that shows the connect to an ARM-based account dialog." source="./media/connect-classic-account-to-arm/connect-classic-to-arm.png":::
-1. Sign in to the Azure portal.
-1. The Azure AI Video Indexer create blade will open.
-1. In the **Create Azure AI Video Indexer account** section enter required values.
-
-   If you followed the steps, the fields should be auto-populated; make sure to validate the values.
-
- :::image type="content" alt-text="Screenshot that shows the create Azure AI Video Indexer account dialog." source="./media/connect-classic-account-to-arm/connect-blade.png":::
-
- Here are the descriptions for the resource fields:
-
- | Name | Description |
- | ||
- |**Subscription**| The subscription currently contains the classic account and other related resources such as the Media Services.|
-|**Resource Group**|Select an existing resource group or create a new one. The resource group must be in the same location as the classic account being connected.|
- |**Azure AI Video Indexer account** (radio button)| Select the *"Connecting an existing classic account"*.|
- |**Existing account ID**|Select an existing Azure AI Video Indexer account from the dropdown.|
- |**Resource name**|Enter the name of the new Azure AI Video Indexer account. Default value would be the same name the account had as classic.|
-|**Location**|The geographic region can't be changed in the connect process; the connected account must stay in the same region. |
- |**Media Services account name**|The original Media Services account name that was associated with classic account.|
-|**User-assigned managed identity**|Select a user-assigned managed identity, or create a new one. The Azure AI Video Indexer account will use it to access the Media Services account. The user-assigned managed identity will be assigned the Contributor role on the Media Services account.|
-1. Click **Review + create** at the bottom of the form.
-
-## After connecting to ARM is complete
-
-After successfully connecting your account to ARM, it is recommended to make sure your account management APIs are replaced with [Azure AI Video Indexer REST API](/rest/api/videoindexer/preview/accounts).
-As mentioned in the beginning of this article, during the 30 days of the transition state, "[Get-access-token](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token)" will be supported side by side with the ARM-based "[Generate-Access token](/rest/api/videoindexer/preview/generate/access-token)".
-Make sure to change to the new "Generate-Access token" by updating all your solutions that use the API.
-
-APIs to be changed:
-- Get Access token for each scope: Account, Project & Video.
-- Get account - the account's details.
-- Get accounts - list of all accounts in a region.
-- Create paid account - would create a classic account.
-
-For a full description of [Azure AI Video Indexer REST API](/rest/api/videoindexer/preview/accounts) calls and documentation, follow the link.
-
-For a code sample that generates an access token through ARM, see the [C# code sample](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/API-Samples/C%23/ArmBased/Program.cs).
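If you work in Python rather than C#, a comparable sketch follows; the `Microsoft.VideoIndexer` resource path, request body, and `api-version` value are assumptions drawn from the ARM-based Generate Access Token reference linked above, so verify them against that documentation before use.

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholder values; substitute your own.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
ACCOUNT_NAME = "<video-indexer-account-name>"
API_VERSION = "2022-08-01"  # assumption; use the version from the REST API reference

credential = DefaultAzureCredential()
arm_token = credential.get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.VideoIndexer"
    f"/accounts/{ACCOUNT_NAME}/generateAccessToken"
)
body = {"permissionType": "Contributor", "scope": "Account"}  # assumption: body fields

response = requests.post(
    url,
    params={"api-version": API_VERSION},
    headers={"Authorization": f"Bearer {arm_token}"},
    json=body,
)
response.raise_for_status()
print(response.json()["accessToken"][:20] + "...")
```
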
-
-### Next steps
-
-Learn how to [Upload a video using C#](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/API-Samples/C%23/ArmBased/).
-
-<!-- links -->
-[docs-arm-overview]: ../azure-resource-manager/management/overview.md
-[docs-rbac-overview]: ../role-based-access-control/overview.md
-[docs-rbac-assignment]: ../role-based-access-control/role-assignments-portal.md
-[docs-governance-policy]: ../governance/policy/overview.md
azure-video-indexer Connect To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-to-azure.md
- Title: Create a classic Azure AI Video Indexer account connected to Azure
-description: Learn how to create a classic Azure AI Video Indexer account connected to Azure.
- Previously updated : 08/24/2022-----
-# Create a classic Azure AI Video Indexer account
---
-This topic shows how to create a new classic account connected to Azure using the [Azure AI Video Indexer website](https://aka.ms/vi-portal-link). You can also create an Azure AI Video Indexer classic account through our [API](https://aka.ms/avam-dev-portal).
-
-The topic discusses prerequisites that you need to connect to your Azure subscription and how to configure an Azure Media Services account.
-
-A few Azure AI Video Indexer account types are available to you. For detailed explanation, review [Account types](accounts-overview.md).
-
-For the pricing details, see [pricing](https://azure.microsoft.com/pricing/details/video-indexer/).
-
-## Prerequisites for connecting to Azure
-
-* An Azure subscription.
-
- If you don't have an Azure subscription yet, sign up for [Azure Free Trial](https://azure.microsoft.com/free/).
-* A Microsoft Entra domain.
-
- If you don't have a Microsoft Entra domain, create this domain with your Azure subscription. For more information, see [Managing custom domain names in your Microsoft Entra ID](../active-directory/enterprise-users/domains-manage.md)
-* A user in your Microsoft Entra domain with an **Application administrator** role. You'll use this member when connecting your Azure AI Video Indexer account to Azure.
-
- This user should be a Microsoft Entra user with a work or school account. Don't use a personal account, such as outlook.com, live.com, or hotmail.com.
-
- :::image type="content" alt-text="Screenshot that shows how to choose a user in your Microsoft Entra domain." source="./media/create-account/all-aad-users.png":::
-* A user and member in your Microsoft Entra domain.
-
- You'll use this member when connecting your Azure AI Video Indexer account to Azure.
-
- This user should be a member in your Azure subscription with either an **Owner** role, or both **Contributor** and **User Access Administrator** roles. A user can be added twice, with two roles. Once with Contributor and once with user Access Administrator. For more information, see [View the access a user has to Azure resources](../role-based-access-control/check-access.md).
-
- :::image type="content" alt-text="Screenshot that shows the access control settings." source="./media/create-account/access-control-iam.png":::
-* Register the Event Grid resource provider using the Azure portal.
-
- In the [Azure portal](https://portal.azure.com/), go to **Subscriptions**->[subscription]->**ResourceProviders**.
-
- Search for **Microsoft.Media** and **Microsoft.EventGrid**. If not in the "Registered" state, select **Register**. It takes a couple of minutes to register.
-
- :::image type="content" alt-text="Screenshot that shows how to select an Event Grid subscription." source="./media/create-account/event-grid.png":::
-
-## Connect to Azure
-
-> [!NOTE]
-> Use the same Microsoft Entra user you used when connecting to Azure.
-
-It's strongly recommended to have the following three accounts located in the same region:
-
-* The Azure AI Video Indexer account that you're creating.
-* The Azure AI Video Indexer account that you're connecting with the Media Services account.
-* The Azure storage account connected to the same Media Services account.
-
- When you create an Azure AI Video Indexer account and connect it to Media Services, the media and metadata files are stored in the Azure storage account associated with that Media Services account.
-
-If your storage account is behind a firewall, see [storage account that is behind a firewall](faq.yml#can-a-storage-account-connected-to-the-media-services-account-be-behind-a-firewall).
-
-### Create and configure a Media Services account
-
-1. Use the [Azure](https://portal.azure.com/) portal to create an Azure Media Services account, as described in [Create an account](/azure/media-services/previous/media-services-portal-create-account).
-
- > [!NOTE]
- > Make sure to write down the Media Services resource and account names.
-1. Before you can play your videos in the [Azure AI Video Indexer](https://www.videoindexer.ai/) website, you must start the default **Streaming Endpoint** of the new Media Services account.
-
- In the new Media Services account, select **Streaming endpoints**. Then select the streaming endpoint and press start.
-
- :::image type="content" alt-text="Screenshot that shows how to specify streaming endpoints." source="./media/create-account/create-ams-account-se.png":::
-1. For Azure AI Video Indexer to authenticate with Media Services API, an AD app needs to be created. The following steps guide you through the Microsoft Entra authentication process described in [Get started with Microsoft Entra authentication by using the Azure portal](/azure/media-services/previous/media-services-portal-get-started-with-aad):
-
- 1. In the new Media Services account, select **API access**.
- 2. Select [Service principal authentication method](/azure/media-services/previous/media-services-portal-get-started-with-aad).
- 3. Get the client ID and client secret
-
- After you select **Settings**->**Keys**, add **Description**, press **Save**, and the key value gets populated.
-
- If the key expires, the account owner will have to contact Azure AI Video Indexer support to renew the key.
-
- > [!NOTE]
- > Make sure to write down the key value and the Application ID. You'll need it for the steps in the next section.
-
-### Azure Media Services considerations
-
-The following Azure Media Services related considerations apply:
-
-* If you connect to a new Media Services account, Azure AI Video Indexer automatically starts the default **Streaming Endpoint** in it:
-
- ![Media Services streaming endpoint](./media/create-account/ams-streaming-endpoint.png)
-
- Streaming endpoints have a considerable startup time. Therefore, it may take several minutes from the time you connected your account to Azure until your videos can be streamed and watched in the [Azure AI Video Indexer](https://www.videoindexer.ai/) website.
-* If you connect to an existing Media Services account, Azure AI Video Indexer doesn't change the default Streaming Endpoint configuration. If there's no running **Streaming Endpoint**, you can't watch videos from this Media Services account or in Azure AI Video Indexer.
-
-## Create a classic account
-
-1. On the [Azure AI Video Indexer website](https://aka.ms/vi-portal-link), select **Create unlimited account** (the paid account).
-2. To create a classic account, select **Switch to manual configuration**.
-
-In the dialog, provide the following information:
-
-|Setting|Description|
-|||
-|Azure AI Video Indexer account region|The name of the Azure AI Video Indexer account region. For better performance and lower costs, it's highly recommended to specify the name of the region where the Azure Media Services resource and Azure Storage account are located. |
-|Microsoft Entra tenant|The name of the Microsoft Entra tenant, for example "contoso.onmicrosoft.com". The tenant information can be retrieved from the Azure portal. Place your cursor over the name of the signed-in user in the top-right corner. Find the name to the right of **Domain**.|
-|Subscription ID|The Azure subscription under which this connection should be created. The subscription ID can be retrieved from the Azure portal. Select **All services** in the left panel, and search for "subscriptions". Select **Subscriptions** and choose the desired ID from the list of your subscriptions.|
-|Azure Media Services resource group name|The name for the resource group in which you created the Media Services account.|
-|Media service resource name|The name of the Azure Media Services account that you created in the previous section.|
-|Application ID|The Microsoft Entra application ID (with permissions for the specified Media Services account) that you created in the previous section.|
-|Application key|The Microsoft Entra application key that you created in the previous section. |
-
-## Import your content from the trial account
-
-See [Import your content from the trial account](import-content-from-trial.md).
-
-## Automate creation of the Azure AI Video Indexer account
-
-Automating the creation of the account is a two-step process; a sketch of the first step follows the list:
-
-1. Use Azure Resource Manager to create an Azure Media Services account + Microsoft Entra application.
-
- See an example of the [Media Services account creation template](https://github.com/Azure-Samples/media-services-v3-arm-templates).
-1. Call [Create-Account with the Media Services and Microsoft Entra application](https://videoindexer.ai.azure.us/account/login?source=apim).
-
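-As a rough illustration of step 1, the following sketch deploys the sample ARM template with the Python SDK. It assumes the `azure-identity` and `azure-mgmt-resource` packages; the resource group, deployment name, and template file name are hypothetical placeholders, and the template itself comes from the sample repository linked above.
-
-```python
-# A minimal sketch: deploy the Media Services + Microsoft Entra application ARM template (step 1).
-# Assumes `pip install azure-identity azure-mgmt-resource`; names below are placeholders.
-import json
-from azure.identity import DefaultAzureCredential
-from azure.mgmt.resource import ResourceManagementClient
-
-client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")
-
-with open("azuredeploy.json") as template_file:   # template from the sample repository
-    template = json.load(template_file)
-
-# Start an incremental deployment into an existing resource group and wait for it to finish.
-poller = client.deployments.begin_create_or_update(
-    "<resource-group>",
-    "video-indexer-prerequisites",
-    {"properties": {"mode": "Incremental", "template": template, "parameters": {}}},
-)
-print(poller.result().properties.provisioning_state)
-```
-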
-## Azure AI Video Indexer in Azure Government
-
-### Prerequisites for connecting to Azure Government
-- An Azure subscription in [Azure Government](../azure-government/index.yml).
-- A Microsoft Entra account in Azure Government.
-- All prerequisite permissions and resources, as described above in [Prerequisites for connecting to Azure](#prerequisites-for-connecting-to-azure).
-
-### Create new account via the Azure Government portal
-
-> [!NOTE]
-> The Azure Government cloud does not include a *trial* experience of Azure AI Video Indexer.
-
-To create a paid account via the Azure AI Video Indexer website:
-
-1. Go to https://videoindexer.ai.azure.us
-1. Sign-in with your Azure Government Microsoft Entra account.
-1. If you don't have any Azure AI Video Indexer accounts in Azure Government that you're an owner or a contributor to, you get an empty experience from which you can start creating your account.
-
- The rest of the flow is as described above; the only difference is that the regions to select from are the Government regions in which Azure AI Video Indexer is available.
-
- If you're already a contributor or an admin of one or more existing Azure AI Video Indexer accounts in Azure Government, you're taken to that account. From there, you can follow the steps described above to create an additional account if needed.
-
-### Create new account via the API on Azure Government
-
-To create a paid account in Azure Government, follow the instructions in [Create-Paid-Account](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Paid-Account). This API endpoint only includes Government cloud regions.
-
-### Limitations of Azure AI Video Indexer on Azure Government
-
-* Only paid accounts (ARM or classic) are available on Azure Government.
-* No manual content moderation is available in Azure Government.
-
- In the public cloud, when content is deemed offensive by content moderation, the customer can ask for a human to review that content and potentially revert that decision.
-* Bing description: in Azure Government, descriptions of identified celebrities and named entities aren't presented. This is a UI capability only.
-
-## Clean up resources
-
-After you're done with this tutorial, delete resources that you aren't planning to use.
-
-### Delete an Azure AI Video Indexer account
-
-If you want to delete an Azure AI Video Indexer account, you can delete the account from the Azure AI Video Indexer website. To delete the account, you must be the owner.
-
-Select the account -> **Settings** -> **Delete this account**.
-
-The account will be permanently deleted in 90 days.
-
-## Next steps
-
-You can programmatically interact with your trial account and/or with your Azure AI Video Indexer accounts that are connected to Azure by following the instructions in: [Use APIs](video-indexer-use-apis.md).
azure-video-indexer Considerations When Use At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/considerations-when-use-at-scale.md
- Title: Things to consider when using Azure AI Video Indexer at scale - Azure
-description: This topic explains what things to consider when using Azure AI Video Indexer at scale.
- Previously updated : 07/03/2023----
-# Things to consider when using Azure AI Video Indexer at scale
--
-When using Azure AI Video Indexer to index videos and your archive of videos is growing, consider scaling.
-
-This article answers questions like:
-
-* Are there any technological constraints I need to take into account?
-* Is there a smart and efficient way of doing it?
-* Can I prevent spending excess money in the process?
-
-The article provides six best practices of how to use Azure AI Video Indexer at scale.
-
-## When uploading videos consider using a URL over byte array
-
-Azure AI Video Indexer gives you the choice to upload videos from a URL or directly by sending the file as a byte array; the latter comes with some constraints. For more information, see [uploading considerations and limitations](upload-index-videos.md).
-
-First, the byte array option has a file size limitation. The size of a file sent as a byte array is limited to 2 GB, compared to the 30-GB limit when uploading from a URL.
-
-Second, consider just some of the issues that can affect your performance, and hence your ability to scale, when sending files as multi-part byte arrays:
-
-* High dependency on your network.
-* Service reliability.
-* Connectivity.
-* Upload speed.
-* Packet loss somewhere on the public internet.
--
-When you upload videos using URL, you just need to provide a path to the location of a media file and Video Indexer takes care of the rest (see the `videoUrl` field in the [upload video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) API).
-
-> [!TIP]
-> Use the `videoUrl` optional parameter of the upload video API.
-
-To see an example of how to upload videos using URL, check out [this example](upload-index-videos.md). Or, you can use [AzCopy](../storage/common/storage-use-azcopy-v10.md) for a fast and reliable way to get your content to a storage account from which you can submit it to Azure AI Video Indexer using [SAS URL](../storage/common/storage-sas-overview.md). Azure AI Video Indexer recommends using *readonly* SAS URLs.
-
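-As a rough illustration, the following sketch uploads a video by URL with the Python `requests` package. The endpoint path, query parameters, and placeholder values are assumptions based on the upload video API linked above; check the API reference for the exact shape.
-
-```python
-# A minimal sketch: upload a video by URL (instead of a byte array) with `requests`.
-# The endpoint path and parameter names are assumptions; see the upload video API reference.
-import requests
-
-location = "trial"                      # or your account's Azure region
-account_id = "<account-id>"
-
-response = requests.post(
-    f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Videos",
-    params={
-        "accessToken": "<account-access-token>",
-        "name": "my-video",
-        "videoUrl": "<read-only-SAS-URL-to-your-media-file>",  # URL upload, no bytes sent
-    },
-)
-response.raise_for_status()
-print(response.json()["id"])            # the ID of the newly uploaded video
-```
-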
-## Automatic Scaling of Media Reserved Units
-
-Starting August 1st, 2021, Azure AI Video Indexer enabled auto scaling of [Media Reserved Units](/azure/media-services/latest/concept-media-reserved-units) (MRUs) by [Azure Media Services](/azure/media-services/latest/media-services-overview) (AMS). As a result, you don't need to manage MRUs through Azure AI Video Indexer. Because capacity is scaled automatically based on your business needs, this allows price optimization, for example, price reduction in many cases.
-
-## Respect throttling
-
-Azure AI Video Indexer is built to deal with indexing at scale. When you want to get the most out of it, you should also be aware of the system's capabilities and design your integration accordingly. You don't want to send an upload request for a batch of videos only to discover that some of the videos didn't upload and that you're receiving an HTTP 429 response code (too many requests). There's an API request limit of 10 requests per second and up to 120 requests per minute.
-
-Azure AI Video Indexer adds a `retry-after` header to the HTTP response. The header specifies when you should attempt your next retry. Make sure you respect it before trying your next request.
--
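-The following is a minimal sketch of a request helper that honors the `retry-after` header when it receives HTTP 429. It uses the Python `requests` package; the retry count and default wait are assumptions that you should tune for your workload.
-
-```python
-# A minimal sketch: retry on HTTP 429 by respecting the retry-after header.
-# The retry count and fallback wait are assumptions; tune them for your workload.
-import time
-import requests
-
-def send_with_throttling(method, url, max_attempts=5, **kwargs):
-    for _ in range(max_attempts):
-        response = requests.request(method, url, **kwargs)
-        if response.status_code != 429:
-            return response
-        # Respect the server's hint before retrying; fall back to 10 seconds if it's missing.
-        wait_seconds = int(response.headers.get("Retry-After", 10))
-        time.sleep(wait_seconds)
-    return response
-```
-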
-## Use callback URL
-
-Instead of constantly polling the status of your request from the second you send the upload request, we recommend that you add a callback URL and wait for Azure AI Video Indexer to update you. As soon as there's any status change in your upload request, you get a POST notification to the URL you specified.
-
-You can add a callback URL as one of the parameters of the [upload video API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video). Check out the code samples in [GitHub repo](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/).
-
-For the callback URL, you can also use Azure Functions, a serverless, event-driven platform that can be triggered by HTTP, and implement the following flow.
-
-### Callback URL definition
--
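-As a rough illustration, the following sketch is a small Flask app (instead of the Azure Function mentioned above) that receives the POST notifications Azure AI Video Indexer sends to your callback URL. The route name is hypothetical, and because the exact notification payload isn't shown in this article, the handler simply logs what it receives.
-
-```python
-# A minimal sketch: receive Azure AI Video Indexer callback notifications with Flask.
-# The route is hypothetical; inspect the logged request to learn the exact payload shape.
-from flask import Flask, request
-
-app = Flask(__name__)
-
-@app.route("/video-indexer-callback", methods=["POST"])
-def video_indexer_callback():
-    # Log whatever arrives (query string and body) instead of assuming a schema.
-    print("Query parameters:", dict(request.args))
-    print("Body:", request.get_data(as_text=True))
-    return "", 200
-
-if __name__ == "__main__":
-    app.run(port=5000)
-```
-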
-## Use the right indexing parameters for you
-
-When making decisions related to using Azure AI Video Indexer at scale, look at how to get the most out of it with the right parameters for your needs. Think about your use case: by defining different parameters, you can save money and make the indexing process for your videos faster.
-
-Before uploading and indexing your video read the [documentation](upload-index-videos.md) to get a better idea of what your options are.
-
-For example, don't set the preset to streaming if you don't plan to watch the video, and don't index video insights if you only need audio insights.
-
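-As a hedged example, the sketch below extends the earlier URL upload with preset parameters. The `indexingPreset` and `streamingPreset` parameter names and values are assumptions; verify them against the upload video API reference before relying on them.
-
-```python
-# A minimal sketch: pick cheaper presets when you don't need full video insights or streaming.
-# The preset parameter names and values are assumptions; verify them in the API reference.
-import requests
-
-location = "trial"
-account_id = "<account-id>"
-
-response = requests.post(
-    f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Videos",
-    params={
-        "accessToken": "<account-access-token>",
-        "name": "my-audio-only-video",
-        "videoUrl": "<read-only-SAS-URL>",
-        "indexingPreset": "AudioOnly",     # skip visual insights if only audio matters
-        "streamingPreset": "NoStreaming",  # skip encoding for playback you don't plan to use
-    },
-)
-response.raise_for_status()
-```
-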
-## Index in optimal resolution, not highest resolution
-
-You might be asking, what video quality do you need for indexing your videos?
-
-In many cases, indexing performance has almost no difference between HD (720p) videos and 4K videos. Eventually, you'll get almost the same insights with the same confidence. However, the higher the quality of the video you upload, the larger the file size, and the more computing power and time are needed to upload and process the video.
-
-For example, for the face detection feature, a higher resolution can help with the scenario where there are many small but contextually important faces. However, this comes with a quadratic increase in runtime and an increased risk of false positives.
-
-Therefore, we recommend that you verify that you get the right results for your use case and that you first test it locally. Upload the same video in 720p and in 4K, and compare the insights you get.
-
-## Next steps
-
-[Examine the Azure AI Video Indexer output produced by API](video-indexer-output-json-v2.md)
azure-video-indexer Create Account Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/create-account-portal.md
- Title: Create an Azure AI Video Indexer account
-description: This article explains how to create an account for Azure AI Video Indexer.
- Previously updated : 06/10/2022---
-
-# Tutorial: create an ARM-based account with Azure portal
---
-To start using unlimited features and robust capabilities of Azure AI Video Indexer, you need to create an Azure AI Video Indexer unlimited account.
-
-This tutorial walks you through the steps of creating the Azure AI Video Indexer account and its accompanying resources by using the Azure portal. The account that gets created is ARM (Azure Resource Manager) account. For information about different account types, see [Overview of account types](accounts-overview.md).
-
-## Prerequisites
-
-* You should be a member of your Azure subscription with either an **Owner** role, or both **Contributor** and **User Access Administrator** roles. You can be added twice, with two roles, once with **Contributor** and once with **User Access Administrator**. For more information, see [View the access a user has to Azure resources](../role-based-access-control/check-access.md).
-* Register the **EventGrid** resource provider using the Azure portal.
-
- In the [Azure portal](https://portal.azure.com), go to **Subscriptions**->[<*subscription*>]->**ResourceProviders**.
Search for **Microsoft.Media** and **Microsoft.EventGrid**. If they aren't in the **Registered** state, select **Register**. It takes a couple of minutes to register. You can also register the providers programmatically, as shown in the sketch after this list.
-* Have an **Owner** role (or **Contributor** and **User Access Administrator** roles) assignment on the associated Azure Media Services (AMS). You select the AMS account during the Azure AI Video Indexer account creation, as described below.
-* Have an **Owner** role (or **Contributor** and **User Access Administrator** roles) assignment on the related managed identity.
-
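-For the programmatic alternative to registering the resource providers, the following is a minimal sketch using the Python SDK. It assumes the `azure-identity` and `azure-mgmt-resource` packages and an identity with permission to register providers on the subscription.
-
-```python
-# A minimal sketch: register the Microsoft.Media and Microsoft.EventGrid resource providers.
-# Assumes `pip install azure-identity azure-mgmt-resource` and sufficient permissions.
-from azure.identity import DefaultAzureCredential
-from azure.mgmt.resource import ResourceManagementClient
-
-client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")
-
-for namespace in ("Microsoft.Media", "Microsoft.EventGrid"):
-    provider = client.providers.register(namespace)
-    print(namespace, provider.registration_state)   # registration takes a couple of minutes
-```
-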
-## Use the Azure portal to create an Azure AI Video Indexer account
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
- Alternatively, you can start creating the **unlimited** account from the [videoindexer.ai](https://www.videoindexer.ai) website.
-1. Using the search bar at the top, enter **"Video Indexer"**.
-1. Select **Video Indexer** under **Services**.
-1. Select **Create**.
-1. In the Create an Azure AI Video Indexer resource section, enter required values (the descriptions follow after the image).
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/create-account-portal/avi-create-blade.png" alt-text="Screenshot showing how to create an Azure AI Video Indexer resource.":::
-
- Here are the definitions:
-
- | Name | Description|
- |||
- |**Subscription**|Choose the subscription to use. If you're a member of only one subscription, you'll see that name. If there are multiple choices, choose a subscription in which your user has the required role.
- |**Resource group**|Select an existing resource group or create a new one. A resource group is a collection of resources that share lifecycle, permissions, and policies. Learn more [here](../azure-resource-manager/management/overview.md#resource-groups).|
- |**Resource name**|This will be the name of the new Azure AI Video Indexer account. The name can contain letters, numbers and dashes with no spaces.|
- |**Region**|Select the Azure region that will be used to deploy the Azure AI Video Indexer account. The region matches the resource group region you chose. If you'd like to change the selected region, change the selected resource group or create a new one in the preferred region. [Azure region in which Azure AI Video Indexer is available](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services&regions=all)|
- |**Existing content**|If you have existing classic Video Indexer accounts, you can choose to have the videos, files, and data associated with an existing classic account connected to the new account. See the following article to learn more [Connect the classic account to ARM](connect-classic-account-to-arm.md)
- |**Available classic accounts**|Classic accounts available in the chosen subscription, resource group, and region.|
- |**Media Services account name**|Select a Media Services that the new Azure AI Video Indexer account will use to process the videos. You can select an existing Media Services or you can create a new one. The Media Services must be in the same region you selected for your Azure AI Video Indexer account.|
- |**Storage account** (appears when creating a new AMS account)|Choose or create a new storage account in the same resource group.|
- |**Managed identity**|Select an existing user-assigned managed identity, a system-assigned managed identity, or both when creating the account. The new Azure AI Video Indexer account uses the selected managed identity to access the Media Services account associated with it. If both a user-assigned and a system-assigned managed identity are selected during account creation, the **default** managed identity is the user-assigned managed identity. A Contributor role should be assigned on the Media Services account.|
-1. Select **Review + create** at the bottom of the form.
-
-### Review deployed resource
-
-You can use the Azure portal to validate the Azure AI Video Indexer account and other resources that were created. After the deployment is finished, select **Go to resource** to see your new Azure AI Video Indexer account.
-
-## The Overview tab of the account
-
-This tab enables you to view details about your account.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/create-account-portal/avi-overview.png" alt-text="Screenshot showing the Overview tab.":::
-
-Select **Explore Azure AI Video Indexer's portal** to view your new account on the [Azure AI Video Indexer website](https://aka.ms/vi-portal-link).
-
-### Essential details
-
-|Name|Description|
-|||
-|Status| When the resource is connected properly, the status is **Active**. When there's a problem with the connection between the managed identity and the Media Services instance, the status is *Connection to Azure Media Services failed*. To fix it, add a Contributor role assignment on the Media Services account to the proper managed identity.|
-|Managed identity |The name of the default managed identity, user-assigned or system-assigned. The default managed identity can be updated using the **Change** button.|
-
-## The Management tab of the account
-
-This tab contains sections for:
-
-* getting an access token for the account
-* managing identities
-
-### Management API
-
-Use the **Management API** tab to manually generate access tokens for the account.
-This token can be used to authenticate API calls for this account. Each token is valid for one hour.
-
-#### To get the access token
-
-Choose the following:
-
-* Permission type: **Contributor** or **Reader**
-* Scope: **Account**, **Project** or **Video**
-
- * For **Project** or **Video** you should also insert the matching ID.
-* Select **Generate**
-
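-You can also generate the same access token programmatically by calling the `generateAccessToken` action on the Azure AI Video Indexer resource through Azure Resource Manager. The following is a minimal sketch with `azure-identity` and `requests`; the API version and body property names are assumptions, so confirm them against the Azure AI Video Indexer REST reference.
-
-```python
-# A minimal sketch: generate an account access token through the ARM generateAccessToken action.
-# The api-version and body property names are assumptions; confirm them in the REST reference.
-import requests
-from azure.identity import DefaultAzureCredential
-
-subscription_id = "<subscription-id>"
-resource_group = "<resource-group>"
-account_name = "<video-indexer-account-name>"
-
-arm_token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
-
-url = (
-    f"https://management.azure.com/subscriptions/{subscription_id}"
-    f"/resourceGroups/{resource_group}/providers/Microsoft.VideoIndexer"
-    f"/accounts/{account_name}/generateAccessToken"
-)
-response = requests.post(
-    url,
-    params={"api-version": "2024-01-01"},             # assumption; use a supported version
-    headers={"Authorization": f"Bearer {arm_token}"},
-    json={"permissionType": "Contributor", "scope": "Account"},
-)
-response.raise_for_status()
-print(response.json()["accessToken"])                 # valid for one hour
-```
-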
-### Identity
-
-Use the **Identity** tab to manually update the managed identities associated with the Azure AI Video Indexer resource.
-
-Add new managed identities, switch the default managed identity between user-assigned and system-assigned or set a new user-assigned managed identity.
-
-## Next steps
-
-Learn how to [Upload a video using C#](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/API-Samples/C%23/ArmBased/).
--
-<!-- links -->
-[docs-uami]: ../active-directory/managed-identities-azure-resources/overview.md
-[docs-ms]: /azure/media-services/latest/media-services-overview
-[docs-role-contributor]: ../../role-based-access-control/built-in-roles.md#contributor
-[docs-contributor-on-ms]: ./add-contributor-role-on-the-media-service.md
azure-video-indexer Customize Brands Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-brands-model-overview.md
- Title: Customize a Brands model in Azure AI Video Indexer - Azure
-description: This article gives an overview of what is a Brands model in Azure AI Video Indexer and how to customize it.
- Previously updated : 12/15/2019----
-# Customize a Brands model in Azure AI Video Indexer
--
-Azure AI Video Indexer supports brand detection from speech and visual text during indexing and reindexing of video and audio content. The brand detection feature identifies mentions of products, services, and companies suggested by Bing's brands database. For example, if Microsoft is mentioned in a video or audio content or if it shows up in visual text in a video, Azure AI Video Indexer detects it as a brand in the content. Brands are disambiguated from other terms using context.
-
-Brand detection is useful in a wide variety of business scenarios such as contents archive and discovery, contextual advertising, social media analysis, retail compete analysis, and many more. Azure AI Video Indexer brand detection enables you to index brand mentions in speech and visual text, using Bing's brands database as well as with customization by building a custom Brands model for each Azure AI Video Indexer account. The custom Brands model feature allows you to select whether or not Azure AI Video Indexer will detect brands from the Bing brands database, exclude certain brands from being detected (essentially creating a list of unapproved brands), and include brands that should be part of your model that might not be in Bing's brands database (essentially creating a list of approved brands). The custom Brands model that you create will only be available in the account in which you created the model.
-
-## Out of the box detection example
-
-In the "Microsoft Build 2017 Day 2" presentation, the brand "Microsoft Windows" appears multiple times. Sometimes in the transcript, sometimes as visual text and never as verbatim. Azure AI Video Indexer detects with high precision that a term is indeed brand based on the context, covering over 90k brands out of the box, and constantly updating. At 02:25, Azure AI Video Indexer detects the brand from speech and then again at 02:40 from visual text, which is part of the Windows logo.
-
-![Brands overview](./media/content-model-customization/brands-overview.png)
-
-Talking about Windows in the context of construction doesn't result in the word "Windows" being detected as a brand; the same applies to Box, Apple, Fox, and so on. This is based on advanced machine learning algorithms that disambiguate brands from context. Brand detection works for all supported languages.
-
-## Next steps
-
-To bring your own brands, check out these topics:
-
-[Customize Brands model using APIs](customize-brands-model-with-api.md)
-
-[Customize Brands model using the website](customize-brands-model-with-website.md)
azure-video-indexer Customize Brands Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-brands-model-with-api.md
- Title: Customize a Brands model with Azure AI Video Indexer API
-description: Learn how to customize a Brands model with the Azure AI Video Indexer API.
- Previously updated : 01/14/2020-----
-# Customize a Brands model with the Azure AI Video Indexer API
--
-Azure AI Video Indexer supports brand detection from speech and visual text during indexing and reindexing of video and audio content. The brand detection feature identifies mentions of products, services, and companies suggested by Bing's brands database. For example, if Microsoft is mentioned in video or audio content or if it shows up in visual text in a video, Azure AI Video Indexer detects it as a brand in the content. A custom Brands model allows you to exclude certain brands from being detected and include brands that should be part of your model that might not be in Bing's brands database. For more information, see [Overview](customize-brands-model-overview.md).
-
-> [!NOTE]
-> If your video was indexed prior to adding a brand, you need to reindex it.
-
-You can use the Azure AI Video Indexer APIs to create, use, and edit custom Brands models detected in a video, as described in this topic. You can also use the Azure AI Video Indexer website, as described in [Customize Brands model using the Azure AI Video Indexer website](customize-brands-model-with-website.md).
-
-## Create a Brand
-
-The [create a brand](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Brand) API creates a new custom brand and adds it to the custom Brands model for the specified account.
-
-> [!NOTE]
-> Setting `enabled` (in the body) to true puts the brand in the *Include* list for Azure AI Video Indexer to detect. Setting `enabled` to false puts the brand in the *Exclude* list, so Azure AI Video Indexer won't detect it.
-
-Some other parameters that you can set in the body:
-
-* The `referenceUrl` value can be any reference websites for the brand, such as a link to its Wikipedia page.
-* The `tags` value is a list of tags for the brand. This tag shows up in the brand's *Category* field in the Azure AI Video Indexer website. For example, the brand "Azure" can be tagged or categorized as "Cloud".
-
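-As a hedged illustration, here's a minimal `requests` sketch that creates a brand with the body fields described above. The `Customization/Brands` path and the placeholder values are assumptions; the field names mirror the response example that follows.
-
-```python
-# A minimal sketch: create a custom brand. The Customization/Brands path is an assumption;
-# the body fields mirror the parameters described above and the response example below.
-import requests
-
-location = "trial"
-account_id = "<account-id>"
-
-response = requests.post(
-    f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Customization/Brands",
-    params={"accessToken": "<account-access-token>"},
-    json={
-        "name": "Example",
-        "enabled": True,                 # True = Include list, False = Exclude list
-        "description": "This is an example",
-        "referenceUrl": "https://en.wikipedia.org/wiki/Example",
-        "tags": ["Tag1", "Tag2"],
-    },
-)
-response.raise_for_status()
-print(response.json()["id"])
-```
-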
-### Response
-
-The response provides information on the brand that you just created following the format of the example below.
-
-```json
-{
- "referenceUrl": "https://en.wikipedia.org/wiki/Example",
- "id": 97974,
- "name": "Example",
- "accountId": "SampleAccountId",
- "lastModifierUserName": "SampleUserName",
- "created": "2018-04-25T14:59:52.7433333",
- "lastModified": "2018-04-25T14:59:52.7433333",
- "enabled": true,
- "description": "This is an example",
- "tags": [
- "Tag1",
- "Tag2"
- ]
-}
-```
-
-## Delete a Brand
-
-The [delete a brand](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Brand) API removes a brand from the custom Brands model for the specified account. The account is specified in the `accountId` parameter. Once called successfully, the brand will no longer be in the *Include* or *Exclude* brands lists.
-
-### Response
-
-There's no returned content when the brand is deleted successfully.
-
-## Get a specific Brand
-
-The [get a brand](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Brand) API lets you search for the details of a brand in the custom Brands model for the specified account using the brand ID.
-
-### Response
-
-The response provides information on the brand that you searched (using brand ID) following the format of the example below.
-
-```json
-{
- "referenceUrl": "https://en.wikipedia.org/wiki/Example",
- "id": 128846,
- "name": "Example",
- "accountId": "SampleAccountId",
- "lastModifierUserName": "SampleUserName",
- "created": "2018-01-06T13:51:38.3666667",
- "lastModified": "2018-01-11T13:51:38.3666667",
- "enabled": true,
- "description": "This is an example",
- "tags": [
- "Tag1",
- "Tag2"
- ]
-}
-```
-
-> [!NOTE]
-> `enabled` being set to `true` signifies that the brand is in the *Include* list for Azure AI Video Indexer to detect, and `enabled` being false signifies that the brand is in the *Exclude* list, so Azure AI Video Indexer won't detect it.
-
-## Update a specific brand
-
-The [update a brand](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Brand) API lets you update the details of a brand in the custom Brands model for the specified account using the brand ID.
-
-### Response
-
-The response provides the updated information on the brand that you updated following the format of the example below.
-
-```json
-{
- "referenceUrl": null,
- "id": 97974,
- "name": "Example",
- "accountId": "SampleAccountId",
- "lastModifierUserName": "SampleUserName",
- "Created": "2018-04-25T14:59:52.7433333",
- "lastModified": "2018-04-25T15:37:50.67",
- "enabled": false,
- "description": "This is an update example",
- "tags": [
- "Tag1",
- "NewTag2"
- ]
-}
-```
-
-## Get all of the Brands
-
-The [get all brands](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Brands) API returns all of the brands in the custom Brands model for the specified account regardless of whether the brand is meant to be in the *Include* or *Exclude* brands list.
-
-### Response
-
-The response provides a list of all of the brands in your account and each of their details following the format of the example below.
-
-```json
-[
- {
- "ReferenceUrl": null,
- "id": 97974,
- "name": "Example",
- "accountId": "AccountId",
- "lastModifierUserName": "UserName",
- "Created": "2018-04-25T14:59:52.7433333",
- "LastModified": "2018-04-25T14:59:52.7433333",
- "enabled": true,
- "description": "This is an example",
- "tags": ["Tag1", "Tag2"]
- },
- {
- "ReferenceUrl": null,
- "id": 97975,
- "name": "Example2",
- "accountId": "AccountId",
- "lastModifierUserName": "UserName",
- "Created": "2018-04-26T14:59:52.7433333",
- "LastModified": "2018-04-26T14:59:52.7433333",
- "enabled": false,
- "description": "This is another example",
- "tags": ["Tag1", "Tag2"]
- },
-]
-```
-
-> [!NOTE]
-> The brand named *Example* is in the *Include* list for Azure AI Video Indexer to detect, and the brand named *Example2* is in the *Exclude* list, so Azure AI Video Indexer won't detect it.
-
-## Get Brands model settings
-
-The [get brands settings](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Brands) API returns the Brands model settings in the specified account. The Brands model settings represent whether detection from the Bing brands database is enabled or not. If Bing brands aren't enabled, Azure AI Video Indexer will only detect brands from the custom Brands model of the specified account.
-
-### Response
-
-The response shows whether Bing brands are enabled following the format of the example below.
-
-```json
-{
- "state": true,
- "useBuiltIn": true
-}
-```
-
-> [!NOTE]
-> `useBuiltIn` being set to true represents that Bing brands are enabled. If `useBuiltin` is false, Bing brands are disabled. The `state` value can be ignored because it has been deprecated.
-
-## Update Brands model settings
-
-The [update brands](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Brands-Model-Settings) API updates the Brands model settings in the specified account. The Brands model settings represent whether detection from the Bing brands database is enabled or not. If Bing brands aren't enabled, Azure AI Video Indexer will only detect brands from the custom Brands model of the specified account.
-
-The `useBuiltIn` flag set to true means that Bing brands are enabled. If `useBuiltin` is false, Bing brands are disabled.
-
-### Response
-
-There's no returned content when the Brands model setting is updated successfully.
-
-## Next steps
-
-[Customize Brands model using website](customize-brands-model-with-website.md)
azure-video-indexer Customize Brands Model With Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-brands-model-with-website.md
- Title: Customize a Brands model with the Azure AI Video Indexer website
-description: Learn how to customize a Brands model with the Azure AI Video Indexer website.
- Previously updated : 12/15/2019-----
-# Customize a Brands model with the Azure AI Video Indexer website
--
-Azure AI Video Indexer supports brand detection from speech and visual text during indexing and reindexing of video and audio content. The brand detection feature identifies mentions of products, services, and companies suggested by Bing's brands database. For example, if Microsoft is mentioned in video or audio content or if it shows up in visual text in a video, Azure AI Video Indexer detects it as a brand in the content.
-
-A custom Brands model allows you to:
-- Select if you want Azure AI Video Indexer to detect brands from the Bing brands database.
-- Select if you want Azure AI Video Indexer to exclude certain brands from being detected (essentially creating a blocklist of brands).
-- Select if you want Azure AI Video Indexer to include brands that should be part of your model that might not be in Bing's brands database (essentially creating an accept list of brands).
-
-For a detailed overview, see this [Overview](customize-brands-model-overview.md).
-
-You can use the Azure AI Video Indexer website to create, use, and edit custom Brands models detected in a video, as described in this article. You can also use the API, as described in [Customize Brands model using APIs](customize-brands-model-with-api.md).
-
-> [!NOTE]
-> If your video was indexed prior to adding a brand, you need to reindex it. You will find **Re-index** item in the drop-down menu associated with the video. Select **Advanced options** -> **Brand categories** and check **All brands**.
-
-## Edit Brands model settings
-
-You have the option to set whether or not you want brands from the Bing brands database to be detected. To set this option, you need to edit the settings of your Brands model. Follow these steps:
-
-1. Go to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website and sign in.
-1. To customize a model in your account, select the **Content model customization** button on the left of the page.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/content-model-customization/content-model-customization.png" alt-text="Customize content model in Azure AI Video Indexer ":::
-1. To edit brands, select the **Brands** tab.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/customize-brand-model/customize-brand-model.png" alt-text="Screenshot shows the Brands tab of the Content model customization dialog box":::
-1. Check the **Show brands suggested by Bing** option if you want Azure AI Video Indexer to detect brands suggested by Bing; leave the option unchecked if you don't.
-
-## Include brands in the model
-
-The **Include brands** section represents custom brands that you want Azure AI Video Indexer to detect, even if they aren't suggested by Bing.
-
-### Add a brand to include list
-
-1. Select **+ Create new brand**.
-
- Provide a name (required), category (optional), description (optional), and reference URL (optional).
- The category field is meant to help you tag your brands. This field shows up as the brand's *tags* when using the Azure AI Video Indexer APIs. For example, the brand "Azure" can be tagged or categorized as "Cloud".
-
- The reference URL field can be any reference website for the brand (like a link to its Wikipedia page).
-
-2. Select **Save** and you'll see that the brand has been added to the **Include brands** list.
-
-### Edit a brand on the include list
-
-1. Select the pencil icon next to the brand that you want to edit.
-
- You can update the category, description, or reference URL of a brand. You can't change the name of a brand because names of brands are unique. If you need to change the brand name, delete the entire brand (see next section) and create a new brand with the new name.
-
-2. Select the **Update** button to update the brand with the new information.
-
-### Delete a brand on the include list
-
-1. Select the trash icon next to the brand that you want to delete.
-2. Select **Delete** and the brand will no longer appear in your *Include brands* list.
-
-## Exclude brands from the model
-
-The **Exclude brands** section represents the brands that you don't want Azure AI Video Indexer to detect.
-
-### Add a brand to exclude list
-
-1. Select **+ Create new brand.**
-
- Provide a name (required), category (optional).
-
-2. Select **Save** and you'll see that the brand has been added to the *Exclude brands* list.
-
-### Edit a brand on the exclude list
-
-1. Select the pencil icon next to the brand that you want to edit.
-
- You can only update the category of a brand. You can't change the name of a brand because names of brands are unique. If you need to change the brand name, delete the entire brand (see next section) and create a new brand with the new name.
-
-2. Select the **Update** button to update the brand with the new information.
-
-### Delete a brand on the exclude list
-
-1. Select the trash icon next to the brand that you want to delete.
-2. Select **Delete** and the brand will no longer appear in your *Exclude brands* list.
-
-## Next steps
-
-[Customize Brands model using APIs](customize-brands-model-with-api.md)
azure-video-indexer Customize Content Models Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-content-models-overview.md
- Title: Customizing content models in Azure AI Video Indexer
-description: This article gives links to the conceptual articles that explain the benefits of each type of customization. This article also links to how-to guides that show how you can implement the customization of each model.
- Previously updated : 06/26/2019----
-# Customizing content models in Azure AI Video Indexer
---
-Azure AI Video Indexer allows you to customize some of its models to be adapted to your specific use case. These models include [brands](customize-brands-model-overview.md), [language](customize-language-model-overview.md), and [person](customize-person-model-overview.md). You can easily customize these models using the Azure AI Video Indexer website or API.
-
-This article gives links to articles that explain the benefits of each type of customization. The article also links to how-to guides that show how you can implement the customization of each model.
-
-## Brands model
-
-* [Customizing the brands model overview](customize-brands-model-overview.md)
-* [Customizing the brands model using the Azure AI Video Indexer website](customize-brands-model-with-website.md)
-* [Customizing the brands model using the Azure AI Video Indexer API](customize-brands-model-with-api.md)
-
-## Language model
-
-* [Customizing language models overview](customize-language-model-overview.md)
-* [Customizing language models using the Azure AI Video Indexer website](customize-language-model-with-website.md)
-* [Customizing language models using the Azure AI Video Indexer API](customize-language-model-with-api.md)
-
-## Person model
-
-* [Customizing person models overview](customize-person-model-overview.md)
-* [Customizing person models using the Azure AI Video Indexer website](customize-person-model-with-website.md)
-* [Customizing person models using the Azure AI Video Indexer API](customize-person-model-with-api.md)
-
-## Next steps
-
-[Azure AI Video Indexer overview](video-indexer-overview.md)
azure-video-indexer Customize Language Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-language-model-overview.md
- Title: Customize a Language model in Azure AI Video Indexer - Azure
-description: This article gives an overview of what is a Language model in Azure AI Video Indexer and how to customize it.
- Previously updated : 11/23/2022----
-# Customize a Language model with Azure AI Video Indexer
--
-Azure AI Video Indexer supports automatic speech recognition through integration with the Microsoft [Custom Speech Service](https://azure.microsoft.com/services/cognitive-services/custom-speech-service/). You can customize the Language model by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to. Once you train your model, new words appearing in the adaptation text will be recognized, assuming default pronunciation, and the Language model will learn new probable sequences of words. See the list of languages supported by Azure AI Video Indexer in [supported languages](language-support.md).
-
-Let's take a word that is highly specific, like *"Kubernetes"* (in the context of Azure Kubernetes service), as an example. Since the word is new to Azure AI Video Indexer, it's recognized as *"communities"*. You need to train the model to recognize it as *"Kubernetes"*. In other cases, the words exist, but the Language model isn't expecting them to appear in a certain context. For example, *"container service"* isn't a 2-word sequence that a nonspecialized Language model would recognize as a specific set of words.
-
-There are two ways to customize a language model:
-- **Option 1**: Edit the transcript that was generated by Azure AI Video Indexer. By editing and correcting the transcript, you're training a language model to provide improved results in the future.
-- **Option 2**: Upload text file(s) to train the language model. The upload file can either contain a list of words as you would like them to appear in the Video Indexer transcript or the relevant words included naturally in sentences and paragraphs. As better results are achieved with the latter approach, it's recommended for the upload file to contain full sentences or paragraphs related to your content.
-
-> [!Important]
-> Do not include in the upload file the words or sentences as currently incorrectly transcribed (for example, *"communities"*) as this will negate the intended impact.
-> Only include the words as you would like them to appear (for example, *"Kubernetes"*).
-
-You can use the Azure AI Video Indexer APIs or the website to create and edit custom Language models, as described in articles in the [Next steps](#next-steps) section of this article.
-
-## Best practices for custom Language models
-
-Azure AI Video Indexer learns based on probabilities of word combinations, so to learn best:
-
-* Give enough real examples of sentences as they would be spoken.
-* Put only one sentence per line, not more. Otherwise the system will learn probabilities across sentences.
-* It's okay to put one word as a sentence to boost the word against others, but the system learns best from full sentences.
-* When introducing new words or acronyms, if possible, give as many examples of usage in a full sentence to give as much context as possible to the system.
-* Try to put several adaptation options, and see how they work for you.
-* Avoid repetition of the exact same sentence multiple times. It may create bias against the rest of the input.
-* Avoid including uncommon symbols (~, #, @, %, &) as they'll get discarded. The sentences in which they appear will also get discarded.
-* Avoid putting too large inputs, such as hundreds of thousands of sentences, because doing so will dilute the effect of boosting.
-
-## Next steps
-
-[Customize Language model using APIs](customize-language-model-with-api.md)
-
-[Customize Language model using the website](customize-language-model-with-website.md)
azure-video-indexer Customize Language Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-language-model-with-api.md
- Title: Customize a Language model with Azure AI Video Indexer API
-description: Learn how to customize a Language model with the Azure AI Video Indexer API.
- Previously updated : 02/04/2020-----
-# Customize a Language model with the Azure AI Video Indexer API
--
-Azure AI Video Indexer lets you create custom Language models to customize speech recognition by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to. Once you train your model, new words appearing in the adaptation text will be recognized.
-
-For a detailed overview and best practices for custom Language models, see [Customize a Language model with Azure AI Video Indexer](customize-language-model-overview.md).
-
-You can use the Azure AI Video Indexer APIs to create and edit custom Language models in your account, as described in this article. You can also use the website, as described in [Customize Language model using the Azure AI Video Indexer website](customize-language-model-with-website.md).
-
-## Create a Language model
-
-The [create a language model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Language-Model) API creates a new custom Language model in the specified account. You can upload files for the Language model in this call. Alternatively, you can create the Language model here and upload files for the model later by updating the Language model.
-
-> [!NOTE]
-> You must still train the model with its enabled files for the model to learn the contents of its files. Directions on training a language are in the next section.
-
-To upload files to be added to the Language model, you must upload files in the body using FormData in addition to providing values for the required parameters above. There are two ways to do this task:
-
-* Key is the file name and value is the txt file.
-* Key is the file name and value is a URL to txt file.
-
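-As a rough illustration, the following sketch creates a Language model and uploads one adaptation text file as form data, matching the first option above (the key is the file name, the value is the txt file). The `Customization/Language` path and the `name` and `language` query parameters are assumptions; check the create a language model API reference for the exact shape.
-
-```python
-# A minimal sketch: create a Language model and attach one adaptation text file as form data.
-# The Customization/Language path and query parameter names are assumptions; see the API reference.
-import requests
-
-location = "trial"
-account_id = "<account-id>"
-
-with open("adaptation.txt", "rb") as adaptation_file:
-    response = requests.post(
-        f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Customization/Language",
-        params={
-            "accessToken": "<account-access-token>",
-            "name": "TestModel",
-            "language": "en-US",
-        },
-        files={"adaptation.txt": adaptation_file},   # key = file name, value = txt file
-    )
-
-response.raise_for_status()
-print(response.json()["id"])
-```
-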
-### Response
-
-The response provides metadata on the newly created Language model along with metadata on each of the model's files following the format of this example JSON output:
-
-```json
-{
- "id": "dfae5745-6f1d-4edd-b224-42e1ab57a891",
- "name": "TestModel",
- "language": "En-US",
- "state": "None",
- "languageModelId": "00000000-0000-0000-0000-000000000000",
- "files": [
- {
- "id": "25be7c0e-b6a6-4f48-b981-497e920a0bc9",
- "name": "hellofile",
- "enable": true,
- "creator": "John Doe",
- "creationTime": "2018-04-28T11:55:34.6733333"
- },
- {
- "id": "33025f5b-2354-485e-a50c-4e6b76345ca7",
- "name": "worldfile",
- "enable": true,
- "creator": "John Doe",
- "creationTime": "2018-04-28T11:55:34.86"
- }
- ]
-}
-
-```
-
-## Train a Language model
-
-The [train a language model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Train-Language-Model) API trains a custom Language model in the specified account with the contents in the files that were uploaded to and enabled in the language model.
-
-> [!NOTE]
-> You must first create the Language model and upload its files. You can upload files when creating the Language model or by updating the Language model.
-
-### Response
-
-The response provides metadata on the newly trained Language model along with metadata on each of the model's files following the format of this example JSON output:
-
-```json
-{
- "id": "41464adf-e432-42b1-8e09-f52905d7e29d",
- "name": "TestModel",
- "language": "En-US",
- "state": "Waiting",
- "languageModelId": "531e5745-681d-4e1d-b124-12e5ab57a891",
- "files": [
- {
- "id": "84fcf1ac-1952-48f3-b372-18f768eedf83",
- "name": "RenamedFile",
- "enable": false,
- "creator": "John Doe",
- "creationTime": "2018-04-27T20:10:10.5233333"
- },
- {
- "id": "9ac35b4b-1381-49c4-9fe4-8234bfdd0f50",
- "name": "hellofile",
- "enable": true,
- "creator": "John Doe",
- "creationTime": "2018-04-27T20:10:10.68"
- }
- ]
-}
-```
-
-The returned `id` is a unique ID used to distinguish between language models, while `languageModelId` is used both for [uploading a video to index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) and [reindexing a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) APIs (also known as `linguisticModelId` in Azure AI Video Indexer upload/reindex APIs).
-
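-As a hedged example, the sketch below reindexes an existing video with a trained model by passing the `languageModelId` as the `linguisticModelId` parameter mentioned above. The HTTP method, path, and remaining parameter names are assumptions; check the reindex video API reference.
-
-```python
-# A minimal sketch: reindex a video with a trained custom Language model.
-# The ReIndex path and HTTP method are assumptions; `linguisticModelId` is named in the text above.
-import requests
-
-location = "trial"
-account_id = "<account-id>"
-video_id = "<video-id>"
-
-response = requests.put(
-    f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Videos/{video_id}/ReIndex",
-    params={
-        "accessToken": "<account-access-token>",
-        "linguisticModelId": "<languageModelId-from-the-trained-model>",
-    },
-)
-response.raise_for_status()
-```
-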
-## Delete a Language model
-
-The [delete a language model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Language-Model) API deletes a custom Language model from the specified account. Any video that was using the deleted Language model keeps the same index until you reindex the video. If you reindex the video, you can assign a new Language model to the video. Otherwise, Azure AI Video Indexer uses its default model to reindex the video.
-
-### Response
-
-There's no returned content when the Language model is deleted successfully.
-
-## Update a Language model
-
-The [update a Language model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Language-Model) API updates a custom Language model in the specified account.
-
-> [!NOTE]
-> You must have already created the Language model. You can use this call to enable or disable all files under the model, update the name of the Language model, and upload files to be added to the language model.
-
-To upload files to be added to the Language model, you must upload files in the body using FormData in addition to providing values for the required parameters above. There are two ways to do this task:
-
-* Key is the file name and value is the txt file.
-* Key is the file name and value is a URL to txt file.
-
-### Response
-
-The response provides metadata on the newly trained Language model along with metadata on each of the model's files following the format of this example JSON output:
-
-```json
-{
- "id": "41464adf-e432-42b1-8e09-f52905d7e29d",
- "name": "TestModel",
- "language": "En-US",
- "state": "Waiting",
- "languageModelId": "531e5745-681d-4e1d-b124-12e5ab57a891",
- "files": [
- {
- "id": "84fcf1ac-1952-48f3-b372-18f768eedf83",
- "name": "RenamedFile",
- "enable": true,
- "creator": "John Doe",
- "creationTime": "2018-04-27T20:10:10.5233333"
- },
- {
- "id": "9ac35b4b-1381-49c4-9fe4-8234bfdd0f50",
- "name": "hellofile",
- "enable": true,
- "creator": "John Doe",
- "creationTime": "2018-04-27T20:10:10.68"
- }
- ]
-}
-```
-
-Use the `id` of the files returned in the response to download the contents of the file.
-
-## Update a file from a Language model
-
-The [update a file](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Language-Model-file) allows you to update the name and `enable` state of a file in a custom Language model in the specified account.
-
-### Response
-
-The response provides metadata on the file that you updated following the format of the example JSON output below.
-
-```json
-{
- "id": "84fcf1ac-1952-48f3-b372-18f768eedf83",
- "name": "RenamedFile",
- "enable": false,
- "creator": "John Doe",
- "creationTime": "2018-04-27T20:10:10.5233333"
-}
-```
-
-Use the `id` of the file returned in the response to download the contents of the file.
-
-## Get a specific Language model
-
-The [get](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Language-Model) API returns information on the specified Language model in the specified account such as language and the files that are in the Language model.
-
-### Response
-
-The response provides metadata on the specified Language model along with metadata on each of the model's files following the format of this example JSON output:
-
-```json
-{
- "id": "dfae5745-6f1d-4edd-b224-42e1ab57a891",
- "name": "TestModel",
- "language": "En-US",
- "state": "None",
- "languageModelId": "00000000-0000-0000-0000-000000000000",
- "files": [
- {
- "id": "25be7c0e-b6a6-4f48-b981-497e920a0bc9",
- "name": "hellofile",
- "enable": true,
- "creator": "John Doe",
- "creationTime": "2018-04-28T11:55:34.6733333"
- },
- {
- "id": "33025f5b-2354-485e-a50c-4e6b76345ca7",
- "name": "worldfile",
- "enable": true,
- "creator": "John Doe",
- "creationTime": "2018-04-28T11:55:34.86"
- }
- ]
-}
-```
-
-Use the `id` of the file returned in the response to download the contents of the file.
-
-## Get all the Language models
-
-The [get all](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Language-Models) API returns all of the custom Language models in the specified account in a list.
-
-### Response
-
-The response provides a list of all of the Language models in your account and each of their metadata and files following the format of this example JSON output:
-
-```json
-[
- {
- "id": "dfae5745-6f1d-4edd-b224-42e1ab57a891",
- "name": "TestModel",
- "language": "En-US",
- "state": "None",
- "languageModelId": "00000000-0000-0000-0000-000000000000",
- "files": [
- {
- "id": "25be7c0e-b6a6-4f48-b981-497e920a0bc9",
- "name": "hellofile",
- "enable": true,
- "creator": "John Doe",
- "creationTime": "2018-04-28T11:55:34.6733333"
- },
- {
- "id": "33025f5b-2354-485e-a50c-4e6b76345ca7",
- "name": "worldfile",
- "enable": true,
- "creator": "John Doe",
- "creationTime": "2018-04-28T11:55:34.86"
- }
- ]
- },
- {
- "id": "dfae5745-6f1d-4edd-b224-42e1ab57a892",
- "name": "AnotherTestModel",
- "language": "En-US",
- "state": "None",
- "languageModelId": "00000000-0000-0000-0000-000000000001",
- "files": []
- }
-]
-```
-
-## Delete a file from a Language model
-
-The [delete](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Language-Model-File) API deletes the specified file from the specified Language model in the specified account.
-
-### Response
-
-There's no returned content when the file is deleted from the Language model successfully.
-
-## Get metadata on a file from a Language model
-
-The [get metadata of a file](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Language-Model-File-Data) API returns the contents of and metadata on the specified file from the chosen Language model in your account.
-
-### Response
-
-The response provides the contents and metadata of the file in JSON format, similar to this example:
-
-```json
-{
- "content": "hello\r\nworld",
- "id": "84fcf1ac-1952-48f3-b372-18f768eedf83",
- "name": "Hello",
- "enable": true,
- "creator": "John Doe",
- "creationTime": "2018-04-27T20:10:10.5233333"
-}
-```
-
-> [!NOTE]
-> The contents of this example file are the words "hello" and "world" on two separate lines.
-
-## Download a file from a Language model
-
-The [download a file](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Download-Language-Model-File-Content) API downloads a text file containing the contents of the specified file from the specified Language model in the specified account. This text file should match the contents of the text file that was originally uploaded.
-
-### Response
-
-The response is the download of a text file with the contents of the file in the JSON format.
-
-## Next steps
-
-[Customize Language model using website](customize-language-model-with-website.md)
azure-video-indexer Customize Language Model With Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-language-model-with-website.md
- Title: Customize Language model with Azure AI Video Indexer website
-description: Learn how to customize a Language model with the Azure AI Video Indexer website.
- Previously updated : 08/10/2020-----
-# Customize a Language model with the Azure AI Video Indexer website
--
-Azure AI Video Indexer lets you create custom Language models to customize speech recognition by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to. Once you train your model, new words appearing in the adaptation text will be recognized.
-
-For a detailed overview and best practices for custom language models, see [Customize a Language model with Azure AI Video Indexer](customize-language-model-overview.md).
-
-You can use the Azure AI Video Indexer website to create and edit custom Language models in your account, as described in this topic. You can also use the API, as described in [Customize Language model using APIs](customize-language-model-with-api.md).
-
-## Create a Language model
-
-1. Go to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website and sign in.
-1. To customize a model in your account, select the **Content model customization** button on the left of the page.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/customize-language-model/model-customization.png" alt-text="Customize content model in Azure AI Video Indexer ":::
-1. Select the **Language** tab.
-
- You see a list of supported languages.
-1. Under the language that you want, select **Add model**.
-1. Type in the name for the Language model and hit enter.
-
- This step creates the model and gives the option to upload text files to the model.
-1. To add a text file, select **Add file**. Your file explorer will open.
-1. Navigate to and select the text file. You can add multiple text files to a Language model.
-
- You can also add a text file by selecting the **...** button on the right side of the Language model and selecting **Add file**.
-1. Once you're done uploading the text files, select the green **Train** option.
-
-The training process can take a few minutes. Once the training is done, you see **Trained** next to the model. You can preview, download, and delete the file from the model.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/customize-language-model/customize-language-model.png" alt-text="Train the model":::
-
-### Using a Language model on a new video
-
-To use your Language model on a new video, do one of the following actions:
-
-* Select the **Upload** button on the top of the page.
-
- ![Upload button Azure AI Video Indexer](./media/customize-language-model/upload.png)
-* Drop your audio or video file or browse for your file.
-
-You're given the option to select the **Video source language**. Select the drop-down and select a Language model that you created from the list. It should say the language of your Language model and the name that you gave it in parentheses. For example:
-
-![Choose video source language - Reindex a video with Azure AI Video Indexer](./media/customize-language-model/reindex.png)
-
-Select the **Upload** option at the bottom of the page, and your new video will be indexed using your Language model.
-
-### Using a Language model to reindex
-
-To use your Language model to reindex a video in your collection, follow these steps:
-
-1. Sign in to the [Azure AI Video Indexer](https://www.videoindexer.ai/) home page.
-1. Select the **...** button on the video and select **Re-index**.
-1. You're given the option to select the **Video source language** to reindex your video with. Select the drop-down and select a Language model that you created from the list. It should say the language of your language model and the name that you gave it in parentheses.
-1. Select the **Re-index** button and your video will be reindexed using your Language model.
-
-## Edit a Language model
-
-You can edit a Language model by changing its name, adding files to it, and deleting files from it.
-
-If you add or delete files from the Language model, you'll have to train the model again by selecting the green **Train** option.
-
-### Rename the Language model
-
-You can change the name of the Language model by selecting the ellipsis (**...**) button on the right side of the Language model and selecting **Rename**.
-
-Type in the new name and hit enter.
-
-### Add files
-
-To add a text file, select **Add file**. Your file explorer will open.
-
-Navigate to and select the text file. You can add multiple text files to a Language model.
-
-You can also add a text file by selecting the ellipsis (**...**) button on the right side of the Language model and selecting **Add file**.
-
-### Delete files
-
-To delete a file from the Language model, select the ellipsis (**...**) button on the right side of the text file and select **Delete**. A new window pops up telling you that the deletion can't be undone. Select the **Delete** option in the new window.
-
-This action removes the file completely from the Language model.
-
-## Delete a Language model
-
-To delete a Language model from your account, select the ellipsis (**...**) button on the right side of the Language model and select **Delete**.
-
-A new window pops up telling you that the deletion can't be undone. Select the **Delete** option in the new window.
-
-This action removes the Language model completely from your account. Any video that was using the deleted Language model will keep the same index until you reindex the video. If you reindex the video, you can assign a new Language model to the video. Otherwise, Azure AI Video Indexer will use its default model to reindex the video.
-
-## Customize Language models by correcting transcripts
-
-Azure AI Video Indexer supports automatic customization of Language models based on the actual corrections users make to the transcriptions of their videos.
-
-1. To make corrections to a transcript, open up the video that you want to edit from your Account Videos. Select the **Timeline** tab.
-
-   ![Customize language model timeline tab - Azure AI Video Indexer](./media/customize-language-model/timeline.png)
-
-1. Select the pencil icon to edit the transcript of your transcription.
-
-   ![Customize language model edit transcription - Azure AI Video Indexer](./media/customize-language-model/edits.png)
-
- Azure AI Video Indexer captures all lines that are corrected by you in the transcription of your video and adds them automatically to a text file called "From transcript edits". These edits are used to retrain the specific Language model that was used to index this video.
-
- The edits that were done in the [widget's](video-indexer-embed-widgets.md) timeline are also included.
-
- If you didn't specify a Language model when indexing this video, all edits for this video will be stored in a default Language model called "Account adaptations" within the detected language of the video.
-
- In case multiple edits have been made to the same line, only the last version of the corrected line will be used for updating the Language model.
-
- > [!NOTE]
- > Only textual corrections are used for the customization. Corrections that don't involve actual words (for example, punctuation marks or spaces) aren't included.
-
-1. You'll see transcript corrections show up in the Language tab of the Content model customization page.
-
- To look at the "From transcript edits" file for each of your Language models, select it to open it.
-
-   ![From transcript edits - Azure AI Video Indexer](./media/customize-language-model/from-transcript-edits.png)
-
-## Next steps
-
-[Customize language model using APIs](customize-language-model-with-api.md)
azure-video-indexer Customize Person Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-person-model-overview.md
- Title: Customize a Person model in Azure AI Video Indexer - Azure
-description: This article gives an overview of what is a Person model in Azure AI Video Indexer and how to customize it.
- Previously updated : 05/15/2019----
-# Customize a Person model in Azure AI Video Indexer
---
-Azure AI Video Indexer supports celebrity recognition in your videos. The celebrity recognition feature covers approximately one million faces based on commonly requested data sources such as IMDb, Wikipedia, and top LinkedIn influencers. Faces that aren't recognized by Azure AI Video Indexer are still detected but are left unnamed. Customers can build custom Person models and enable Azure AI Video Indexer to recognize faces that aren't recognized by default. Customers can build these Person models by pairing a person's name with image files of the person's face.
-
-If your account caters to different use-cases, you can benefit from being able to create multiple Person models per account. For example, if the content in your account is meant to be sorted into different channels, you might want to create a separate Person model for each channel.
-
-> [!NOTE]
-> Each Person model supports up to 1 million people and each account has a limit of 50 Person models.
-
-Once a model is created, you can use it by providing the model ID of a specific Person model when uploading/indexing or reindexing a video. Training a new face for a video updates the specific custom model that the video was associated with.
-
-If you don't need the multiple Person model support, don't assign a Person model ID to your video when uploading/indexing or reindexing. In this case, Azure AI Video Indexer will use the default Person model in your account.
-
-You can use the Azure AI Video Indexer website to edit faces that were detected in a video and to manage multiple custom Person models in your account, as described in the [Customize a Person model using a website](customize-person-model-with-website.md) article. You can also use the API, as described in [Customize a Person model using APIs](customize-person-model-with-api.md).
azure-video-indexer Customize Person Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-person-model-with-api.md
- Title: Customize a Person model with Azure AI Video Indexer API
-description: Learn how to customize a Person model with the Azure AI Video Indexer API.
- Previously updated : 01/14/2020-----
-# Customize a Person model with the Azure AI Video Indexer API
---
-Azure AI Video Indexer supports face detection and celebrity recognition for video content. The celebrity recognition feature covers about one million faces based on commonly requested data sources such as IMDb, Wikipedia, and top LinkedIn influencers. Faces that aren't recognized by the celebrity recognition feature are detected but left unnamed. After you upload your video to Azure AI Video Indexer and get results back, you can go back and name the faces that weren't recognized. Once you label a face with a name, the face and name get added to your account's Person model. Azure AI Video Indexer will then recognize this face in your future videos and past videos.
-
-You can use the Azure AI Video Indexer API to edit faces that were detected in a video, as described in this topic. You can also use the Azure AI Video Indexer website, as described in [Customize Person model using the Azure AI Video Indexer website](customize-person-model-with-website.md).
-
-## Managing multiple Person models
-
-Azure AI Video Indexer supports multiple Person models per account. This feature is currently available only through the Azure AI Video Indexer APIs.
-
-If your account caters to different use-case scenarios, you might want to create multiple Person models per account. For example, if your content is related to sports, you can then create a separate Person model for each sport (football, basketball, soccer, and so on).
-
-Once a model is created, you can use it by providing the model ID of a specific Person model when uploading/indexing or reindexing a video. Training a new face for a video updates the specific custom model that the video was associated with.
-
-Each account has a limit of 50 Person models. If you don't need the multiple Person model support, don't assign a Person model ID to your video when uploading/indexing or reindexing. In this case, Azure AI Video Indexer uses the default custom Person model in your account.
-
-## Create a new Person model
-
-To create a new Person model in the specified account, use the [create a person model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Person-Model) API.
-
-The response provides the name and generated model ID of the Person model that you just created following the format of the example below.
-
-```json
-{
- "id": "227654b4-912c-4b92-ba4f-641d488e3720",
- "name": "Example Person Model"
-}
-```
-
-You then use the **id** value for the **personModelId** parameter when [uploading a video to index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) or [reindexing a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video).
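-
-For illustration, the following PowerShell sketch chains these two calls: it creates a Person model, captures the generated **id**, and passes it as the **personModelId** parameter when uploading a video. The request URL patterns and query parameter names (other than `personModelId`, which is named above) are assumptions based on the API developer portal; confirm them against the linked operations before use.
-
-```powershell
-# Assumptions: $accessToken, $accountId, and $location are already set, and the URL patterns
-# below match the create Person model and upload video operations in the API portal.
-$baseUri = "https://api.videoindexer.ai/$location/Accounts/$accountId"
-
-# Create a Person model and capture the generated id from the response.
-$personModel = Invoke-RestMethod -Method Post `
-    -Uri "$baseUri/Customization/PersonModels?name=Example%20Person%20Model&accessToken=$accessToken"
-
-# Use the id as the personModelId parameter when uploading a video for indexing.
-$videoUrl = [uri]::EscapeDataString("https://contoso.com/videos/example.mp4")  # hypothetical source video
-Invoke-RestMethod -Method Post `
-    -Uri "$baseUri/Videos?name=Example%20Video&videoUrl=$videoUrl&personModelId=$($personModel.id)&accessToken=$accessToken"
-```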
-
-## Delete a Person model
-
-To delete a custom Person model from the specified account, use the [delete a person model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Person-Model) API.
-
-Once the Person model is deleted successfully, the index of your current videos that were using the deleted model will remain unchanged until you reindex them. Upon reindexing, the faces that were named in the deleted model won't be recognized by Azure AI Video Indexer in your current videos that were indexed using that model but the faces will still be detected. Your current videos that were indexed using the deleted model will now use your account's default Person model. If faces from the deleted model are also named in your account's default model, those faces will continue to be recognized in the videos.
-
-There's no returned content when the Person model is deleted successfully.
-
-## Get all Person models
-
-To get all Person models in the specified account, use the [get a person model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Person-Models) API.
-
-The response provides a list of all of the Person models in your account (including the default Person model in the specified account) and each of their names and IDs following the format of the example below.
-
-```json
-[
- {
- "id": "59f9c326-b141-4515-abe7-7d822518571f",
- "name": "Default"
- },
- {
- "id": "9ef2632d-310a-4510-92e1-cc70ae0230d4",
- "name": "Test"
- }
-]
-```
-
-You can choose which model you want to use for a video by using the `id` value of the Person model for the `personModelId` parameter when [uploading a video to index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) or [reindexing a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video).
-
-## Update a face
-
-This command allows you to update a face in your video with a name using the ID of the video and ID of the face. This action then updates the Person model that the video was associated with upon uploading/indexing or reindexing. If no Person model was assigned, it updates the account's default Person model.
-
-The system then recognizes the occurrences of the same face in your other current videos that share the same Person model. Recognition of the face in your other current videos might take some time to take effect as this is a batch process.
-
-You can update a face that Azure AI Video Indexer recognized as a celebrity with a new name. The new name that you give will take precedence over the built-in celebrity recognition.
-
-To update the face, use the [update a video face](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Face) API.
-
-Names are unique for Person models, so if you give two different faces in the same Person model the same `name` parameter value, Azure AI Video Indexer views the faces as the same person and converges them once you reindex your video.
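-
-As a rough sketch, the call might look like the following in PowerShell. The route and the query parameter name shown here (`newName`) are assumptions; confirm them against the update video face operation in the API portal before use.
-
-```powershell
-# Assumptions: $accessToken, $accountId, $location, $videoId, and $faceId are already known
-# (the face ID comes from the video's index), and the route/parameter names match the API portal.
-$uri = "https://api.videoindexer.ai/$location/Accounts/$accountId/Videos/$videoId/Index/Faces/$faceId" +
-       "?newName=$([uri]::EscapeDataString('Jane Doe'))&accessToken=$accessToken"
-Invoke-RestMethod -Method Put -Uri $uri
-```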
-
-## Next steps
-
-[Customize Person model using the Azure AI Video Indexer website](customize-person-model-with-website.md)
azure-video-indexer Customize Person Model With Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-person-model-with-website.md
- Title: Customize a Person model with Azure AI Video Indexer website
-description: Learn how to customize a Person model with the Azure AI Video Indexer website.
- Previously updated : 05/31/2022----
-# Customize a Person model with the Azure AI Video Indexer website
---
-Azure AI Video Indexer supports celebrity recognition for video content. The celebrity recognition feature covers approximately one million faces based on commonly requested data sources such as IMDb, Wikipedia, and top LinkedIn influencers. For a detailed overview, see [Customize a Person model in Azure AI Video Indexer](customize-person-model-overview.md).
-
-You can use the Azure AI Video Indexer website to edit faces that were detected in a video, as described in this article. You can also use the API, as described in [Customize a Person model using APIs](customize-person-model-with-api.md).
-
-## Central management of Person models in your account
-
-1. To view, edit, and delete the Person models in your account, browse to the Azure AI Video Indexer website and sign in.
-1. Select the content model customization button on the left of the page.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/content-model-customization/content-model-customization.png" alt-text="Customize content model":::
-1. Select the People tab.
-
- You'll see the Default Person model in your account. The Default Person model holds any faces you may have edited or changed in the insights of your videos for which you didn't specify a custom Person model during indexing.
-
- If you created other Person models, they'll also be listed on this page.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/customize-face-model/content-model-customization-people-tab.png" alt-text="Customize people":::
-
-## Create a new Person model
-
-1. Select the **+ Add model** button on the right.
-1. Enter the name of the model and select the check button to save the new model created. You can now add new people and faces to the new Person model.
-1. Select the list menu button and choose **+ Add person**.
-
- > [!div class="mx-imgBorder"]
-   > :::image type="content" source="./media/customize-face-model/add-new-person.png" alt-text="Add a person":::
-
-## Add a new person to a Person model
-
-> [!NOTE]
-> Azure AI Video Indexer allows you to add multiple people with the same name in a Person model. However, it's recommended you give unique names to each person in your model for usability and clarity.
-
-1. To add a new face to a Person model, select the list menu button next to the Person model that you want to add the face to.
-1. Select **+ Add person** from the menu.
-
- A pop-up will prompt you to fill out the Person's details. Type in the name of the person and select the check button.
-
-   You can then browse from your file explorer or drag and drop images of the person's face. Azure AI Video Indexer accepts all standard image file types (for example, JPG and PNG).
-
- Azure AI Video Indexer can detect occurrences of this person in the future videos that you index and the current videos that you had already indexed, using the Person model to which you added this new face. Recognition of the person in your current videos might take some time to take effect, as this is a batch process.
-
-## Rename a Person model
-
-You can rename any Person model in your account including the Default Person model. Even if you rename your default Person model, it will still serve as the Default person model in your account.
-
-1. Select the list menu button next to the Person model that you want to rename.
-1. Select **Rename** from the menu.
-1. Select the current name of the model and type in your new name.
-1. Select the check button for your model to be renamed.
-
-## Delete a Person model
-
-You can delete any Person model that you created in your account. However, you can't delete your Default person model.
-
-1. Select **Delete** from the menu.
-
- A pop-up will show up and notify you that this action will delete the Person model and all of the people and files that it contains. This action can't be undone.
-1. If you're sure, select delete again.
-
-> [!NOTE]
-> The existing videos that were indexed using this (now deleted) Person model won't support the ability for you to update the names of the faces that appear in the video. You'll be able to edit the names of faces in these videos only after you reindex them using another Person model. If you reindex without specifying a Person model, the default model will be used.
-
-## Manage existing people in a Person model
-
-To look at the contents of any of your Person models, select the arrow next to the name of the Person model. Then you can view all of the people in that particular Person model. If you select the list menu button next to each of the people, you see manage, rename, and delete options.
-
-![Screenshot shows a contextual menu with options to Manage, Rename, and Delete.](./media/customize-face-model/manage-people.png)
-
-### Rename a person
-
-1. To rename a person in your Person model, select the list menu button and choose **Rename** from the list menu.
-1. Select the current name of the person and type in your new name.
-1. Select the check button, and the person will be renamed.
-
-### Delete a person
-
-1. To delete a person from your Person model, select the list menu button and choose **Delete** from the list menu.
-1. A pop-up tells you that this action will delete the person and that this action can't be undone.
-1. Select **Delete** again and this will remove the person from the Person model.
-
-### Check if a person already exists
-
-You can use the search to check if a person already exists in the model.
-
-### Manage a person
-
-If you select **Manage**, you see the **Person's details** window with all the faces that this Person model is being trained from. These faces come from occurrences of that person in videos that use this Person model or from images that you've manually uploaded.
-
-> [!TIP]
-> You can get to the **Person's details** window by clicking on the person's name or by clicking **Manage**, as shown above.
-
-#### Add a face
-
-You can add more faces to the person by selecting **Add images**.
-
-#### Delete a face
-
-Select the image you wish to delete and click **Delete**.
-
-#### Rename and delete a person
-
-You can use the manage pane to rename the person and to delete the person from the Person model.
-
-## Use a Person model to index a video
-
-You can use a Person model to index your new video by assigning the Person model during the upload of the video.
-
-To use your Person model on a new video, do the following steps:
-
-1. Select the **Upload** button on the right of the page.
-1. Drop your video file or browse for your file.
-1. Select the **Advanced options** arrow.
-1. Select the drop-down and select the Person model that you created.
-1. Select the **Upload** option in the bottom of the page, and your new video will be indexed using your Person model.
-
-If you don't specify a Person model during the upload, Azure AI Video Indexer will index the video using the Default Person model in your account.
-
-## Use a Person model to reindex a video
-
-To use a Person model to reindex a video in your collection, go to your account videos on the Azure AI Video Indexer home page, and hover over the name of the video that you want to reindex.
-
-You see options to edit, delete, and reindex your video.
-
-1. Select the option to reindex your video.
-
- ![Screenshot shows Account videos and the option to reindex your video.](./media/customize-face-model/reindex.png)
-
- You can now select the Person model to reindex your video with.
-1. Select the drop-down and select the Person model that you want to use.
-1. Select the **Reindex** button and your video will be reindexed using your Person model.
-
-Any new edits that you make to the faces detected and recognized in the video that you just reindexed will be saved in the Person model that you used to reindex the video.
-
-## Managing people in your videos
-
-You can manage the faces that are detected and people that are recognized in the videos that you index by editing and deleting faces.
-
-Deleting a face removes a specific face from the insights of the video.
-
-Editing a face renames a face that's detected and possibly recognized in your video. When you edit a face in your video, that name is saved as a person entry in the Person model that was assigned to the video during upload and indexing.
-
-If you don't assign a Person model to the video during upload, your edit is saved in your account's Default person model.
-
-### Edit a face
-
-> [!NOTE]
-> If a Person model has two or more different people with the same name, you won't be able to tag that name within the videos that use that Person model. You'll only be able to make changes to people that share that name in the People tab of the content model customization page in Azure AI Video Indexer. For this reason, it's recommended that you give unique names to each person in your Person model.
-
-1. Browse to the Azure AI Video Indexer website and sign in.
-1. Search for a video you want to view and edit in your account.
-1. To edit a face in your video, go to the Insights tab and select the pencil icon on the top-right corner of the window.
-
- ![Screenshot shows a video with an unknown face to select.](./media/customize-face-model/edit-face.png)
-
-1. Select any of the detected faces and change their names from "Unknown #X" (or the name that was previously assigned to the face).
-1. After typing in the new name, select the check icon next to the new name. This action saves the new name and recognizes and names all occurrences of this face in your other current videos and in the future videos that you upload. Recognition of the face in your other current videos might take some time to take effect as this is a batch process.
-
-If you name a face with the name of an existing person in the Person model that the video is using, the detected face images from this video of that person will merge with what already exists in the model. If you name a face with a new name, a new Person entry is created in the Person model that the video is using.
-
-### Delete a face
-
-To delete a detected face in your video, go to the Insights pane and select the pencil icon in the top-right corner of the pane. Select the **Delete** option underneath the name of the face. This action removes the detected face from the video. The person's face will still be detected in the other videos in which it appears, but you can delete the face from those videos as well after they've been indexed.
-
-The person, if they had been named, will also continue to exist in the Person model that was used to index the video from which you deleted the face unless you specifically delete the person from the Person model.
-
-## Optimize the ability of your model to recognize a person
-
-To optimize your model's ability to recognize the person, upload as many different images as possible, from different angles. To get optimal results, use high-resolution images.
-
-## Next steps
-
-[Customize Person model using APIs](customize-person-model-with-api.md)
azure-video-indexer Customize Speech Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-speech-model-overview.md
- Title: Customize a speech model in Azure AI Video Indexer
-description: This article gives an overview of what is a speech model in Azure AI Video Indexer.
- Previously updated : 03/06/2023----
-# Customize a speech model
---
-Through Azure AI Video Indexer integration with [Azure AI Speech services](../ai-services/speech-service/captioning-concepts.md), a Universal Language Model is utilized as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pretrained with dialects and phonetics representing various common domains. The base model works well in most speech recognition scenarios.
-
-However, sometimes the base model's transcription doesn't accurately handle some content. In these situations, a customized speech model can be used to improve recognition of domain-specific vocabulary or pronunciation that is specific to your content by providing text data to train the model. Through the process of creating and adapting speech customization models, your content can be properly transcribed. There's no additional charge for using Video Indexer's speech customization.
-
-## When to use a customized speech model?
-
-If your content contains industry-specific terminology, or if you notice inaccuracies when reviewing Video Indexer transcription results, you can create and train a custom speech model to recognize the terms and improve the transcription quality. It may only be worthwhile to create a custom model if the relevant words and names are expected to appear repeatedly in the content you plan to index. Training a model is sometimes an iterative process, and you might find that after the initial training, results could still use improvement and would benefit from additional training; see the [How to improve your custom models](#how-to-improve-your-custom-models) section for guidance.
-
-However, if you notice a few words or names transcribed incorrectly in the transcript, a custom speech model might not be needed, especially if the words or names aren't expected to be commonly used in content you plan on indexing in the future. You can just edit and correct the transcript in the Video Indexer website (see [View and update transcriptions in Azure AI Video Indexer website](edit-transcript-lines-portal.md)) and don't have to address it through a custom speech model.
-
-For a list of languages that support custom models and pronunciation, see the Customization and Pronunciation columns of the language support table in [Language support in Azure AI Video Indexer](language-support.md).
-
-## Train datasets
-
-When indexing a video, you can use a customized speech model to improve the transcription. Models are trained by loading them with [datasets](../ai-services/speech-service/how-to-custom-speech-test-and-train.md) that can include plain text data and pronunciation data.
-
-Text used to test and train a custom model should include samples from a diverse set of content and scenarios that you want your model to recognize. Consider the following factors when creating and training your datasets:
-
-- Include text that covers the kinds of verbal statements that your users make when they're interacting with your model. For example, if your content is primarily related to a sport, train the model with content containing terminology and subject matter related to the sport.
-- Include all speech variances that you want your model to recognize. Many factors can vary speech, including accents, dialects, and language-mixing.
-- Only include data that is relevant to content you're planning to transcribe. Including other data can harm recognition quality overall.
-
-### Dataset types
-
-There are two dataset types that you can use for customization. To help determine which dataset to use to address your problems, refer to the following table:
-
-|Use case|Data type|
-|||
-|Improve recognition accuracy on industry-specific vocabulary and grammar, such as medical terminology or IT jargon. |Plain text|
-|Define the phonetic and displayed form of a word or term that has nonstandard pronunciation, such as product names or acronyms. |Pronunciation data |
-
-### Plain-text data for training
-
-A dataset including plain text sentences of related text can be used to improve the recognition of domain-specific words and phrases. Related text sentences can reduce substitution errors related to misrecognition of common words and domain-specific words by showing them in context. Domain-specific words can be uncommon or made-up words, but their pronunciation must be straightforward to be recognized.
-
-### Best practices for plain text datasets
-
-- Provide domain-related sentences in a single text file. Instead of using full sentences, you can upload a list of words. However, while this adds them to the vocabulary, it doesn't teach the system how the words are ordinarily used. By providing full or partial utterances (sentences or phrases of things that users are likely to say), the language model can learn the new words and how they're used. The custom language model is good not only for adding new words to the system, but also for adjusting the likelihood of known words for your application. Providing full utterances helps the system learn better.
-- Use text data that's close to the expected spoken utterances. Utterances don't need to be complete or grammatically correct, but they must accurately reflect the spoken input that you expect the model to recognize.
-- Try to have each sentence or keyword on a separate line.
-- To increase the weight of a term such as a product name, add several sentences that include the term.
-- For common phrases that are used in your content, providing many examples is useful because it tells the system to listen for these terms.
-- Avoid including uncommon symbols (~, # @ % &), because they get discarded. The sentences in which they appear also get discarded.
-- Avoid overly large inputs, such as hundreds of thousands of sentences, because doing so dilutes the effect of boosting.
-
-Use this table to ensure that your plain text dataset file is formatted correctly:
-
-|Property|Value|
-|||
-|Text encoding |UTF-8 BOM|
-|Number of utterances per line |1 |
-|Maximum file size |200 MB |
-
-Try to follow these guidelines in your plain text files:
-
-- Avoid repeating characters, words, or groups of words more than three times, such as "yeah yeah yeah yeah", because the service might drop lines with too many repetitions.
-- Don't use special characters or UTF-8 characters above U+00A1.
-- URIs are rejected.
-- For some languages, such as Japanese or Korean, importing large amounts of text data can take a long time or can time out. Consider dividing the dataset into multiple text files with up to 20,000 lines in each.
-
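-For example, the following PowerShell sketch writes a small plain text dataset file in the required UTF-8 BOM encoding, with one utterance per line. The file name and sentences are placeholders.
-
-```powershell
-# Hypothetical domain-related sentences, one utterance per line.
-$sentences = @(
-    "The quarterback threw a touchdown pass in the fourth quarter."
-    "The coach called a timeout before the two-point conversion."
-    "The defense forced a turnover on downs."
-)
-
-# Write the dataset with a UTF-8 byte order mark (BOM), as required for plain text datasets.
-$utf8Bom = New-Object System.Text.UTF8Encoding($true)
-[System.IO.File]::WriteAllLines("$PWD\sports-terminology.txt", $sentences, $utf8Bom)
-```
-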
-## Pronunciation data for training
-
-You can add to your custom speech model a custom pronunciation dataset to improve recognition of mispronounced words, phrases, or names.
-
-Pronunciation datasets need to include the spoken form of a word or phrase as well as the recognized displayed form. The spoken form is the phonetic sequence spelled out, such as "Triple A". It can be composed of letters, words, syllables, or a combination of all three. The recognized displayed form is how you would like the word or phrase to appear in the transcription. This table includes some examples:
-
-|Recognized displayed form |Spoken form |
-|||
-|3CPO |three c p o |
-|CNTK |c n t k |
-|AAA |Triple A |
-
-You provide pronunciation datasets in a single text file. Include the spoken utterance and a custom pronunciation for each. Each row in the file should begin with the recognized form, then a tab character, and then the space-delimited phonetic sequence.
-
-```
-3CPO three c p o
-CNTK c n t k
-IEEE i triple e
-```
-
-Consider the following when creating and training pronunciation datasets:
-
-It's not recommended to use custom pronunciation files to alter the pronunciation of common words.
-
-If there are a few variations of how a word or name is incorrectly transcribed, consider using some or all of them when training the pronunciation dataset. For example, suppose Robert is mentioned five times in the video and transcribed as Roport, Ropert, and Robbers. You can try including all variations in the file, as in the following example, but be cautious when training with actual words like Robbers: if "robbers" is spoken in the video, it's transcribed as Robert.
-
-`Robert Roport`
-`Robert Ropert`
-`Robert Robbers`
-
-A pronunciation model isn't meant to address acronyms. For example, if you want Doctor to be transcribed as Dr., this can't be achieved through a pronunciation model.
-
-Refer to the following table to ensure that your pronunciation dataset files are valid and correctly formatted.
-
-|Property |Value |
-|||
-|Text encoding |UTF-8 BOM (ANSI is also supported for English) |
-|Number of pronunciations per line |1 |
-|Maximum file size |1 MB (1 KB for free tier) |
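-
-As an illustration, this PowerShell sketch builds a small pronunciation dataset file in the expected layout: the recognized displayed form, a tab character, and then the spoken form, saved with a UTF-8 BOM. The entries reuse the examples earlier in this section; the file name is a placeholder.
-
-```powershell
-# Map of recognized displayed form -> space-delimited spoken form (from the examples above).
-$entries = [ordered]@{
-    "3CPO" = "three c p o"
-    "CNTK" = "c n t k"
-    "AAA"  = "Triple A"
-}
-
-# Each line is the displayed form, a tab character (`t), then the spoken form; save with a UTF-8 BOM.
-$lines = $entries.GetEnumerator() | ForEach-Object { "$($_.Key)`t$($_.Value)" }
-[System.IO.File]::WriteAllLines("$PWD\pronunciation.txt", $lines, (New-Object System.Text.UTF8Encoding($true)))
-```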
-
-## How to improve your custom models
-
-Training a pronunciation model can be an iterative process, as you might gain more knowledge on the pronunciation of the subject after initial training and evaluation of your model's results. Since existing models can't be edited or modified, training a model iteratively requires the creation and uploading of datasets with additional information as well as training new custom models based on the new datasets. You would then reindex the media files with the new custom speech model.
-
-Example:
-
-Let's say you plan on indexing sports content and anticipate transcript accuracy issues with specific sports terminology as well as in the names of players and coaches. Before indexing, you've created a speech model with a plain text dataset with content containing relevant sports terminology and a pronunciation dataset with some of the player and coaches' names. You index a few videos using the custom speech model and when reviewing the generated transcript, find that while the terminology is transcribed correctly, many names aren't. You can take the following steps to improve performance in the future:
-
-1. Review the transcript and note all the incorrectly transcribed names. They could fall into two groups:
-
-   - Group A: Names that aren't in the pronunciation file.
-   - Group B: Names that are in the pronunciation file but are still incorrectly transcribed.
-2. Create a new dataset file. Either download the pronunciation dataset file or modify your locally saved original. For group A, add the new names to the file with how they were incorrectly transcribed (for example, `Michael Mikel`). For group B, add additional lines, each with the correct name and a unique example of how it was incorrectly transcribed. For example:
-
-   `Stephen Steven`
-   `Stephen Steafan`
-   `Stephen Steevan`
-3. Upload this file as a new dataset file.
-4. Create a new speech model and add the original plain text dataset and the new pronunciation dataset file.
-5. Reindex the video with the new speech model.
-6. If needed, repeat steps 1-5 until the results are satisfactory.
-
-## Next steps
-
-To get started with speech customization, see:
-
-- [Customize a speech model using the API](customize-speech-model-with-api.md)
-- [Customize a speech model using the website](customize-speech-model-with-website.md)
azure-video-indexer Customize Speech Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-speech-model-with-api.md
- Title: Customize a speech model with the Azure AI Video Indexer API
-description: Learn how to customize a speech model with the Azure AI Video Indexer API.
- Previously updated : 03/06/2023----
-# Customize a speech model with the API
---
-Azure AI Video Indexer lets you create custom speech models to customize speech recognition by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to, or by aligning the pronunciation of a word or name with how it should be written.
-
-For a detailed overview and best practices for custom speech models, see [Customize a speech model with Azure AI Video Indexer](customize-speech-model-overview.md).
-
-You can use the Azure AI Video Indexer APIs to create and edit custom speech models in your account. You can also use the website, as described in [Customize speech model using the Azure AI Video Indexer website](customize-speech-model-with-website.md).
-
-The following are descriptions of some of the parameters:
-
-|Name|Type|Description|
-||||
-|`displayName`|string|The desired name of the dataset/model.|
-|`locale`|string|The language code of the dataset/model. For the full list, see [Language support](language-support.md).|
-|`kind`|integer|0 for a plain text dataset, 1 for a pronunciation dataset.|
-|`description`|string|Optional description of the dataset/model.|
-|`contentUrl`|uri|URL of the source file used in creation of the dataset.|
-|`customProperties`|object|Optional properties of the dataset/model.|
-
-## Create a speech dataset
-
-The [create speech dataset](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Speech-Dataset) API creates a dataset for training a speech model. You upload a file that is used to create a dataset with this call. The content of a dataset can't be modified after it's created.
-To upload a file to a dataset, you must update parameters in the Body, including a URL to the text file to be uploaded. The description and custom properties fields are optional. The following is a sample of the body:
-
-```json
-{
- "displayName": "Pronunciation Dataset",
- "locale": "en-US",
- "kind": "Pronunciation",
- "description": "This is a pronunciation dataset.",
-    "contentUrl": "https://contoso.com/location",
- "customProperties": {
- "tag": "Pronunciation Dataset Example"
- }
-}
-```
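-
-For example, a request with this body could be sent from PowerShell as sketched below. The request URI is a placeholder assumption; take the exact URL and query parameters from the create speech dataset operation linked above.
-
-```powershell
-# Assumptions: $accessToken and $createDatasetUri (taken from the create speech dataset
-# operation in the API portal) are already set; the body mirrors the JSON sample above.
-$body = @{
-    displayName      = "Pronunciation Dataset"
-    locale           = "en-US"
-    kind             = "Pronunciation"
-    description      = "This is a pronunciation dataset."
-    contentUrl       = "https://contoso.com/location"
-    customProperties = @{ tag = "Pronunciation Dataset Example" }
-} | ConvertTo-Json
-
-$dataset = Invoke-RestMethod -Method Post -Uri "$($createDatasetUri)?accessToken=$accessToken" `
-    -Body $body -ContentType "application/json"
-$dataset.id   # the generated dataset ID, referenced later when creating a speech model
-```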
-
-### Response
-
-The response provides metadata on the newly created dataset following the format of this example JSON output:
-
-```json
-{
- "id": "000000-0000-0000-0000-f58ac7002ae9",
- "properties": {
- "acceptedLineCount": 0,
- "rejectedLineCount": 0,
- "duration": null,
- "error": null
- },
- "displayName": "Contoso plain text",
- "description": "AVI dataset",
- "locale": "en-US",
- "kind": "Language",
- "status": "Waiting",
- "lastActionDateTime": "2023-02-28T13:24:27Z",
- "createdDateTime": "2023-02-28T13:24:27Z",
- "customProperties": null
-}
-```
-
-## Create a speech model
-
-The [create a speech model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Speech-Model) API creates and trains a custom speech model that can then be used to improve the transcription accuracy of your videos. It must contain at least one plain text dataset and can optionally have pronunciation datasets. Create it with all of the relevant dataset files, because a model's datasets can't be added or updated after creation.
-
-When creating a speech model, you must update parameters in the Body, including a list of strings that are the IDs of the datasets the model will include. The description and custom properties fields are optional. The following is a sample of the body:
-
-```json
-{
- "displayName": "Contoso Speech Model",
- "locale": "en-US",
- "datasets": ["ff3d2bc4-ab5a-4522-b599-b3d5ba768c75", "87c8962d-1d3c-44e5-a2b2-c696fddb9bae"],
- "description": "Contoso ads example model",
- "customProperties": {
- "tag": "Example Model"
- }
-}
-```
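-
-A matching PowerShell sketch for this call follows. As with the dataset call, the request URI is a placeholder; take the exact URL from the create speech model operation linked above, and reuse the dataset IDs returned when you created your datasets.
-
-```powershell
-# Assumptions: $accessToken and $createModelUri (taken from the create speech model operation
-# in the API portal) are set, and $dataset.id holds the ID returned when the dataset was created.
-$modelBody = @{
-    displayName = "Contoso Speech Model"
-    locale      = "en-US"
-    datasets    = @($dataset.id)
-    description = "Contoso ads example model"
-} | ConvertTo-Json
-
-Invoke-RestMethod -Method Post -Uri "$($createModelUri)?accessToken=$accessToken" `
-    -Body $modelBody -ContentType "application/json"
-```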
-
-### Response
-
-The response provides metadata on the newly created model following the format of this example JSON output:
-
-```json
-{
- "id": "00000000-0000-0000-0000-85be4454cf",
- "properties": {
- "deprecationDates": {
- "adaptationDateTime": null,
- "transcriptionDateTime": "2025-04-15T00:00:00Z"
- },
- "error": null
- },
- "displayName": "Contoso speech model",
- "description": "Contoso speech model for video indexer",
- "locale": "en-US",
- "datasets": ["00000000-0000-0000-0000-f58ac7002ae9"],
- "status": "Processing",
- "lastActionDateTime": "2023-02-28T13:36:28Z",
- "createdDateTime": "2023-02-28T13:36:28Z",
- "customProperties": null
-}
-```
-
-## Get speech dataset
-
-The [get speech dataset](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Speech-Dataset) API returns information on the specified dataset.
-
-### Response
-
-The response provides metadata on the specified dataset following the format of this example JSON output:
-
-```json
-{
- "id": "00000000-0000-0000-0000-f58002ae9",
- "properties": {
- "acceptedLineCount": 41,
- "rejectedLineCount": 0,
- "duration": null,
- "error": null
- },
- "displayName": "Contoso plain text",
- "description": "AVI dataset",
- "locale": "en-US",
- "kind": "Language",
- "status": "Complete",
- "lastActionDateTime": "2023-02-28T13:24:43Z",
- "createdDateTime": "2023-02-28T13:24:27Z",
- "customProperties": null
-}
-```
-
-## Get speech datasets files
-
-The [get speech dataset files](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Speech-Dataset-Files) API returns the files and metadata of the specified dataset.
-
-### Response
-
-The response provides a URL with the dataset files and metadata following the format of this example JSON output:
-
-```json
-[{
- "datasetId": "00000000-0000-0000-0000-f58ac72a",
- "fileId": "00000000-0000-0000-0000-cb190769c",
- "name": "languagedata",
- "contentUrl": "",
- "kind": "LanguageData",
- "createdDateTime": "2023-02-28T13:24:43Z",
- "properties": {
- "size": 1517
- }
-}, {
-    "datasetId": "00000000-0000-0000-0000-f58ac72",
- "fileId": "00000000-0000-0000-0000-2369192e",
- "name": "normalized.txt",
- "contentUrl": "",
- "kind": "LanguageData",
- "createdDateTime": "2023-02-28T13:24:43Z",
- "properties": {
- "size": 1517
- }
-}, {
- "datasetId": "00000000-0000-0000-0000-f58ac7",
- "fileId": "00000000-0000-0000-0000-05f1e306",
- "name": "report.json",
- "contentUrl": "",
- "kind": "DatasetReport",
- "createdDateTime": "2023-02-28T13:24:43Z",
- "properties": {
- "size": 78
- }
-}]
-```
-
-## Get the specified account datasets
-
-The [get speech datasets](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Speech-Datasets) API returns information on all of the specified account's datasets.
-
-### Response
-
-The response provides metadata on the datasets in the specified account following the format of this example JSON output:
-
-```json
-[{
- "id": "00000000-0000-0000-abf5-4dad0f",
- "properties": {
- "acceptedLineCount": 41,
- "rejectedLineCount": 0,
- "duration": null,
- "error": null
- },
- "displayName": "test",
- "description": "string",
- "locale": "en-US",
- "kind": "Language",
- "status": "Complete",
- "lastActionDateTime": "2023-02-27T08:42:02Z",
- "createdDateTime": "2023-02-27T08:41:39Z",
- "customProperties": null
-}]
-```
-
-## Get the specified speech model
-
-The [get speech model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Speech-Model) API returns information on the specified model.
-
-### Response
-
-The response provides metadata on the specified model following the format of this example JSON output:
-
-```json
-{
- "id": "00000000-0000-0000-0000-5685be445",
- "properties": {
- "deprecationDates": {
- "adaptationDateTime": null,
- "transcriptionDateTime": "2025-04-15T00:00:00Z"
- },
- "error": null
- },
- "displayName": "Contoso speech model",
- "description": "Contoso speech model for video indexer",
- "locale": "en-US",
- "datasets": ["00000000-0000-0000-0000-f58ac7002"],
- "status": "Complete",
- "lastActionDateTime": "2023-02-28T13:36:38Z",
- "createdDateTime": "2023-02-28T13:36:28Z",
- "customProperties": null
-}
-```
-
-## Get the specified account speech models
-
-The [get speech models](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Speech-Models) API returns information on all of the models in the specified account.
-
-### Response
-
-The response provides metadata on all of the speech models in the specified account following the format of this example JSON output:
-
-```json
-[{
- "id": "00000000-0000-0000-0000-5685be445",
- "properties": {
- "deprecationDates": {
- "adaptationDateTime": null,
- "transcriptionDateTime": "2025-04-15T00:00:00Z"
- },
- "error": null
- },
- "displayName": "Contoso speech model",
- "description": "Contoso speech model for video indexer",
- "locale": "en-US",
- "datasets": ["00000000-0000-0000-0000-f58ac7002a"],
- "status": "Complete",
- "lastActionDateTime": "2023-02-28T13:36:38Z",
- "createdDateTime": "2023-02-28T13:36:28Z",
- "customProperties": null
-}]
-```
-
-## Delete speech dataset
-
-The [delete speech dataset](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Speech-Dataset) API deletes the specified dataset. Any model that was trained with the deleted dataset continues to be available until the model is deleted. You cannot delete a dataset while it is in use for indexing or training.
-
-### Response
-
-There's no returned content when the dataset is deleted successfully.
-
-## Delete a speech model
-
-The [delete speech model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Speech-Model) API deletes the specified speech model. You cannot delete a model while it is in use for indexing or training.
-
-### Response
-
-There's no returned content when the speech model is deleted successfully.
-
-## Next steps
-
-[Customize a speech model using the website](customize-speech-model-with-website.md)
-
azure-video-indexer Customize Speech Model With Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-speech-model-with-website.md
- Title: Customize a speech model with Azure AI Video Indexer website
-description: Learn how to customize a speech model with the Azure AI Video Indexer website.
- Previously updated : 03/06/2023----
-# Customize a speech model in the website
--
-
-Azure AI Video Indexer lets you create custom speech models to customize speech recognition by uploading datasets that are used to create a speech model. This article goes through the steps to do so through the Video Indexer website. You can also use the API, as described in [Customize speech model using API](customize-speech-model-with-api.md).
-
-For a detailed overview and best practices for custom speech models, see [Customize a speech model with Azure AI Video Indexer](customize-speech-model-overview.md).
-
-## Create a dataset
-
-As all custom models must contain a dataset, we'll start with how to create and manage datasets.
-
-1. Go to the [Azure AI Video Indexer website](https://www.videoindexer.ai/) and sign in.
-1. Select the Model customization button on the left of the page.
-1. Select the Speech (new) tab. Here you'll begin the process of uploading datasets that are used to train the speech models.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/customize-speech-model/speech-model.png" alt-text="Screenshot of uploading datasets which are used to train the speech models.":::
-1. Select Upload dataset.
-1. Select either Plain text or Pronunciation from the Dataset type dropdown menu. Every speech model must have a plain text dataset and can optionally have a pronunciation dataset. To learn more about each type, see [Customize a speech model with Azure AI Video Indexer](customize-speech-model-overview.md).
-1. Select Browse which will open the File Explorer. You can only use one file in each dataset. Choose the relevant text file.
-1. Select a Language for the model. Choose the language that is spoken in the media files you plan on indexing with this model.
-1. The Dataset name is pre-populated with the name of the file but you can modify the name.
-1. You can optionally add a description of the dataset. This could be helpful to distinguish each dataset if you expect to have multiple datasets.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/customize-speech-model/dataset-type.png" alt-text="Screenshot of multiple datasets.":::
-1. Once you're ready, select Upload. You'll then see a list of all of your datasets and their properties, including the type, language, status, number of lines, and creation date. Once the status is complete, the dataset can be used in the training and creation of new models.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/customize-speech-model/datasets.png" alt-text="Screenshot of a new model.":::
-
-## Review and update a dataset
-
-Once a Dataset has been uploaded, you might need to review it or perform any number of updates to it. This section covers how to view, download, troubleshoot, and delete a dataset.
-
-**View dataset**: You can view a dataset and its properties either by clicking on the dataset name or by hovering over the dataset, clicking on the ellipsis, and selecting **View Dataset**.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/customize-speech-model/view-dataset.png" alt-text="Screenshot of how to view dataset.":::
-
-You'll then view the name, description, language and status of the dataset plus the following properties:
-
-**Number of lines**: indicates the number of lines successfully loaded out of the total number of lines in the file. If the entire file is loaded successfully, the numbers match (for example, 10 of 10 normalized). If the numbers don't match (for example, 7 of 10 normalized), only some of the lines loaded successfully and the rest had errors. A common cause of errors is a formatting issue with a line, such as a missing tab between the recognized form and the spoken form in a pronunciation file. Reviewing the plain text and pronunciation data for training guidance in [Customize a speech model with Azure AI Video Indexer](customize-speech-model-overview.md) should help you find the issue. To troubleshoot the cause, review the error details, which are contained in the report. Select **View report** to view the error details regarding the lines that didn't load successfully (errorKind). This can also be viewed by selecting the **Report** tab.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/customize-speech-model/report-tab.png" alt-text="Screenshot of how to view by selecting report tab.":::
-
-**Dataset ID**: Each dataset has a unique GUID, which is needed when using the API for operations that reference the dataset.
-
-**Plain text (normalized)**: This contains the normalized text of the loaded dataset file. Normalized text is the recognized text in plain form without formatting.
-
-**Edit Details**: To edit a dataset's name or description, when hovering over the dataset, click on the ellipsis and then select Edit details. You're then able to edit the dataset name and description.
-
-> [!Note]
-> The data in a dataset can't be edited or updated once the dataset has been uploaded. If you need to edit or update the data in a dataset, download the dataset, perform the edits, save the file, and upload the new dataset file.
-
-**Download**: To download a dataset file, when hovering over the dataset, click on the ellipsis and then select Download. Alternatively, when viewing the dataset, you can select Download and then have the option of downloading the dataset file or the upload report in JSON form.
-
-**Delete**: To delete a dataset, when hovering over the dataset, click on the ellipsis and then select Delete.
-
-## Create a custom speech model
-
-Datasets are used in the creation and training of models. Once you have created a plain text dataset, you are now able to create and start using a custom speech model.
-
-Keep in mind the following when creating and using custom speech models:
-
-* A new model must include at least one plain text dataset and can have multiple plain text datasets.
-* It's optional to include a pronunciation dataset and no more than one can be included.
-* Once a model is created, you can't add additional datasets to it or perform any modifications to its datasets. If you need to add or modify datasets, create a new model.
-* If you have indexed a video using a custom speech model and then delete the model, the transcript is not impacted unless you perform a re-index.
-* If you delete a dataset that was used to train a custom model, the speech model continues to use it (the model was already trained with the dataset) until the speech model is deleted.
-* If you delete a custom model, it has no impact on the transcription of videos that were already indexed using the model.
--
-**The following are instructions to create and manage custom speech models. There are two ways to train a model: through the Datasets tab and through the Models tab.**
-
-## Train a model through the Datasets tab
-
-1. When viewing the list of datasets, if you select a plain text dataset by clicking on the circle to the left of a plain text dataset's name, the Train new model icon above the datasets will now turn from greyed out to blue and can be selected. Select Train new model.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/customize-speech-model/train-model.png" alt-text="Screenshot of how to train new model.":::
-1. In the Train a new model popup, enter a name for the model, a language, and optionally add a description. A model can only contain datasets of the same language.
-1. Select the Datasets tab and then select from the list of your datasets the datasets you would like to be included in the model. Once a model is created, datasets can't be added.
-1. Select Create and train.
-
-## Train a model through the Models tab
-
-1. Select the Models tab and then the Train new model icon. If no plain text datasets have been uploaded, the icon is greyed out. Select all the datasets that you want to be part of the model by clicking on the circle to the left of a plain text dataset's name.
-1. In the Train a new model pop-up, enter a name for the model, a language, and optionally add a description. A model can only contain datasets of the same language.
-1. Select the Datasets tab and then select from the list of your datasets the datasets you would like to be included in the model. Once a model is created, datasets can't be added.
-1. Select Create and train.
-
-## Model review and update
-
-Once a model has been created, you might need to review its datasets, edit its name, or delete it.
-
-**View Model**: You can view a model and its properties by either clicking on the model's name or, when hovering over the model, clicking on the ellipsis and then selecting View Model.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/customize-speech-model/view-model.png" alt-text="Screenshot of how to review and update a model.":::
-
-You'll then see in the Details tab the name, description, language and status of the model plus the following properties:
-
-**Model ID**: Each model has a unique GUID, which is needed when using the API for operations that reference the model.
-
-**Created on**: The date the model was created.
-
-**Edit Details**: To edit a model's name or description, when hovering over the model, click on the ellipsis and then select Edit details. You're then able to edit the model's name and description.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/customize-speech-model/create-model.png" alt-text="Screenshot of how to hover over the model.":::
-
-> [!Note]
-> Only the model's name and description can be edited. If you want to make any changes to its datasets or add datasets, a new model must be created.
-
-**Delete**: To delete a model, when hovering over the model, click on the ellipsis and then select Delete.
-
-**Included datasets**: Click on the Included datasets tab to view the model's datasets.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/customize-speech-model/included-datasets.png" alt-text="Screenshot of how to delete the model.":::
-
-## How to use a custom speech model when indexing a video
-
-A custom speech model isn't used by default for indexing jobs and must be selected during the index upload process. To learn how to index a video, see Upload and index videos with Azure AI Video Indexer.
-
-During the upload process, you can select the source language of the video. In the Video source language drop-down menu, your custom model appears in the language list. The model is named with the language of your language model and the name that you gave it in parentheses. For example:
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/customize-speech-model/contoso-model.png" alt-text="Screenshot of indexing a video.":::
-
-Select the Upload option at the bottom of the page, and your new video will be indexed using your language model. The same steps apply when you want to re-index a video with a custom model.
-
-## Next steps
-
-[Customize a speech model using the API](customize-speech-model-with-api.md)
azure-video-indexer Deploy With Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-arm-template.md
- Title: Deploy Azure AI Video Indexer by using an ARM template
-description: Learn how to create an Azure AI Video Indexer account by using an Azure Resource Manager (ARM) template.
-- Previously updated : 05/23/2022----
-# Tutorial: Deploy Azure AI Video Indexer by using an ARM template
---
-In this tutorial, you'll create an Azure AI Video Indexer account by using the Azure Resource Manager template (ARM template, which is in preview). The resource will be deployed to your subscription and will create the Azure AI Video Indexer resource based on parameters defined in the *avam.template* file.
-
-> [!NOTE]
-> This sample is *not* for connecting an existing Azure AI Video Indexer classic account to a Resource Manager-based Azure AI Video Indexer account.
->
-> For full documentation on the Azure AI Video Indexer API, visit the [developer portal](https://aka.ms/avam-dev-portal). For the latest API version for *Microsoft.VideoIndexer*, see the [template reference](/azure/templates/microsoft.videoindexer/accounts?tabs=bicep).
-
-## Prerequisites
-
-You need an Azure Media Services account. You can create one for free through [Create a Media Services account](/azure/media-services/latest/account-create-how-to).
-
-## Deploy the sample
----
-### Option 1: Select the button for deploying to Azure, and fill in the missing parameters
-
-[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fmedia-services-video-indexer%2Fmaster%2FDeploy-Samples%2FArmTemplates%2Favam.template.json)
----
-### Option 2: Deploy by using a PowerShell script
-
-1. Open the [template file](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/Deploy-Samples/ArmTemplates/avam.template.json) and inspect its contents.
-2. Fill in the required parameters.
-3. Run the following PowerShell commands:
-
- * Create a new resource group in the same location as your Azure AI Video Indexer account by using the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet.
-
- ```powershell
- New-AzResourceGroup -Name myResourceGroup -Location eastus
- ```
-
- * Deploy the template to the resource group by using the [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment) cmdlet.
-
- ```powershell
- New-AzResourceGroupDeployment -ResourceGroupName myResourceGroup -TemplateFile ./avam.template.json
- ```
-
-> [!NOTE]
-> If you want to work with Bicep format, see [Deploy by using Bicep](./deploy-with-bicep.md).
-
-## Parameters
-
-### name
-
-* Type: string
-* Description: The name of the new Azure AI Video Indexer account.
-* Required: true
-
-### location
-
-* Type: string
-* Description: The Azure location where the Azure AI Video Indexer account should be created.
-* Required: false
-
-> [!NOTE]
-> You need to deploy your Azure AI Video Indexer account in the same location (region) as the associated Azure Media Services resource.
-
-### mediaServiceAccountResourceId
-
-* Type: string
-* Description: The resource ID of the Azure Media Services resource.
-* Required: true
-
-### managedIdentityId
-
-> [!NOTE]
-> A user-assigned managed identity must have at least the Contributor role on the Media Services account before deployment. When you use a system-assigned managed identity, assign the Contributor role after deployment.
-
-* Type: string
-* Description: The resource ID of the managed identity that's used to grant access between Azure Media Services resource and the Azure AI Video Indexer account.
-* Required: true
-
-### tags
-
-* Type: object
-* Description: The array of objects that represents custom user tags on the Azure AI Video Indexer account.
-* Required: false
-
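-The following PowerShell sketch shows how the same deployment can pass these parameters inline through `-TemplateParameterObject`. The account name, resource group, and resource IDs are placeholders; substitute your own values.
-
-```powershell
-# Sketch only: all names and resource IDs below are placeholders.
-$parameters = @{
-    name                          = "myVideoIndexerAccount"
-    location                      = "eastus"   # must match the region of the Media Services account
-    mediaServiceAccountResourceId = "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Media/mediaservices/myAmsAccount"
-    managedIdentityId             = "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity"
-}
-
-New-AzResourceGroupDeployment -ResourceGroupName myResourceGroup `
-    -TemplateFile ./avam.template.json `
-    -TemplateParameterObject $parameters
-```
-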
-## Reference documentation
-
-If you're new to Azure AI Video Indexer, see:
-
-* [The Azure AI Video Indexer documentation](./index.yml)
-* [The Azure AI Video Indexer API developer portal](https://api-portal.videoindexer.ai/)
-
-After you complete this tutorial, head to other Azure AI Video Indexer samples described in [README.md](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/README.md).
-
-If you're new to template deployment, see:
-
-* [Azure Resource Manager documentation](../azure-resource-manager/index.yml)
-* [Deploy resources with ARM templates](../azure-resource-manager/templates/deploy-powershell.md)
-* [Deploy resources with Bicep and the Azure CLI](../azure-resource-manager/bicep/deploy-cli.md)
-
-## Next steps
-
-Connect a [classic paid Azure AI Video Indexer account to a Resource Manager-based account](connect-classic-account-to-arm.md).
azure-video-indexer Deploy With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-bicep.md
- Title: Deploy Azure AI Video Indexer by using Bicep
-description: Learn how to create an Azure AI Video Indexer account by using a Bicep file.
-- Previously updated : 06/06/2022----
-# Tutorial: deploy Azure AI Video Indexer by using Bicep
--
-In this tutorial, you create an Azure AI Video Indexer account by using [Bicep](../azure-resource-manager/bicep/overview.md).
-
-> [!NOTE]
-> This sample is *not* for connecting an existing Azure AI Video Indexer classic account to an ARM-based Azure AI Video Indexer account.
-> For full documentation on Azure AI Video Indexer API, visit the [developer portal](https://aka.ms/avam-dev-portal) page.
-> For the latest API version for Microsoft.VideoIndexer, see the [template reference](/azure/templates/microsoft.videoindexer/accounts?tabs=bicep).
-
-## Prerequisites
-
-* An Azure Media Services (AMS) account. You can create one for free through the [Create AMS Account](/azure/media-services/latest/account-create-how-to).
-
-## Review the Bicep file
-
-One Azure resource is defined in the Bicep file:
-
-```bicep
-param location string = resourceGroup().location
-
-@description('The name of the AVAM resource')
-param accountName string
-
-@description('The managed identity Resource Id used to grant access to the Azure Media Service (AMS) account')
-param managedIdentityResourceId string
-
-@description('The media Service Account Id. The Account needs to be created prior to the creation of this template')
-param mediaServiceAccountResourceId string
-
-@description('The AVAM Template')
-resource avamAccount 'Microsoft.VideoIndexer/accounts@2022-08-01' = {
- name: accountName
- location: location
- identity:{
- type: 'UserAssigned'
- userAssignedIdentities : {
- '${managedIdentityResourceId}' : {}
- }
- }
- properties: {
- mediaServices: {
- resourceId: mediaServiceAccountResourceId
- userAssignedIdentity: managedIdentityResourceId
- }
- }
-}
-```
-
-Check [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-templates) for more updated Bicep samples.
-
-## Deploy the sample
-
-1. Save the Bicep file as main.bicep to your local computer.
-1. Deploy the Bicep file using either the Azure CLI or Azure PowerShell.
-
- # [CLI](#tab/CLI)
-
- ```azurecli
- az group create --name exampleRG --location eastus
- az deployment group create --resource-group exampleRG --template-file main.bicep --parameters accountName=<account-name> managedIdentityResourceId=<managed-identity> mediaServiceAccountResourceId=<media-service-account-resource-id>
- ```
-
- # [PowerShell](#tab/PowerShell)
-
- ```azurepowershell
- New-AzResourceGroup -Name exampleRG -Location eastus
- New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -accountName "<account-name>" -managedIdentityResourceId "<managed-identity>" -mediaServiceAccountResourceId "<media-service-account-resource-id>"
- ```
-
-
-
- The location must be the same as that of the existing Azure Media Services account. You need to provide values for the parameters:
-
- * Replace **\<account-name\>** with the name of the new Azure AI Video Indexer account.
- * Replace **\<managed-identity\>** with the resource ID of the user-assigned managed identity that's used to grant access between the Azure Media Services (AMS) account and the Azure AI Video Indexer account.
- * Replace **\<media-service-account-resource-id\>** with the resource ID of the existing Azure Media Services account.
-
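-If you need to look up these two resource IDs, the following PowerShell sketch shows one way to retrieve them, assuming the Az.ManagedServiceIdentity and Az.Media modules are installed. The resource group, identity, and account names are placeholders.
-
-```azurepowershell
-# Sketch only: resource group, identity, and account names are placeholders.
-$identity = Get-AzUserAssignedIdentity -ResourceGroupName exampleRG -Name myIdentity
-$ams = Get-AzMediaService -ResourceGroupName exampleRG -AccountName myAmsAccount
-
-$identity.Id   # value for managedIdentityResourceId
-$ams.Id        # value for mediaServiceAccountResourceId
-```
-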
-## Reference documentation
-
-If you're new to Azure AI Video Indexer, see:
-
-* [The Azure AI Video Indexer documentation](./index.yml)
-* [The Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/)
-* After completing this tutorial, head to other Azure AI Video Indexer samples, described on [README.md](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/README.md)
-
-If you're new to Bicep deployment, see:
-
-* [Azure Resource Manager documentation](../azure-resource-manager/index.yml)
-* [Deploy Resources with Bicep and Azure PowerShell](../azure-resource-manager/bicep/deploy-powershell.md)
-* [Deploy Resources with Bicep and Azure CLI](../azure-resource-manager/bicep/deploy-cli.md)
-
-## Next steps
-
-[Connect an existing classic paid Azure AI Video Indexer account to ARM-based account](connect-classic-account-to-arm.md)
azure-video-indexer Detect Textual Logo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/detect-textual-logo.md
- Title: Detect textual logo with Azure AI Video Indexer
-description: This article gives an overview of Azure AI Video Indexer textual logo detection.
- Previously updated : 01/22/2023----
-# How to detect textual logo
---
-> [!NOTE]
-> Textual logo detection (preview) creation process is currently available through API. The result can be viewed through the Azure AI Video Indexer [website](https://www.videoindexer.ai/).
-
-**Textual logo detection** insights are based on the OCR textual detection, which matches a specific predefined text.
-
-For example, if a user creates a textual logo "Microsoft", different appearances of the word 'Microsoft' are detected as the 'Microsoft' logo. A logo can have different variations, and these variations can be associated with the main logo name. For example, the 'Microsoft' logo might have the following variations: 'MS', 'MSFT', and so on.
-
-```json
-{
- "name": "Microsoft",
- "wikipediaSearchTerm": "Microsoft",
- "textVariations": [{
- "text": "Microsoft",
- "caseSensitive": false
- }, {
- "text": "MSFT",
- "caseSensitive": true
- }]
-}
-```
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/textual-logo-detection/microsoft-example.png" alt-text="Diagram of logo detection.":::
-
-## Prerequisite
-
-The Azure AI Video Indexer account must have (at the very least) the `contributor` role assigned to the resource.
-
-## How to use
-
-In order to use textual logo detection, follow these steps, described in this article:
-
-1. Create a logo instance using the [Create logo](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Logo) API (with variations).
-
- * Save the logo ID.
-1. Create a logo group using the [Create Logo Group](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Logo-Group) API.
-
- * Associate the logo instance with the group when creating the new group (by pasting the ID in the logos array).
-1. Upload a video using the **Advanced video** or **Advanced video + audio** preset. Use the `logoGroupId` parameter to specify the logo group that you would like to index the video with.
-
-## Create a logo instance
-
-Use the [Create logo](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Logo) API to create your logo. You can use the **try it** button.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/textual-logo-detection/logo-api.png" alt-text="Diagram of logo API.":::
-
-In this tutorial, we use the default example:
-
-Insert the following:
-
-* `Location`: The location of the Azure AI Video Indexer account.
-* `Account ID`: The ID of the Azure AI Video Indexer account.
-* `Access token`: The token, at least at a contributor level permission.
-
-The default body is:
-
-```json
-{
- "name": "Microsoft",
- "wikipediaSearchTerm": "Microsoft",
- "textVariations": [{
- "text": "Microsoft",
- "caseSensitive": false
- }, {
- "text": "MSFT",
- "caseSensitive": true
- }]
-}
-```
-
-|Key|Value|
-|||
-|Name|The name of the logo as it appears in the Azure AI Video Indexer website.|
-|wikipediaSearchTerm|Used to create a description in the Video Indexer website.|
-|text|The text that the model compares to. Make sure to add the obvious name as part of the variations (for example, Microsoft).|
-|caseSensitive|true or false, according to the variation.|
-
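-If you prefer to script the call instead of using the **try it** button, the following PowerShell sketch posts the same body. The request URI is a placeholder here; copy the exact request URL (and the access token) from the Create Logo page in the API portal.
-
-```powershell
-# Sketch only: $createLogoUri is a placeholder for the request URL shown on the
-# Create Logo page of the API portal. $accessToken needs contributor-level permission.
-$body = @{
-    name                = "Microsoft"
-    wikipediaSearchTerm = "Microsoft"
-    textVariations      = @(
-        @{ text = "Microsoft"; caseSensitive = $false },
-        @{ text = "MSFT"; caseSensitive = $true }
-    )
-} | ConvertTo-Json -Depth 5
-
-$logo = Invoke-RestMethod -Method Post -Uri "$($createLogoUri)?accessToken=$accessToken" `
-    -ContentType "application/json" -Body $body
-$logo.id   # save the logo ID for the logo group
-```
-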
-The response should return **201 Created**.
-
-```
-HTTP/1.1 201 Created
-
-content-type: application/json; charset=utf-8
-
-{
- "id": "id"
- "creationTime": "2023-01-15T13:08:14.9518235Z",
- "lastUpdateTime": "2023-01-15T13:08:14.9518235Z",
- "lastUpdatedBy": "Jhon Doe",
- "createdBy": "Jhon Doe",
- "name": "Microsoft",
- "wikipediaSearchTerm": "Microsoft",
- "textVariations": [{
- "text": "Microsoft",
- "caseSensitive": false,
- "creationTime": "2023-01-15T13:08:14.9518235Z",
- "createdBy": "Jhon Doe"
- }, {
- "text": "MSFT",
- "caseSensitive": true,
- "creationTime": "2023-01-15T13:08:14.9518235Z",
- "createdBy": "Jhon Doe"
- }]
-}
-```
-## Create a new textual logo group
-
-Use the [Create Logo Group](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Logo-Group) API to create a logo group. Use the **try it** button.
-
-Insert the following:
-
-* `Location`: The location of the Azure AI Video Indexer account.
-* `Account ID`: The ID of the Azure AI Video Indexer account.
-* `Access token`: The token, at least at a contributor level permission.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/textual-logo-detection/logo-group-api.png" alt-text="Diagram of logo group API.":::
-
-In the **Body** paste the logo ID from the previous step.
-
-```json
-{
- "logos": [{
- "logoId": "id"
- }],
- "name": "Technology",
- "description": "A group of logos of technology companies."
-}
-```
-
-* The default example has two logo IDs; we created the first group with only one logo ID.
-
- The response should return **201 Created**.
-
- ```
- HTTP/1.1 201 Created
-
- content-type: application/json; charset=utf-8
-
- {
- "id": "id",
- "creationTime": "2023-01-15T14:41:11.4860104Z",
- "lastUpdateTime": "2023-01-15T14:41:11.4860104Z",
- "lastUpdatedBy": "Jhon Doe",
- "createdBy": "Jhon Doe",
- "logos": [{
- "logoId": " e9d609b4-d6a6-4943-86ff-557e724bd7c6"
- }],
- "name": "Technology",
- "description": "A group of logos of technology companies."
- }
- ```
-
-## Upload from URL
-
-Use the upload API call:
-
-Specify the following:
-
-* `Location`: The location of the Azure AI Video Indexer account.
-* `Account`: The ID of the Azure AI Video Indexer account.
-* `Name`: The name of the media file you're indexing.
-* `Language`: `en-US`. For more information, see [Language support](language-support.md)
-* `IndexingPreset`: Select **Advanced Video/Audio+video**.
-* `Videourl`: The URL of the video.
-* `LogoGroupID`: GUID representing the logo group (you got it in the response when creating it).
-* `Access token`: The token, at least at a contributor level permission.
-
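-For reference, a hedged PowerShell sketch of the upload call follows. The `logoGroupId` query parameter and the preset value are taken from this article; confirm the exact parameter names and casing on the Upload Video page of the API portal before relying on them.
-
-```powershell
-# Sketch only: $location, $accountId, $accessToken, $videoUrl, and $logoGroupId are your own values.
-$uploadUri = "https://api.videoindexer.ai/$location/Accounts/$accountId/Videos" +
-    "?name=logo-sample&language=en-US&indexingPreset=AdvancedVideo" +
-    "&videoUrl=$([uri]::EscapeDataString($videoUrl))" +
-    "&logoGroupId=$logoGroupId&accessToken=$accessToken"
-
-$upload = Invoke-RestMethod -Method Post -Uri $uploadUri
-$upload.id   # the ID of the newly created video
-```
-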
-## Inspect the output
-
-Assuming the textual logo model has found a match, you'll be able to view the result in the [Azure AI Video Indexer website](https://www.videoindexer.ai/).
-
-### Insights
-
-A new section appears in the insights panel showing the number of custom logos that were detected. One representative thumbnail is displayed for the new logo.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/textual-logo-detection/logo-insight.png" alt-text="Diagram of logo insight.":::
-
-### Timeline
-
-When switching to the Timeline view, under **View**, mark the **Logos** checkbox. All detected thumbnails are displayed according to their time stamps.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/textual-logo-detection/logo-timeline.png" alt-text="Diagram of logo timeline.":::
-
-All logo instances that were recognized with a certainty above 80% are displayed. The extended list of detections, including low-certainty detections, is available in the [Artifacts](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Artifact-Download-Url) file.
-
-## Next steps
-
-### Adding a logo to an existing logo group
-
-In the first part of this article, we had one logo instance and associated it with the right logo group when the logo group was created. If all logo instances are created before the logo group, they can be associated with the logo group at creation time. However, if the group was already created, associate the new instance with the group by following these steps:
-
-1. Create the logo.
-
- 1. Copy the logo ID.
-1. [Get logo groups](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Logo-Groups).
-
- 1. Copy the logo group ID of the right group.
-1. [Get logo group](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Logo-Group).
-
- 1. From the response, copy the list of logo IDs:
-
- Logo list sample:
-
- ```json
- "logos": [{
- "logoId": "id"
- }],
- ```
-1. [Update logo group](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Logo-Group).
-
- 1. Logo group ID is the output received at step 2.
- 1. In the 'Body' of the request, paste the existing list of logos from step 3.
- 1. Then add to the list the logo ID from step 1.
-1. Validate the response of the [Update logo group](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Logo-Groups), making sure that the list contains the previous IDs and the new one.
-
-### Additional information and limitations
-
-* A logo group can contain up to 50 logos.
-* One logo can be linked to more than one group.
-* Use the [Update logo group](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Logo-Groups) to add the new logo to an existing group.
azure-video-indexer Detected Clothing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/detected-clothing.md
- Title: Enable detected clothing feature
-description: Azure AI Video Indexer detects clothing associated with the person wearing it in the video and provides information such as the type of clothing detected and the timestamp of the appearance (start, end). The API returns the detection confidence level.
- Previously updated : 08/07/2023----
-# Enable detected clothing feature
--
-Azure AI Video Indexer detects clothing associated with the person wearing it in the video and provides information such as the type of clothing detected and the timestamp of the appearance (start, end). The API returns the detection confidence level. The clothing types that are detected are long pants, short pants, long sleeves, short sleeves, and skirt or dress.
-
-Two examples where this feature could be useful:
-
-- Improve efficiency when creating raw data for content creators, like video advertising, news, or sport games (for example, find people wearing a red shirt in a video archive).
-- Post-event analysis: detect and track a person's movement to better analyze an accident or crime post-event (for example, explosion, bank robbery, incident).
-
-The newly added clothing detection feature is available when indexing your file by choosing the **Advanced option** -> **Advanced video** or **Advanced video + audio** preset (under Video + audio indexing). Standard indexing won't include this new advanced model.
-
-
-When you choose to see **Insights** of your video on the [Azure AI Video Indexer](https://www.videoindexer.ai/) website, the detected clothing can be viewed from the **Observed People** tracking insight. When choosing a thumbnail of a person, the detected clothing becomes available.
-
-
-If you want to view the detected clothing in the Timeline of your video on the Azure AI Video Indexer website, go to **View** -> **Show Insights** and select the All option, or go to **View** -> **Custom View** and select **Observed People**.
-
-
-Searching for specific clothing to return all the observed people wearing it is enabled using the search bar of either the **Insights** or the **Timeline** of your video on the Azure AI Video Indexer website.
-
-The following JSON response illustrates what Azure AI Video Indexer returns when tracking observed people having detected clothing associated:
-
-```json
-"observedPeople": [
- {
- "id": 1,
- "thumbnailId": "68bab0f2-f084-4c2b-859b-a951ed03c209",
- "clothing": [
- {
- "id": 1,
- "type": "sleeve",
- "properties": {
- "length": "short"
- }
- },
- {
- "id": 2,
- "type": "pants",
- "properties": {
- "length": "long"
- }
- }
- ],
- "instances": [
- {
- "adjustedStart": "0:00:05.5055",
- "adjustedEnd": "0:00:09.9766333",
- "start": "0:00:05.5055",
- "end": "0:00:09.9766333"
- }
- ]
- },
- {
- "id": 2,
- "thumbnailId": "449bf52d-06bf-43ab-9f6b-e438cde4f217",
- "clothing": [
- {
- "id": 1,
- "type": "sleeve",
- "properties": {
- "length": "short"
- }
- },
- {
- "id": 2,
- "type": "pants",
- "properties": {
- "length": "long"
- }
- }
- ],
- "instances": [
- {
- "adjustedStart": "0:00:07.2072",
- "adjustedEnd": "0:00:10.5105",
- "start": "0:00:07.2072",
- "end": "0:00:10.5105"
- }
- ]
- },
-]
-```
-
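-To work with this output programmatically, the following PowerShell sketch filters the observed people for a given clothing type from a downloaded insights file. The `videos[0].insights` path assumes the standard insights JSON layout; the file name is a placeholder.
-
-```powershell
-# Sketch only: list observed people wearing short sleeves, based on the JSON shape shown above.
-$insights = Get-Content ./insights.json -Raw | ConvertFrom-Json
-
-$insights.videos[0].insights.observedPeople |
-    Where-Object { $_.clothing | Where-Object { $_.type -eq "sleeve" -and $_.properties.length -eq "short" } } |
-    ForEach-Object { "Person $($_.id): $($_.instances[0].start) - $($_.instances[0].end)" }
-```
-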
-## Limitations and assumptions
-
-As the detected clothing feature uses observed people tracking, the tracking quality is important. For tracking considerations and limitations, see [Considerations and limitations when choosing a use case](observed-matched-people.md#considerations-and-limitations-when-choosing-a-use-case).
-
-- As clothing detection is dependent on the visibility of the person's body, the accuracy is higher if a person is fully visible.
-- There may be errors when a person is without clothing.
-- In this scenario or others of poor visibility, results may be given such as long pants and skirt or dress.
-
-## Next steps
-
-[Track observed people in a video](observed-people-tracking.md)
azure-video-indexer Digital Patterns Color Bars https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/digital-patterns-color-bars.md
- Title: Enable and view digital patterns with color bars
-description: Learn about how to enable and view digital patterns with color bars.
- Previously updated : 09/20/2022----
-# Enable and view digital patterns with color bars
--
-This article shows how to enable and view digital patterns with color bars (preview).
-
-You can view the names of the specific digital patterns. <!-- They are searchable by the color bar type (Color Bar/Test card) in the insights. --> The timeline includes the following types:
-
-- Color bars
-- Test cards
-
-This insight is most useful to customers involved in the movie post-production process.
-
-## View post-production insights
-
-In order to set the indexing process to include the slate metadata, select the **Video + audio indexing** -> **Advanced** presets.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/slate-detection-process/advanced-setting.png" alt-text="This image shows the advanced setting in order to view post-production clapperboards insights.":::
-
-After the file has been uploaded and indexed, if you want to view the timeline of the insight, select the **Post-production** checkmark from the list of insights.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/slate-detection-process/post-production-checkmark.png" alt-text="This image shows the post-production checkmark needed to view clapperboards.":::
-
-### View digital patterns insights
-
-#### View the insight
-
-To see the instances on the website, select **Insights** and scroll to **Labels**.
-The insight shows under **Labels** in the **Insight** tab.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/slate-detection-process/insights-color-bars.png" alt-text="This image shows the color bars under labels.":::
-
-#### View the timeline
-
-If you checked the **Post-production** insight, you can find the color bars instance and timeline under the **Timeline** tab.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/slate-detection-process/timeline-color-bars.png" alt-text="This image shows the color bars under timeline.":::
-
-#### View JSON
-
-To display the JSON file:
-
-1. Select Download and then Insights (JSON).
-1. Copy the `framePatterns` element, under `insights`, and paste it into an online JSON viewer.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/slate-detection-process/color-bar-json.png" alt-text="This image shows the color bars json.":::
-
-The following table describes fields found in json:
-
-|Name|Description|
-|||
-|`id`|The digital pattern ID.|
-|`patternType`|The following types are supported: ColorBars, TestCards.|
-|`confidence`|The confidence level for color bar accuracy.|
-|`name`|The name of the element. For example, "SMPTE color bars".|
-|`displayName`|The friendly/display name.|
-|`thumbnailId`|The ID of the thumbnail.|
-|`instances`|A list of time ranges where this element appeared.|
-
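-The following PowerShell sketch prints each detected pattern and its instances from a downloaded insights file, using the fields in the table above. The `videos[0].insights` path and the instance `start`/`end` fields are assumed from the standard insights JSON layout; the file name is a placeholder.
-
-```powershell
-# Sketch only: summarize detected digital patterns (color bars / test cards).
-$insights = Get-Content ./insights.json -Raw | ConvertFrom-Json
-
-foreach ($pattern in $insights.videos[0].insights.framePatterns) {
-    foreach ($instance in $pattern.instances) {
-        "$($pattern.name) ($($pattern.patternType), confidence $($pattern.confidence)): $($instance.start) to $($instance.end)"
-    }
-}
-```
-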
-## Limitations
-
-- There can be a mismatch if the input video is of low quality (for example, old analog recordings).
-- The digital patterns are identified over the first 10 minutes and the last 10 minutes of the video.
-
-## Next steps
-
-* [Slate detection overview](slate-detection-insight.md)
-* [How to enable and view clapper board with extracted metadata](clapperboard-metadata.md)
-* [How to enable and view textless slate with matched scene](textless-slate-scene-matching.md)
azure-video-indexer Edit Speakers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/edit-speakers.md
- Title: Edit speakers in the Azure AI Video Indexer website
-description: The article demonstrates how to edit speakers with the Azure AI Video Indexer website.
- Previously updated : 11/01/2022----
-# Edit speakers with the Azure AI Video Indexer website
--
-Azure AI Video Indexer identifies each speaker in a video and attributes each transcribed line to a speaker. The speakers are given a unique identity such as `Speaker #1` and `Speaker #2`. To provide clarity and enrich the transcript quality, you may want to replace the assigned identity with each speaker's actual name. To edit speakers' names, use the edit actions as described in the article.
-
-The article demonstrates how to edit speakers with the [Azure AI Video Indexer website](https://www.videoindexer.ai/). The same editing operations are possible with an API. To use API, call [update video index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Index).
-
-> [!NOTE]
-> The addition or editing of a speaker name is applied throughout the transcript of the video but is not applied to other videos in your Azure AI Video Indexer account.
-
-## Start editing
-
-1. Sign in to the [Azure AI Video Indexer website](https://www.videoindexer.ai/).
-2. Select a video.
-3. Select the **Timeline** tab.
-4. Choose to view speakers.
--
-## Add a new speaker
-
-This action allows adding new speakers that were not identified by Azure AI Video Indexer. To add a new speaker from the website for the selected video, do the following:
-
-1. Select the edit mode.
-
- :::image type="content" alt-text="Screenshot of how to edit speakers." source="./media/edit-speakers-website/edit.png":::
-1. Go to the speakers drop down menu above the transcript line you wish to assign a new speaker to.
-1. Select **Assign a new speaker**.
-
- :::image type="content" alt-text="Screenshot of how to add a new speaker." source="./media/edit-speakers-website/assign-new.png":::
-1. Add the name of the speaker you would like to assign.
-1. Press a checkmark to save.
-
-> [!NOTE]
-> Speaker names should be unique across the speakers in the current video.
-
-## Rename an existing speaker
-
-This action allows renaming an existing speaker that was identified by Azure AI Video Indexer. The update applies to all speakers identified by this name.
-
-To rename a speaker from the website for the selected video, do the following:
-
-1. Select the edit mode.
-1. Go to the transcript line where the speaker you wish to rename appears.
-1. Select **Rename selected speaker**.
-
- :::image type="content" alt-text="Screenshot of how to rename a speaker." source="./media/edit-speakers-website/rename.png":::
-
- This action will update speakers by this name.
-1. Press a checkmark to save.
-
-## Assign a speaker to a transcript line
-
-This action allows assigning a speaker to a specific transcript line with a wrong assignment. To assign a speaker to a transcript line from the website, do the following:
-
-1. Go to the transcript line you want to assign a different speaker to.
-1. Select a speaker from the speakers drop down menu above that you wish to assign.
-
- The update only applies to the currently selected transcript line.
-
-If the speaker you wish to assign doesn't appear on the list, you can either **Assign a new speaker** or **Rename an existing speaker** as described above.
-
-## Limitations
-
-When adding a new speaker or renaming a speaker, the new name should be unique.
-
-## Next steps
-
-[Insert or remove transcript lines in the Azure AI Video Indexer website](edit-transcript-lines-portal.md)
azure-video-indexer Edit Transcript Lines Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/edit-transcript-lines-portal.md
- Title: View and update transcriptions in Azure AI Video Indexer website
-description: This article explains how to insert or remove a transcript line in the Azure AI Video Indexer website. It also shows how to view word-level information.
- Previously updated : 05/03/2022----
-# View and update transcriptions
--
-This article explains how to insert or remove a transcript line in the Azure AI Video Indexer website. It also shows how to view word-level information.
-
-## Insert or remove transcript lines in the Azure AI Video Indexer website
-
-This section explains how to insert or remove a transcript line in the [Azure AI Video Indexer website](https://www.videoindexer.ai/).
-
-### Add new line to the transcript timeline
-
-While in the edit mode, hover between two transcription lines. In the gap between the **ending time** of one **transcript line** and the beginning of the following transcript line, you should see the **add new transcription line** option.
--
-After clicking **add new transcription line**, you have the option to add the new text and the time stamp for the new line. Enter the text, choose the time stamp for the new line, and select **save**. The default time stamp is the gap between the previous and next transcript line.
--
-If there isn't an option to add a new line, you can adjust the end/start time of the relevant transcript lines to fit a new line in your desired place.
-
-Choose an existing transcript line, click the **three dots** icon, select edit, and change the time stamp accordingly.
-
-> [!NOTE]
-> New lines will not appear as part of the **From transcript edits** in the **Content model customization** under languages.
->
-> While using the API, when adding a new line, **Speaker name** can be added using free text. For example, *Speaker 1* can now become *Adam*.
-
-### Edit existing line
-
-While in the edit mode, select the three dots icon. The editing options have been enhanced; they now contain not just the text but also the time stamp, with millisecond accuracy.
-
-### Delete line
-
-Lines can now be deleted through the same three dots icon.
-
-### Consolidate two lines as one
-
-To consolidate two lines that you believe should appear as one, do the following:
-
-1. Go to line number 2 and select edit.
-1. Copy the text.
-1. Delete the line.
-1. Go to line 1, select edit, paste the text, and save.
-
-## Examine word-level transcription information
-
-This section shows how to examine word-level transcription information based on sentences and phrases that Azure AI Video Indexer identified. Each phrase is broken into words, and each word has the following information associated with it:
-
-|Name|Description|Example|
-||||
-|Word|A word from a phrase.|"thanks"|
-|Confidence|How confident Azure AI Video Indexer is that the word is correct.|0.80127704|
-|Offset|The time offset from the beginning of the video to where the word starts.|PT0.86S|
-|Duration|The duration of the word.|PT0.28S|
-
-### Get and view the transcript
-
-1. Sign in to the [Azure AI Video Indexer website](https://www.videoindexer.ai).
-1. Select a video.
-1. In the top-right corner, press arrow down and select **Artifacts (ZIP)**.
-1. Download the artifacts.
-1. Unzip the downloaded file > browse to where the unzipped files are located > find and open `transcript.speechservices.json`.
-1. Format and view the JSON.
-1. Find `RecognizedPhrases` > `NBest` > `Words`, and review the information that interests you.
-
-```json
-"RecognizedPhrases": [
-{
- "RecognitionStatus": "Success",
- "Channel": 0,
- "Speaker": 1,
- "Offset": "PT0.86S",
- "Duration": "PT11.01S",
- "OffsetInTicks": 8600000,
- "DurationInTicks": 110100000,
- "NBest": [
- {
- "Confidence": 0.82356554,
- "Lexical": "thanks for joining ...",
- "ITN": "thanks for joining ...",
- "MaskedITN": "",
- "Display": "Thanks for joining ...",
- "Words": [
- {
- "Word": "thanks",
- "Confidence": 0.80127704,
- "Offset": "PT0.86S",
- "Duration": "PT0.28S",
- "OffsetInTicks": 8600000,
- "DurationInTicks": 2800000
- },
- {
- "Word": "for",
- "Confidence": 0.93965703,
- "Offset": "PT1.15S",
- "Duration": "PT0.13S",
- "OffsetInTicks": 11500000,
- "DurationInTicks": 1300000
- },
- {
- "Word": "joining",
- "Confidence": 0.97060966,
- "Offset": "PT1.29S",
- "Duration": "PT0.31S",
- "OffsetInTicks": 12900000,
- "DurationInTicks": 3100000
- },
- {
-
-```
-
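-As a worked example of using this word-level data, the following PowerShell sketch lists the words whose recognition confidence falls below a chosen threshold. The property names follow the JSON excerpt above; the file path is a placeholder.
-
-```powershell
-# Sketch only: flag words with recognition confidence below 0.85 in the downloaded artifact.
-$transcript = Get-Content ./transcript.speechservices.json -Raw | ConvertFrom-Json
-
-foreach ($phrase in $transcript.RecognizedPhrases) {
-    foreach ($word in $phrase.NBest[0].Words) {
-        if ($word.Confidence -lt 0.85) {
-            "Speaker $($phrase.Speaker) at $($word.Offset): '$($word.Word)' ($($word.Confidence))"
-        }
-    }
-}
-```
-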
-## Next steps
-
-To update transcript lines and text using the API, visit the [Azure AI Video Indexer API developer portal](https://aka.ms/avam-dev-portal).
azure-video-indexer Emotions Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/emotions-detection.md
- Title: Azure AI Video Indexer text-based emotion detection overview
-description: This article gives an overview of Azure AI Video Indexer text-based emotion detection.
- Previously updated : 08/02/2023-----
-# Text-based emotion detection
--
-Emotions detection is an Azure AI Video Indexer AI feature that automatically detects emotions in a video's transcript lines. Each sentence can be detected as:
-
-- *Anger*
-- *Fear*
-- *Joy*
-- *Sad*
-
-Or, none of the above if no other emotion was detected.
-
-The model works on text only (labeling emotions in video transcripts). This model doesn't infer the emotional state of people and may not perform well where input is ambiguous or unclear, like sarcastic remarks. Thus, the model shouldn't be used for things like assessing employee performance or the emotional state of a person.
-
-## General principles
-
-There are many things you need to consider when deciding how to use and implement an AI-powered feature:
-
-- Will this feature perform well in my scenario? Before deploying emotions detection into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
-- Are we equipped to identify and respond to errors? AI-powered products and features aren't 100% accurate, so consider how you identify and respond to any errors that may occur.
-
-## View the insight
-
-When working on the website, the insights are displayed in the **Insights** tab. They can also be generated in a categorized list in a JSON file that includes the ID, type, and a list of instances it appeared at, with their time and confidence.
-
-To display the instances in a JSON file, do the following:
-
-1. Select Download -> Insights (JSON).
-1. Copy the text and paste it into an online JSON viewer.
-
-```json
-"emotions": [
- {
- "id": 1,
- "type": "Sad",
- "instances": [
- {
- "confidence": 0.5518,
- "adjustedStart": "0:00:00",
- "adjustedEnd": "0:00:05.75",
- "start": "0:00:00",
- "end": "0:00:05.75"
- },
-
-```
-
-To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
-
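-After you download the file, a short PowerShell sketch like the following can total the detected instances per emotion type. The `videos[0].insights.emotions` path assumes the standard insights JSON layout; the file name is a placeholder.
-
-```powershell
-# Sketch only: count detected instances per emotion type.
-$insights = Get-Content ./insights.json -Raw | ConvertFrom-Json
-
-$insights.videos[0].insights.emotions | ForEach-Object {
-    [pscustomobject]@{ Emotion = $_.type; Instances = $_.instances.Count }
-} | Sort-Object Instances -Descending
-```
-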
-> [!NOTE]
-> Text-based emotion detection is language independent. However, if the transcript is not in English, it is first translated to English and only then is the model applied. This may cause reduced accuracy in emotions detection for non-English languages.
-
-## Emotions detection components
-
-During the emotions detection procedure, the transcript of the video is processed, as follows:
-
-|Component |Definition |
-|||
-|Source language |The user uploads the source file for indexing. |
-|Transcription API |The audio file is sent to Azure AI services and the translated transcribed output is returned. If a language has been specified, it's processed. |
-|Emotions detection |Each sentence is sent to the emotions detection model. The model produces the confidence level of each emotion. If the confidence level exceeds a specific threshold, and there's no ambiguity between positive and negative emotions, the emotion is detected. In any other case, the sentence is labeled as neutral.|
-|Confidence level |The estimated confidence level of the detected emotions is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as an 0.82 score. |
-
-## Considerations and limitations for input data
-
-Here are some considerations to keep in mind when using emotions detection:
-
-- When uploading a file, always use high-quality audio and video content.
-
-When used responsibly and carefully emotions detection is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:  
--- Always respect an individual’s right to privacy, and only ingest media for lawful and justifiable purposes.  
-Don't purposely disclose inappropriate media showing young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual’s personal freedom.  
-- Commit to respecting and promoting human rights in the design and deployment of your analyzed media.
-- When using third-party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
-- Always seek legal advice when using media from unknown sources.
-- Always obtain appropriate legal and professional advice to ensure that your uploaded media is secured and has adequate controls to preserve the integrity of your content and to prevent unauthorized access.
-- Provide a feedback channel that allows users and individuals to report issues with the service.
-- Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing media containing people.
-- Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision-making.
-- Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
-
-## Transparency Notes
-
-### General
-
-Review [Transparency Note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
-
-### Emotion detection specific
-
-Introduction: This model is designed to help detect emotions in the transcript of a video. However, it isn't suitable for making assessments about an individual's emotional state, their ability, or their overall performance.
-
-Use cases: This emotion detection model is intended to help determine the sentiment behind sentences in the video's transcript. However, it only works on the text itself, and may not perform well for sarcastic input or in cases where input may be ambiguous or unclear.
-
-Information requirements: To increase the accuracy of this model, it is recommended that input data be in a clear and unambiguous format. Users should also note that this model does not have context about input data, which can impact its accuracy.
-
-Limitations: This model can produce both false positives and false negatives. To reduce the likelihood of either, users are advised to follow best practices for input data and preprocessing, and to interpret outputs in the context of other relevant information. It's important to note that the system does not have any context of the input data.
-
-Interpretation: The outputs of this model should not be used to make assessments about an individual's emotional state or other human characteristics. This model is supported in English and may not function properly with non-English inputs. Non-English inputs are translated to English before entering the model, and therefore may produce less accurate results.
-
-### Intended use cases
-
-- Content Creators and Video Editors - Content creators and video editors can use the system to analyze the emotions expressed in the text transcripts of their videos. This helps them gain insights into the emotional tone of their content, allowing them to fine-tune the narrative, adjust pacing, or ensure the intended emotional impact on the audience.
-- Media Analysts and Researchers - Media analysts and researchers can employ the system to analyze the emotional content of a large volume of video transcripts quickly. They can use the emotional timeline generated by the system to identify trends, patterns, or emotional responses in specific topics or areas of interest.
-- Marketing and Advertising Professionals - Marketing and advertising professionals can utilize the system to assess the emotional reception of their campaigns or video advertisements. Understanding the emotions evoked by their content helps them tailor messages more effectively and gauge the success of their campaigns.
-- Video Consumers and Viewers - End-users, such as viewers or consumers of video content, can benefit from the system by understanding the emotional context of videos without having to watch them entirely. This is particularly useful for users who want to decide if a video is worth watching or for those with limited time to spare.
-- Entertainment Industry Professionals - Professionals in the entertainment industry, such as movie producers or directors, can utilize the system to gauge the emotional impact of their film scripts or storylines, aiding in script refinement and audience engagement.
-
-### Considerations when choosing other use cases
-
-- The model should not be used to evaluate employee performance or to monitor individuals.
-- The model should not be used for making assessments about a person, their emotional state, or their ability.
-- The results of the model can be inaccurate, as this is an AI system, and should be treated with caution.
-- The confidence of the model in its prediction should also be taken into account.
-- Non-English videos will produce less accurate results.
-
-## Next steps
-
-### Learn More about Responsible AI
-
-- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
-- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
-- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
-- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
-
-### Contact us
-
-`visupport@microsoft.com`
-
-## Azure AI Video Indexer insights
-
-View some other Azure Video Insights:
-
-- [Audio effects detection](audio-effects-detection.md)
-- [Face detection](face-detection.md)
-- [OCR](ocr.md)
-- [Keywords extraction](keywords.md)
-- [Transcription, Translation & Language identification](transcription-translation-lid.md)
-- [Named entities](named-entities.md)
-- [Observed people tracking & matched persons](observed-matched-people.md)
-- [Topics inference](topics-inference.md)
azure-video-indexer Face Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/face-detection.md
- Title: Face detection overview
-description: Get an overview of face detection in Azure AI Video Indexer.
- Previously updated : 04/17/2023-----
-# Face detection
--
-Face detection, a feature of Azure AI Video Indexer, automatically detects faces in a media file, and then aggregates instances of similar faces into groups. The celebrities recognition model then runs to recognize celebrities.
-
-The celebrities recognition model covers approximately 1 million faces and is based on commonly requested data sources. Faces that Video Indexer doesn't recognize as celebrities are still detected but are left unnamed. You can build your own custom [person model](/azure/azure-video-indexer/customize-person-model-overview) to train Video Indexer to recognize faces that aren't recognized by default.
-
-Face detection insights are generated as a categorized list in a JSON file that includes a thumbnail and either a name or an ID for each face. Selecting a face's thumbnail displays information like the name of the person (if they were recognized), the percentage of the video in which the person appears, and the person's biography, if they're a celebrity. You can also scroll between instances in the video where the person appears.
-
-> [!IMPORTANT]
-> To support Microsoft Responsible AI principles, access to face identification, customization, and celebrity recognition features is limited and based on eligibility and usage criteria. Face identification, customization, and celebrity recognition features are available to Microsoft managed customers and partners. To apply for access, use the [facial recognition intake form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu).
-
-## Prerequisites
-
-Review [Transparency Note for Azure AI Video Indexer](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context).
-
-## General principles
-
-This article discusses face detection and key considerations for using this technology responsibly. You need to consider many important factors when you decide how to use and implement an AI-powered feature, including:
-
-- Will this feature perform well in your scenario? Before you deploy face detection in your scenario, test how it performs by using real-life data. Make sure that it can deliver the accuracy you need.
-- Are you equipped to identify and respond to errors? AI-powered products and features aren't 100 percent accurate, so consider how you'll identify and respond to any errors that occur.
-
-## Key terms
-
-| Term | Definition |
-|||
-| insight | The information and knowledge that you derive from processing and analyzing video and audio files. The insight can include detected objects, people, faces, keyframes, and translations or transcriptions. |
-| face recognition  | Analyzing images to identify the faces that appear in the images. This process is implemented via the Azure AI Face API. |
-| template | Enrolled images of people are converted to templates, which are then used for facial recognition. Machine-interpretable features are extracted from one or more images of an individual to create that individual's template. The enrollment or probe images aren't stored by the Face API, and the original images can't be reconstructed based on a template. Template quality is a key determinant for accuracy in your results. |
-| enrollment | The process of enrolling images of individuals for template creation so that they can be recognized. When a person is enrolled to a verification system that's used for authentication, their template is also associated with a primary identifier that's used to determine which template to compare against the probe template. High-quality images and images that represent natural variations in how a person looks (for instance, wearing glasses and not wearing glasses) generate high-quality enrollment templates. |
-| deep search | The ability to retrieve only relevant video and audio files from a video library by searching for specific terms within the extracted insights.|
-
-## View insights
-
-To see face detection instances on the Azure AI Video Indexer website:
-
-1. When you upload the media file, in the **Upload and index** dialog, select **Advanced settings**.
-1. On the left menu, select **People models**. Select a model to apply to the media file.
-1. After the file is uploaded and indexed, go to **Insights** and scroll to **People**.
-
-To see face detection insights in a JSON file:
-
-1. On the Azure AI Video Indexer website, open the uploaded video.
-1. Select **Download** > **Insights (JSON)**.
-1. Under `insights`, copy the `faces` element and paste it into your JSON viewer.
-
- ```json
- "faces": [
- {
- "id": 1785,
- "name": "Emily Tran",
- "confidence": 0.7855,
- "description": null,
- "thumbnailId": "fd2720f7-b029-4e01-af44-3baf4720c531",
- "knownPersonId": "92b25b4c-944f-4063-8ad4-f73492e42e6f",
- "title": null,
- "imageUrl": null,
- "thumbnails": [
- {
- "id": "4d182b8c-2adf-48a2-a352-785e9fcd1fcf",
- "fileName": "FaceInstanceThumbnail_4d182b8c-2adf-48a2-a352-785e9fcd1fcf.jpg",
- "instances": [
- {
- "adjustedStart": "0:00:00",
- "adjustedEnd": "0:00:00.033",
- "start": "0:00:00",
- "end": "0:00:00.033"
- }
- ]
- },
- {
- "id": "feff177b-dabf-4f03-acaf-3e5052c8be57",
- "fileName": "FaceInstanceThumbnail_feff177b-dabf-4f03-acaf-3e5052c8be57.jpg",
- "instances": [
- {
- "adjustedStart": "0:00:05",
- "adjustedEnd": "0:00:05.033",
- "start": "0:00:05",
- "end": "0:00:05.033"
- }
- ]
- },
- ]
- }
- ]
- ```
-
-To download the JSON file via the API, go to the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
-
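-As an example of working with this output, the following PowerShell sketch lists each detected face together with the points where its thumbnails were captured, based on the `faces` element shown above. The `videos[0].insights` path assumes the standard insights JSON layout; the file name is a placeholder.
-
-```powershell
-# Sketch only: list detected faces and their thumbnail capture times.
-$insights = Get-Content ./insights.json -Raw | ConvertFrom-Json
-
-foreach ($face in $insights.videos[0].insights.faces) {
-    $times = ($face.thumbnails.instances.start | Select-Object -Unique) -join ", "
-    "$($face.name) (confidence $($face.confidence)): thumbnails at $times"
-}
-```
-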
-> [!IMPORTANT]
-> When you review face detections in the UI, you might not see all faces that appear in the video. We expose only face groups that have a confidence of more than 0.5, and the face must appear for a minimum of 4 seconds or 10 percent of the value of `video_duration`. Only when these conditions are met do we show the face in the UI and in the *Insights.json* file. You can always retrieve all face instances from the face artifact file by using the API: `https://api.videoindexer.ai/{location}/Accounts/{accountId}/Videos/{videoId}/ArtifactUrl[?Faces][&accessToken]`.
-
-## Face detection components
-
-The following table describes how images in a media file are processed during the face detection procedure:
-
-| Component | Definition |
-|||
-| source file | The user uploads the source file for indexing. |
-| detection and aggregation | The face detector identifies the faces in each frame. The faces are then aggregated and grouped. |
-| recognition | The celebrities model processes the aggregated groups to recognize celebrities. If you've created your own people model, it also processes groups to recognize other people. If people aren't recognized, they're labeled Unknown1, Unknown2, and so on. |
-| confidence value | Where applicable for well-known faces or for faces that are identified in the customizable list, the estimated confidence level of each label is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82 percent certainty is represented as an 0.82 score. |
-
-## Example use cases
-
-The following list describes examples of common use cases for face detection:
-
-- Summarize where an actor appears in a movie or reuse footage by deep searching specific faces in organizational archives for insight about a specific celebrity.
-- Get improved efficiency when you create feature stories at a news agency or sports agency. Examples include deep searching a celebrity or a football player in organizational archives.
-- Use faces that appear in a video to create promos, trailers, or highlights. Video Indexer can assist by adding keyframes, scene markers, time stamps, and labeling so that content editors invest less time reviewing numerous files.
-
-## Considerations for choosing a use case
-
-Face detection is a valuable tool for many industries when it's used responsibly and carefully. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend that you follow these use guidelines:
-
-- Carefully consider the accuracy of the results. To promote more accurate detection, check the quality of the video. Low-quality video might affect the insights that are presented.
-- Carefully review results if you use face detection for law enforcement. People might not be detected if they're small, sitting, crouching, or obstructed by objects or other people. To ensure fair and high-quality decisions, combine face detection-based automation with human oversight.
-- Don't use face detection for decisions that might have serious, adverse impacts. Decisions that are based on incorrect output can have serious, adverse impacts. It's advisable to include human review of decisions that have the potential for serious impacts on individuals.
-- Always respect an individual's right to privacy, and ingest videos only for lawful and justifiable purposes.
-- Don't purposely disclose inappropriate content about young children, family members of celebrities, or other content that might be detrimental to or pose a threat to an individual's personal freedom.
-- Commit to respecting and promoting human rights in the design and deployment of your analyzed media.
-- If you use third-party materials, be aware of any existing copyrights or required permissions before you distribute content that's derived from them.
-- Always seek legal advice if you use content from an unknown source.
-- Always obtain appropriate legal and professional advice to ensure that your uploaded videos are secured and that they have adequate controls to preserve content integrity and prevent unauthorized access.
-- Provide a feedback channel that allows users and individuals to report issues they might experience with the service.
-- Be aware of any applicable laws or regulations that exist in your area about processing, analyzing, and sharing media that features people.
-- Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision making.
-- Fully examine and review the potential of any AI model that you're using to understand its capabilities and limitations.
-
-## Related content
-
-Learn more about Responsible AI:
-
-- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
-- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
-- [Microsoft Azure Learn training courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
-- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
-
-Azure AI Video Indexer insights:
-
-- [Audio effects detection](audio-effects-detection.md)
-- [OCR](ocr.md)
-- [Keywords extraction](keywords.md)
-- [Transcription, translation, and language identification](transcription-translation-lid.md)
-- [Labels identification](labels-identification.md)
-- [Named entities](named-entities.md)
-- [Observed people tracking and matched persons](observed-matched-people.md)
-- [Topics inference](topics-inference.md)
azure-video-indexer Face Redaction With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/face-redaction-with-api.md
- Title: Redact faces by using Azure AI Video Indexer API
-description: Learn how to use the Azure AI Video Indexer face redaction feature by using API.
- Previously updated : 08/11/2023----
-# Redact faces by using Azure AI Video Indexer API
--
-You can use Azure AI Video Indexer to detect and identify faces in video. To modify your video to blur (redact) the faces of specific individuals, you can use the API.
-
-A few minutes of footage that contain multiple faces can take hours to redact manually, but by using presets in Video Indexer API, the face redaction process requires just a few simple steps.
-
-This article shows you how to redact faces by using an API. Video Indexer API includes a **Face Redaction** preset that offers scalable face detection and redaction (blurring) in the cloud. The article demonstrates each step of how to redact faces by using the API in detail.
-
-The following video shows how to redact a video by using Azure AI Video Indexer API.
-
-> [!VIDEO https://www.microsoft.com/videoplayer/embed/RW16UBo]
-
-## Compliance, privacy, and security
-
-As an important [reminder](limited-access-features.md), you must comply with all applicable laws in your use of analytics or insights that you derive by using Video Indexer.
-
-Face service access is limited based on eligibility and usage criteria to support the Microsoft Responsible AI principles. Face service is available only to Microsoft managed customers and partners. Use the [Face Recognition intake form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu) to apply for access. For more information, see the [Face limited access page](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext).
-
-## Face redaction terminology and hierarchy
-
-Face redaction in Video Indexer relies on the output of existing Video Indexer face detection results that we provide in our Video Standard and Advanced Analysis presets.
-
-To redact a video, you must first upload a video to Video Indexer and complete an analysis by using the **Standard** or **Advanced** video presets. You can do this by using the [Azure Video Indexer website](https://www.videoindexer.ai/media/library) or [API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video). You can then use face redaction API to reference this video by using the `videoId` value. We create a new video in which the indicated faces are redacted. Both the video analysis and face redaction are separate billable jobs. For more information, see our [pricing page](https://azure.microsoft.com/pricing/details/video-indexer/).
-
-## Types of blurring
-
-You can choose from different types of blurring in face redaction. To select a type, use a name or representative number for the `blurringKind` parameter in the request body:
-
-|blurringKind number | blurringKind name | Example |
-||||
-|0| MediumBlur|:::image type="content" source="./media/face-redaction-with-api/medium-blur.png" alt-text="Photo of the Azure AI Video Indexer medium blur.":::|
-|1| HighBlur|:::image type="content" source="./media/face-redaction-with-api/high-blur.png" alt-text="Photo of the Azure AI Video Indexer high blur.":::|
-|2| LowBlur|:::image type="content" source="./media/face-redaction-with-api/low-blur.png" alt-text="Photo of the Azure AI Video Indexer low blur.":::|
-|3| BoundingBox|:::image type="content" source="./media/face-redaction-with-api/bounding-boxes.png" alt-text="Photo of Azure AI Video Indexer bounding boxes.":::|
-|4| Black|:::image type="content" source="./media/face-redaction-with-api/black-boxes.png" alt-text="Photo of Azure AI Video Indexer black boxes kind.":::|
-
-You can specify the kind of blurring in the request body by using the `blurringKind` parameter.
-
-Here's an example:
-
-```json
-{
- "faces": {
- "blurringKind": "HighBlur"
- }
-}
-```
-
-Or, use a number that represents the type of blurring that's described in the preceding table:
-
-```json
-{
- "faces": {
- "blurringKind": 1
- }
-}
-```
-
-## Filters
-
-You can apply filters to set which face IDs to blur. You can specify the IDs of the faces in a comma-separated array in the body of the JSON file. Use the `scope` parameter to exclude or include these faces for redaction. By specifying IDs, you can either redact all faces *except* the IDs that you indicate or redact *only* those IDs. See examples in the next sections.
-
-### Exclude scope
-
-In the following example, to redact all faces except face IDs 1001 and 1016, use the `Exclude` scope:
-
-```json
-{
- "faces": {
- "blurringKind": "HighBlur",
- "filter": {
- "ids": [1001, 1016],
- "scope": "Exclude"
- }
- }
-}
-```
-
-### Include scope
-
-In the following example, to redact only face IDs 1001 and 1016, use the `Include` scope:
-
-```json
-{
- "faces": {
- "blurringKind": "HighBlur",
- "filter": {
- "ids": [1001, 1016],
- "scope": "Include"
- }
- }
-}
-```
-
-### Redact all faces
-
-To redact all faces, remove the scope filter:
-
-```json
-{
- "faces": {
-    "blurringKind": "HighBlur"
- }
-}
-```
-
-To retrieve a face ID, you can go to the indexed video and retrieve the [artifact file](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Artifact-Download-Url). The artifact contains a *faces.json* file and a thumbnail .zip file that has all the faces that were detected in the video. You can match the face to the ID and decide which face IDs to redact.
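If you prefer to script this step, here's a minimal Python sketch (using the `requests` library) that reads *faces.json* from a downloaded Faces artifact. It assumes you already obtained the artifact download URL (a SAS link) through the linked artifact operation, for example by trying it in the API portal; the exact file names inside the archive are assumptions to verify against your own artifact.

```python
import io
import json
import zipfile

import requests

# Assumption: you already retrieved a download URL (SAS link) for the Faces artifact
# through the Get-Video-Artifact-Download-Url operation linked above.
artifact_download_url = "<faces-artifact-sas-url>"

# Download the artifact .zip into memory and read faces.json from it.
archive = zipfile.ZipFile(io.BytesIO(requests.get(artifact_download_url, timeout=60).content))
faces = json.loads(archive.read("faces.json"))  # assumed file name inside the archive

# Inspect the output and note the face IDs you want to pass in the redaction filter.
print(json.dumps(faces, indent=2)[:2000])
```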
-
-## Create a redaction job
-
-To create a redaction job, you can invoke the following API call:
-
-```http
-POST https://api.videoindexer.ai/{location}/Accounts/{accountId}/Videos/{videoId}/redact[?name][&priority][&privacy][&externalId][&streamingPreset][&callbackUrl][&accessToken]
-```
-
-The following values are required:
-
-| Name | Value | Description |
-||||
-|`Accountid` |`{accountId}`| The ID of your Video Indexer account. |
-| `Location` |`{location}`| The Azure region where your Video Indexer account is located. For example, westus. |
-|`AccessToken` |`{token}`| The token that has Account Contributor rights generated through the [Azure Resource Manager](/rest/api/videoindexer/stable/generate/access-token?tabs=HTTP) REST API. |
-| `Videoid` |`{videoId}`| The video ID of the source video to redact. You can retrieve the video ID by using the [List Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=List-Videos) API. |
-| `Name` |`{name}`|The name of the new, redacted video. |
-
-Here's an example of a request:
-
-```http
-https://api.videoindexer.ai/westeurope/Accounts/{id}/Videos/{id}/redact?priority=Low&name=testredaction&privacy=Private&streamingPreset=Default
-```
-
-You can specify the token as an authorization header that has a key value type of `bearertoken:{token}`, or you can provide it as a query parameter by using `?token={token}`.
-
-You also need to add a request body in JSON format with the redaction job options to apply. Here's an example:
-
-```json
-{
- "faces": {
- "blurringKind": "HighBlur"
- }
-}
-```
-
-When the request is successful, you receive the response `HTTP 202 ACCEPTED`.
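For example, a minimal Python sketch of the request above, using the endpoint, query parameters, and request body described in this section (the placeholders aren't real identifiers):

```python
import requests

location = "westeurope"                          # your account's Azure region
account_id = "<account-id>"
video_id = "<source-video-id>"
access_token = "<account-contributor-access-token>"

response = requests.post(
    f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Videos/{video_id}/redact",
    params={
        "name": "testredaction",
        "priority": "Low",
        "privacy": "Private",
        "streamingPreset": "Default",
        "accessToken": access_token,
    },
    json={"faces": {"blurringKind": "HighBlur"}},  # redaction options from the request body above
    timeout=30,
)

print(response.status_code)               # 202 when the job is accepted
print(response.headers.get("Location"))   # job status URL to poll
```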
-
-## Monitor job status
-
-In the response of the job creation request, you receive an HTTP header `Location` that has a URL to the job. You can use the same token to make a GET request to this URL to see the status of the redaction job.
-
-Here's an example URL:
-
-```http
-https://api.videoindexer.ai/westeurope/Accounts/<id>/Jobs/<id>
-```
-
-Here's an example response:
-
-```json
-{
- "creationTime": "2023-05-11T11:22:57.6114155Z",
- "lastUpdateTime": "2023-05-11T11:23:01.7993563Z",
- "progress": 20,
- "jobType": "Redaction",
- "state": "Processing"
-}
-```
-
-If you call the same URL when the redaction job is completed, in the `Location` header, you get a storage shared access signature (SAS) URL to the redacted video. For example:
-
-```http
-https://api.videoindexer.ai/westeurope/Accounts/<id>/Videos/<id>/SourceFile/DownloadUrl
-```
-
-This URL redirects to the .mp4 file that's stored in the Azure Storage account.
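A Python sketch of polling the job status URL and then downloading the redacted video might look like the following. The terminal state names and the assumption that the URL in the `Location` header can be fetched directly aren't confirmed by this article, so verify them against the responses your account returns.

```python
import time

import requests

job_status_url = "<location-header-url-from-the-create-request>"
access_token = "<account-contributor-access-token>"

while True:
    job = requests.get(job_status_url, params={"accessToken": access_token}, timeout=30)
    status = job.json()
    print(status.get("state"), status.get("progress"))
    # "Queued" and "Processing" are assumed in-progress state names.
    if status.get("state") not in ("Queued", "Processing"):
        break
    time.sleep(30)

# When the job is done, the Location header of the same call points at the redacted
# video; requests follows the redirect to the .mp4 stored in the Azure Storage account.
download_url = job.headers.get("Location")
if download_url:
    with open("redacted.mp4", "wb") as f:
        f.write(requests.get(download_url, timeout=300).content)
```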
-
-## FAQs
-
-| Question | Answer |
-|||
-| Can I upload a video and redact in one operation? | No. You need to first upload and analyze a video by using Video Indexer API. Then, reference the indexed video in your redaction job. |
-| Can I use the [Azure AI Video Indexer website](https://www.videoindexer.ai/) to redact a video? | No. Currently you can use only the API to create a redaction job.|
-| Can I play back the redacted video by using the Video Indexer [website](https://www.videoindexer.ai/)?| Yes. The redacted video is visible on the Video Indexer website like any other indexed video, but it doesn't contain any insights. |
-| How do I delete a redacted video? | You can use the [Delete Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Video) API and provide the `Videoid` value for the redacted video. |
-| Do I need to pass facial identification gating to use face redaction? | Unless you represent a police department in the United States, no. Even if you're gated, we continue to offer face detection. We don't offer face identification if you're gated. However, you can redact all faces in a video by using only face detection. |
-| Will face redaction overwrite my original video? | No. The face redaction job creates a new video output file. |
-| Not all faces are properly redacted. What can I do? | Redaction relies on the initial face detection and tracking output of the analysis pipeline. Although we detect all faces most of the time, there are circumstances in which we can't detect a face. Factors like face angle, the number of frames the face is present, and the quality of the source video affect the quality of face redaction. For more information, see [Face insights](face-detection.md). |
-| Can I redact objects other than faces? | No. Currently, we offer only face redaction. If you have a need to redact other objects, you can provide feedback about our product in the [Azure User Voice](https://feedback.azure.com/d365community/forum/8952b9e3-e03b-ec11-8c62-00224825aadf) channel. |
-| How long is an SAS URL valid to download the redacted video? | To download the redacted video after the SAS URL expires, you need to call the initial job status URL. It's best to keep these `Jobstatus` URLs in a database in your back end for future reference. |
-
-## Error codes
-
-The following sections describe errors that might occur when you use face redaction.
-
-### Response: 404 Not Found
-
-The account wasn't found or the video wasn't found.
-
-#### Response headers
-
-| Name | Required | Type | Description |
-| - | - | - | - |
-| `x-ms-request-id` | false | string | A globally unique identifier (GUID) for the request is assigned by the server for instrumentation purposes. The server makes sure that all logs that are associated with handling the request can be linked to the server request ID. A client can provide this request ID in a support ticket so that support engineers can find the logs that are linked to this specific request. The server makes sure that the request ID is unique for each job. |
-
-#### Response body
-
-| Name | Required | Type |
-| - | - | - |
-| `ErrorType` | false | `ErrorType` |
-| `Message` | false | string |
-
-#### Default JSON
-
-```json
-{
- "ErrorType": "GENERAL",
- "Message": "string"
-}
-```
-
-### Response: 400 Bad Request
-
-Invalid input or can't redact the video because its original upload failed. Upload the video again.
-
-#### Response headers
-
-| Name | Required | Type | Description |
-| - | - | - | - |
-| `x-ms-request-id` | false | string | A GUID for the request is assigned by the server for instrumentation purposes. The server makes sure that all logs that are associated with handling the request can be linked to the server request ID. A client can provide this request ID in a support ticket so that support engineers can find the logs that are linked to this specific request. The server makes sure that the request ID is unique for each job. |
-
-#### Response body
-
-| Name | Required | Type |
-| - | - | - |
-| `ErrorType` | false | `ErrorType` |
-| `Message` | false | string |
-
-#### Default JSON
-
-```json
-{
- "ErrorType": "GENERAL",
- "Message": "string"
-}
-```
-
-### Response: 409 Conflict
-
-The video is already being indexed.
-
-#### Response headers
-
-| Name | Required | Type | Description |
-| - | - | - | - |
-| `x-ms-request-id` | false | string | A GUID for the request is assigned by the server for instrumentation purposes. The server makes sure that all logs that are associated with handling the request can be linked to the server request ID. A client can provide this request ID in a support ticket so that support engineers can find the logs that are linked to this specific request. The server makes sure that the request ID is unique for each job.|
-
-#### Response body
-
-| Name | Required | Type |
-| - | - | - |
-| `ErrorType` | false | `ErrorType` |
-| `Message` | false | string |
-
-#### Default JSON
-
-```json
-{
- "ErrorType": "GENERAL",
- "Message": "string"
-}
-```
-
-### Response: 401 Unauthorized
-
-The access token isn't authorized to access the account.
-
-#### Response headers
-
-| Name | Required | Type | Description |
-| - | - | - | - |
-| `x-ms-request-id` | false | string | A GUID for the request is assigned by the server for instrumentation purposes. The server makes sure that all logs that are associated with handling the request can be linked to the server request ID. A client can provide this request ID in a support ticket so that support engineers can find the logs that are linked to this specific request. The server makes sure that the request ID is unique for each job. |
-
-#### Response body
-
-| Name | Required | Type |
-| - | - | - |
-| `ErrorType` | false | `ErrorType` |
-| `Message` | false | string |
-
-#### Default JSON
-
-```json
-{
- "ErrorType": "USER_NOT_ALLOWED",
- "Message": "Access token is not authorized to access account 'SampleAccountId'."
-}
-```
-
-### Response: 500 Internal Server Error
-
-An error occurred on the server.
-
-#### Response headers
-
-| Name | Required | Type | Description |
-| - | - | - | - |
-| `x-ms-request-id` | false | string | A GUID for the request is assigned by the server for instrumentation purposes. The server makes sure that all logs that are associated with handling the request can be linked to the server request ID. A client can provide this request ID in a support ticket so that support engineers can find the logs that are linked to this specific request. The server makes sure that the request ID is unique for each job. |
-
-#### Response body
-
-| Name | Required | Type |
-| - | - | - |
-| `ErrorType` | false | `ErrorType` |
-| `Message` | false | string |
-
-#### Default JSON
-
-```json
-{
- "ErrorType": "GENERAL",
- "Message": "There was an error."
-}
-```
-
-### Response: 429 Too many requests
-
-Too many requests were sent. Use the `Retry-After` response header to decide when to send the next request.
-
-#### Response headers
-
-| Name | Required | Type | Description |
-| - | - | - | - |
-| `Retry-After` | false | integer | A non-negative decimal integer that indicates the number of seconds to delay after the response is received. |
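For example, a small Python sketch that honors `Retry-After` before resending a request:

```python
import time

import requests

def get_with_retry(url, params, max_attempts=5):
    """Send a GET request and back off for Retry-After seconds on HTTP 429."""
    response = None
    for _ in range(max_attempts):
        response = requests.get(url, params=params, timeout=30)
        if response.status_code != 429:
            return response
        # Retry-After is the number of seconds to wait before the next request.
        time.sleep(int(response.headers.get("Retry-After", "5")))
    return response
```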
-
-### Response: 504 Gateway Timeout
-
-The server didn't respond to the gateway within the expected time.
-
-#### Response headers
-
-| Name | Required | Type | Description |
-| - | - | - | - |
-| `x-ms-request-id` | false | string | A GUID for the request is assigned by the server for instrumentation purposes. The server makes sure that all logs that are associated with handling the request can be linked to the server request ID. A client can provide this request ID in a support ticket so that support engineers can find the logs that are linked to this specific request. The server makes sure that the request ID is unique for each job. |
-
-#### Default JSON
-
-```json
-{
- "ErrorType": "SERVER_TIMEOUT",
- "Message": "Server did not respond to gateway within expected time"
-}
-```
-
-## Next steps
-
-- Learn more about [Video Indexer](https://azure.microsoft.com/pricing/details/video-indexer/).
-- See [Azure pricing](https://azure.microsoft.com/pricing/) for encoding, streaming, and storage billed by Azure service providers.
azure-video-indexer Import Content From Trial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/import-content-from-trial.md
- Title: Import your content from the trial account
-description: Learn how to import your content from the trial account.
- Previously updated : 12/19/2022-----
-# Import content from your trial account to a regular account
--
-If you would like to transition from the Video Indexer trial account experience to a regular paid account, Video Indexer allows you to import the content of your trial account into your new regular account at no cost.
-
-When might you want to switch from a trial to a regular account?
-
-* If you have used up the free trial minutes and want to continue indexing.
-* You are ready to start using Video Indexer for production workloads.
-* You want an experience which doesn't have minute, support, or SLA limitations.
-
-## Create a new ARM account for the import
-
-* First, create the regular account. It must already exist and be available before you perform the import. Azure AI Video Indexer accounts are Azure Resource Manager (ARM) based and account creation can be performed through the Azure portal (see [Create an account with the Azure portal](create-account-portal.md)) or API (see [Create accounts with API](/rest/api/videoindexer/stable/accounts)).
-* The target ARM-based account has to be an empty account that has not yet been used to index any media files.
-* Import from trial can be performed only once per trial account.
-
-## Import your data
-
-To import your data, follow the steps:
-
- 1. Go to the [Azure AI Video Indexer website](https://aka.ms/vi-portal-link)
- 2. Select your trial account and go to the **Account settings** page.
- 3. Click the **Import content to an ARM-based account**.
- 4. From the dropdown menu choose the ARM-based account you wish to import the data to.
-
- * If the account ID isn't showing, you can copy and paste the account ID from the Azure portal or from the list of accounts under the User account blade at the top right of the Azure AI Video Indexer Portal.
-
- 5. Click **Import content**
-
- :::image type="content" alt-text="Screenshot that shows how to import your data." source="./media/create-account/import-to-arm-account.png":::
-
-All media as well as your customized content model will be copied from the trial account into the new ARM-based account.
--
azure-video-indexer Indexing Configuration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/indexing-configuration-guide.md
- Title: Indexing configuration guide
-description: This article explains the configuration options of indexing process with Azure AI Video Indexer.
- Previously updated : 04/27/2023----
-# The indexing configuration guide
--
-It's important to understand the configuration options to index efficiently while ensuring you meet your indexing objectives. When indexing videos, users can use the default settings or adjust many of the settings. Azure AI Video Indexer allows you to choose between a range of language, indexing, custom models, and streaming settings that have implications on the insights generated, cost, and performance.
-
-This article explains each of the options and the impact of each option to enable informed decisions when indexing. The article discusses the [Azure AI Video Indexer website](https://www.videoindexer.ai/) experience but the same options apply when submitting jobs through the API (see the [API guide](video-indexer-use-apis.md)). When indexing large volumes, follow the [at-scale guide](considerations-when-use-at-scale.md).
-
-The initial upload screen presents options to define the video name, source language, and privacy settings.
--
-All the other setting options appear if you select Advanced options.
--
-## Default settings
-
-By default, Azure AI Video Indexer is configured to a **Video source language** of English, **Privacy** of private, **Standard** audio and video setting, and **Streaming quality** of single bitrate.
-
-> [!TIP]
-> This topic describes each indexing option in detail.
-
-Below are a few examples of when using the default setting might not be a good fit:
-
-- If you need insights such as observed people or matched person, which are only available through Advanced Video.
-- If you're only using Azure AI Video Indexer for transcription and translation, indexing of both audio and video isn't required; **Basic** for audio should suffice.
-- If you're consuming Azure AI Video Indexer insights but have no need to generate a new media file, streaming isn't necessary and **No streaming** should be selected to avoid the encoding job and its associated cost.
-- If a video is primarily in a language that isn't English.
-
-### Video source language
-
-If you're aware of the language spoken in the video, select the language from the video source language list. If you're unsure of the language of the video, choose **Auto-detect single language**. When uploading and indexing your video, Azure AI Video Indexer will use language identification (LID) to detect the video's language and generate transcription and insights with the detected language.
-
-If the video may contain multiple languages and you aren't sure which ones, select **Auto-detect multi-language**. In this case, multi-language (MLID) detection will be applied when uploading and indexing your video.
-
-While auto-detect is a great option when the language in your videos varies, there are two points to consider when using LID or MLID:
-
-- LID/MLID don't support all the languages supported by Azure AI Video Indexer.
-- The transcription is of a higher quality when you pre-select the video's appropriate language.
-
-Learn more about [language support and supported languages](language-support.md).
-
-### Privacy
-
-This option allows you to determine if the insights should only be accessible to users in your Azure AI Video Indexer account or to anyone with a link.
-
-### Indexing options
-
-When indexing a video with the default settings, be aware that each of the audio and video indexing options may be priced differently. See [Azure AI Video Indexer pricing](https://azure.microsoft.com/pricing/details/video-indexer/) for details.
-
-Below are the indexing type options with details of their insights provided. To modify the indexing type, select **Advanced settings**.
-
-|Audio only|Video only |Audio & Video |
-||||
-|Basic |||
-|Standard| Standard |Standard |
-|Advanced |Advanced|Advanced |
-
-## Advanced settings
-
-### Audio only
-
-- **Basic**: Indexes and extracts insights by using audio only (ignoring video) and provides the following insights: transcription, translation, formatting of output captions and subtitles (closed captions).
-- **Standard**: Indexes and extracts insights by using audio only (ignoring video) and provides the following insights: transcription, translation, formatting of output captions and subtitles (closed captions), automatic language detection, emotions, keywords, named entities (brands, locations, people), sentiments, speakers, topic extraction, and textual content moderation.
-- **Advanced**: Indexes and extracts insights by using audio only (ignoring video) and provides the following insights: transcription, translation, formatting of output captions and subtitles (closed captions), automatic language detection, audio event detection, emotions, keywords, named entities (brands, locations, people), sentiments, speakers, topic extraction, and textual content moderation.
-
-### Video only
-
-- **Standard**: Indexes and extracts insights by using video only (ignoring audio) and provides the following insights: labels (OCR), named entities (OCR - brands, locations, people), OCR, people, scenes (keyframes and shots), black frames, visual content moderation, and topic extraction (OCR).
-- **Advanced**: Indexes and extracts insights by using video only (ignoring audio) and provides the following insights: labels (OCR), matched person (preview), named entities (OCR - brands, locations, people), OCR, observed people (preview), people, scenes (keyframes and shots), clapperboard detection, digital pattern detection, featured clothing insight, textless slate detection, textual logo detection, black frames, visual content moderation, and topic extraction (OCR).
-
-### Audio and Video
-
-- **Standard**: Indexes and extracts insights by using audio and video and provides the following insights: transcription, translation, formatting of output captions and subtitles (closed captions), automatic language detection, emotions, keywords, named entities (brands, locations, people), OCR, scenes (keyframes and shots), black frames, visual content moderation, people, sentiments, speakers, topic extraction, and textual content moderation.
-- **Advanced**: Indexes and extracts insights by using audio and video and provides the following insights: transcription, translation, formatting of output captions and subtitles (closed captions), automatic language detection, textual content moderation, audio event detection, emotions, keywords, matched person, named entities (brands, locations, people), OCR, observed people (preview), people, clapperboard detection, digital pattern detection, featured clothing insight, textless slate detection, sentiments, speakers, scenes (keyframes and shots), textual logo detection, black frames, visual content moderation, and topic extraction.
-
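When you submit jobs through the API instead of the website, the indexing type is selected with preset parameters on the upload call. The following Python sketch is only an illustration; the parameter names (`indexingPreset`, `streamingPreset`) and the values shown are assumptions to verify against the Upload Video API reference.

```python
import requests

location = "<location>"
account_id = "<account-id>"
access_token = "<access-token>"

# Assumed parameter names and values; check the Upload Video API reference.
response = requests.post(
    f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Videos",
    params={
        "name": "my-video",
        "videoUrl": "https://example.com/my-video.mp4",
        "indexingPreset": "AdvancedVideo",   # advanced video-only indexing
        "streamingPreset": "NoStreaming",    # skip encoding if you only need insights
        "accessToken": access_token,
    },
    timeout=60,
)
print(response.status_code, response.json())
```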
-### Streaming quality options
-
-When indexing a video, you can decide whether the file should be encoded, which enables streaming. The sequence is as follows:
-
-Upload > Encode (optional) > Index & Analysis > Publish for streaming (optional)
-
-Encoding and streaming operations are performed by and billed by Azure Media Services. There are two additional operations associated with the creation of a streaming video:
-
-- The creation of a Streaming Endpoint.
-- Egress traffic – the volume depends on the number of video playbacks, video playback length, and the video quality (bitrate).
-
-There are several aspects that influence the total costs of the encoding job. The first is whether the encoding uses single or adaptive bitrate streaming, which creates either a single output or multiple encoding quality outputs. Each output is billed separately and depends on the source quality of the video you uploaded to Azure AI Video Indexer.
-
-For Media Services encoding pricing details, see [pricing](https://azure.microsoft.com/pricing/details/media-services/#pricing).
-
-When indexing a video, default streaming settings are applied. Below are the streaming type options that can be modified if you select **Advanced** settings and go to **Streaming quality**.
-
-|Single bitrate|Adaptive bitrate| No streaming |
-||||
-
-- **Single bitrate**: With Single Bitrate, the standard Media Services encoder cost will apply for the output. If the video height is greater than or equal to 720p HD, Azure AI Video Indexer encodes it with a resolution of 1280 x 720. Otherwise, it's encoded as 640 x 468. The default setting is content-aware encoding.
-- **Adaptive bitrate**: With Adaptive Bitrate, if you upload a video in 720p HD single bitrate to Azure AI Video Indexer and select Adaptive Bitrate, the encoder will use the [AdaptiveStreaming](/rest/api/media/transforms/create-or-update?tabs=HTTP#encodernamedpreset) preset. An output of 720p HD (no output exceeding 720p HD is created) and several lower quality outputs are created (for playback on smaller screens/low bandwidth environments). Each output will use the Media Encoder Standard base price and apply a multiplier for each output. The multiplier is 2x for HD, 1x for non-HD, and 0.25 for audio, and billing is per minute of the input video.
-
- **Example**: If you index a video in the US East region that is 40 minutes in length and is 720p HD and you selected the streaming option of Adaptive Bitrate, 3 outputs are created - 1 HD (multiplied by 2), 1 SD (multiplied by 1), and 1 audio track (multiplied by 0.25). This totals (2+1+0.25) * 40 = 130 billable output minutes.
-
- Output minutes (standard encoder): 130 x $0.015/minute = $1.95. (A short calculation sketch follows this list.)
-- **No streaming**: Insights are generated but no streaming operation is performed and the video isn't available on the Azure AI Video Indexer website. When No streaming is selected, you aren't billed for encoding. -
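A short Python sketch of the billing arithmetic in the example above, using the multipliers stated in this section (2x for HD, 1x for non-HD, 0.25 for audio); the per-minute price is the example's value, so check the pricing page for your region:

```python
output_multipliers = [2, 1, 0.25]        # 1 HD output, 1 SD output, 1 audio track
input_minutes = 40
price_per_output_minute = 0.015          # USD, standard encoder (example value)

billable_output_minutes = sum(output_multipliers) * input_minutes
cost = billable_output_minutes * price_per_output_minute
print(billable_output_minutes, round(cost, 2))   # 130.0 output minutes, 1.95
```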
-### Customizing content models
-
-Azure AI Video Indexer allows you to customize some of its models to be adapted to your specific use case. These models include brands, language, and person. If you have customized models, this section enables you to configure if one of the created models should be used for the indexing.
-
-## Next steps
-
-Learn more about [language support and supported languages](language-support.md).
azure-video-indexer Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/insights-overview.md
- Title: Azure AI Video Indexer insights overview
-description: This article gives a brief overview of Azure AI Video Indexer insights.
- Previously updated : 08/02/2023----
-# Azure AI Video Indexer insights
--
-When a video is indexed, Azure AI Video Indexer analyzes the video and audio content by running 30+ AI models, generating rich insights. Insights contain an aggregated view of the data: transcripts, optical character recognition elements (OCRs), faces, topics, emotions, etc. Once the video is indexed and analyzed, Azure AI Video Indexer produces JSON output that contains details of the video insights. For example, each insight type includes instances of time ranges that show when the insight appears in the video.
-
-Read details about the following insights here:
-
-- [Audio effects detection](audio-effects-detection-overview.md)
-- [Text-based emotion detection](emotions-detection.md)
-- [Faces detection](face-detection.md)
-- [OCR](ocr.md)
-- [Keywords extraction](keywords.md)
-- [Transcription, translation, language](transcription-translation-lid.md)
-- [Labels identification](labels-identification.md)
-- [Named entities](named-entities.md)
-- [Observed people tracking & matched faces](observed-matched-people.md)
-- [Topics inference](topics-inference.md)
-
-For information about features and other insights, see:
-
-- [Azure AI Video Indexer overview](video-indexer-overview.md)
-- [Transparency note](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
-
-Once you [set up](video-indexer-get-started.md) an Azure AI Video Indexer account (see [account types](accounts-overview.md)) and [upload a video](upload-index-videos.md), you can view insights as described below.
-
-## Get the insights using the website
-
-To visually examine the video's insights, press the **Play** button on the video on the [Azure AI Video Indexer](https://www.videoindexer.ai/) website.
-
-![Screenshot of the Insights tab in Azure AI Video Indexer.](./media/video-indexer-output-json/video-indexer-summarized-insights.png)
-
-To get insights produced on the website or the Azure portal:
-
-1. Browse to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website and sign in.
-1. Find a video whose output you want to examine.
-1. Press **Play**.
-1. Choose the **Insights** tab.
-2. Select which insights you want to view (under the **View** drop-down, on the right-top corner).
-3. Go to the **Timeline** tab to see timestamped transcript lines.
-4. Select **Download** > **Insights (JSON)** to get the insights output file.
-5. If you want to download artifacts, beware of the following:
-
- [!INCLUDE [artifacts](./includes/artifacts.md)]
-
-## Get insights produced by the API
-
-When indexing with an API and the response status is OK, you get a detailed JSON output as the response content. When calling the [Get Video Index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index) API, we recommend passing `&includeSummarizedInsights=false`.
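For example, a minimal Python sketch of calling Get Video Index with `includeSummarizedInsights=false`; the endpoint shape follows the linked operation, so verify it in the API portal for your account:

```python
import requests

location = "<location>"
account_id = "<account-id>"
video_id = "<video-id>"
access_token = "<access-token>"

index = requests.get(
    f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Videos/{video_id}/Index",
    params={"includeSummarizedInsights": "false", "accessToken": access_token},
    timeout=60,
).json()

# The insights sit under root/videos/insights in the output JSON.
insights = index["videos"][0]["insights"]
print(insights.get("sourceLanguage"), len(insights.get("transcript", [])))
```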
--
-This API returns a URL only with a link to the specific resource type you request. An additional GET request must be made to this URL for the specific artifact. The file types for each artifact type vary depending on the artifact.
--
-## Examine the Azure AI Video Indexer output
-
-For more information, see [Examine the Azure AI Video Indexer output]( video-indexer-output-json-v2.md).
-
-## Next steps
-
-[View and edit video insights](video-indexer-view-edit.md).
azure-video-indexer Keywords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/keywords.md
- Title: Azure AI Video Indexer keywords extraction overview
-description: An introduction to Azure AI Video Indexer keywords extraction component responsibly.
- Previously updated : 06/15/2022-----
-# Keywords extraction
--
-Keywords extraction is an Azure AI Video Indexer AI feature that automatically detects insights on the different keywords discussed in media files. Keywords extraction can extract insights in both single language and multi-language media files. The total number of extracted keywords and their categories are listed in the Insights tab, where clicking a Keyword and then clicking Play Previous or Play Next jumps to the keyword in the media file.
-
-## Prerequisites
-
-Review [Transparency Note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
-
-## General principles
-
-This article discusses Keywords and the key considerations for making use of this technology responsibly. There are many things you need to consider when deciding how to use and implement an AI-powered feature:
-
-- Will this feature perform well in my scenario? Before deploying Keywords Extraction into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
-- Are we equipped to identify and respond to errors? AI-powered products and features won't be 100% accurate, so consider how you'll identify and respond to any errors that may occur.
-
-## View the insight
-
-When working on the website, the insights are displayed in the **Insights** tab. They can also be generated in a categorized list in a JSON file that includes each keyword's ID and text, together with each keyword's specific start and end time and confidence score.
-
-To display the instances in a JSON file, do the following:
-
-1. Click Download and then Insights (JSON).
-1. Copy the text and paste it into an online JSON viewer.
-
- ```json
- "keywords": [
- {
- "id": 1,
- "text": "office insider",
- "confidence": 1,
- "language": "en-US",
- "instances": [
- {
- "adjustedStart": "0:00:00",
- "adjustedEnd": "0:00:05.75",
- "start": "0:00:00",
- "end": "0:00:05.75"
- },
- {
- "adjustedStart": "0:01:21.82",
- "adjustedEnd": "0:01:24.7",
- "start": "0:01:21.82",
- "end": "0:01:24.7"
- },
- {
- "adjustedStart": "0:01:31.32",
- "adjustedEnd": "0:01:32.76",
- "start": "0:01:31.32",
- "end": "0:01:32.76"
- },
- {
- "adjustedStart": "0:01:35.8",
- "adjustedEnd": "0:01:37.84",
- "start": "0:01:35.8",
- "end": "0:01:37.84"
- }
- ]
- },
- {
- "id": 2,
- "text": "insider tip",
- "confidence": 0.9975,
- "language": "en-US",
- "instances": [
- {
- "adjustedStart": "0:01:14.91",
- "adjustedEnd": "0:01:19.51",
- "start": "0:01:14.91",
- "end": "0:01:19.51"
- }
- ]
- },
-
- ```
-
-To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
-
-> [!NOTE]
-> Keywords extraction is language independent.
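If you process the downloaded insights JSON programmatically, the following Python sketch prints each keyword with its confidence and time ranges, based on the structure shown in the example above. Whether the keywords sit at the top level or under `videos[0].insights` depends on how the file was produced, so the fallback below is an assumption to verify against your file.

```python
import json

with open("insights.json", encoding="utf-8") as f:
    insights = json.load(f)

# Fall back to the full index layout if the keywords aren't at the top level.
keywords = insights.get("keywords") or insights["videos"][0]["insights"]["keywords"]

for keyword in keywords:
    ranges = [(i["start"], i["end"]) for i in keyword["instances"]]
    print(f'{keyword["text"]} (confidence {keyword["confidence"]}): {ranges}')
```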
-
-## Keywords components
-
-During the Keywords procedure, audio and images in a media file are processed, as follows:
-
-|Component|Definition|
-|||
-|Source language | The user uploads the source file for indexing. |
-|Transcription API |The audio file is sent to Azure AI services and the translated transcribed output is returned. If a language has been specified it is processed.|
-|OCR of video |Images in a media file are processed using the Azure AI Vision Read API to extract text, its location, and other insights. |
-|Keywords extraction |An extraction algorithm processes the transcribed audio. The results are then combined with the insights detected in the video during the OCR process. The keywords, and where they appear in the media, are then detected and identified. |
-|Confidence level| The estimated confidence level of each keyword is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty will be represented as a 0.82 score.|
-
-## Example use cases
-
-- Personalization of keywords to match customer interests, for example websites about England posting promotions about English movies or festivals.
-- Deep-searching archives for insights on specific keywords to create feature stories about companies, personas or technologies, for example by a news agency.
-
-## Considerations and limitations when choosing a use case
-
-Below are some considerations to keep in mind when using keywords extraction:
-
-- When uploading a file always use high-quality video content. The recommended maximum frame size is HD and frame rate is 30 FPS. A frame should contain no more than 10 people. When outputting frames from videos to AI models, only send around 2 or 3 frames per second. Processing 10 and more frames might delay the AI result.
-- When uploading a file always use high quality audio and video content. At least 1 minute of spontaneous conversational speech is required to perform analysis. Audio effects are detected in non-speech segments only. The minimal duration of a non-speech section is 2 seconds. Voice commands and singing aren't supported.
-
-When used responsibly and carefully, keywords extraction is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
-
-- Always respect an individual's right to privacy, and only ingest media for lawful and justifiable purposes.
-- Don't purposely disclose inappropriate media showing young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual's personal freedom.
-- Commit to respecting and promoting human rights in the design and deployment of your analyzed media.
-- When using third-party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
-- Always seek legal advice when using media from unknown sources.
-- Always obtain appropriate legal and professional advice to ensure that your uploaded media is secured and has adequate controls to preserve the integrity of your content and to prevent unauthorized access.
-- Provide a feedback channel that allows users and individuals to report issues with the service.
-- Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing media containing people.
-- Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision-making.
-- Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
-
-## Next steps
-
-### Learn More about Responsible AI
-
-- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
-- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
-- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
-- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
-
-### Contact us
-
-`visupport@microsoft.com`
-
-## Azure AI Video Indexer insights
-
-- [Audio effects detection](audio-effects-detection.md)
-- [Face detection](face-detection.md)
-- [OCR](ocr.md)
-- [Transcription, Translation & Language identification](transcription-translation-lid.md)
-- [Labels identification](labels-identification.md)
-- [Named entities](named-entities.md)
-- [Observed people tracking & matched persons](observed-matched-people.md)
-- [Topics inference](topics-inference.md)
azure-video-indexer Labels Identification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/labels-identification.md
- Title: Azure AI Video Indexer labels identification overview
-description: This article gives an overview of an Azure AI Video Indexer labels identification.
- Previously updated : 06/15/2022-----
-# Labels identification
--
-Labels identification is an Azure AI Video Indexer AI feature that identifies visual objects (like sunglasses) or actions (like swimming) that appear in the video footage of a media file. There are many labels identification categories and, once extracted, labels identification instances are displayed in the Insights tab and can be translated into over 50 languages. Clicking a Label opens the instance in the media file; select Play Previous or Play Next to see more instances.
-
-## Prerequisites
-
-Review [Transparency Note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
-
-## General principles
-
-This article discusses labels identification and the key considerations for making use of this technology responsibly. There are many things you need to consider when deciding how to use and implement an AI-powered feature:
-
-- Does this feature perform well in my scenario? Before deploying labels identification into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
-- Are we equipped to identify and respond to errors? AI-powered products and features won't be 100% accurate, so consider how you'll identify and respond to any errors that may occur.
-
-## View the insight
-
-When working on the website, the instances are displayed in the Insights tab. They can also be generated in a categorized list in a JSON file that includes the label's ID, category, and instances, together with each label's specific start and end times and confidence score, as follows:
-
-To display labels identification insights in a JSON file, do the following:
-
-1. Click Download and then Insights (JSON).
-1. Copy the text and paste it into an online JSON viewer.
-
- ```json
- "labels": [
- {
- "id": 1,
- "name": "human face",
- "language": "en-US",
- "instances": [
- {
- "confidence": 0.9987,
- "adjustedStart": "0:00:00",
- "adjustedEnd": "0:00:25.6",
- "start": "0:00:00",
- "end": "0:00:25.6"
- },
- {
- "confidence": 0.9989,
- "adjustedStart": "0:01:21.067",
- "adjustedEnd": "0:01:41.334",
- "start": "0:01:21.067",
- "end": "0:01:41.334"
- }
- ]
- },
- {
- "id": 2,
- "name": "person",
- "referenceId": "person",
- "language": "en-US",
- "instances": [
- {
- "confidence": 0.9959,
- "adjustedStart": "0:00:00",
- "adjustedEnd": "0:00:26.667",
- "start": "0:00:00",
- "end": "0:00:26.667"
- },
- {
- "confidence": 0.9974,
- "adjustedStart": "0:01:21.067",
- "adjustedEnd": "0:01:41.334",
- "start": "0:01:21.067",
- "end": "0:01:41.334"
- }
- ]
- },
- ```
-
-To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
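If you process the downloaded insights JSON programmatically, the following Python sketch keeps only label instances above a confidence threshold, using the fields shown in the example above; the location of the `labels` array in your file is an assumption to verify.

```python
import json

with open("insights.json", encoding="utf-8") as f:
    insights = json.load(f)

# Fall back to the full index layout if the labels aren't at the top level.
labels = insights.get("labels") or insights["videos"][0]["insights"]["labels"]

THRESHOLD = 0.8
for label in labels:
    strong = [i for i in label["instances"] if i["confidence"] >= THRESHOLD]
    if strong:
        print(label["name"], [(i["start"], i["end"]) for i in strong])
```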
-
-## Labels components
-
-During the Labels procedure, objects in a media file are processed, as follows:
-
-|Component|Definition|
-|||
-|Source |The user uploads the source file for indexing. |
-|Tagging| Images are tagged and labeled. For example, door, chair, woman, headphones, jeans. |
-|Filtering and aggregation |Tags are filtered according to their confidence level and aggregated according to their category.|
-|Confidence level| The estimated confidence level of each label is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as a 0.82 score.|
-
-## Example use cases
-
-- Extracting labels from frames for contextual advertising or branding. For example, placing an ad for beer following footage on a beach.
-- Creating a verbal description of footage to enhance accessibility for the visually impaired, for example a background storyteller in movies.
-- Deep searching media archives for insights on specific objects to create feature stories for the news.
-- Using relevant labels to create content for trailers, highlights reels, social media or new clips.
-
-## Considerations when choosing a use case
-
-- Carefully consider the accuracy of the results. To promote more accurate detections, check the quality of the video; low-quality video might impact the detected insights.
-- When using labels for law enforcement, carefully consider that Labels potentially can't detect parts of the video. To ensure fair and high-quality decisions, combine Labels with human oversight.
-- Don't use labels identification for decisions that may have serious adverse impacts. Machine learning models can result in undetected or incorrect classification output. Decisions based on incorrect output could have serious adverse impacts. Additionally, it's advisable to include human review of decisions that have the potential for serious impacts on individuals.
-
-When used responsibly and carefully, Azure AI Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
-
-- Always respect an individual's right to privacy, and only ingest videos for lawful and justifiable purposes.
-- Don't purposely disclose inappropriate content about young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual's personal freedom.
-- Commit to respecting and promoting human rights in the design and deployment of your analyzed media.
-- When using 3rd party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
-- Always seek legal advice when using content from unknown sources.
-- Always obtain appropriate legal and professional advice to ensure that your uploaded videos are secured and have adequate controls to preserve the integrity of your content and to prevent unauthorized access.
-- Provide a feedback channel that allows users and individuals to report issues with the service.
-- Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing media containing people.
-- Keep a human in the loop. Do not use any solution as a replacement for human oversight and decision-making.
-- Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
-
-## Learn more about labels identification
-
-- [Transparency note](/legal/cognitive-services/computer-vision/imageanalysis-transparency-note)
-- [Use cases](/legal/cognitive-services/computer-vision/imageanalysis-transparency-note#use-cases)
-- [Capabilities and limitations](/legal/cognitive-services/computer-vision/imageanalysis-transparency-note#system-performance-and-limitations-for-image-analysis)
-- [Evaluation of image analysis](/legal/cognitive-services/computer-vision/imageanalysis-transparency-note#evaluation-of-image-analysis)
-- [Data, privacy and security](/legal/cognitive-services/computer-vision/ocr-data-privacy-security)
-
-## Next steps
-
-### Learn More about Responsible AI
-
-- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
-- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
-- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
-- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
-
-### Contact us
-
-`visupport@microsoft.com`
-
-## Azure AI Video Indexer insights
-
-- [Audio effects detection](audio-effects-detection.md)
-- [Face detection](face-detection.md)
-- [OCR](ocr.md)
-- [Keywords extraction](keywords.md)
-- [Transcription, Translation & Language identification](transcription-translation-lid.md)
-- [Named entities](named-entities.md)
-- [Observed people tracking & matched persons](observed-matched-people.md)
-- [Topics inference](topics-inference.md)
azure-video-indexer Language Identification Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-identification-model.md
- Title: Use Azure AI Video Indexer to auto identify spoken languages
-description: This article describes how the Azure AI Video Indexer language identification model is used to automatically identifying the spoken language in a video.
- Previously updated : 08/28/2023----
-# Automatically identify the spoken language with language identification model
--
-Azure AI Video Indexer supports automatic language identification (LID), which is the process of automatically identifying the spoken language from audio content. The media file is transcribed in the dominant identified language.
-
-See the list of languages supported by Azure AI Video Indexer in [supported languages](language-support.md).
-
-Make sure to review the [Guidelines and limitations](#guidelines-and-limitations) section.
-
-## Choosing auto language identification on indexing
-
-When indexing or [reindexing](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) a video using the API, choose the `auto detect` option in the `sourceLanguage` parameter.
-
-When using the website, go to your **Account videos** on the [Azure AI Video Indexer](https://www.videoindexer.ai/) home page and hover over the name of the video that you want to reindex. On the right-bottom corner, select the **Re-index** button. In the **Re-index video** dialog, choose *Auto detect* from the **Video source language** drop-down box.
--
-## Model output
-
-Azure AI Video Indexer transcribes the video according to the most likely language if the confidence for that language is `> 0.6`. If the language can't be identified with confidence, it assumes the spoken language is English.
-
-The dominant language detected by the model is available in the insights JSON as the `sourceLanguage` attribute (under root/videos/insights). A corresponding confidence score is also available under the `sourceLanguageConfidence` attribute.
-
-```json
-"insights": {
- "version": "1.0.0.0",
- "duration": "0:05:30.902",
- "sourceLanguage": "fr-FR",
- "language": "fr-FR",
- "transcript": [...],
- . . .
- "sourceLanguageConfidence": 0.8563
- }
-```
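A short Python sketch that reads these attributes and mirrors the 0.6 confidence threshold and English fallback described below (the sample values come from the JSON snippet above):

```python
insights = {
    "sourceLanguage": "fr-FR",
    "sourceLanguageConfidence": 0.8563,
}

language = insights.get("sourceLanguage", "en-US")
confidence = insights.get("sourceLanguageConfidence", 0)

if confidence > 0.6:
    print(f"Detected {language} with confidence {confidence}.")
else:
    # The service itself falls back to English below this confidence.
    print(f"Low LID confidence ({confidence}); transcript falls back to English.")
```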
-
-## Guidelines and limitations
-
-Automatic language identification (LID) supports the following languages:
-
- See the list of languages supported by Azure AI Video Indexer in [supported languages](language-support.md).
-
-- If the audio contains languages other than the [supported list](language-support.md), the result is unexpected.
-- If Azure AI Video Indexer can't identify the language with a high enough confidence (greater than 0.6), the fallback language is English.
-- Currently, there isn't support for files with mixed-language audio. If the audio contains mixed languages, the result is unexpected.
-- Low-quality audio may affect the model results.
-- The model requires at least one minute of speech in the audio.
-- The model is designed to recognize spontaneous conversational speech (not voice commands, singing, and so on).
-
-## Next steps
-
-- [Overview](video-indexer-overview.md)
-- [Automatically identify and transcribe multi-language content](multi-language-identification-transcription.md)
azure-video-indexer Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-support.md
- Title: Language support in Azure AI Video Indexer
-description: This article provides a comprehensive list of language support by service features in Azure AI Video Indexer.
- Previously updated : 03/10/2023-----
-# Language support in Azure AI Video Indexer
--
-This article explains Video Indexer's language options and provides a list of language support for each one. It includes the language support for Video Indexer features, translation, language identification, customization, and the language settings of the Video Indexer website.
-
-## Supported languages per scenario
-
-This section explains the Video Indexer language options and has a table of the supported languages for each one.
-
-> [!IMPORTANT]
-> All of the languages listed support translation when indexing through the API.
-
-### Column explanations
-
-- **Supported source language** – The language spoken in the media file supported for transcription, translation, and search.
-- **Language identification** - Whether the language can be automatically detected by Video Indexer when language identification is used for indexing. To learn more, see [Use Azure AI Video Indexer to auto identify spoken languages](language-identification-model.md) and the **Language Identification** section.
-- **Customization (language model)** - Whether the language can be used when customizing language models in Video Indexer. To learn more, see [Customize a language model in Azure AI Video Indexer](customize-language-model-overview.md).
-- **Pronunciation (language model)** - Whether the language can be used to create a pronunciation dataset as part of a custom speech model. To learn more, see [Customize a speech model with Azure AI Video Indexer](customize-speech-model-overview.md).
-- **Website Translation** – Whether the language is supported for translation when using the [Azure AI Video Indexer website](https://aka.ms/vi-portal-link). Select the translated language in the language drop-down menu.
-
- :::image type="content" source="media/language-support/website-translation.png" alt-text="Screenshot showing a menu with download, English and views as menu items. A tooltip is shown as mouseover on the English item and says Translation is set to English." lightbox="media/language-support/website-translation.png":::
-
- The following insights are translated:
-
- - Transcript
- - Keywords
- - Topics
- - Labels
- - Frame patterns (Only to Hebrew as of now)
-
- All other insights appear in English when using translation.
-- **Website Language** - Whether the language can be selected for use on the [Azure AI Video Indexer website](https://aka.ms/vi-portal-link). Select the **Settings icon** then select the language in the **Language settings** dropdown.-
- :::image type="content" source="media/language-support/website-language.jpg" alt-text="Screenshot showing a menu with user settings show them all toggled to on." lightbox="media/language-support/website-language.jpg":::
-
-| **Language** | **Code** | **Supported<br/>source language** | **Language<br/>identification** | **Customization<br/>(language model)** | **Pronunciation<br/>(language model)** | **Website<br/>Translation** | **Website<br/>Language** |
-|--|--|--|--|--|--|--|--|
-| Afrikaans | af-ZA | | | | | ✔ | |
-| Arabic (Israel) | ar-IL | ✔ | | ✔ | | | |
-| Arabic (Iraq) | ar-IQ | ✔ | ✔ | | | | |
-| Arabic (Jordan) | ar-JO | ✔ | ✔ | ✔ | | | |
-| Arabic (Kuwait) | ar-KW | ✔ | ✔ | ✔ | | | |
-| Arabic (Lebanon) | ar-LB | ✔ | | ✔ | | | |
-| Arabic (Oman) | ar-OM | ✔ | ✔ | ✔ | | | |
-| Arabic (Palestinian Authority) | ar-PS | ✔ | | ✔ | | | |
-| Arabic (Qatar) | ar-QA | ✔ | ✔ | ✔ | | | |
-| Arabic (Saudi Arabia) | ar-SA | ✔ | ✔ | ✔ | | | |
-| Arabic (United Arab Emirates) | ar-AE | ✔ | ✔ | ✔ | | | |
-| Arabic Egypt | ar-EG | ✔ | ✔ | ✔ | | ✔ | |
-| Arabic Modern Standard (Bahrain) | ar-BH | ✔ | ✔ | ✔ | | | |
-| Arabic Syrian Arab Republic | ar-SY | ✔ | ✔ | ✔ | | | |
-| Armenian | hy-AM | ✔ | | | | | |
-| Bangla | bn-BD | | | | | ✔ | |
-| Bosnian | bs-Latn | | | | | ✔ | |
-| Bulgarian | bg-BG | ✔ | ✔ | | | ✔ | |
-| Catalan | ca-ES | ✔ | ✔ | ✔ | ✔ | ✔ | |
-| Chinese (Cantonese Traditional) | zh-HK | ✔ | ✔ | ✔ | ✔ | ✔ | |
-| Chinese (Simplified) | zh-Hans | ✔ | ✔ | | | ✔ | ✔ |
-| Chinese (Simplified) | zh-CN | ✔ | ✔ | | | ✔ | ✔ |
-| Chinese (Traditional) | zh-Hant | | | | | ✔ | |
-| Croatian | hr-HR | ✔ | ✔ | | ✔ | ✔ | |
-| Czech | cs-CZ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Danish | da-DK | ✔ | ✔ | ✔ | ✔ | ✔ | |
-| Dutch | nl-NL | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
-| English Australia | en-AU | ✔ | ✔ | ✔ | ✔ | ✔ | |
-| English United Kingdom | en-GB | ✔ | ✔ | ✔ | ✔ | ✔ | |
-| English United States | en-US | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Estonian | et-EE | ✔ | ✔ | ✔ | ✔ | ✔ | |
-| Fijian | en-FJ | | | | | ✔ | |
-| Filipino | fil-PH | | | | | ✔ | |
-| Finnish | fi-FI | ✔ | ✔ | ✔ | ✔ | ✔ | |
-| French | fr-FR | ✔ | ✔ | ✔ | ✔ | ✔ | |
-| French (Canada) | fr-CA | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
-| German | de-DE | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Greek | el-GR | ✔ | ✔ | | | ✔ | |
-| Gujarati | gu-IN | ✔ | ✔ | | | ✔ | |
-| Haitian | fr-HT | | | | | ✔ | |
-| Hebrew | he-IL | ✔ | ✔ | ✔ | | ✔ | |
-| Hindi | hi-IN | ✔ | ✔ | ✔ | | ✔ | ✔ |
-| Hungarian | hu-HU | | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Icelandic | is-IS | ✔ | | | | | |
-| Indonesian | id-ID | | | ✔ | ✔ | ✔ | |
-| Irish | ga-IE | ✔ | ✔ | ✔ | ✔ | | |
-| Italian | it-IT | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Japanese | ja-JP | ✔ | ✔ | ✔ | | ✔ | ✔ |
-| Kannada | kn-IN | ✔ | ✔ | | | | |
-| Kiswahili | sw-KE | | | | | ✔ | |
-| Korean | ko-KR | ✔ | ✔ | ✔ | | ✔ | ✔ |
-| Latvian | lv-LV | ✔ | ✔ | ✔ | ✔ | ✔ | |
-| Lithuanian | lt-LT | | | ✔ | ✔ | ✔ | |
-| Malagasy | mg-MG | | | | | ✔ | |
-| Malay | ms-MY | ✔ | | | | ✔ | |
-| Malayalam | ml-IN | ✔ | ✔ | | | | |
-| Maltese | mt-MT | | | | | ✔ | |
-| Norwegian | nb-NO | ✔ | ✔ | ✔ | | ✔ | |
-| Persian | fa-IR | ✔ | | ✔ | | ✔ | |
-| Polish | pl-PL | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Portuguese | pt-BR | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Portuguese (Portugal) | pt-PT | ✔ | ✔ | ✔ | ✔ | ✔ | |
-| Romanian | ro-RO | ✔ | ✔ | ✔ | ✔ | ✔ | |
-| Russian | ru-RU | ✔ | ✔ | ✔ | | ✔ | ✔ |
-| Samoan | en-WS | | | | | | |
-| Serbian (Cyrillic) | sr-Cyrl-RS | | | | | ✔ | |
-| Serbian (Latin) | sr-Latn-RS | | | | | ✔ | |
-| Slovak | sk-SK | ✔ | ✔ | ✔ | ✔ | ✔ | |
-| Slovenian | sl-SI | ✔ | ✔ | ✔ | ✔ | ✔ | |
-| Spanish | es-ES | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Spanish (Mexico) | es-MX | ✔ | ✔ | ✔ | ✔ | ✔ | |
-| Swedish | sv-SE | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Tamil | ta-IN | ✔ | ✔ | | | ✔ | |
-| Telugu | te-IN | ✔ | ✔ | | | | |
-| Thai | th-TH | ✔ | ✔ | ✔ | | ✔ | |
-| Tongan | to-TO | | | | | ✔ | |
-| Turkish | tr-TR | ✔ | ✔ | ✔ | | ✔ | ✔ |
-| Ukrainian | uk-UA | ✔ | ✔ | | | ✔ | |
-| Urdu | ur-PK | | | | | ✔ | |
-| Vietnamese | vi-VN | ✔ | ✔ | | | ✔ | |
-
-## Get supported languages through the API
-
-Use the Get Supported Languages API call to pull a full list of supported languages per area. For more information, see [Get Supported Languages](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Supported-Languages).
-
-The API returns a list of supported languages with the following values:
-
-```json
-{
- "name": "Language",
- "languageCode": "Code",
- "isRightToLeft": true/false,
- "isSourceLanguage": true/false,
- "isAutoDetect": true/false
-}
-```
-- Supported source language:
-
-  If `isSourceLanguage` is false, the language is supported for translation only.
-  If `isSourceLanguage` is true, the language is supported as source for transcription, translation, and search.
-
-- Language identification (auto detection):
-
-  If `isAutoDetect` is true, the language is supported for language identification (LID) and multi-language identification (MLID).
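-
-For example, here is a minimal Python sketch (illustrative only; it assumes the JSON array returned by the Get Supported Languages call is already loaded into a `languages` list, and the two sample entries are made up to mirror the response shape shown above) that shows how the two flags map to the capabilities described in this article:
-
-```python
-# Illustrative: classify languages returned by the Get Supported Languages API.
-# `languages` is assumed to hold the JSON array returned by the API call.
-languages = [
-    {"name": "English United States", "languageCode": "en-US",
-     "isRightToLeft": False, "isSourceLanguage": True, "isAutoDetect": True},
-    {"name": "Afrikaans", "languageCode": "af-ZA",
-     "isRightToLeft": False, "isSourceLanguage": False, "isAutoDetect": False},
-]
-
-# Usable as a transcription source (isSourceLanguage == true).
-source_languages = [lang["languageCode"] for lang in languages if lang["isSourceLanguage"]]
-
-# Detectable by LID/MLID (isAutoDetect == true).
-auto_detect_languages = [lang["languageCode"] for lang in languages if lang["isAutoDetect"]]
-
-print(source_languages)       # ['en-US']
-print(auto_detect_languages)  # ['en-US']
-```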
-
-## Language Identification
-
-When uploading a media file to Video Indexer, you can specify the media file's source language. If you index a file through the Video Indexer website, you select the language during the file upload. If you submit the indexing job through the API, you use the `language` parameter. The selected language is then used to generate the transcription of the file.
-
-If you aren't sure of the source language of the media file, or if it may contain multiple languages, Video Indexer can detect the spoken languages. If you select either Auto-detect single language (LID) or multi-language (MLID) for the media file's source language, the detected language or languages are used to transcribe the media file. To learn more about LID and MLID, see [Automatically identify the spoken language with language identification model](language-identification-model.md) and [Automatically identify and transcribe multi-language content](multi-language-identification-transcription.md).
-
-There's a limit of 10 languages allowed for identification during the indexing of a media file for both LID and MLID. The following are the 9 *default* languages for language identification (LID) and multi-language identification (MLID):
-
-- German (de-DE)
-- English United States (en-US)
-- Spanish (es-ES)
-- French (fr-FR)
-- Italian (it-IT)
-- Japanese (ja-JP)
-- Portuguese (pt-BR)
-- Russian (ru-RU)
-- Chinese (Simplified) (zh-Hans)
-
-## How to change the list of default languages
-
-If you need to use languages for identification that aren't used by default, you can customize the list to any 10 languages that support customization through either the website or the API:
-
-### Use the website to change the list
-
-1. Select the **Language ID** tab under Model customization. The list of languages is specific to the Video Indexer account you're using and to the signed-in user. The default list of languages is saved per user, per device, and per browser. As a result, each user can configure their own default identified language list.
-1. Use **Add language** to search and add more languages. If 10 languages are already selected, you first must remove one of the existing detected languages before adding a new one.
-
- :::image type="content" source="media/language-support/default-language.png" alt-text="Screenshot showing a table showing all of the selected languages." lightbox="media/language-support/default-language.png":::
-
-### Use the API to change the list
-
-When you upload a file, the Video Indexer language model cross-references 9 languages by default. If there's a match, the model generates the transcription for the file with the detected language.
-
-Use the language parameter to specify `multi` (MLID) or `auto` (LID) parameters. Use the `customLanguages` parameter to specify up to 10 languages. (The parameter is used only when the language parameter is set to `multi` or `auto`.) To learn more about using the API, see [Use the Azure AI Video Indexer API](video-indexer-use-apis.md).
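-
-As a minimal sketch of how these parameters might be passed (it assumes the standard Upload Video endpoint; the endpoint shape and all values shown are placeholders or assumptions, so check the API reference above for the current endpoint and parameter names):
-
-```python
-import requests
-
-# Placeholder values; replace with your own.
-location = "trial"
-account_id = "<ACCOUNT_ID>"
-access_token = "<ACCESS_TOKEN>"
-
-params = {
-    "accessToken": access_token,
-    "name": "my-video",
-    "videoUrl": "https://example.com/my-video.mp4",
-    # 'auto' = single-language identification (LID); 'multi' = multi-language (MLID).
-    "language": "auto",
-    # Used only when language is 'auto' or 'multi'; up to 10 languages.
-    "customLanguages": "en-US,es-ES,fr-FR",
-}
-
-response = requests.post(
-    f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Videos",
-    params=params,
-)
-response.raise_for_status()
-print(response.json()["id"])  # ID of the newly indexed video
-```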
-
-## Next steps
-- [Overview](video-indexer-overview.md)
-- [Release notes](release-notes.md)
azure-video-indexer Limited Access Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/limited-access-features.md
- Title: Limited Access features of Azure AI Video Indexer
-description: This article talks about the limited access features of Azure AI Video Indexer.
- Previously updated : 06/17/2022----
-# Limited Access features of Azure AI Video Indexer
---
-Our vision is to empower developers and organizations to leverage AI to transform society in positive ways. We encourage responsible AI practices to protect the rights and safety of individuals. Microsoft facial recognition services are Limited Access in order to help prevent the misuse of the services in accordance with our [AI Principles](https://www.microsoft.com/ai/responsible-ai?SilentAuth=1&wa=wsignin1.0&activetab=pivot1%3aprimaryr6) and [facial recognition](https://blogs.microsoft.com/on-the-issues/2018/12/17/six-principles-to-guide-microsofts-facial-recognition-work/) principles. The Face Identify and Celebrity Recognition operations in Azure AI Video Indexer are Limited Access features that require registration.
-
-Since the announcement on June 11th, 2020, customers may not use, or allow use of, any Azure facial recognition service by or for a police department in the United States.
-
-## Application process
-
-Limited Access features of Azure AI Video Indexer are only available to customers managed by Microsoft, and only for use cases selected at the time of registration. Other Azure AI Video Indexer features do not require registration to use.
-
-Customers and partners who wish to use Limited Access features of Azure AI Video Indexer are required to [submit an intake form](https://aka.ms/facerecognition). Access is subject to Microsoft's sole discretion based on eligibility criteria and a vetting process. Microsoft may require customers and partners to reverify this information periodically.
-
-The Azure AI Video Indexer service is made available to customers and partners under the terms governing their subscription to Microsoft Azure Services (including the [Service Specific Terms](https://www.microsoft.com/licensing/terms/productoffering/MicrosoftAzure/MCA#ServiceSpecificTerms)). Please review these terms carefully as they contain important conditions and obligations governing your use of Azure AI Video Indexer.
-
-## Limited access features
--
-## Help and support
-
-FAQ about Limited Access can be found [here](https://aka.ms/limitedaccesscogservices).
-
-If you need help with Azure AI Video Indexer, find support [here](../ai-services/cognitive-services-support-options.md).
-
-[Report Abuse](https://msrc.microsoft.com/report/abuse) of Azure AI Video Indexer.
-
-## Next steps
-
-Learn more about the legal terms that apply to this service [here](https://azure.microsoft.com/support/legal/).
-
azure-video-indexer Logic Apps Connector Arm Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-arm-accounts.md
- Title: Logic Apps connector with ARM-based AVI accounts
-description: This article shows how to unlock new experiences and monetization opportunities by using Azure AI Video Indexer connectors with Logic Apps and Power Automate with AVI ARM accounts.
- Previously updated : 11/16/2022----
-# Logic Apps connector with ARM-based AVI accounts
--
-Azure AI Video Indexer (AVI) [REST API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) supports both server-to-server and client-to-server communication. The API enables you to integrate video and audio insights into your application logic.
-
-> [!TIP]
-> For the latest `api-version`, choose the latest stable version in [our REST documentation](/rest/api/videoindexer/stable/generate).
-
-To make the integration easier, we support [Logic Apps](https://azure.microsoft.com/services/logic-apps/) and [Power Automate](https://make.powerautomate.com/connectors/shared_videoindexer-v2/video-indexer-v2/) connectors that are compatible with the Azure AI Video Indexer API.
-
-You can use the connectors to set up custom workflows to effectively index and extract insights from a large amount of video and audio files, without writing a single line of code. Furthermore, using the connectors for the integration gives you better visibility on the health of your workflow and an easy way to debug it.
-
-> [!TIP]
-> If you are using a classic AVI account, see [Logic Apps connector with classic-based AVI accounts](logic-apps-connector-tutorial.md).
-
-## Get started with the Azure AI Video Indexer connectors
-
-To help you get started quickly with the Azure AI Video Indexer connectors, the example in this article creates Logic App flows. The Logic App and Power Automate capabilities and their editors are almost identical, thus the diagrams and explanations are applicable to both. The example in this article is based on the ARM AVI account. If you're working with a classic account, see [Logic App connectors with classic-based AVI accounts](logic-apps-connector-tutorial.md).
-
-The "upload and index your video automatically" scenario covered in this article is composed of two different flows that work together. The "two flow" approach is used to support async upload and indexing of larger files effectively.
-
-* The first flow is triggered when a blob is added or modified in an Azure Storage account. It uploads the new file to Azure AI Video Indexer with a callback URL to send a notification once the indexing operation completes.
-* The second flow is triggered based on the callback URL and saves the extracted insights back to a JSON file in Azure Storage.
-
-The logic apps that you create in this article contain one flow per app. The second section (**Create a new logic app of type consumption**) explains how to connect the two. The second flow stands alone and is triggered by the first one (the section with the callback URL).
-
-## Prerequisites
-- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
-- Create an ARM-based [Azure AI Video Indexer account](create-account-portal.md).
-- Create an Azure Storage account. Keep note of the access key for your Storage account.
-
- Create two containers: one to store the media files, second to store the insights generated by Azure AI Video Indexer. In this article, the containers are `videos` and `insights`.
-
-## Set up the file upload flow (the first flow)
-
-This section describes how to set up the first ("file upload") flow. The first flow is triggered when a blob is added or modified in an Azure Storage account. It uploads the new file to Azure AI Video Indexer with a callback URL to send a notification once the indexing operation completes.
-
-The following image shows the first flow:
-
-![Screenshot of the file upload flow.](./media/logic-apps-connector-arm-accounts/first-flow-high-level.png)
-
-1. Create the <a href="https://portal.azure.com/#create/Microsoft.LogicApp" target="_blank">Logic App</a>. We create a Logic App in the same region as the Azure Video Indexer region (recommended but not required). We call the logic app `UploadIndexVideosApp`.
-
- 1. Select **Consumption** for **Plan type**.
- 1. Press **Review + Create** -> **Create**.
- 1. Once the Logic App deployment is complete, in the Azure portal, search and navigate to the newly created Logic App.
- 1. Under the **Settings** section, on the left side's panel, select the **Identity** tab.
- 1. Under **System assigned**, change the **Status** from **Off** to **On** (the step is important for later on in this tutorial).
- 1. Press **Save** (on the top of the page).
- 1. Select the **Logic app designer** tab in the pane on the left.
- 1. Pick a **Blank Logic App** flow.
- 1. Search for "blob" in the **Choose an Operation** blade.
- 1. In the **All** tab, choose the **Azure Blob Storage** component.
- 1. Under **Triggers**, select the **When a blob is added or modified (properties only) (V2)** trigger.
-1. Set the storage connection.
-
- After creating a **When a blob is added or modified (properties only) (V2)** trigger, the connection needs to be set to the following values:
-
- |Key | Value|
- |--|--|
- |Connection name | <*Name your connection*>. |
- |Authentication type | Access Key|
- |Azure Storage Account name| <*Storage account name where media files are going to be stored*>.|
- |Azure Storage Account Access Key| To get access key of your storage account: in the Azure portal -> my-storage -> under **Security + networking** -> **Access keys** -> copy one of the keys.|
-
- Select **Create**.
-
- ![Screenshot of the storage connection trigger.](./media/logic-apps-connector-arm-accounts/trigger.png)
-
- After setting the connection to the storage, it's required to specify the blob storage container that is being monitored for changes.
-
- |Key| Value|
- |--|--|
- |Storage account name | *Storage account name where media files are stored*|
- |Container| `/videos`|
-
- Select **Save** -> **+New step**
-
- ![Screenshot of the storage container trigger.](./media/logic-apps-connector-arm-accounts/storage-container-trigger.png)
-1. Create SAS URI by path action.
-
- 1. Select the **Action** tab.
- 1. Search for and select **Create SAS URI by path (V2)**.
-
- |Key| Value|
- |--|--|
- |Storage account name | <*The storage account name where media files are stored*>.|
- | Blob path| Under **Dynamic content**, select **List of Files Path**|
- | Group Policy Identifier| Leave the default value.|
- | Permissions| **Read** |
- | Shared Access protocol (appears after pressing **Add new parameter**)| **HttpsOnly**|
-
- Select **Save** (at the top of the page).
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/logic-apps-connector-arm-accounts/create-sas.png" alt-text="Screenshot of the create SAS URI by path logic." lightbox="./media/logic-apps-connector-arm-accounts/create-sas.png":::
-
- Select **+New Step**.
-1. <a name="access_token"></a>Generate an access token.
-
- > [!NOTE]
- > For details about the ARM API and the request/response examples, see [Generate an Azure AI Video Indexer access token](/rest/api/videoindexer/preview/generate/access-token).
- >
- > Press **Try it** to get the correct values for your account.
-
- Search and create an **HTTP** action.
-
- |Key| Value|Notes|
- |-|-||
- |Method | **POST**||
- | URI| [generateAccessToken](/rest/api/videoindexer/stable/generate/access-token?tabs=HTTP#generate-accesstoken-for-account-contributor). ||
- | Body|`{ "permissionType": "Contributor", "scope": "Account" }` |See the [REST doc example](/rest/api/videoindexer/preview/generate/access-token?tabs=HTTP#generate-accesstoken-for-account-contributor), make sure to delete the **POST** line.|
- | Add new parameter | **Authentication** ||
-
- ![Screenshot of the HTTP access token.](./media/logic-apps-connector-arm-accounts/http-with-param.png)
-
- After the **Authentication** parameter is added, fill the required parameters according to the table below:
-
- |Key| Value|
- |-|-|
- | Authentication type | **Managed identity** |
- | Managed identity | **System-assigned managed identity**|
- | Audience | `https://management.core.windows.net` |
-
- Select **Save**.
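-
-   For reference, the HTTP action configured above issues roughly the following ARM request. The sketch below reproduces it outside of Logic Apps with Python; the api-version shown is an assumption (check the linked REST reference for the current value), and inside the Logic App the system-assigned managed identity supplies the ARM bearer token for you.
-
-```python
-import requests
-from azure.identity import DefaultAzureCredential  # assumes the azure-identity package
-
-# Placeholder values; replace with your own.
-subscription_id = "<SUBSCRIPTION_ID>"
-resource_group = "<RESOURCE_GROUP>"
-vi_account_name = "<VI_ACCOUNT_NAME>"
-api_version = "2022-08-01"  # assumption; use the latest stable api-version
-
-# Acquire an ARM token (the Logic App's managed identity does this step for you).
-arm_token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
-
-url = (
-    f"https://management.azure.com/subscriptions/{subscription_id}"
-    f"/resourceGroups/{resource_group}/providers/Microsoft.VideoIndexer"
-    f"/accounts/{vi_account_name}/generateAccessToken?api-version={api_version}"
-)
-body = {"permissionType": "Contributor", "scope": "Account"}
-
-response = requests.post(url, json=body, headers={"Authorization": f"Bearer {arm_token}"})
-response.raise_for_status()
-access_token = response.json()["accessToken"]  # same value the Logic App reads later
-```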
-
- > [!TIP]
- > Before moving to the next step, set up the right permission between the Logic app and the Azure AI Video Indexer account.
- >
- > Make sure you have followed the steps to enable the system-assigned managed identity of your logic app.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/logic-apps-connector-arm-accounts/enable-system.png" alt-text="Screenshot of the how to enable the system assigned managed identity." lightbox="./media/logic-apps-connector-arm-accounts/enable-system.png":::
- 1. Set up system assigned managed identity for permission on Azure AI Video Indexer resource.
-
- In the Azure portal, go to your Azure AI Video Indexer resource/account.
-
- 1. On the left side blade, and select **Access control**.
- 1. Select **Add** -> **Add role assignment** -> **Contributor** -> **Next** -> **User, group, or service principal** -> **+Select members**.
- 1. Under **Members**, search for the Logic Apps name you created (in this case, `UploadIndexVideosApp`).
- 1. Press **Select**.
- 1. Press **Review + assign**.
-1. Back in your Logic App, create an **Upload video and index** action.
-
- 1. Select **Video Indexer(V2)**.
- 1. From Video Indexer(V2), select **Upload Video and index**.
- 1. Set the connection to the Video Indexer account.
-
- |Key| Value|
- |-|-|
- | Connection name| <*Enter a name for the connection*>, in this case `aviconnection`.|
- | API key| This is your personal API key, which is available under **Profile** in the [developer portal](https://api-portal.videoindexer.ai/profile). Because this logic app is for ARM accounts, the actual API key isn't needed; you can fill in a dummy value such as 12345. |
-
- Select **Create**.
-
- 1. Fill **Upload video and index** action parameters.
-
- > [!TIP]
- > If the AVI Account ID cannot be found and isn't in the drop-down, use the custom value.
-
- |Key| Value|
- |-|-|
- |Location| Location of the associated Azure AI Video Indexer account.|
- | Account ID| Account ID of the associated Azure AI Video Indexer account. You can find the **Account ID** in the **Overview** page of your account, in the Azure portal. Or, the **Account settings** tab, left of the [Azure AI Video Indexer website](https://www.videoindexer.ai/).|
- |Access Token| Use the `body('HTTP')['accessToken']` expression to extract the access token in the right format from the previous HTTP call.|
- | Video Name| Select **List of Files Name** from the dynamic content of **When a blob is added or modified** action. |
- |Video URL|Select **Web Url** from the dynamic content of **Create SAS URI by path** action.|
- | Body| Can be left as default.|
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/logic-apps-connector-arm-accounts/upload-and-index-expression.png" alt-text="Screenshot of the upload and index action." lightbox="./media/logic-apps-connector-arm-accounts/upload-and-index-expression.png":::
-
- Select **Save**.
-
-When the upload and indexing from the first flow completes, the flow sends an HTTP request with the correct callback URL to trigger the second flow. The second flow then retrieves the insights generated by Azure AI Video Indexer. In this example, it stores the output of your indexing job in your Azure Storage. However, it's up to you what you do with the output.
-
-## Create a new logic app of type consumption (the second flow)
-
-Create the second flow, Logic Apps of type consumption. The second flow is triggered based on the callback URL and saves the extracted insights back to a JSON file in Azure Storage.
-
-![Screenshot of the high level flow.](./media/logic-apps-connector-arm-accounts/second-flow-high-level.png)
-
-1. Set up the trigger
-
- Search for the **When an HTTP request is received**.
-
- ![Screenshot of the set up the trigger.](./media/logic-apps-connector-arm-accounts/serach-trigger.png)
-
- For the trigger, we'll see an HTTP POST URL field. The URL won't be generated until after you save your flow; however, you'll need the URL eventually.
-
- > [!TIP]
- > We will come back to the URL created in this step.
-1. Generate an access token.
-
- Follow all the steps from:
-
- 1. **Generate an access token** we did for the first flow ([as shown here](#access_token)).
- 1. Select **Save** -> **+ New step**.
-1. Get Video Indexer insights.
-
- 1. Search for "Video Indexer".
- 1. From **Video Indexer(V2)**, select the **Get Video Index** action.
-
- Set the connection name:
-
- |Key| Value|
- |-|-|
- |Connection name| <*A name for connection*>. For example, `aviconnection`.|
- | API key| This is your personal API key, which is available under **Profile** at the [developer portal](https://api-portal.videoindexer.ai/profile). For more information, see [Subscribe to the API](video-indexer-use-apis.md#subscribe-to-the-api).|
- 1. Select **Create**.
- 1. Fill out the required parameters according to the table:
-
- |Key| Value|
- |-|-|
- |Location| The Location of the Azure AI Video Indexer account.|
- | Account ID| The Video Indexer account ID can be copied from the resource/account **Overview** page in the Azure portal.|
- | Video ID\*| For Video ID, add dynamic content of type **Expression** and put in the following expression: **triggerOutputs()['queries']['id']**. |
- | Access Token| From the dynamic content, under the **Parse JSON** section select the **accessToken** that is the output of the parse JSON action. |
-
- \*This expression tells the connector to get the Video ID from the output of your trigger. In this case, the output of your trigger is the output of **Upload video and index** in your first flow. (See the sample callback URL after these steps.)
-
- ![Screenshot of the upload and index a video action.](./media/logic-apps-connector-arm-accounts/get-video-index.png)
-
- Select **Save** -> **+ New step**.
-1. Create a blob and store the insights JSON.
-
- 1. Search for "Azure blob", from the group of actions.
- 1. Select **Create blob(V2)**.
- 1. Set the connection to the blob storage that will store the JSON insights files.
-
- |Key| Value|
- |-|-|
- | Connection name| <*Enter a connection name*>.|
- | Authentication type |Access Key|
- | Azure Storage Account name| <* The storage account name where insights will be stored*>. |
- | Azure Storage Account Access key| Go to Azure portal-> my-storage-> under **Security + networking** ->Access keys -> copy one of the keys. |
-
- ![Screenshot of the create blob action.](./media/logic-apps-connector-arm-accounts/storage-connection.png)
- 1. Select **Create**.
- 1. Set the folder in which insights will be stored.
-
- |Key| Value|
- |-|-|
- |Storage account name| <*Enter the storage account name that would contain the JSON output (in this tutorial is the same as the source video).>*|
- | Folder path | From the dropdown, select the `/insights`|
- | Blob name| From the dynamic content, under the **Get Video Index** section select **Name** and add `_insights.json`, insights file name will be the video name + insights.json |
- | Blob content| From the dynamic content, under the **Get Video Index** section, select the **Body**. |
-
- ![Screenshot of the store blob content action.](./media/logic-apps-connector-arm-accounts/create-blob.png)
- 1. Select **Save flow**.
-1. Update the callback URL to get notified when an index job is finished.
-
- Once the flow is saved, an HTTP POST URL is created in the trigger.
-
- 1. Copy the URL from the trigger.
-
- ![Screenshot of the save URL trigger.](./media/logic-apps-connector-arm-accounts/http-callback-url.png)
- 1. Go back to the first flow and paste the URL in the **Upload video and index** action for the **Callback URL parameter**.
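-
-When indexing completes, Azure AI Video Indexer calls this callback URL with the video ID (and, typically, a state value) appended as query string parameters; that's what the `triggerOutputs()['queries']['id']` expression in the **Get Video Index** action reads. An illustrative example with placeholder values:
-
-```
-https://prod-00.eastus.logic.azure.com/workflows/<WORKFLOW_ID>/triggers/manual/paths/invoke?<TRIGGER_QUERY_STRING>&id=<VIDEO_ID>&state=Processed
-```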
-
-Make sure both flows are saved.
-
-## Next steps
-
-Try out your newly created Logic App or Power Automate solution by adding a video to your Azure blobs container, and go back a few minutes later to see that the insights appear in the destination folder.
azure-video-indexer Logic Apps Connector Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-tutorial.md
- Title: The Azure AI Video Indexer connectors with Logic App and Power Automate.
-description: This tutorial shows how to unlock new experiences and monetization opportunities by using Azure AI Video Indexer connectors with Logic Apps and Power Automate.
- Previously updated : 09/21/2020----
-# Use Azure AI Video Indexer with Logic App and Power Automate
--
-Azure AI Video Indexer [REST API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Video) supports both server-to-server and client-to-server communication and enables Azure AI Video Indexer users to integrate video and audio insights easily into their application logic, unlocking new experiences and monetization opportunities.
-
-To make the integration even easier, we support [Logic Apps](https://azure.microsoft.com/services/logic-apps/) and [Power Automate](https://make.powerautomate.com/connectors/shared_videoindexer-v2/video-indexer-v2/) connectors that are compatible with our API. You can use the connectors to set up custom workflows to effectively index and extract insights from a large number of video and audio files, without writing a single line of code. Furthermore, using the connectors for your integration gives you better visibility on the health of your workflow and an easy way to debug it.
-
-To help you get started quickly with the Azure AI Video Indexer connectors, we will do a walkthrough of an example Logic App and Power Automate solution you can set up. This tutorial shows how to set up flows using Logic Apps. However, the editors and capabilities are almost identical in both solutions, thus the diagrams and explanations are applicable to both Logic Apps and Power Automate.
-
-The "upload and index your video automatically" scenario covered in this tutorial is comprised of two different flows that work together.
-* The first flow is triggered when a blob is added or modified in an Azure Storage account. It uploads the new file to Azure AI Video Indexer with a callback URL to send a notification once the indexing operation completes.
-* The second flow is triggered based on the callback URL and saves the extracted insights back to a JSON file in Azure Storage. This two flow approach is used to support async upload and indexing of larger files effectively.
-
-This tutorial is using Logic App to show how to:
-
-> [!div class="checklist"]
-> * Set up the file upload flow
-> * Set up the JSON extraction flow
--
-## Prerequisites
-
-* To begin with, you will need an Azure AI Video Indexer account along with [access to the APIs via API key](video-indexer-use-apis.md).
-* You will also need an Azure Storage account. Keep note of the access key for your Storage account. Create two containers: one to store videos and one to store the insights generated by Azure AI Video Indexer.
-* Next, you will need to open two separate flows on either Logic Apps or Power Automate (depending on which you are using).
-
-## Set up the first flow - file upload
-
-The first flow is triggered whenever a blob is added in your Azure Storage container. Once triggered, it will create a SAS URI that you can use to upload and index the video in Azure AI Video Indexer. In this section you will create the following flow.
-
-![File upload flow](./media/logic-apps-connector-tutorial/file-upload-flow.png)
-
-To set up the first flow, you will need to provide your Azure AI Video Indexer API Key and Azure Storage credentials.
-
-![Azure blob storage](./media/logic-apps-connector-tutorial/azure-blob-storage.png)
-
-![Connection name and API key](./media/logic-apps-connector-tutorial/connection-name-api-key.png)
-
-> [!TIP]
-> If you previously connected an Azure Storage account or Azure AI Video Indexer account to a Logic App, your connection details are stored and you will be connected automatically. <br/>You can edit the connection by clicking on **Change connection** at the bottom of an Azure Storage (the storage window) or Azure AI Video Indexer (the player window) action.
-
-Once you can connect to your Azure Storage and Azure AI Video Indexer accounts, find and choose the "When a blob is added or modified" trigger in **Logic Apps Designer**.
-
-Select the container that you will place your video files in.
-
-![Screenshot shows the When a blob is added or modified dialog box where you can select a container.](./media/logic-apps-connector-tutorial/container.png)
-
-Next, find and select the "Create SAS URI by path" action. In the dialog for the action, select List of Files Path from the Dynamic content options.
-
-Also, add a new "Shared Access Protocol" parameter. Choose HttpsOnly for the value of the parameter.
-
-![SAS uri by path](./media/logic-apps-connector-tutorial/sas-uri-by-path.jpg)
-
-Fill out [your account location](regions.md) and [account ID](./video-indexer-use-apis.md#operational-api-calls) to get the Azure AI Video Indexer account token.
-
-![Get account access token](./media/logic-apps-connector-tutorial/account-access-token.png)
-
-For "Upload video and index", fill out the required parameters and Video URL. Select "Add new parameter" and select Callback URL.
-
-![Upload and index](./media/logic-apps-connector-tutorial/upload-and-index.png)
-
-You will leave the callback URL empty for now. You'll add it only after finishing the second flow where the callback URL is created.
-
-You can use the default value for the other parameters or set them according to your needs.
-
-Click **Save**, and let's move on to configure the second flow, to extract the insights once the upload and indexing is completed.
-
-## Set up the second flow - JSON extraction
-
-When the upload and indexing from the first flow completes, the flow sends an HTTP request with the correct callback URL to trigger the second flow. The second flow then retrieves the insights generated by Azure AI Video Indexer. In this example, it stores the output of your indexing job in your Azure Storage. However, it's up to you what you do with the output.
-
-Create the second flow separate from the first one.
-
-![JSON extraction flow](./media/logic-apps-connector-tutorial/json-extraction-flow.png)
-
-To set up this flow, you will need to provide your Azure AI Video Indexer API Key and Azure Storage credentials again. You will need to update the same parameters as you did for the first flow.
-
-For your trigger, you will see an HTTP POST URL field. The URL won't be generated until after you save your flow; however, you will need the URL eventually. We will come back to this.
-
-Fill out [your account location](regions.md) and [account ID](./video-indexer-use-apis.md#operational-api-calls) to get the Azure AI Video Indexer account token.
-
-Go to the "Get Video Index" action and fill out the required parameters. For Video ID, put in the following expression: triggerOutputs()['queries']['id']
-
-![Azure AI Video Indexer action info](./media/logic-apps-connector-tutorial/video-indexer-action-info.jpg)
-
-This expression tells the connector to get the Video ID from the output of your trigger. In this case, the output of your trigger will be the output of "Upload video and index" in your first flow.
-
-Go to the "Create blob" action and select the path to the folder in which you will save the insights. Set the name of the blob you are creating. For Blob content, put in the following expression: body('Get_Video_Index')
-
-![Create blob action](./media/logic-apps-connector-tutorial/create-blob-action.jpg)
-
-This expression takes the output of the "Get Video Index" action from this flow.
-
-Click **Save flow**.
-
-Once the flow is saved, an HTTP POST URL is created in the trigger. Copy the URL from the trigger.
-
-![Save URL trigger](./media/logic-apps-connector-tutorial/save-url-trigger.png)
-
-Now, go back to the first flow and paste the URL in the "Upload video and index" action for the Callback URL parameter.
-
-Make sure both flows are saved, and you're good to go!
-
-Try out your newly created Logic App or Power Automate solution by adding a video to your Azure blobs container, and go back a few minutes later to see that the insights appear in the destination folder.
-
-## Generate captions
-
-See the following blog for the steps that show [how to generate captions with Azure AI Video Indexer and Logic Apps](https://techcommunity.microsoft.com/t5/azure-media-services/generating-captions-with-video-indexer-and-logic-apps/ba-p/1672198).
-
-The article also shows how to index a video automatically by copying it to OneDrive and how to store the captions generated by Azure AI Video Indexer in OneDrive.
-
-## Clean up resources
-
-After you're done with this tutorial, feel free to keep this Logic App or Power Automate solution up and running if you need it. However, if you don't want to keep it running and don't want to be billed, turn off both of your flows if you're using Power Automate, or disable both flows if you're using Logic Apps.
-
-## Next steps
-
-This tutorial showed just one Azure AI Video Indexer connectors example. You can use the Azure AI Video Indexer connectors for any API call provided by Azure AI Video Indexer. For example: upload and retrieve insights, translate the results, get embeddable widgets and even customize your models. Additionally, you can choose to trigger those actions based on different sources like updates to file repositories or emails sent. You can then choose to have the results update to our relevant infrastructure or application or generate any number of action items.
-
-> [!div class="nextstepaction"]
-> [Use the Azure AI Video Indexer API](video-indexer-use-apis.md)
-
-For additional resources, refer to [Azure AI Video Indexer](/connectors/videoindexer-v2/)
azure-video-indexer Manage Account Connected To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/manage-account-connected-to-azure.md
- Title: Repair the connection to Azure, check errors/warnings
-description: Learn how to manage an Azure AI Video Indexer account connected to Azure repair the connection, examine errors/warnings.
- Previously updated : 01/14/2021----
-# Repair the connection to Azure, examine errors/warnings
---
-This article demonstrates how to manage an Azure AI Video Indexer account that's connected to your Azure subscription and an Azure Media Services account.
-
-> [!NOTE]
-> You have to be the Azure AI Video Indexer account owner to make the account configuration adjustments discussed in this topic.
-
-## Prerequisites
-
-Connect your Azure AI Video Indexer account to Azure, as described in [Connected to Azure](connect-to-azure.md).
-
-Make sure to follow [Prerequisites](connect-to-azure.md#prerequisites-for-connecting-to-azure) and review [Considerations](connect-to-azure.md#azure-media-services-considerations) in the article.
-
-## Examine account settings
-
-This section examines settings of your Azure AI Video Indexer account.
-
-To view settings:
-
-1. Click on the user icon in the top-right corner and select **Settings**.
-
- ![Settings in Azure AI Video Indexer](./media/manage-account-connected-to-azure/select-settings.png)
-
-2. On the **Settings** page, select the **Account** tab.
-
-If your Video Indexer account is connected to Azure, you see the following:
-
-* The name of the underlying Azure Media Services account.
-* The number of indexing jobs running and queued.
-* The number and type of allocated reserved units.
-
-If your account needs some adjustments, you'll see relevant errors and warnings about your account configuration on the **Settings** page. The messages contain links to exact places in Azure portal where you need to make changes. For more information, see the [errors and warnings](#errors-and-warnings) section that follows.
-
-## Repair the connection to Azure
-
-In the **Update connection to Azure Media Services** dialog of your [Azure AI Video Indexer](https://www.videoindexer.ai/) page, you're asked to provide values for the following settings:
-
-|Setting|Description|
-|||
-|Azure subscription ID|The subscription ID can be retrieved from the Azure portal. Click on **All services** in the left panel and search for "subscriptions". Select **Subscriptions** and choose the desired ID from the list of your subscriptions.|
-|Azure Media Services resource group name|The name for the resource group in which you created the Media Services account.|
-|Application ID|The Microsoft Entra application ID (with permissions for the specified Media Services account) that you created for this Azure AI Video Indexer account. <br/><br/>To get the app ID, navigate to Azure portal. Under the Media Services account, choose your account and go to **API Access**. Select **Connect to Media Services API with service principal** -> **Microsoft Entra App**. Copy the relevant parameters.|
-|Application key|The Microsoft Entra application key associated with your Media Services account that you specified above. <br/><br/>To get the app key, navigate to Azure portal. Under the Media Services account, choose your account and go to **API Access**. Select **Connect to Media Services API with service principal** -> **Manage application** -> **Certificates & secrets**. Copy the relevant parameters.|
-
-## Errors and warnings
-
-If your account needs some adjustments, you see relevant errors and warnings about your account configuration on the **Settings** page. The messages contain links to exact places in Azure portal where you need to make changes. This section gives more details about the error and warning messages.
-
-* Event Grid
-
- You have to register the Event Grid resource provider using the Azure portal. In the [Azure portal](https://portal.azure.com/), go to **Subscriptions** > [subscription] > **ResourceProviders** > **Microsoft.EventGrid**. If not in the **Registered** state, select **Register**. It takes a couple of minutes to register.
-
-* Streaming endpoint
-
- Make sure the underlying Media Services account has the default **Streaming Endpoint** in a started state. Otherwise, you can't watch videos from this Media Services account or in Azure AI Video Indexer.
-
-* Media reserved units
-
- You must allocate Media Reserved Units on your Media Service resource in order to index videos. For optimal indexing performance, it's recommended to allocate at least 10 S3 Reserved Units. For pricing information, see the FAQ section of the [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/) page.
-
-## Next steps
-
-You can programmatically interact with your trial account or Azure AI Video Indexer accounts that are connected to Azure by following the instructions in: [Use APIs](video-indexer-use-apis.md).
-
-Use the same Microsoft Entra user you used when connecting to Azure.
azure-video-indexer Manage Multiple Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/manage-multiple-tenants.md
- Title: Manage multiple tenants with Azure AI Video Indexer - Azure
-description: This article suggests different integration options for managing multiple tenants with Azure AI Video Indexer.
- Previously updated : 05/15/2019----
-# Manage multiple tenants
--
-This article discusses different options for managing multiple tenants with Azure AI Video Indexer. Choose a method that is most suitable for your scenario:
-
-* Azure AI Video Indexer account per tenant
-* Single Azure AI Video Indexer account for all tenants
-* Azure subscription per tenant
-
-## Azure AI Video Indexer account per tenant
-
-When using this architecture, an Azure AI Video Indexer account is created for each tenant. The tenants have full isolation in the persistent and compute layer.
-
-![Azure AI Video Indexer account per tenant](./media/manage-multiple-tenants/video-indexer-account-per-tenant.png)
-
-### Considerations
-
-* Customers don't share storage accounts (unless manually configured by the customer).
-* Customers don't share compute (reserved units) and don't impact one another's processing job times.
-* You can easily remove a tenant from the system by deleting the Azure AI Video Indexer account.
-* There's no ability to share custom models between tenants.
-
- Make sure there's no business requirement to share custom models.
-* Harder to manage due to multiple Azure AI Video Indexer (and associated Media Services) accounts per tenant.
-
-> [!TIP]
-> Create an admin user for your system in [the Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/) and use the Authorization API to provide your tenants the relevant [account access token](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token).
-
-## Single Azure AI Video Indexer account for all users
-
-When using this architecture, the customer is responsible for tenant isolation. All tenants have to use a single Azure AI Video Indexer account with a single Azure Media Services account. When uploading, searching, or deleting content, the customer needs to filter the proper results for each tenant.
-
-![Single Azure AI Video Indexer account for all users](./media/manage-multiple-tenants/single-video-indexer-account-for-all-users.png)
-
-With this option, customization models (Person, Language, and Brands) can be shared or isolated between tenants by filtering the models by tenant.
-
-When [uploading videos](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video), you can specify a different partition attribute per tenant. This allows isolation in the [search API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Search-Videos). By specifying the partition attribute in the search API, you'll only get results for the specified partition.
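-
-A minimal sketch of this pattern (placeholder values; it assumes the standard Upload Video and Search Videos endpoints, with `partition` passed as a query parameter on both calls):
-
-```python
-import requests
-
-# Placeholder values; replace with your own.
-location = "trial"
-account_id = "<ACCOUNT_ID>"
-access_token = "<ACCESS_TOKEN>"
-base = f"https://api.videoindexer.ai/{location}/Accounts/{account_id}"
-
-# Upload: tag the video with the tenant it belongs to via the partition attribute.
-upload = requests.post(
-    f"{base}/Videos",
-    params={
-        "accessToken": access_token,
-        "name": "contoso-video",
-        "videoUrl": "https://example.com/contoso-video.mp4",
-        "partition": "tenant-contoso",  # one partition value per tenant
-    },
-)
-upload.raise_for_status()
-
-# Search: pass the same partition value so only that tenant's videos are returned.
-search = requests.get(
-    f"{base}/Videos/Search",
-    params={"accessToken": access_token, "query": "quarterly review", "partition": "tenant-contoso"},
-)
-search.raise_for_status()
-print(search.json())
-```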
-
-### Considerations
-
-* Ability to share content and customization models between tenants.
-* One tenant impacts the performance of other tenants.
-* Customer needs to build a complex management layer on top of Azure AI Video Indexer.
-
-> [!TIP]
-> You can use the [priority](upload-index-videos.md) attribute to prioritize tenants' jobs.
-
-## Azure subscription per tenant
-
-When using this architecture, each tenant will have their own Azure subscription. For each user, you'll create a new Azure AI Video Indexer account in the tenant subscription.
-
-![Azure subscription per tenant](./media/manage-multiple-tenants/azure-subscription-per-tenant.png)
-
-### Considerations
-
-* This is the only option that enables billing separation.
-* This integration has more management overhead than the Azure AI Video Indexer account per tenant option. If billing separation isn't a requirement, it's recommended to use one of the other options described in this article.
-
-## Next steps
-
-[Overview](video-indexer-overview.md)
azure-video-indexer Matched Person https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/matched-person.md
- Title: Enable the matched person insight
-description: The topic explains how to use a match observed people feature. These are people that are detected in the video with the corresponding faces ("People" insight).
- Previously updated : 12/10/2021----
-# Enable the matched person insight
---
-Azure AI Video Indexer matches observed people that were detected in the video with the corresponding faces ("People" insight). To produce the matching, the bounding boxes for both the faces and the observed people are assigned spatially along the video. The API returns the confidence level of each match.
-
-The following are some scenarios that benefit from this feature:
-
-* Improve efficiency when creating raw data for content creators, like video advertising, news, or sport games (for example, find all appearances of a specific person in a video archive).
-* Post-event analysis: detect and track a specific person's movement to better analyze an accident or crime after the event (for example, an explosion, a bank robbery, or another incident).
-* Create a summary out of a long video, to include the parts where the specific person appears.
-
-The **Matched person** feature is available when indexing your file by choosing the
-**Advanced** -> **Video + audio indexing** preset.
-
-> [!NOTE]
-> Standard indexing does not include this advanced model.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/matched-person/index-matched-person-feature.png" alt-text="Advanced video or Advanced video + audio preset":::
-
-To view the Matched person on the [Azure AI Video Indexer](https://www.videoindexer.ai/) website, go to **View** -> **Show Insights** -> select the **All** option or **View** -> **Custom View** -> **Mapped Faces**.
-
-When you choose to see insights of your video on the [Azure AI Video Indexer](https://www.videoindexer.ai/) website, the matched person can be viewed from the **Observed People tracing** insight. When you choose a thumbnail of a person, the matched person becomes available.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/matched-person/from-observed-people.png" alt-text="View matched people from the Observed People insight":::
-
-If you would like to view people's detected clothing in the **Timeline** of your video on the [Video Indexer website](https://www.videoindexer.ai/), go to **View** -> **Show Insights** and select the **All option** or **View** -> **Custom View** -> **Observed People**.
-
-You can search for a specific person by name and return all of that person's appearances by using the search bar in the Insights view of your video on the Azure AI Video Indexer website.
-
-## JSON code sample
-
-The following JSON response illustrates what Azure AI Video Indexer returns when tracing observed people that have a matched person associated:
-
-```json
-"observedPeople": [
- {
- "id": 1,
- "thumbnailId": "d09ad62e-e0a4-42e5-8ca9-9a640c686596",
- "clothing": [
- {
- "id": 1,
- "type": "sleeve",
- "properties": {
- "length": "short"
- }
- },
- {
- "id": 2,
- "type": "pants",
- "properties": {
- "length": "short"
- }
- }
- ],
- "matchingFace": {
- "id": 1310,
- "confidence": 0.3819
- },
- "instances": [
- {
- "adjustedStart": "0:00:34.8681666",
- "adjustedEnd": "0:00:36.0026333",
- "start": "0:00:34.8681666",
- "end": "0:00:36.0026333"
- },
- {
- "adjustedStart": "0:00:36.6699666",
- "adjustedEnd": "0:00:36.7367",
- "start": "0:00:36.6699666",
- "end": "0:00:36.7367"
- },
- {
- "adjustedStart": "0:00:37.2038333",
- "adjustedEnd": "0:00:39.6729666",
- "start": "0:00:37.2038333",
- "end": "0:00:39.6729666"
- }
- ]
- }
-]
-```
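-
-As a sketch of how the `matchingFace` field might be consumed downstream (the `insights` stub below is illustrative and mirrors the sample above; in practice it would be the video index JSON returned by the API):
-
-```python
-# Illustrative: list observed people whose matched face passes a confidence threshold.
-insights = {
-    "observedPeople": [
-        {
-            "id": 1,
-            "matchingFace": {"id": 1310, "confidence": 0.3819},
-            "instances": [{"start": "0:00:34.8681666", "end": "0:00:36.0026333"}],
-        }
-    ]
-}
-
-CONFIDENCE_THRESHOLD = 0.3
-for person in insights["observedPeople"]:
-    match = person.get("matchingFace")
-    if match and match["confidence"] >= CONFIDENCE_THRESHOLD:
-        appearances = [(i["start"], i["end"]) for i in person["instances"]]
-        print(f"Person {person['id']} matched face {match['id']} (confidence {match['confidence']}): {appearances}")
-```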
-
-## Limitations and assumptions
-
-It's important to note the limitations of Matched person, to avoid or mitigate the effects of mismatches between people, or of people who have no matches.
-
-* **Precondition**: The person showing in the observed people insight was detected and can be found in the People insight.
-* **Pose**: The tracks are optimized to handle observed people who most often appear facing front.
-* **Obstructions**: There is no match between faces and observed people where there are obstructions (people or faces overlapping each other).
-* **Spatial allocation per frame**: There is no match where different people appear in the same spatial position relative to the frame within a short time.
-
-See the limitations of Observed people: [Trace observed people in a video](observed-people-tracing.md)
-
-## Next steps
-
-[Overview](video-indexer-overview.md)
azure-video-indexer Monitor Video Indexer Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer-data-reference.md
- Title: Monitoring Azure AI Video Indexer data reference
-description: Important reference material needed when you monitor Azure AI Video Indexer
--- Previously updated : 04/17/2023----
-# Monitor Azure AI Video Indexer data reference
--
-See [Monitoring Azure AI Video Indexer](monitor-video-indexer.md) for details on collecting and analyzing monitoring data for Azure AI Video Indexer.
-
-## Metrics
-
-Azure AI Video Indexer currently does not support any metrics monitoring.
-
-For more information, see a list of [all platform metrics supported in Azure Monitor](/azure/azure-monitor/platform/metrics-supported).
-
-## Metric dimensions
-
-Azure AI Video Indexer currently does not support any metrics monitoring.
-
-## Resource logs
-
-This section lists the types of resource logs you can collect for Azure AI Video Indexer.
-
-
-For reference, see a list of [all resource logs category types supported in Azure Monitor](/azure/azure-monitor/platform/resource-logs-schema).
-
-### Azure AI Video Indexer
-
-Resource Provider and Type: [Microsoft.VideoIndexer/accounts](/azure/azure-monitor/platform/resource-logs-categories#microsoftvideoindexeraccounts)
-
-| Category | Display Name | Additional information |
-|:--|:--|:--|
-| VIAudit | Azure AI Video Indexer Audit Logs | Logs are produced from both the [Azure AI Video Indexer website](https://www.videoindexer.ai/) and the REST API. |
-| IndexingLogs | Indexing Logs | Azure AI Video Indexer indexing logs to monitor all file uploads, indexing, and reindexing jobs. |
-
-
-## Azure Monitor Logs tables
-
-This section refers to all of the Azure Monitor Logs Kusto tables relevant to Azure AI Video Indexer and available for query by Log Analytics.
-
-
-|Resource Type | Notes |
-|-|--|
-| [Azure AI Video Indexer](/azure/azure-monitor/reference/tables/tables-resourcetype#azure-video-indexer) | |
-
-### Azure AI Video Indexer
-
-| Table | Description | Additional information |
-|:--|:--|:--|
-| [VIAudit](/azure/azure-monitor/reference/tables/tables-resourcetype#azure-video-indexer) | Events produced using the Azure AI Video Indexer [website](https://aka.ms/VIportal) or the [REST API portal](https://aka.ms/vi-dev-portal). | |
-| VIIndexing | Events produced using the Azure AI Video Indexer [upload](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) and [re-index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) APIs. | |
-
-For a reference of all Azure Monitor Logs / Log Analytics tables, see the [Azure Monitor Log Table Reference](/azure/azure-monitor/reference/tables/tables-resourcetype).
-
-
-
-## Activity log
-
-The following table lists the operations related to Azure AI Video Indexer that may be created in the Activity log.
-
-| Operation | Description |
-|:--|:--|
-| Generate_AccessToken | |
-| Accounts_Update | |
-| Write tags | |
-| Create or update resource diagnostic setting | |
-| Delete resource diagnostic setting | |
-
-
-For more information on the schema of Activity Log entries, see [Activity Log schema](../azure-monitor/essentials/activity-log-schema.md).
-
-## Schemas
-
-The following schemas are in use by Azure AI Video Indexer.
-
-
-#### Audit schema
-
-```json
-{
- "time": "2022-03-22T10:59:39.5596929Z",
- "resourceId": "/SUBSCRIPTIONS/602a61eb-c111-43c0-8323-74825230a47d/RESOURCEGROUPS/VI-RESOURCEGROUP/PROVIDERS/MICROSOFT.VIDEOINDEXER/ACCOUNTS/VIDEOINDEXERACCOUNT",
- "operationName": "Get-Video-Thumbnail",
- "category": "Audit",
- "location": "westus2",
- "durationMs": "192",
- "resultSignature": "200",
- "resultType": "Success",
- "resultDescription": "Get Video Thumbnail",
- "correlationId": "33473fc3-bcbc-4d47-84cc-9fba2f3e9faa",
- "callerIpAddress": "46.*****",
- "operationVersion": "Operations",
- "identity": {
- "externalUserId": "4704F34286364F2*****",
- "upn": "alias@outlook.com",
- "claims": { "permission": "Reader", "scope": "Account" }
- },
- "properties": {
- "accountName": "videoIndexerAccoount",
- "accountId": "8878b584-d8a0-4752-908c-00d6e5597f55",
- "videoId": "1e2ddfdd77"
- }
- }
- ```
-
-#### Indexing schema
-
-```json
-{
- "time": "2022-09-28T09:41:08.6216252Z",
- "resourceId": "/SUBSCRIPTIONS/{SubscriptionId}/RESOURCEGROUPS/{ResourceGroup}/PROVIDERS/MICROSOFT.VIDEOINDEXER/ACCOUNTS/MY-VI-ACCOUNT",
- "operationName": "UploadStarted",
- "category": "IndexingLogs",
- "correlationId": "5cc9a3ea-126b-4f53-a4b5-24b1a5fb9736",
- "resultType": "Success",
- "location": "eastus",
- "operationVersion": "2.0",
- "durationMs": "0",
- "identity": {
- "upn": "my-email@microsoft.com",
- "claims": null
- },
- "properties": {
- "accountName": "my-vi-account",
- "accountId": "6961331d-16d3-413a-8f90-f86a5cabf3ef",
- "videoId": "46b91bc012",
- "indexing": {
- "Language": "en-US",
- "Privacy": "Private",
- "Partition": null,
- "PersonModelId": null,
- "LinguisticModelId": null,
- "AssetId": null,
- "IndexingPreset": "Default",
- "StreamingPreset": "Default",
- "Description": null,
- "Priority": null,
- "ExternalId": null,
- "Filename": "1 Second Video 1.mp4",
- "AnimationModelId": null,
- "BrandsCategories": null,
- "CustomLanguages": "en-US,ar-BH,hi-IN,es-MX",
- "ExcludedAIs": "Faces",
- "LogoGroupId": "ea9d154d-0845-456c-857e-1c9d5d925d95"
- }
- }
-}
- ```
-
-## Next steps
-
-- See [Monitoring Azure AI Video Indexer](monitor-video-indexer.md) for a description of monitoring Azure AI Video Indexer.
-- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
azure-video-indexer Monitor Video Indexer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer.md
- Title: Monitoring Azure AI Video Indexer
-description: Start here to learn how to monitor Azure AI Video Indexer
--- Previously updated : 12/19/2022----
-# Monitoring Azure AI Video Indexer
--
-When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
-
-This article describes the monitoring data generated by Azure AI Video Indexer. Azure AI Video Indexer uses [Azure Monitor](../azure-monitor/overview.md). If you are unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
-
-Some services in Azure have a special focused pre-built monitoring dashboard in the Azure portal that provides a starting point for monitoring your service. These special dashboards are called "insights".
-
-> [!NOTE]
-> The monitoring feature is not available for trial and classic accounts. To update to an ARM account, see [Connect a classic account to ARM](connect-classic-account-to-arm.md) or [Import content from a trial account](import-content-from-trial.md).
-
-## Monitoring data
-
-Azure AI Video Indexer collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data-from-azure-resources).
-
-See [Monitoring *Azure AI Video Indexer* data reference](monitor-video-indexer-data-reference.md) for detailed information on the metrics and logs metrics created by Azure AI Video Indexer.
-
-## Collection and routing
-
-Activity logs are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
-
-Resource Logs are not collected and stored until you create a diagnostic setting and route them to one or more locations.
-
-See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for *Azure AI Video Indexer* are listed in [Azure AI Video Indexer monitoring data reference](monitor-video-indexer-data-reference.md#resource-logs).
-
-| Category | Description |
-|:--|:--|
-|Audit | Read/Write operations|
-|Indexing Logs| Monitor the indexing process from upload to indexing and Re-indexing when needed|
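
If you prefer to automate this step, the same setting can also be created programmatically. The following Python sketch calls the Azure Monitor diagnostic settings REST API to route Azure AI Video Indexer resource logs to a Log Analytics workspace. It's a minimal illustration only: the resource IDs and the `vi-diagnostics` setting name are placeholders, `requests` and `azure-identity` are assumed to be installed, and the `VIAudit`/`IndexingLogs` category names are taken from the resource logs reference for Azure AI Video Indexer, so verify them against the categories available on your resource.

```python
# Minimal sketch: create a diagnostic setting for an Azure AI Video Indexer account
# through the Azure Monitor REST API, routing resource logs to a Log Analytics workspace.
# Resource IDs and the setting name below are placeholders.
import requests
from azure.identity import DefaultAzureCredential

vi_resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.VideoIndexer/accounts/<account-name>"
)
workspace_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
)

# Acquire an ARM token (assumes you're signed in with a credential DefaultAzureCredential can use).
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com{vi_resource_id}"
    "/providers/Microsoft.Insights/diagnosticSettings/vi-diagnostics"
    "?api-version=2021-05-01-preview"
)
body = {
    "properties": {
        "workspaceId": workspace_id,
        "logs": [
            {"category": "VIAudit", "enabled": True},       # audit logs category (assumed name)
            {"category": "IndexingLogs", "enabled": True},   # indexing logs category (assumed name)
        ],
    }
}

response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print(response.json())
```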
---
-The metrics and logs you can collect are discussed in the following sections.
-
-## Analyzing metrics
-
-Currently Azure AI Video Indexer does not support monitoring of metrics.
-
-## Analyzing logs
-
-Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
-
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md). The schema for Azure AI Video Indexer resource logs is found in the [Azure AI Video Indexer Data Reference](monitor-video-indexer-data-reference.md#schemas).
-
-The [Activity log](../azure-monitor/essentials/activity-log.md) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
-
-For a list of the types of resource logs collected for Azure AI Video Indexer, see [Monitoring Azure AI Video Indexer data reference](monitor-video-indexer-data-reference.md#resource-logs).
-
-For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Monitoring Azure AI Video Indexer data reference](monitor-video-indexer-data-reference.md#azure-monitor-logs-tables).
-
-### Sample Kusto queries
-
-#### Audit related sample queries
-
-> [!IMPORTANT]
-> When you select **Logs** from the Azure AI Video Indexer account menu, Log Analytics is opened with the query scope set to the current Azure AI Video Indexer account. This means that log queries will only include data from that resource. If you want to run a query that includes data from other Azure AI Video Indexer account or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md) for details.
-
-Following are queries that you can use to help you monitor your Azure AI Video Indexer account.
-
-```kusto
-// Project failures summarized by operationName and Upn, aggregated in 30m windows.
-VIAudit
-| where Status == "Failure"
-| summarize count() by OperationName, bin(TimeGenerated, 30m), Upn
-| render timechart
-```
-
-```kusto
-// Project failures with detailed error message.
-VIAudit
-| where Status == "Failure"
-| parse Description with "ErrorType: " ErrorType ". Message: " ErrorMessage ". Trace" *
-| project TimeGenerated, OperationName, ErrorMessage, ErrorType, CorrelationId, _ResourceId
-```
-
-#### Indexing related sample queries
-
-```kusto
-// Display Video Indexer Account logs of all failed indexing operations.
-VIIndexing
-// | where AccountId == "<AccountId>" // to filter on a specific accountId, uncomment this line
-| where Status == "Failure"
-| summarize count() by bin(TimeGenerated, 1d)
-| render columnchart
-```
-
-```kusto
-// Video Indexer top 10 users by operations
-// Render timechart of top 10 users by operations, with an optional account id for filtering.
-// Trend of top 10 active Upn's
-VIIndexing
-// | where AccountId == "<AccountId>" // to filter on a specific accountId, uncomment this line
-| where OperationName in ("IndexingStarted", "ReindexingStarted")
-| summarize count() by Upn
-| top 10 by count_ desc
-| project Upn
-| join (VIIndexing
-| where TimeGenerated > ago(30d)
-| where OperationName in ("IndexingStarted", "ReindexingStarted")
-| summarize count() by Upn, bin(TimeGenerated,1d)) on Upn
-| project TimeGenerated, Upn, count_
-| render timechart
-```
-
-## Alerts
-
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks.
-
-The following table lists common and recommended alert rules for Azure AI Video Indexer.
-
-| Alert type | Condition | Description |
-|:--|:--|:--|
-| Log Alert|Failed operation |Send an alert when an upload failed |
-
-```kusto
-//All failed uploads, aggregated in one hour window.
-VIAudit
-| where OperationName == "Upload-Video" and Status == "Failure"
-| summarize count() by bin(TimeGenerated, 1h)
-```
-
-## Next steps
-
-- See [Monitoring Azure AI Video Indexer data reference](monitor-video-indexer-data-reference.md) for a reference of the metrics, logs, and other important values created by your Azure AI Video Indexer account.
-- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
azure-video-indexer Multi Language Identification Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/multi-language-identification-transcription.md
- Title: Automatically identify and transcribe multi-language content with Azure AI Video Indexer
-description: This topic demonstrates how to automatically identify and transcribe multi-language content with Azure AI Video Indexer.
- Previously updated : 09/01/2019----
-# Automatically identify and transcribe multi-language content
--
-Azure AI Video Indexer supports automatic language identification and transcription in multi-language content. This process involves automatically identifying the spoken language in different segments of the audio, sending each segment of the media file to be transcribed, and combining the transcriptions back into one unified transcription.
-
-## Choosing multilingual identification on indexing with portal
-
-You can choose **multi-language detection** when uploading and indexing your video. Alternatively, you can choose **multi-language detection** when re-indexing your video. The following steps describe how to reindex:
-
-1. Browse to the [Azure AI Video Indexer](https://vi.microsoft.com/) website and sign in.
-1. Go to the **Library** page and hover over the name of the video that you want to reindex.
-1. On the right-bottom corner, click the **Re-index video** button.
-1. In the **Re-index video** dialog, choose **multi-language detection** from the **Video source language** drop-down box.
-
    * When a video is indexed as multi-language, the insights page includes an additional insight type, **Spoken language**, enabling the user to view which segment is transcribed in which language.
    * Translation to all languages is fully available from the multi-language transcript.
    * All other insights appear in the master language detected, that is, the language that appeared most in the audio.
    * Closed captioning on the player is available in multi-language as well.
-
-![Portal experience](./media/multi-language-identification-transcription/portal-experience.png)
-
-## Choosing multilingual identification on indexing with API
-
-When indexing or [reindexing](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) a video using the API, choose the `multi-language detection` option in the `sourceLanguage` parameter.
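
A re-index call that requests multi-language detection might look like the following Python sketch. This is an illustration only: the location, account ID, video ID, and access token are placeholders, and the `multi` value for `sourceLanguage` is an assumption, so confirm the exact parameter value in the [Re-Index Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) operation of the API portal.

```python
# Rough sketch of re-indexing a video with multi-language detection via the REST API.
# Location, account ID, video ID, and access token are placeholders, and the "multi"
# value for sourceLanguage is an assumption - confirm it in the API portal.
import requests

location = "<location>"          # for example, "trial" or an Azure region
account_id = "<account-id>"
video_id = "<video-id>"
access_token = "<access-token>"  # obtained from the API portal or the ARM Generate-AccessToken API

url = (
    f"https://api.videoindexer.ai/{location}/Accounts/{account_id}"
    f"/Videos/{video_id}/ReIndex"
)
params = {
    "accessToken": access_token,
    "sourceLanguage": "multi",   # assumed value for multi-language detection
}

response = requests.put(url, params=params)
response.raise_for_status()
print("Re-index request accepted:", response.status_code)
```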
-
-### Model output
-
-The model will retrieve all of the languages detected in the video in one list:
-
-```json
-"sourceLanguage": null,
-"sourceLanguages": [
- "es-ES",
- "en-US"
-],
-```
-
-Additionally, each instance in the transcription section will include the language in which it was transcribed:
-
-```json
-{
- "id": 136,
- "text": "I remember well when my youth Minister took me to hear Doctor King I was a teenager.",
- "confidence": 0.9343,
- "speakerId": 1,
- "language": "en-US",
- "instances": [
- {
- "adjustedStart": "0:21:10.42",
- "adjustedEnd": "0:21:17.48",
- "start": "0:21:10.42",
- "end": "0:21:17.48"
- }
- ]
-},
-```
-
-## Guidelines and limitations
-
-* Set of supported languages: English, French, German, Spanish.
-* Support for multi-language content with up to three supported languages.
-* If the audio contains languages other than the supported list above, the result is unexpected.
-* Minimal segment length to detect for each language is 15 seconds.
-* Language detection offset is 3 seconds on average.
-* Speech is expected to be continuous. Frequent alternations between languages may affect the model's performance.
-* Speech of non-native speakers may affect the model performance (for example, when speakers use their native tongue and then switch to another language).
-* The model is designed to recognize a spontaneous conversational speech with reasonable audio acoustics (not voice commands, singing, etc.).
-* Project creation and editing is currently not available for multi-language videos.
-* Custom language models are not available when using multi-language detection.
-* Adding keywords is not supported.
-* When exporting closed caption files, the language indication will not appear.
-* The update transcript API does not support multiple language files.
-
-## Next steps
-
-[Azure AI Video Indexer overview](video-indexer-overview.md)
azure-video-indexer Named Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/named-entities.md
- Title: Azure AI Video Indexer named entities extraction overview
-description: An introduction to Azure AI Video Indexer named entities extraction component responsibly.
- Previously updated : 06/15/2022-----
-# Named entities extraction
--
-Named entities extraction is an Azure AI Video Indexer AI feature that uses Natural Language Processing (NLP) to extract insights on the locations, people and brands appearing in audio and images in media files. Named entities extraction is automatically used with Transcription and OCR, and its insights are based on those extracted during these processes. The resulting insights are displayed in the **Insights** tab and are filtered into locations, people and brand categories. Clicking a named entity displays its instances in the media file, along with a description of the entity and a Find on Bing link for recognizable entities.
-
-## Prerequisites
-
-Review [Transparency Note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
-
-## General principles
-
-This article discusses named entities and the key considerations for making use of this technology responsibly. There are many things you need to consider when deciding how to use and implement an AI-powered feature:
-- Will this feature perform well in my scenario? Before deploying named entities extraction into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
-- Are we equipped to identify and respond to errors? AI-powered products and features won't be 100% accurate, so consider how you'll identify and respond to any errors that may occur.
-
-## View the insight
-
-To see the insights in the website, do the following:
-
-1. Go to View and check Named Entities.
-1. Go to Insights and scroll to named entities.
-
-To display named entities extraction insights in a JSON file, do the following:
-
-1. Click Download and then Insights (JSON).
-2. Named entities are divided into three:
-
- * Brands
- * Location
- * People
-3. Copy the text and paste it into your JSON Viewer.
-
- ```json
- namedPeople: [
- {
- referenceId: "Satya_Nadella",
- referenceUrl: "https://en.wikipedia.org/wiki/Satya_Nadella",
- confidence: 1,
- description: "CEO of Microsoft Corporation",
- seenDuration: 33.2,
- id: 2,
- name: "Satya Nadella",
- appearances: [
- {
- startTime: "0:01:11.04",
- endTime: "0:01:17.36",
- startSeconds: 71,
- endSeconds: 77.4
- },
- {
- startTime: "0:01:31.83",
- endTime: "0:01:37.1303666",
- startSeconds: 91.8,
- endSeconds: 97.1
- },
- ```
-
-To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
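
Alternatively, the insights JSON can be fetched directly with the Get Video Index operation and the named entity sections read from it. The following Python sketch is a minimal illustration; the location, account ID, video ID, and access token are placeholders, and the `namedLocations` and `brands` key names are assumed to follow the same pattern as the `namedPeople` sample above.

```python
# Minimal sketch: fetch a video's insights with the Get Video Index operation and print
# the named entities. Location, account ID, video ID, and access token are placeholders;
# the namedLocations/brands key names are assumed to mirror namedPeople.
import requests

location = "<location>"          # for example, "trial" or an Azure region
account_id = "<account-id>"
video_id = "<video-id>"
access_token = "<access-token>"  # obtained from the API portal or the ARM Generate-AccessToken API

url = f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Videos/{video_id}/Index"
response = requests.get(url, params={"accessToken": access_token})
response.raise_for_status()
index = response.json()

for video in index.get("videos", []):
    insights = video.get("insights", {})
    for section in ("namedPeople", "namedLocations", "brands"):
        for entity in insights.get(section, []):
            print(section, "-", entity.get("name"), f"(confidence {entity.get('confidence')})")
```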
-
-## Named entities extraction components
-
-During the named entities extraction procedure, the media file is processed, as follows:
-
-|Component|Definition|
-|:--|:--|
-|Source file | The user uploads the source file for indexing. |
-|Text extraction |- The audio file is sent to Speech Services API to extract the transcription.<br/>- Sampled frames are sent to the Azure AI Vision API to extract OCR. |
-|Analytics |The insights are then sent to the Text Analytics API to extract the entities. For example, Microsoft, Paris or a person's name like Paul or Sarah. |
-|Processing and consolidation | The results are then processed. Where applicable, Wikipedia links are added and brands are identified via the Video Indexer built-in and customizable branding lists. |
-|Confidence value | The estimated confidence level of each named entity is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as an 0.82 score. |
-
-## Example use cases
-- Contextual advertising, for example, placing an ad for a Pizza chain following footage on Italy.
-- Deep searching media archives for insights on people or locations to create feature stories for the news.
-- Creating a verbal description of footage via OCR processing to enhance accessibility for the visually impaired, for example a background storyteller in movies.
-- Extracting insights on brand names.
-
-## Considerations and limitations when choosing a use case
-- Carefully consider the accuracy of the results. To promote more accurate detections, check the quality of the audio and images; low-quality audio and images might impact the detected insights.
-- Named entities only detect insights in audio and images. Logos in a brand name may not be detected.
-- Carefully consider that, when used for law enforcement, named entities may not always detect parts of the audio. To ensure fair and high-quality decisions, combine named entities with human oversight.
-- Don't use named entities for decisions that may have serious adverse impacts. Machine learning models that extract text can result in undetected or incorrect text output. Decisions based on incorrect output could have serious adverse impacts. Additionally, it's advisable to include human review of decisions that have the potential for serious impacts on individuals.
-
-When used responsibly and carefully, Azure AI Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
-
-- Always respect an individual's right to privacy, and only ingest videos for lawful and justifiable purposes.
-- Don't purposely disclose inappropriate content about young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual's personal freedom.
-- Commit to respecting and promoting human rights in the design and deployment of your analyzed media.
-- When using 3rd party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
-- Always seek legal advice when using content from unknown sources.
-- Always obtain appropriate legal and professional advice to ensure that your uploaded videos are secured and have adequate controls to preserve the integrity of your content and to prevent unauthorized access.
-- Provide a feedback channel that allows users and individuals to report issues with the service.
-- Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing media containing people.
-- Keep a human in the loop. Do not use any solution as a replacement for human oversight and decision-making.
-- Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
-
-## Next steps
-
-### Learn More about Responsible AI
-- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
-- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
-- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
-- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
-
-### Contact us
-
-`visupport@microsoft.com`
-
-## Azure AI Video Indexer insights
-- [Audio effects detection](audio-effects-detection.md)
-- [Face detection](face-detection.md)
-- [Keywords extraction](keywords.md)
-- [Transcription, Translation & Language identification](transcription-translation-lid.md)
-- [Labels identification](labels-identification.md)
-- [Observed people tracking & matched persons](observed-matched-people.md)
-- [Topics inference](topics-inference.md)
azure-video-indexer Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/network-security.md
- Title: How to enable network security
-description: This article gives an overview of the Azure AI Video Indexer network security options.
-- Previously updated : 12/19/2022----
-# NSG service tags for Azure AI Video Indexer
--
-Azure AI Video Indexer is a service hosted on Azure. In some cases, the service needs to interact with other services to index video files (for example, a Storage account), or you might orchestrate indexing jobs against the Azure AI Video Indexer API endpoint using your own service hosted on Azure (for example, AKS, Web Apps, Logic Apps, or Functions).
-
-> [!NOTE]
-> If you are already using the "AzureVideoAnalyzerForMedia" Network Service Tag, you may experience issues with your networking security group starting 9 January 2023. This is because we are moving to a new Security Tag label, "VideoIndexer". The mitigation is to remove the old "AzureVideoAnalyzerForMedia" tag from your configuration and deployment scripts and start using the "VideoIndexer" tag going forward.
-
-Use [Network Security Groups with Service Tags](../virtual-network/service-tags-overview.md) to limit access to your resources on a network level. A service tag represents a group of IP address prefixes from a given Azure service, in this case Azure AI Video Indexer. Microsoft manages the address prefixes grouped by the service tag and automatically updates the service tag as addresses change in our backend, minimizing the complexity of frequent updates to network security rules by the customer.
-
-> [!NOTE]
-> The NSG service tags feature is not available for trial and classic accounts. To update to an ARM account, see [Connect a classic account to ARM](connect-classic-account-to-arm.md) or [Import content from a trial account](import-content-from-trial.md).
-
-## Get started with service tags
-
-Currently we support the global service tag option for using service tags in your network security groups:
-
-**Use a single global VideoIndexer service tag**: This option opens your virtual network to all IP addresses that the Azure AI Video Indexer service uses across all regions we offer our service. This method will allow for all IP addresses owned and used by Azure AI Video Indexer to reach your network resources behind the NSG.
-
-> [!NOTE]
-> Currently we do not support IPs allocated to our services in the Switzerland North Region. These will be added soon. If your account is located in this region you cannot use Service Tags in your NSG today since these IPs are not in the Service Tag list and will be rejected by the NSG rule.
-
-## Use a single global Azure AI Video Indexer service tag
-
-The easiest way to begin using service tags with your Azure AI Video Indexer account is to add the global tag `VideoIndexer` to an NSG rule.
-
-1. From the [Azure portal](https://portal.azure.com/), select your network security group.
-1. Under **Settings**, select **Inbound security rules**, and then select **+ Add**.
-1. From the **Source** drop-down list, select **Service Tag**.
-1. From the **Source service tag** drop-down list, select **VideoIndexer**.
--
-This tag contains the IP addresses of Azure AI Video Indexer services for all regions where available. The tag will ensure that your resource can communicate with the Azure AI Video Indexer services no matter where it's created.
-
-## Using Azure CLI
-
-You can also use Azure CLI to create a new or update an existing NSG rule and add the **VideoIndexer** service tag using the `--source-address-prefixes` parameter. For a full list of CLI commands and parameters, see [az network nsg](/cli/azure/network/nsg/rule?view=azure-cli-latest&preserve-view=true).
-
-The following is an example of a security rule that uses service tags. For more details, visit https://aka.ms/servicetags
-
-`az network nsg rule create -g MyResourceGroup --nsg-name MyNsg -n MyNsgRuleWithTags --priority 400 --source-address-prefixes VideoIndexer --destination-address-prefixes '*' --destination-port-ranges '*' --direction Inbound --access Allow --protocol Tcp --description "Allow traffic from Video Indexer"`
-
-## Next steps
-
-[Disaster recovery](video-indexer-disaster-recovery.md)
azure-video-indexer Object Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/object-detection.md
- Title: Azure AI Video Indexer object detection overview
-description: An introduction to Azure AI Video Indexer object detection overview.
- Previously updated : 09/26/2023-----
-# Azure Video Indexer object detection
-
-Azure Video Indexer can detect objects in videos. The insight is part of all standard and advanced presets.
-
-## Prerequisites
-
-Review [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
-
-## JSON keys and definitions
-
-| **Key** | **Definition** |
-| -- | -- |
-| id | Incremental number of IDs of the detected objects in the media file |
-| type | Type of object, for example, Car |
-| thumbnailId | GUID representing a single detection of the object |
-| displayName | Name to be displayed in the VI portal experience |
-| wikiDataId | A unique identifier in the WikiData structure |
-| instances | List of all instances that were tracked |
-| confidence | A score between 0-1 indicating the object detection confidence |
-| adjustedStart | Adjusted start time of the video when using the editor |
-| adjustedEnd | Adjusted end time of the video when using the editor |
-| start | The time that the object appears in the frame |
-| end | The time that the object no longer appears in the frame |
-
-## JSON response
-
-Object detection is included in the insights that are the result of an [Upload](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) request.
-
-### Detected and tracked objects
-
-Detected and tracked objects appear under `detectedObjects` in the downloaded *insights.json* file. Every time a unique object is detected, it's given an ID. That object is also tracked, meaning that the model watches for the detected object to return to the frame. If it does, another instance is added to the instances for the object with different start and end times.
-
-In this example, the first car was detected and given an ID of 1 since it was also the first object detected. Then, a different car was detected and that car was given the ID of 23 since it was the 23rd object detected. Later, the first car appeared again and another instance was added to the JSON. Here is the resulting JSON:
-
-```json
-detectedObjects: [
- {
- id: 1,
- type: "Car",
- thumbnailId: "1c0b9fbb-6e05-42e3-96c1-abe2cd48t33",
- displayName: "car",
- wikiDataId: "Q1420",
- instances: [
- {
- confidence: 0.468,
- adjustedStart: "0:00:00",
- adjustedEnd: "0:00:02.44",
- start: "0:00:00",
- end: "0:00:02.44"
- },
- {
- confidence: 0.53,
- adjustedStart: "0:03:00",
- adjustedEnd: "0:00:03.55",
- start: "0:03:00",
- end: "0:00:03.55"
- }
- ]
- },
- {
- id: 23,
- type: "Car",
- thumbnailId: "1c0b9fbb-6e05-42e3-96c1-abe2cd48t34",
- displayName: "car",
- wikiDataId: "Q1420",
- instances: [
- {
- confidence: 0.427,
- adjustedStart: "0:00:00",
- adjustedEnd: "0:00:14.24",
- start: "0:00:00",
- end: "0:00:14.24"
- }
- ]
- }
-]
-```
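
As a small illustration of working with this structure, the following Python sketch sums the on-screen time per object type from a downloaded insights file. It assumes the `detectedObjects` list is either at the top level of the saved JSON (as in the snippet above) or nested under `videos[].insights`; the file name is a placeholder.

```python
# Minimal sketch: total on-screen time per detected object type, read from a downloaded
# insights JSON file. The file name is a placeholder; adjust the lookup if your file
# nests detectedObjects differently.
import json
from collections import defaultdict

def to_seconds(timestamp: str) -> float:
    """Convert an "H:MM:SS.fff" timestamp into seconds."""
    hours, minutes, seconds = timestamp.split(":")
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

with open("insights.json", encoding="utf-8") as f:
    insights = json.load(f)

# Look for detectedObjects at the top level first, then under videos[].insights.
detected = insights.get("detectedObjects") or \
    insights.get("videos", [{}])[0].get("insights", {}).get("detectedObjects", [])

durations = defaultdict(float)
for obj in detected:
    for instance in obj.get("instances", []):
        durations[obj.get("type", "Unknown")] += (
            to_seconds(instance["end"]) - to_seconds(instance["start"])
        )

for object_type, seconds in sorted(durations.items(), key=lambda item: -item[1]):
    print(f"{object_type}: {seconds:.2f} seconds on screen")
```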
-
-## Try object detection
-
-You can try out object detection with the web portal or with the API.
-
-## [Web Portal](#tab/webportal)
-
-Once you have uploaded a video, you can view the insights. On the insights tab, you can view the list of objects detected and their main instances.
-
-### Insights
-Select the **Insights** tab. The objects are in descending order of the number of appearances in the video.
--
-### Timeline
-Select the **Timeline** tab.
--
-Under the timeline tab, all object detection is displayed according to the time of appearance. When you hover over a specific detection, it shows the detection percentage of certainty.
-
-### Player
-
-The player automatically marks the detected object with a bounding box. The selected object from the insights pane is highlighted in blue, with the object's type and serial number also displayed.
-
-Filter the bounding boxes around objects by selecting the bounding box icon on the player.
--
-Then, select or deselect the detected objects checkboxes.
--
-Download the insights by selecting **Download** and then **Insights (JSON)**.
-
-## [API](#tab/api)
-
-When you use the [Upload](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) request with the standard or advanced video presets, object detection is included in the indexing.
-
-To examine object detection more thoroughly, use [Get Video Index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index).
---
-## Supported objects
-
- :::column:::
- - airplane
- - apple
- - backpack
- - banana
- - baseball bat
- - baseball glove
- - bed
- - bicycle
- - bottle
- - bowl
- - broccoli
- - bus
- - cake
- :::column-end:::
- :::column:::
- - car
- - carrot
- - cell phone
- - chair
- - clock
- - computer mouse
- - couch
- - cup
- - dining table
- - donut
- - fire hydrant
- - fork
- - frisbee
- :::column-end:::
- :::column:::
- - handbag
- - hot dog
- - kite
- - knife
- - laptop
- - microwave
- - motorcycle
- - necktie
- - orange
- - oven
- - parking meter
- - pizza
- - potted plant
- :::column-end:::
- :::column:::
- - refrigerator
- - remote
- - sandwich
- - scissors
- - skateboard
- - skis
- - snowboard
- - spoon
- - sports ball
- - suitcase
- - surfboard
- - teddy bear
- - television
- :::column-end:::
- :::column:::
- - tennis racket
- - toaster
- - toilet
- - toothbrush
- - traffic light
- - train
- - umbrella
- - vase
- - wine glass
- :::column-end:::
-
-## Limitations
-- Up to 20 detections per frame for standard and advanced processing and 35 tracks per class.
-- The video area shouldn't exceed 1920 x 1080 pixels.
-- Object size shouldn't be greater than 90 percent of the frame.
-- A high frame rate (> 30 FPS) may result in slower indexing, with little added value to the quality of the detection and tracking.
-- Other factors that may affect the accuracy of the object detection include low light conditions, camera motion, and occlusion.
azure-video-indexer Observed Matched People https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/observed-matched-people.md
- Title: Azure AI Video Indexer observed people tracking & matched faces overview
-description: An introduction to Azure AI Video Indexer observed people tracking & matched faces component responsibly.
- Previously updated : 04/06/2023-----
-# Observed people tracking and matched faces
--
-> [!IMPORTANT]
-> Face identification, customization and celebrity recognition features access is limited based on eligibility and usage criteria in order to support our Responsible AI principles. Face identification, customization and celebrity recognition features are only available to Microsoft managed customers and partners. Use the [Face Recognition intake form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu) to apply for access.
-
-Observed people tracking and matched faces are Azure AI Video Indexer AI features that automatically detect and match people in media files. Observed people tracking and matched faces can be set to display insights on people, their clothing, and the exact timeframe of their appearance.
-
-The resulting insights are displayed in a categorized list in the Insights tab; the tab includes a thumbnail of each person and their ID. Clicking the thumbnail of a person displays the matched person (the corresponding face in the People insight). Insights are also generated in a categorized list in a JSON file that includes the thumbnail ID of the person, the percentage of time they appear in the file, a Wiki link (if they're a celebrity), and the confidence level.
-
-## Prerequisites
-
-Review [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
-
-## General principles
-
-This article discusses observed people tracking and matched faces and the key considerations for making use of this technology responsibly. There are many things you need to consider when deciding how to use and implement an AI-powered feature:
-- Will this feature perform well in my scenario? Before deploying observed people tracking and matched faces into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
-- Are we equipped to identify and respond to errors? AI-powered products and features will not be 100% accurate, so consider how you'll identify and respond to any errors that may occur.
-
-## View the insight
-
-When uploading the media file, go to Video + Audio Indexing and select Advanced.
-
-To display observed people tracking and matched faces insight on the website, do the following:
-
-1. After the file has been indexed, go to Insights and then scroll to observed people.
-
-To see the insights in a JSON file, do the following:
-
-1. Click Download and then Insights (JSON).
-1. Copy the `observedPeople` text and paste it into your JSON viewer.
-
- The following section shows observed people and clothing. For the person with id 4 (`"id": 4`) there's also a matching face.
-
- ```json
- "observedPeople": [
- {
- "id": 1,
- "thumbnailId": "4addcebf-6c51-42cd-b8e0-aedefc9d8f6b",
- "clothing": [
- {
- "id": 1,
- "type": "sleeve",
- "properties": {
- "length": "long"
- }
- },
- {
- "id": 2,
- "type": "pants",
- "properties": {
- "length": "long"
- }
- }
- ],
- "instances": [
- {
- "adjustedStart": "0:00:00.0667333",
- "adjustedEnd": "0:00:12.012",
- "start": "0:00:00.0667333",
- "end": "0:00:12.012"
- }
- ]
- },
- {
- "id": 2,
- "thumbnailId": "858903a7-254a-438e-92fd-69f8bdb2ac88",
- "clothing": [
- {
- "id": 1,
- "type": "sleeve",
- "properties": {
- "length": "short"
- }
- }
- ],
- "instances": [
- {
- "adjustedStart": "0:00:23.2565666",
- "adjustedEnd": "0:00:25.4921333",
- "start": "0:00:23.2565666",
- "end": "0:00:25.4921333"
- },
- {
- "adjustedStart": "0:00:25.8925333",
- "adjustedEnd": "0:00:25.9926333",
- "start": "0:00:25.8925333",
- "end": "0:00:25.9926333"
- },
- {
- "adjustedStart": "0:00:26.3930333",
- "adjustedEnd": "0:00:28.5618666",
- "start": "0:00:26.3930333",
- "end": "0:00:28.5618666"
- }
- ]
- },
- {
- "id": 3,
- "thumbnailId": "1406252d-e7f5-43dc-852d-853f652b39b6",
- "clothing": [
- {
- "id": 1,
- "type": "sleeve",
- "properties": {
- "length": "short"
- }
- },
- {
- "id": 2,
- "type": "pants",
- "properties": {
- "length": "long"
- }
- },
- {
- "id": 3,
- "type": "skirtAndDress"
- }
- ],
- "instances": [
- {
- "adjustedStart": "0:00:31.9652666",
- "adjustedEnd": "0:00:34.4010333",
- "start": "0:00:31.9652666",
- "end": "0:00:34.4010333"
- }
- ]
- },
- {
- "id": 4,
- "thumbnailId": "d09ad62e-e0a4-42e5-8ca9-9a640c686596",
- "clothing": [
- {
- "id": 1,
- "type": "sleeve",
- "properties": {
- "length": "short"
- }
- },
- {
- "id": 2,
- "type": "pants",
- "properties": {
- "length": "short"
- }
- }
- ],
- "matchingFace": {
- "id": 1310,
- "confidence": 0.3819
- },
- "instances": [
- {
- "adjustedStart": "0:00:34.8681666",
- "adjustedEnd": "0:00:36.0026333",
- "start": "0:00:34.8681666",
- "end": "0:00:36.0026333"
- },
- {
- "adjustedStart": "0:00:36.6699666",
- "adjustedEnd": "0:00:36.7367",
- "start": "0:00:36.6699666",
- "end": "0:00:36.7367"
- },
- {
- "adjustedStart": "0:00:37.2038333",
- "adjustedEnd": "0:00:39.6729666",
- "start": "0:00:37.2038333",
- "end": "0:00:39.6729666"
- }
- ]
- }
- ]
- ```
-
-To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
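
As a small illustration of consuming this output, the following Python sketch lists the observed people whose `matchingFace` meets a confidence threshold. The file name and the threshold value are placeholders, and the fallback path under `videos[].insights` is an assumption about where the section sits in a full insights file.

```python
# Minimal sketch: list observed people whose matched face meets a confidence threshold,
# based on the observedPeople structure shown above. File name and threshold are placeholders.
import json

THRESHOLD = 0.3  # example value; tune for your scenario

with open("insights.json", encoding="utf-8") as f:
    insights = json.load(f)

# Look for observedPeople at the top level first, then under videos[].insights.
people = insights.get("observedPeople") or \
    insights.get("videos", [{}])[0].get("insights", {}).get("observedPeople", [])

for person in people:
    face = person.get("matchingFace")
    if face and face.get("confidence", 0) >= THRESHOLD:
        appearances = len(person.get("instances", []))
        print(
            f"Observed person {person['id']} matches face {face['id']} "
            f"(confidence {face['confidence']:.2f}, {appearances} appearances)"
        )
```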
-
-## Observed people tracking and matched faces components
-
-During the observed people tracking and matched faces procedure, images in a media file are processed, as follows:
-
-|Component|Definition|
-|:--|:--|
-|Source file | The user uploads the source file for indexing. |
-|Detection | The media file is tracked to detect observed people and their clothing. For example, shirt with long sleeves, dress or long pants. Note that to be detected, the full upper body of the person must appear in the media.|
-|Local grouping |The identified observed faces are filtered into local groups. If a person is detected more than once, additional observed faces instances are created for this person. |
-|Matching and Classification |The observed people instances are matched to faces. If there is a known celebrity, the observed person will be given their name. Any number of observed people instances can be matched to the same face. |
-|Confidence value| The estimated confidence level of each observed person is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as an 0.82 score.|
-
-## Example use cases
-- Tracking a person's movement, for example, in law enforcement for more efficiency when analyzing an accident or crime.
-- Improving efficiency by deep searching for matched people in organizational archives for insight on specific celebrities, for example when creating promos and trailers.
-- Improved efficiency when creating feature stories, for example, searching for people wearing a red shirt in the archives of a football game at a News or Sports agency.
-
-## Considerations and limitations when choosing a use case
-
-Below are some considerations to keep in mind when using observed people and matched faces.
-
-### Limitations of observed people tracking
-
-It's important to note the limitations of observed people tracking, to avoid or mitigate the effects of false negatives (missed detections) and limited detail.
-
-* People are generally not detected if they appear small (minimum person height is 100 pixels).
-* Maximum frame size is FHD.
-* Low quality video (for example, dark lighting conditions) may impact the detection results.
-* The recommended frame rate is at least 30 FPS.
-* Recommended video input should contain up to 10 people in a single frame. The feature could work with more people in a single frame, but the detection result retrieves up to 10 people in a frame with the highest detection confidence.
-* People with similar clothes (for example, people wearing uniforms, players in sports games) could be detected as the same person with the same ID number.
-* Obstruction: there may be errors where there are obstructions (scene/self or obstructions by other people).
-* Pose: the tracks may be split due to different poses (back/front).
-
-### Other considerations
-
-When used responsibly and carefully, Azure AI Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
-- Always respect an individual's right to privacy, and only ingest videos for lawful and justifiable purposes.
-- Don't purposely disclose inappropriate media showing young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual's personal freedom.
-- Commit to respecting and promoting human rights in the design and deployment of your analyzed media.
-- When using 3rd party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
-- Always seek legal advice when using media from unknown sources.
-- Always obtain appropriate legal and professional advice to ensure that your uploaded videos are secured and have adequate controls to preserve the integrity of your content and to prevent unauthorized access.
-- Provide a feedback channel that allows users and individuals to report issues with the service.
-- Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing media containing people.
-- Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision-making.
-- Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
-
-## Next steps
-
-### Learn More about Responsible AI
-- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
-- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
-- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
-- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
-
-### Contact us
-
-`visupport@microsoft.com`
-
-## Azure AI Video Indexer insights
-- [Audio effects detection](audio-effects-detection.md)
-- [Face detection](face-detection.md)
-- [Keywords extraction](keywords.md)
-- [Transcription, translation & language identification](transcription-translation-lid.md)
-- [Labels identification](labels-identification.md)
-- [Named entities](named-entities.md)
-- [Topics inference](topics-inference.md)
azure-video-indexer Observed People Featured Clothing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/observed-people-featured-clothing.md
- Title: Enable featured clothing of an observed person
-description: When indexing a video using Azure AI Video Indexer advanced video settings, you can view the featured clothing of an observed person.
- Previously updated : 08/14/2023----
-# Enable featured clothing of an observed person
--
-When indexing a video using Azure AI Video Indexer advanced video settings, you can view the featured clothing of an observed person. The insight provides moments within the video where key people are prominently featured and clearly visible, including the coordinates of the people, timestamp, and the frame of the shot. This insight allows high-quality in-video contextual advertising, where relevant clothing ads are matched with the specific time within the video in which they're viewed.
-
-This article discusses how to view the featured clothing insight and how the featured clothing images are ranked.
-
-## View an intro video
-
-You can view the following short video that discusses how to view and use the featured clothing insight.
-
-> [!VIDEO https://www.microsoft.com/videoplayer/embed//RE5b4JJ]
-
-## Viewing featured clothing
-
-The featured clothing insight is available when indexing your file by choosing the Advanced option -> Advanced video or Advanced video + audio preset (under Video + audio indexing). Standard indexing doesn't include this insight.
--
-The featured clothing images are ranked based on some of the following factors: key moments of the video, duration the person appears, text-based emotions, and audio events. The insight provides the highest-ranking frame per scene, which enables you to produce contextual advertisements per scene throughout the video. The JSON file is ranked by the sequence of scenes in the video, with each scene having the top-rated frame as the result.
-
-> [!NOTE]
-> The featured clothing insight can only be viewed from the artifact file, and the insight is not in the Azure AI Video Indexer website.
-
-1. In the upper-right corner, select **Download** -> **Artifact (ZIP)** to download the artifact zip file.
-1. Open `featuredclothing.zip`.
-
-The .zip file contains two objects:
-- `featuredclothing.map.json` - the file contains instances of each featured clothing, with the following properties:
-
-    - `id` - ranking index (`"id": 1` is the most important clothing).
-    - `confidence` - the score of the featured clothing.
-    - `frameIndex` - the best frame of the clothing.
-    - `timestamp` - corresponding to the frameIndex.
-    - `opBoundingBox` - bounding box of the person.
-    - `faceBoundingBox` - bounding box of the person's face, if detected.
-    - `fileName` - where the best frame of the clothing is saved.
-    - `sceneId` - the scene where the clothing appears.
-
- An example of the featured clothing with `"sceneID": 1`.
-
- ```json
- "instances": [
- {
- "confidence": 0.07,
- "faceBoundingBox": {},
- "fileName": "frame_100.jpg",
- "frameIndex": 100,
- "opBoundingBox": {
- "x": 0.09062,
- "y": 0.4,
- "width": 0.11302,
- "height": 0.59722
- },
- "timestamp": "0:00:04",
- "personName": "Observed Person #1",
- "sceneId": 1
- }
- ```
-- `featuredclothing.frames.map` - this folder contains images of the best frames that the featured clothing appeared in, corresponding to the `fileName` property in each instance in `featuredclothing.map.json`.
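
A short Python sketch of combining the two artifacts follows: it reads `featuredclothing.map.json` and keeps the highest-confidence instance per scene, which points (via `fileName`) at the matching image in the frames folder. The exact file layout is assumed from the example above, so adjust the loading step if your file differs.

```python
# Minimal sketch: pick the highest-confidence featured-clothing instance per scene from
# featuredclothing.map.json. The layout (a list of entries, each with an "instances"
# array as in the example above) is an assumption; adjust if your file differs.
import json

with open("featuredclothing.map.json", encoding="utf-8") as f:
    clothing_map = json.load(f)

best_per_scene = {}
for entry in clothing_map:
    for inst in entry.get("instances", []):
        scene = inst.get("sceneId")
        if scene is None:
            continue
        current = best_per_scene.get(scene)
        if current is None or inst.get("confidence", 0) > current.get("confidence", 0):
            best_per_scene[scene] = inst

for scene in sorted(best_per_scene):
    inst = best_per_scene[scene]
    print(f"Scene {scene}: {inst.get('fileName')} at {inst.get('timestamp')} "
          f"(confidence {inst.get('confidence')})")
```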
-## Limitations and assumptions
-
-It's important to note the limitations of featured clothing, to avoid or mitigate the effects of false detections of images with low quality or low relevancy.
-
-- Precondition for the featured clothing is that the person wearing the clothes can be found in the observed people insight.
-- If the face of a person wearing the featured clothing isn't detected, the results don't include the face's bounding box.
-- If a person in a video wears more than one outfit, the algorithm selects its best outfit as a single featured clothing image.
-- When posed, the tracks are optimized to handle observed people who most often appear facing forward.
-- Wrong detections may occur when people are overlapping.
-- Frames containing blurred people are more prone to low quality results.
-
-For more information, see the [limitations of observed people](observed-people-tracing.md#limitations-and-assumptions).
-
-## Next steps
-- [Trace observed people in a video](observed-people-tracing.md)
-- [People's detected clothing](detected-clothing.md)
azure-video-indexer Observed People Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/observed-people-tracking.md
- Title: Track observed people in a video
-description: This topic gives an overview of Track observed people in a video concept.
- Previously updated : 08/07/2023----
-# Track observed people in a video
--
-Azure AI Video Indexer detects observed people in videos and provides information such as the location of the person in the video frame and the exact timestamp (start, end) when a person appears. The API returns the bounding box coordinates (in pixels) for each person instance detected, including detection confidence.
-
-Some scenarios where this feature could be useful:
-
-* Post-event analysisΓÇödetect and track a personΓÇÖs movement to better analyze an accident or crime post-event (for example, explosion, bank robbery, incident).
-* Improve efficiency when creating raw data for content creators, like video advertising, news, or sport games (for example, find people wearing a red shirt in a video archive).
-* Create a summary out of a long video, like court evidence of a specific person's appearance in a video, using the same detected person's ID.
-* Learn and analyze trends over time, for example, how customers move across aisles in a shopping mall or how much time they spend in checkout lines.
-
-For example, if a video contains a person, the detect operation will list the person's appearances together with their coordinates in the video frames. You can use this functionality to determine the person's path in a video. It also lets you determine whether there are multiple instances of the same person in a video.
-
-The newly added **Observed people tracking** feature is available when indexing your file by choosing the **Advanced option** -> **Advanced video** or **Advanced video + audio** preset (under **Video + audio indexing**). Standard indexing will not include this new advanced model.
-
-
-When you choose to see **Insights** of your video on the [Video Indexer](https://www.videoindexer.ai/account/login) website, the Observed People Tracking insight shows up on the page with thumbnails of all detected people. You can choose a thumbnail of a person and see where the person appears in the video player.
-
-The following JSON response illustrates what Video Indexer returns when tracking observed people:
-
-```json
- {
- ...
- "videos": [
- {
- ...
- "insights": {
- ...
- "observedPeople": [{
- "id": 1,
- "thumbnailId": "560f2cfb-90d0-4d6d-93cb-72bd1388e19d",
- "instances": [
- {
- "adjustedStart": "0:00:01.5682333",
- "adjustedEnd": "0:00:02.7027",
- "start": "0:00:01.5682333",
- "end": "0:00:02.7027"
- }
- ]
- },
- {
- "id": 2,
- "thumbnailId": "9c97ae13-558c-446b-9989-21ac27439da0",
- "instances": [
- {
- "adjustedStart": "0:00:16.7167",
- "adjustedEnd": "0:00:18.018",
- "start": "0:00:16.7167",
- "end": "0:00:18.018"
- }
- ]
- }
- ]
- }
- ...
- }
- ]
-}
-```
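As a minimal sketch of consuming this response, the following C# snippet lists when each observed person appears. It assumes the index JSON was saved locally as `insights.json` (a placeholder name) and has the `videos` -> `insights` -> `observedPeople` shape shown above.

```csharp
using System;
using System.IO;
using System.Text.Json;

// A minimal sketch: print the appearance ranges of each observed person from a saved index file.
// Assumes the JSON was saved as insights.json (placeholder name) with the shape shown above.
public static class ObservedPeopleReport
{
    public static void Main()
    {
        using JsonDocument doc = JsonDocument.Parse(File.ReadAllText("insights.json"));

        foreach (JsonElement video in doc.RootElement.GetProperty("videos").EnumerateArray())
        {
            if (!video.GetProperty("insights").TryGetProperty("observedPeople", out JsonElement people))
            {
                continue; // this video wasn't indexed with an advanced video preset
            }

            foreach (JsonElement person in people.EnumerateArray())
            {
                int id = person.GetProperty("id").GetInt32();
                foreach (JsonElement instance in person.GetProperty("instances").EnumerateArray())
                {
                    Console.WriteLine($"Person {id}: {instance.GetProperty("start").GetString()} - {instance.GetProperty("end").GetString()}");
                }
            }
        }
    }
}
```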
-
-## Limitations and assumptions
-
-For more information, see [Considerations and limitations when choosing a use case](observed-matched-people.md#considerations-and-limitations-when-choosing-a-use-case).
-
-## Next steps
-
-Review [overview](video-indexer-overview.md)
azure-video-indexer Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/ocr.md
- Title: Azure AI Video Indexer optical character recognition (OCR) overview -
-description: An introduction to Azure AI Video Indexer optical character recognition (OCR) component responsibly.
-- Previously updated : 06/15/2022-----
-# Optical character recognition (OCR)
--
-Optical character recognition (OCR) is an Azure AI Video Indexer AI feature that extracts text from images like pictures, street signs and products in media files to create insights.
-
-OCR currently extracts insights from printed and handwritten text in over 50 languages, including from an image with text in multiple languages. For more information, see [OCR supported languages](../ai-services/computer-vision/language-support.md#optical-character-recognition-ocr).
-
-## Prerequisites
-
-Review [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
-
-## General principles
-
-This article discusses optical character recognition (OCR) and the key considerations for making use of this technology responsibly. There are many things you need to consider when deciding how to use and implement an AI-powered feature:
-- Will this feature perform well in my scenario? Before deploying OCR into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
-- Are we equipped to identify and respond to errors? AI-powered products and features won't be 100% accurate, so consider how you'll identify and respond to any errors that may occur.
-## View the insight
-
-When working on the website, the insights are displayed in the **Timeline** tab. They can also be generated in a categorized list in a JSON file that includes the ID, transcribed text, duration, and confidence score.
-
-To see the instances on the website, do the following:
-
-1. Go to View and check OCR.
-1. Select Timeline to display the extracted text.
-
-Insights can also be generated in a categorized list in a JSON file that includes the ID, language, and text together with each instance's confidence score.
-
-To see the insights in a JSON file, do the following:
-
-1. Select Download -> Insight (JSON).
-1. Copy the `ocr` element, under `insights`, and paste it into your online JSON viewer.
-
- ```json
- "ocr": [
- {
- "id": 1,
- "text": "2017 Ruler",
- "confidence": 0.4365,
- "left": 901,
- "top": 3,
- "width": 80,
- "height": 23,
- "angle": 0,
- "language": "en-US",
- "instances": [
- {
- "adjustedStart": "0:00:45.5",
- "adjustedEnd": "0:00:46",
- "start": "0:00:45.5",
- "end": "0:00:46"
- },
- {
- "adjustedStart": "0:00:55",
- "adjustedEnd": "0:00:55.5",
- "start": "0:00:55",
- "end": "0:00:55.5"
- }
- ]
- },
- {
- "id": 2,
- "text": "2017 Ruler postppu - PowerPoint",
- "confidence": 0.4712,
- "left": 899,
- "top": 4,
- "width": 262,
- "height": 48,
- "angle": 0,
- "language": "en-US",
- "instances": [
- {
- "adjustedStart": "0:00:44.5",
- "adjustedEnd": "0:00:45",
- "start": "0:00:44.5",
- "end": "0:00:45"
- }
- ]
- }
- ]
- ```
-
-To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
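For example, here's a minimal C# sketch that reads a downloaded insights file and keeps only OCR results above a confidence threshold. The file name `insights.json` and the 0.6 threshold are placeholder assumptions; the JSON shape follows the fragment above.

```csharp
using System;
using System.IO;
using System.Text.Json;

// A minimal sketch: print only the OCR results whose confidence is above a chosen threshold.
// Assumes the downloaded insights file was saved as insights.json (placeholder name) and has
// the "videos" -> "insights" -> "ocr" shape shown above.
public static class OcrFilter
{
    private const double MinConfidence = 0.6; // illustrative threshold

    public static void Main()
    {
        using JsonDocument doc = JsonDocument.Parse(File.ReadAllText("insights.json"));

        foreach (JsonElement video in doc.RootElement.GetProperty("videos").EnumerateArray())
        {
            if (!video.GetProperty("insights").TryGetProperty("ocr", out JsonElement ocrResults))
            {
                continue; // no OCR insight for this video
            }

            foreach (JsonElement result in ocrResults.EnumerateArray())
            {
                double confidence = result.GetProperty("confidence").GetDouble();
                if (confidence >= MinConfidence)
                {
                    Console.WriteLine($"{result.GetProperty("text").GetString()} ({result.GetProperty("language").GetString()}, confidence {confidence})");
                }
            }
        }
    }
}
```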
-
-## OCR components
-
-During the OCR procedure, text images in a media file are processed, as follows:
-
-|Component|Definition|
-|||
-|Source file| The user uploads the source file for indexing.|
-|Read model |Images are detected in the media file and text is then extracted and analyzed by Azure AI services. |
-|Get read results model |The output of the extracted text is displayed in a JSON file.|
-|Confidence value| The estimated confidence level of each word is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as a 0.82 score.|
-
-For more information, see [OCR technology](../ai-services/computer-vision/overview-ocr.md).
-
-## Example use cases
-- Deep searching media footage for images with signposts, street names or car license plates, for example, in law enforcement.
-- Extracting text from images in media files and then translating it into multiple languages in labels for accessibility, for example in media or entertainment.
-- Detecting brand names in images and tagging them for translation purposes, for example in advertising and branding.
-- Extracting text in images that is then automatically tagged and categorized for accessibility and future usage, for example to generate content at a news agency.
-- Extracting text in warnings in online instructions and then translating the text to comply with local standards, for example, e-learning instructions for using equipment.
-## Considerations and limitations when choosing a use case
-- Carefully consider the accuracy of the results. To promote more accurate detections, check the quality of the image; low-quality images might impact the detected insights.
-- When using OCR for law enforcement, carefully consider that OCR can potentially misread or not detect parts of the text. To ensure fair and high-quality decisions, combine OCR-based automation with human oversight.
-- When extracting handwritten text, avoid using the OCR results of signatures that are hard to read for both humans and machines. A better way to use OCR is to use it for detecting the presence of a signature for further analysis.
-- Don't use OCR for decisions that may have serious adverse impacts. Machine learning models that extract text can result in undetected or incorrect text output. Decisions based on incorrect output could have serious adverse impacts. Additionally, it's advisable to include human review of decisions that have the potential for serious impacts on individuals.
-When used responsibly and carefully, Azure AI Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
-- Always respect an individual's right to privacy, and only ingest videos for lawful and justifiable purposes.
-- Don't purposely disclose inappropriate content about young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual's personal freedom.
-- Commit to respecting and promoting human rights in the design and deployment of your analyzed media.
-- When using third party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
-- Always seek legal advice when using content from unknown sources.
-- Always obtain appropriate legal and professional advice to ensure that your uploaded videos are secured and have adequate controls to preserve the integrity of your content and to prevent unauthorized access.
-- Provide a feedback channel that allows users and individuals to report issues with the service.
-- Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing media containing people.
-- Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision-making.
-- Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
-## Learn more about OCR
-- [Azure AI services documentation](/azure/ai-services/computer-vision/overview-ocr)
-- [Transparency note](/legal/cognitive-services/computer-vision/ocr-transparency-note)
-- [Use cases](/legal/cognitive-services/computer-vision/ocr-transparency-note#example-use-cases)
-- [Capabilities and limitations](/legal/cognitive-services/computer-vision/ocr-characteristics-and-limitations)
-- [Guidance for integration and responsible use with OCR technology](/legal/cognitive-services/computer-vision/ocr-guidance-integration-responsible-use)
-- [Data, privacy and security](/legal/cognitive-services/computer-vision/ocr-data-privacy-security)
-- [Meter: WER](/legal/cognitive-services/computer-vision/ocr-characteristics-and-limitations#word-level-accuracy-measure)
-## Next steps
-
-### Learn More about Responsible AI
-- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
-- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
-- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
-- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
-### Contact us
-
-`visupport@microsoft.com`
-
-## Azure AI Video Indexer insights
-- [Audio effects detection](audio-effects-detection.md)
-- [Face detection](face-detection.md)
-- [Keywords extraction](keywords.md)
-- [Transcription, translation & language identification](transcription-translation-lid.md)
-- [Labels identification](labels-identification.md)
-- [Named entities](named-entities.md)
-- [Observed people tracking & matched faces](observed-matched-people.md)
-- [Topics inference](topics-inference.md)
azure-video-indexer Odrv Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/odrv-download.md
- Title: Index videos stored on OneDrive - Azure AI Video Indexer
-description: Learn how to index videos stored on OneDrive by using Azure AI Video Indexer.
- Previously updated : 12/17/2021----
-# Index your videos stored on OneDrive
--
-This article shows how to index videos stored on OneDrive by using the Azure AI Video Indexer website.
-
-## Supported file formats
-
-For a list of file formats that you can use with Azure AI Video Indexer, see [Standard Encoder formats and codecs](/azure/media-services/latest/encode-media-encoder-standard-formats-reference).
-
-## Index a video by using the website
-
-1. Sign into the [Azure AI Video Indexer](https://www.videoindexer.ai/) website, and then select **Upload**.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/video-indexer-get-started/video-indexer-upload.png" alt-text="Screenshot that shows the Upload button.":::
-
-1. Select the **enter a file URL** button.
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/video-indexer-get-started/avam-enter-file-url.png" alt-text="Screenshot that shows the enter file URL button.":::
-
-1. Next, go to your video/audio file located on your OneDrive using a web browser. Select the file you want to index, and then, at the top, select **embed**.
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/video-indexer-get-started/avam-odrv-embed.png" alt-text="Screenshot that shows the embed code button.":::
-
-1. On the right, select **Generate** to generate an embed URL.
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/video-indexer-get-started/avam-odrv-embed-generate.png" alt-text="Screenshot that shows the embed code generate button.":::
-
-1. Copy the embed code and extract only the URL part including the key. For example:
-
- `https://onedrive.live.com/embed?cid=5BC591B7C713B04F&resid=5DC518B6B713C40F%2110126&authkey=HnsodidN_50oA3lLfk`
-
-   Replace **embed** with **download** (a short code sketch after these steps shows this substitution). You'll now have a URL that looks like this:
-
- `https://onedrive.live.com/download?cid=5BC591B7C713B04F&resid=5DC518B6B713C40F%2110126&authkey=HnsodidN_50oA3lLfk`
-
-1. Now enter this URL in the Azure AI Video Indexer website in the URL field.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/video-indexer-get-started/avam-odrv-url.png" alt-text="Screenshot that shows the onedrive url field.":::
-
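The embed-to-download substitution from the steps above can also be done in code. The following minimal C# sketch uses the illustrative URL from the example (not a real file):

```csharp
using System;

// A minimal sketch: turn the OneDrive embed URL into a direct-download URL by replacing the
// "embed" path segment with "download", as described in the steps above.
public static class OneDriveUrlExample
{
    public static void Main()
    {
        // The illustrative embed URL from the example above (not a real file).
        string embedUrl = "https://onedrive.live.com/embed?cid=5BC591B7C713B04F&resid=5DC518B6B713C40F%2110126&authkey=HnsodidN_50oA3lLfk";

        string downloadUrl = embedUrl.Replace("/embed?", "/download?");

        Console.WriteLine(downloadUrl);
        // https://onedrive.live.com/download?cid=5BC591B7C713B04F&resid=5DC518B6B713C40F%2110126&authkey=HnsodidN_50oA3lLfk
    }
}
```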
-After your video is downloaded from OneDrive, Azure AI Video Indexer starts indexing and analyzing the video.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/video-indexer-get-started/progress.png" alt-text="Screenshot that shows the progress of an upload.":::
-
-Once Azure AI Video Indexer is done analyzing, you will receive an email with a link to your indexed video. The email also includes a short description of what was found in your video (for example: people, topics, optical character recognition).
-
-## Upload and index a video by using the API
-
-You can use the [Upload Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) API to upload and index your videos based on a URL. The code sample that follows includes the commented-out code that shows how to upload the byte array.
-
-### Configurations and parameters
-
-This section describes some of the optional parameters and when to set them. For the most up-to-date info about parameters, see the [Azure AI Video Indexer API developer portal](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video).
-
-#### externalID
-
-Use this parameter to specify an ID that will be associated with the video. The ID can be used for integration with an external video content management (VCM) system. The videos that are in the Azure AI Video Indexer website can be searched via the specified external ID.
-
-#### callbackUrl
-
-Use this parameter to specify a callback URL.
--
-Azure AI Video Indexer returns any existing parameters provided in the original URL. The URL must be encoded.
-
-#### indexingPreset
-
-Use this parameter to define an AI bundle that you want to apply on your audio or video file. This parameter is used to configure the indexing process. You can specify the following values:
-- `AudioOnly`: Index and extract insights by using audio only (ignoring video).
-- `VideoOnly`: Index and extract insights by using video only (ignoring audio).
-- `Default`: Index and extract insights by using both audio and video.
-- `DefaultWithNoiseReduction`: Index and extract insights from both audio and video, while applying noise reduction algorithms on the audio stream.
- The `DefaultWithNoiseReduction` value is now mapped to a default preset (deprecated).
-- `BasicAudio`: Index and extract insights by using audio only (ignoring video). Include only basic audio features (transcription, translation, formatting of output captions and subtitles).
-- `AdvancedAudio`: Index and extract insights by using audio only (ignoring video). Include advanced audio features (such as audio event detection) in addition to the standard audio analysis.
-- `AdvancedVideo`: Index and extract insights by using video only (ignoring audio). Include advanced video features (such as observed people tracing) in addition to the standard video analysis.
-- `AdvancedVideoAndAudio`: Index and extract insights by using both advanced audio and advanced video analysis.
-> [!NOTE]
-> The preceding advanced presets include models that are in public preview. When these models reach general availability, there might be implications for the price.
-
-Azure AI Video Indexer covers up to two tracks of audio. If the file has more audio tracks, they're treated as one track. If you want to index the tracks separately, you need to extract the relevant audio file and index it as `AudioOnly`.
-
-Price depends on the selected indexing option. For more information, see [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/).
-
-#### priority
-
-Azure AI Video Indexer indexes videos according to their priority. Use the `priority` parameter to specify the index priority. The following values are valid: `Low`, `Normal` (default), and `High`.
-
-This parameter is supported only for paid accounts.
-
-#### streamingPreset
-
-After your video is uploaded, Azure AI Video Indexer optionally encodes the video. It then proceeds to indexing and analyzing the video. When Azure AI Video Indexer is done analyzing, you get a notification with the video ID.
-
-When you're using the [Upload Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) or [Re-Index Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) API, one of the optional parameters is `streamingPreset`. If you set `streamingPreset` to `Default`, `SingleBitrate`, or `AdaptiveBitrate`, the encoding process is triggered.
-
-After the indexing and encoding jobs are done, the video is published so you can also stream your video. The streaming endpoint from which you want to stream the video must be in the **Running** state.
-
-For `SingleBitrate`, the standard encoder cost will apply for the output. If the video height is greater than or equal to 720, Azure AI Video Indexer encodes it as 1280 x 720. Otherwise, it's encoded as 640 x 468.
-The default setting is [content-aware encoding](/azure/media-services/latest/encode-content-aware-concept).
-
-If you only want to index your video and not encode it, set `streamingPreset` to `NoStreaming`.
-
-#### videoUrl
-
-This parameter specifies the URL of the video or audio file to be indexed. If the `videoUrl` parameter is not specified, Azure AI Video Indexer expects you to pass the file as multipart/form body content.
-
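As a rough sketch of how the optional parameters described above fit together, the following C# snippet builds an upload query string the same way the samples below do with their `CreateQueryString` helper. All values are placeholders, and you should confirm exact parameter names and allowed values in the [Azure AI Video Indexer API developer portal](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video).

```csharp
using System.Web; // HttpUtility; available in .NET Framework and in modern .NET via System.Web.HttpUtility

// A minimal sketch: build the Upload Video query string with the optional parameters described
// above. All values are placeholders; confirm exact parameter names (for example, externalId
// casing) and allowed values in the API developer portal.
public static class UploadQueryExample
{
    public static string Build(string accessToken, string videoUrl)
    {
        var queryParameters = HttpUtility.ParseQueryString(string.Empty);
        queryParameters["accessToken"] = accessToken;
        queryParameters["name"] = "video_name";
        queryParameters["videoUrl"] = videoUrl;                              // URL of the media file to index
        queryParameters["externalId"] = "my-vcm-id-12345";                   // ID used by your external VCM system
        queryParameters["callbackUrl"] = "https://example.com/vi-callback";  // your callback endpoint
        queryParameters["indexingPreset"] = "AdvancedVideo";                 // for example AudioOnly, Default, AdvancedVideoAndAudio
        queryParameters["priority"] = "High";                                // Low, Normal (default), or High; paid accounts only
        queryParameters["streamingPreset"] = "NoStreaming";                  // Default, SingleBitrate, AdaptiveBitrate, or NoStreaming
        return queryParameters.ToString();                                   // values are URL-encoded automatically
    }
}
```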
-### Code sample
-
-> [!NOTE]
-> The following sample is intended for Classic accounts only and isn't compatible with ARM accounts. For an updated sample for ARM, see [this ARM sample repo](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/API-Samples/C%23/ArmBased/Program.cs).
-
-The following C# code snippets demonstrate the usage of all the Azure AI Video Indexer APIs together.
-
-### [Classic account](#tab/With-classic-account/)
-
-After you copy the following code into your development platform, you'll need to provide two parameters:
-
-* API key (`apiKey`): Your personal API management subscription key. It allows you to get an access token in order to perform operations on your Azure AI Video Indexer account.
-
- To get your API key:
-
- 1. Go to the [Azure AI Video Indexer API developer portal](https://api-portal.videoindexer.ai/).
- 1. Sign in.
- 1. Go to **Products** > **Authorization** > **Authorization subscription**.
- 1. Copy the **Primary key** value.
-
-* Video URL (`videoUrl`): A URL of the video or audio file to be indexed. Here are the requirements:
-
- - The URL must point at a media file. (HTML pages are not supported.)
- - The file can be protected by an access token that's provided as part of the URI. The endpoint that serves the file must be secured with TLS 1.2 or later.
- - The URL must be encoded.
-
-The result of successfully running the code sample includes an insight widget URL and a player widget URL. They allow you to examine the insights and the uploaded video, respectively.
--
-```csharp
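-// Note: this snippet assumes the following namespaces are in scope, plus the Newtonsoft.Json
-// NuGet package for JsonConvert:
-// using System; using System.Collections.Generic; using System.Linq;
-// using System.Net.Http; using System.Threading.Tasks; using System.Web;
-// using Newtonsoft.Json;
-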
-public async Task Sample()
-{
- var apiUrl = "https://api.videoindexer.ai";
- var apiKey = "..."; // Replace with API key taken from https://aka.ms/viapi
-
- System.Net.ServicePointManager.SecurityProtocol =
- System.Net.ServicePointManager.SecurityProtocol | System.Net.SecurityProtocolType.Tls12;
-
- // Create the HTTP client
- var handler = new HttpClientHandler();
- handler.AllowAutoRedirect = false;
- var client = new HttpClient(handler);
- client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", apiKey);
-
- // Obtain account information and access token
- string queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"generateAccessTokens", "true"},
- {"allowEdit", "true"},
- });
- HttpResponseMessage result = await client.GetAsync($"{apiUrl}/auth/trial/Accounts?{queryParams}");
- var json = await result.Content.ReadAsStringAsync();
- var accounts = JsonConvert.DeserializeObject<AccountContractSlim[]>(json);
-
- // Take the relevant account. Here we simply take the first.
- // You can also get the account via accounts.First(account => account.Id == <GUID>);
- var accountInfo = accounts.First();
-
- // We'll use the access token from here on, so there's no need for the APIM key
- client.DefaultRequestHeaders.Remove("Ocp-Apim-Subscription-Key");
-
- // Upload a video
- var content = new MultipartFormDataContent();
- Console.WriteLine("Uploading...");
- // Get the video from URL
- var videoUrl = "VIDEO_URL"; // Replace with the video URL from OneDrive
-
- // As an alternative to specifying video URL, you can upload a file.
- // Remove the videoUrl parameter from the query parameters below and add the following lines:
- //FileStream video =File.OpenRead(Globals.VIDEOFILE_PATH);
- //byte[] buffer =new byte[video.Length];
- //video.Read(buffer, 0, buffer.Length);
- //content.Add(new ByteArrayContent(buffer));
-
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", accountInfo.AccessToken},
- {"name", "video_name"},
- {"description", "video_description"},
- {"privacy", "private"},
- {"partition", "partition"},
- {"videoUrl", videoUrl},
- });
- var uploadRequestResult = await client.PostAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos?{queryParams}", content);
- var uploadResult = await uploadRequestResult.Content.ReadAsStringAsync();
-
- // Get the video ID from the upload result
- string videoId = JsonConvert.DeserializeObject<dynamic>(uploadResult)["id"];
- Console.WriteLine("Uploaded");
- Console.WriteLine("Video ID:");
- Console.WriteLine(videoId);
-
- // Wait for the video index to finish
- while (true)
- {
- await Task.Delay(10000);
-
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", accountInfo.AccessToken},
- {"language", "English"},
- });
-
- var videoGetIndexRequestResult = await client.GetAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos/{videoId}/Index?{queryParams}");
- var videoGetIndexResult = await videoGetIndexRequestResult.Content.ReadAsStringAsync();
-
- string processingState = JsonConvert.DeserializeObject<dynamic>(videoGetIndexResult)["state"];
-
- Console.WriteLine("");
- Console.WriteLine("State:");
- Console.WriteLine(processingState);
-
- // Job is finished
- if (processingState != "Uploaded" && processingState != "Processing")
- {
- Console.WriteLine("");
- Console.WriteLine("Full JSON:");
- Console.WriteLine(videoGetIndexResult);
- break;
- }
- }
-
- // Search for the video
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", accountInfo.AccessToken},
- {"id", videoId},
- });
-
- var searchRequestResult = await client.GetAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos/Search?{queryParams}");
- var searchResult = await searchRequestResult.Content.ReadAsStringAsync();
- Console.WriteLine("");
- Console.WriteLine("Search:");
- Console.WriteLine(searchResult);
-
- // Generate video access token (used for get widget calls)
- client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", apiKey);
- var videoTokenRequestResult = await client.GetAsync($"{apiUrl}/auth/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos/{videoId}/AccessToken?allowEdit=true");
- var videoAccessToken = (await videoTokenRequestResult.Content.ReadAsStringAsync()).Replace("\"", "");
- client.DefaultRequestHeaders.Remove("Ocp-Apim-Subscription-Key");
-
- // Get insights widget URL
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", videoAccessToken},
- {"widgetType", "Keywords"},
- {"allowEdit", "true"},
- });
- var insightsWidgetRequestResult = await client.GetAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos/{videoId}/InsightsWidget?{queryParams}");
- var insightsWidgetLink = insightsWidgetRequestResult.Headers.Location;
- Console.WriteLine("Insights Widget url:");
- Console.WriteLine(insightsWidgetLink);
-
- // Get player widget URL
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", videoAccessToken},
- });
- var playerWidgetRequestResult = await client.GetAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos/{videoId}/PlayerWidget?{queryParams}");
- var playerWidgetLink = playerWidgetRequestResult.Headers.Location;
- Console.WriteLine("");
- Console.WriteLine("Player Widget url:");
- Console.WriteLine(playerWidgetLink);
- Console.WriteLine("\nPress Enter to exit...");
- String line = Console.ReadLine();
- if (line == "enter")
- {
- System.Environment.Exit(0);
- }
-
-}
-
-private string CreateQueryString(IDictionary<string, string> parameters)
-{
- var queryParameters = HttpUtility.ParseQueryString(string.Empty);
- foreach (var parameter in parameters)
- {
- queryParameters[parameter.Key] = parameter.Value;
- }
-
- return queryParameters.ToString();
-}
-
-public class AccountContractSlim
-{
- public Guid Id { get; set; }
- public string Name { get; set; }
- public string Location { get; set; }
- public string AccountType { get; set; }
- public string Url { get; set; }
- public string AccessToken { get; set; }
-}
-```
-
-### [Azure Resource Manager account](#tab/with-arm-account-account/)
-
-After you copy this C# project into your development platform, you need to take the following steps:
-
-1. Go to Program.cs and populate:
-
- - ```SubscriptionId``` with your subscription ID.
- - ```ResourceGroup``` with your resource group.
- - ```AccountName``` with your account name.
- - ```VideoUrl``` with your video URL.
-1. Make sure that .NET 6.0 is installed. If it isn't, [install it](https://dotnet.microsoft.com/download/dotnet/6.0).
-1. Make sure that the Azure CLI is installed. If it isn't, [install it](/cli/azure/install-azure-cli).
-1. Open your terminal and go to the *VideoIndexerArm* folder.
-1. Log in to Azure: ```az login --use-device-code```.
-1. Build the project: ```dotnet build```.
-1. Run the project: ```dotnet run```.
-
-```xml
-<Project Sdk="Microsoft.NET.Sdk">
-
- <PropertyGroup>
- <OutputType>Exe</OutputType>
- <TargetFramework>net6.0</TargetFramework>
- </PropertyGroup>
-
- <ItemGroup>
- <PackageReference Include="Azure.Identity" Version="1.4.1" />
- <PackageReference Include="Microsoft.Identity.Client" Version="4.36.2" />
- </ItemGroup>
-
-</Project>
-```
-
-```csharp
-using System;
-using System.Collections.Generic;
-using System.Net.Http;
-using System.Net.Http.Headers;
-using System.Text.Json;
-using System.Text.Json.Serialization;
-using System.Threading.Tasks;
-using System.Web;
-using Azure.Core;
-using Azure.Identity;
--
-namespace VideoIndexerArm
-{
- public class Program
- {
- private const string AzureResourceManager = "https://management.azure.com";
- private const string SubscriptionId = ""; // Your Azure subscription
- private const string ResourceGroup = ""; // Your resource group
- private const string AccountName = ""; // Your account name
- private const string VideoUrl = ""; // The video URL from OneDrive you want to index
-
- public static async Task Main(string[] args)
- {
- // Build Azure AI Video Indexer resource provider client that has access token through Azure Resource Manager
- var videoIndexerResourceProviderClient = await VideoIndexerResourceProviderClient.BuildVideoIndexerResourceProviderClient();
-
- // Get account details
- var account = await videoIndexerResourceProviderClient.GetAccount();
- var accountId = account.Properties.Id;
- var accountLocation = account.Location;
- Console.WriteLine($"account id: {accountId}");
- Console.WriteLine($"account location: {accountLocation}");
-
- // Get account-level access token for Azure AI Video Indexer
- var accessTokenRequest = new AccessTokenRequest
- {
- PermissionType = AccessTokenPermission.Contributor,
- Scope = ArmAccessTokenScope.Account
- };
-
- var accessToken = await videoIndexerResourceProviderClient.GetAccessToken(accessTokenRequest);
- var apiUrl = "https://api.videoindexer.ai";
- System.Net.ServicePointManager.SecurityProtocol = System.Net.ServicePointManager.SecurityProtocol | System.Net.SecurityProtocolType.Tls12;
--
- // Create the HTTP client
- var handler = new HttpClientHandler();
- handler.AllowAutoRedirect = false;
- var client = new HttpClient(handler);
-
- // Upload a video
- var content = new MultipartFormDataContent();
- Console.WriteLine("Uploading...");
- // Get the video from URL
-
- // As an alternative to specifying video URL, you can upload a file.
- // Remove the videoUrl parameter from the query parameters below and add the following lines:
- // FileStream video =File.OpenRead(Globals.VIDEOFILE_PATH);
- // byte[] buffer =new byte[video.Length];
- // video.Read(buffer, 0, buffer.Length);
- // content.Add(new ByteArrayContent(buffer));
-
- var queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", accessToken},
- {"name", "video sample"},
- {"description", "video_description"},
- {"privacy", "private"},
- {"partition", "partition"},
- {"videoUrl", VideoUrl},
- });
- var uploadRequestResult = await client.PostAsync($"{apiUrl}/{accountLocation}/Accounts/{accountId}/Videos?{queryParams}", content);
- var uploadResult = await uploadRequestResult.Content.ReadAsStringAsync();
-
- // Get the video ID from the upload result
- string videoId = JsonSerializer.Deserialize<Video>(uploadResult).Id;
- Console.WriteLine("Uploaded");
- Console.WriteLine("Video ID:");
- Console.WriteLine(videoId);
-
- // Wait for the video index to finish
- while (true)
- {
- await Task.Delay(10000);
-
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", accessToken},
- {"language", "English"},
- });
-
- var videoGetIndexRequestResult = await client.GetAsync($"{apiUrl}/{accountLocation}/Accounts/{accountId}/Videos/{videoId}/Index?{queryParams}");
- var videoGetIndexResult = await videoGetIndexRequestResult.Content.ReadAsStringAsync();
-
- string processingState = JsonSerializer.Deserialize<Video>(videoGetIndexResult).State;
-
- Console.WriteLine("");
- Console.WriteLine("State:");
- Console.WriteLine(processingState);
-
- // Job is finished
- if (processingState != "Uploaded" && processingState != "Processing")
- {
- Console.WriteLine("");
- Console.WriteLine("Full JSON:");
- Console.WriteLine(videoGetIndexResult);
- break;
- }
- }
-
- // Search for the video
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", accessToken},
- {"id", videoId},
- });
-
- var searchRequestResult = await client.GetAsync($"{apiUrl}/{accountLocation}/Accounts/{accountId}/Videos/Search?{queryParams}");
- var searchResult = await searchRequestResult.Content.ReadAsStringAsync();
- Console.WriteLine("");
- Console.WriteLine("Search:");
- Console.WriteLine(searchResult);
-
- // Get insights widget URL
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", accessToken},
- {"widgetType", "Keywords"},
- {"allowEdit", "true"},
- });
- var insightsWidgetRequestResult = await client.GetAsync($"{apiUrl}/{accountLocation}/Accounts/{accountId}/Videos/{videoId}/InsightsWidget?{queryParams}");
- var insightsWidgetLink = insightsWidgetRequestResult.Headers.Location;
- Console.WriteLine("Insights Widget url:");
- Console.WriteLine(insightsWidgetLink);
-
- // Get player widget URL
- queryParams = CreateQueryString(
- new Dictionary<string, string>()
- {
- {"accessToken", accessToken},
- });
- var playerWidgetRequestResult = await client.GetAsync($"{apiUrl}/{accountLocation}/Accounts/{accountId}/Videos/{videoId}/PlayerWidget?{queryParams}");
- var playerWidgetLink = playerWidgetRequestResult.Headers.Location;
- Console.WriteLine("");
- Console.WriteLine("Player Widget url:");
- Console.WriteLine(playerWidgetLink);
- Console.WriteLine("\nPress Enter to exit...");
- String line = Console.ReadLine();
- if (line == "enter")
- {
- System.Environment.Exit(0);
- }
-
- }
-
- static string CreateQueryString(IDictionary<string, string> parameters)
- {
- var queryParameters = HttpUtility.ParseQueryString(string.Empty);
- foreach (var parameter in parameters)
- {
- queryParameters[parameter.Key] = parameter.Value;
- }
-
- return queryParameters.ToString();
- }
-
- public class VideoIndexerResourceProviderClient
- {
- private readonly string armAaccessToken;
-
- async public static Task<VideoIndexerResourceProviderClient> BuildVideoIndexerResourceProviderClient()
- {
- var tokenRequestContext = new TokenRequestContext(new[] { $"{AzureResourceManager}/.default" });
- var tokenRequestResult = await new DefaultAzureCredential().GetTokenAsync(tokenRequestContext);
- return new VideoIndexerResourceProviderClient(tokenRequestResult.Token);
- }
- public VideoIndexerResourceProviderClient(string armAaccessToken)
- {
- this.armAaccessToken = armAaccessToken;
- }
-
- public async Task<string> GetAccessToken(AccessTokenRequest accessTokenRequest)
- {
- Console.WriteLine($"Getting access token. {JsonSerializer.Serialize(accessTokenRequest)}");
- // Set the generateAccessToken (from video indexer) HTTP request content
- var jsonRequestBody = JsonSerializer.Serialize(accessTokenRequest);
- var httpContent = new StringContent(jsonRequestBody, System.Text.Encoding.UTF8, "application/json");
-
- // Set request URI
- var requestUri = $"{AzureResourceManager}/subscriptions/{SubscriptionId}/resourcegroups/{ResourceGroup}/providers/Microsoft.VideoIndexer/accounts/{AccountName}/generateAccessToken?api-version=2021-08-16-preview";
-
- // Generate access token from video indexer
- var client = new HttpClient(new HttpClientHandler());
- client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", armAaccessToken);
- var result = await client.PostAsync(requestUri, httpContent);
- var jsonResponseBody = await result.Content.ReadAsStringAsync();
- return JsonSerializer.Deserialize<GenerateAccessTokenResponse>(jsonResponseBody).AccessToken;
- }
-
- public async Task<Account> GetAccount()
- {
-
- Console.WriteLine($"Getting account.");
- // Set request URI
- var requestUri = $"{AzureResourceManager}/subscriptions/{SubscriptionId}/resourcegroups/{ResourceGroup}/providers/Microsoft.VideoIndexer/accounts/{AccountName}/?api-version=2021-08-16-preview";
-
- // Get account
- var client = new HttpClient(new HttpClientHandler());
- client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", armAaccessToken);
- var result = await client.GetAsync(requestUri);
- var jsonResponseBody = await result.Content.ReadAsStringAsync();
- return JsonSerializer.Deserialize<Account>(jsonResponseBody);
- }
- }
-
- public class AccessTokenRequest
- {
- [JsonPropertyName("permissionType")]
- public AccessTokenPermission PermissionType { get; set; }
-
- [JsonPropertyName("scope")]
- public ArmAccessTokenScope Scope { get; set; }
-
- [JsonPropertyName("projectId")]
- public string ProjectId { get; set; }
-
- [JsonPropertyName("videoId")]
- public string VideoId { get; set; }
- }
-
- [JsonConverter(typeof(JsonStringEnumConverter))]
- public enum AccessTokenPermission
- {
- Reader,
- Contributor,
- MyAccessAdministrator,
- Owner,
- }
-
- [JsonConverter(typeof(JsonStringEnumConverter))]
- public enum ArmAccessTokenScope
- {
- Account,
- Project,
- Video
- }
-
- public class GenerateAccessTokenResponse
- {
- [JsonPropertyName("accessToken")]
- public string AccessToken { get; set; }
-
- }
- public class AccountProperties
- {
- [JsonPropertyName("accountId")]
- public string Id { get; set; }
- }
-
- public class Account
- {
- [JsonPropertyName("properties")]
- public AccountProperties Properties { get; set; }
-
- [JsonPropertyName("location")]
- public string Location { get; set; }
-
- }
-
- public class Video
- {
- [JsonPropertyName("id")]
- public string Id { get; set; }
-
- [JsonPropertyName("state")]
- public string State { get; set; }
- }
- }
-}
-
-```
-
-### Common errors
-
-The upload operation might return the following status codes:
-
-|Status code|ErrorType (in response body)|Description|
-||||
-|409|VIDEO_INDEXING_IN_PROGRESS|The same video is already being processed in this account.|
-|400|VIDEO_ALREADY_FAILED|The same video failed to process in this account less than 2 hours ago. API clients should wait at least 2 hours before reuploading a video.|
-|429||Trial accounts are allowed 5 uploads per minute. Paid accounts are allowed 50 uploads per minute.|
-
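As a rough illustration (not official guidance), a client might react to these status codes as in the following sketch; the one-minute backoff for 429 and the choice not to retry 400/409 automatically are assumptions to adapt to your scenario.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// A minimal sketch: react to the upload status codes listed above. The retry delay is
// illustrative; tune it or use a retry library in production code.
public static class UploadRetryExample
{
    public static async Task<HttpResponseMessage> UploadWithRetryAsync(HttpClient client, string uploadUri)
    {
        while (true)
        {
            HttpResponseMessage response = await client.PostAsync(uploadUri, new MultipartFormDataContent());

            if ((int)response.StatusCode == 429)
            {
                // Too many uploads per minute - back off and try again.
                await Task.Delay(TimeSpan.FromMinutes(1));
                continue;
            }

            // 409 (VIDEO_INDEXING_IN_PROGRESS) and 400 (VIDEO_ALREADY_FAILED) aren't retried here:
            // the same video is already processing, or you should wait at least 2 hours to reupload.
            return response;
        }
    }
}
```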
-## Uploading considerations and limitations
-- The name of a video must be no more than 80 characters.
-- When you're uploading a video based on the URL (preferred), the endpoint must be secured with TLS 1.2 or later.
-- The upload size with the URL option is limited to 30 GB.
-- The length of the request URL is limited to 6,144 characters. The length of the query string URL is limited to 4,096 characters.
-- The upload size with the byte array option is limited to 2 GB.
-- The byte array option times out after 30 minutes.
-- The URL provided in the `videoURL` parameter must be encoded.
-- Indexing Media Services assets has the same limitation as indexing from a URL.
-- Azure AI Video Indexer has a duration limit of 4 hours for a single file.
-- The URL must be accessible (for example, a public URL). If it's a private URL, the access token must be provided in the request.
-- The URL must point to a valid media file and not to a webpage, such as a link to the `www.youtube.com` page.
-- In a paid account, you can upload up to 50 movies per minute. In a trial account, you can upload up to 5 movies per minute.
-> [!Tip]
-> We recommend that you use .NET Framework version 4.6.2 or later, because older .NET Framework versions don't default to TLS 1.2.
->
-> If you must use an older .NET Framework version, add one line to your code before making the REST API call:
->
-> `System.Net.ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls | SecurityProtocolType.Tls11 | SecurityProtocolType.Tls12;`
-
-## Firewall
-
-For information about a storage account that's behind a firewall, see the [FAQ](faq.yml#can-a-storage-account-connected-to-the-media-services-account-be-behind-a-firewall).
-
-## Next steps
-
-[Examine the Azure AI Video Indexer output produced by an API](video-indexer-output-json-v2.md)
azure-video-indexer Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/regions.md
- Title: Regions in which Azure AI Video Indexer is available
-description: This article talks about Azure regions in which Azure AI Video Indexer is available.
- Previously updated : 09/14/2020----
-# Azure regions in which Azure AI Video Indexer exists
--
-Azure AI Video Indexer APIs contain a **location** parameter that you should set to the Azure region to which the call should be routed. This must be an [Azure region in which Azure AI Video Indexer is available](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services&regions=all).
-
-## Locations
-
-The `location` parameter must be given the Azure region code name as its value. If you are using Azure AI Video Indexer in preview mode, you should put `"trial"` as the value. `trial` is the default value for the `location` parameter. Otherwise, to get the code name of the Azure region that your account is in and that your call should be routed to, you can use the Azure portal or run an [Azure CLI](/cli/azure) command.
-
-### Azure portal
-
-1. Sign in on the [Azure AI Video Indexer](https://www.videoindexer.ai/) website.
-1. Select **User accounts** from the top-right corner of the page.
-1. Find the location of your account in the top-right corner.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/location/location1.png" alt-text="Location":::
-
-### CLI command
-
-```azurecli-interactive
-az account list-locations
-```
-
-Once you run the line shown above, you get a list of all Azure regions. Navigate to the Azure region that has the *displayName* you are looking for, and use its *name* value for the **location** parameter.
-
-For example, for the Azure region West US 2 (displayed below), you will use "westus2" for the **location** parameter.
-
-```json
- {
- "displayName": "West US 2",
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/locations/westus2",
- "latitude": "47.233",
- "longitude": "-119.852",
- "name": "westus2",
- "subscriptionId": null
- }
-```
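As a minimal sketch, the region code (or `trial`) then becomes the location segment of your API calls; the account ID and path below are placeholders:

```csharp
using System;

// A minimal sketch: the region code (or "trial" for a trial account) becomes the location
// segment of Azure AI Video Indexer API calls. The account ID and path here are placeholders.
public static class LocationExample
{
    public static void Main()
    {
        string location = "westus2";            // the "name" value from az account list-locations
        string accountId = "<your-account-id>"; // placeholder
        string requestUri = $"https://api.videoindexer.ai/{location}/Accounts/{accountId}/Videos";

        Console.WriteLine(requestUri);
    }
}
```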
-
-## Next steps
-- [Customize Language model using APIs](customize-language-model-with-api.md)
-- [Customize Brands model using APIs](customize-brands-model-with-api.md)
-- [Customize Person model using APIs](customize-person-model-with-api.md)
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
- Title: Azure AI Video Indexer release notes | Microsoft Docs
-description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure AI Video Indexer.
-- Previously updated : 09/27/2023----
-# Azure AI Video Indexer release notes
-
-Revisit this page to view the latest updates.
-
-To stay up-to-date with the most recent Azure AI Video Indexer developments, this article provides you with information about:
-
-* The latest releases
-* Known issues
-* Bug fixes
-* Deprecated functionality
-
-## September 2023
-
-### Changes related to AMS retirement
-As a result of the June 30th 2024 [retirement of Azure Media Services (AMS)](/azure/media-services/latest/azure-media-services-retirement), Video Indexer has announced a number of related retirements. They include the June 30th 2024 retirement of Video Indexer Classic accounts, API changes, and no longer supporting adaptive bitrate. For full details, see [Changes related to Azure Media Services (AMS) retirement](https://aka.ms/vi-ams-related-changes).
-
-## July 2023
-
-### Redact faces with Azure Video Indexer API
-
-You can now redact faces with Azure Video Indexer API. For more information see [Redact faces with Azure Video Indexer API](face-redaction-with-api.md).
-
-### API request limit increase
-
-Video Indexer has increased the API request limit from 60 requests per minute to 120.
-
-## June 2023
-
-### FAQ - following the Azure Media Services retirement announcement
-
-For more information, see [AMS deprecation FAQ](ams-deprecation-faq.yml).
-
-## May 2023
-
-### API updates
-
-We're introducing a change in behavior that may require a change to your existing query logic. The change is in the **List** and **Search** APIs; you can find a detailed comparison of the current and new behavior in the table that follows. You may need to update your code to utilize the [new APIs](https://api-portal.videoindexer.ai/).
-
-|API |Current|New|The update|
-|||||
-|List Videos|• List all videos/projects according to the 'IsBase' boolean parameter. If 'IsBase' isn't defined, list both.<br/>• Returns videos in all states (In progress/Processed/Failed). |• The List Videos API returns only videos (with paging) in all states.<br/>• The List Projects API returns only projects (with paging).|• The List Videos API was divided into two new APIs: **List Videos** and **List Projects**.<br/>• The 'IsBase' parameter no longer has a meaning. |
-|Search Videos|• Search all videos/projects according to the 'IsBase' boolean parameter. If 'IsBase' isn't defined, search both.<br/>• Search videos in all states (In progress/Processed/Failed). |Search only processed videos.|• The Search Videos API only searches videos and not projects.<br/>• The 'IsBase' parameter no longer has a meaning.<br/>• The Search Videos API only searches Processed videos (not Failed/InProgress ones).|
-
-### Support for HTTP/2
-
-Added support for HTTP/2 for our [Data Plane API](https://api-portal.videoindexer.ai/). [HTTP/2](https://en.wikipedia.org/wiki/HTTP/2) offers several benefits over HTTP/1.1, which continues to be supported for backwards compatibility. One of the main benefits of HTTP/2 is increased performance, better reliability and reduced system resource requirements over HTTP/1.1. With this change we now support HTTP/2 for both the Video Indexer [Portal](https://videoindexer.ai/) and our Data Plane API. We advise you to update your code to take advantage of this change.
-
-### Topics insight improvements
-
-We now support all five levels of IPTC ontology.
-
-## April 2023
-
-### Resource Health support
-
-Azure AI Video Indexer is now integrated with Azure Resource Health enabling you to see the health and availability of each of your Azure AI Video Indexer resources. Azure Resource Health also helps with diagnosing and solving problems and you can set alerts to be notified whenever your resources are affected. For more information, see [Azure Resource Health overview](../service-health/resource-health-overview.md).
-
-### The animation character recognition model has been retired
-
-The **animation character recognition** model has been retired on March 1, 2023. For any related issues, [open a support ticket via the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
-
-### Excluding sensitive AI models
-
-Following the Microsoft Responsible AI agenda, Azure AI Video Indexer now allows you to exclude specific AI models when indexing media files. The list of sensitive AI models includes: face detection, observed people, emotions, labels identification.
-
-This feature is currently available through the API, and is available in all presets except the Advanced preset.
-
-### Observed people tracing improvements
-
-For more information, see [Considerations and limitations when choosing a use case](observed-matched-people.md#considerations-and-limitations-when-choosing-a-use-case).
-
-## March 2023
-
-### Support for storage behind firewall
-
-It's good practice to lock storage accounts and disable public access to enhance or comply with enterprise security policy. Video Indexer can now access storage accounts that aren't publicly accessible by using the [Azure Trusted Service](/azure/storage/common/storage-network-security?tabs=azure-portal#trusted-access-based-on-a-managed-identity) exception with managed identities. You can read more about how to set it up in our [how-to](storage-behind-firewall.md).
-
-### New custom speech and pronunciation training
-
-Azure AI Video Indexer has added a new custom speech model experience. The experience includes ability to use custom pronunciation datasets to improve recognition of mispronounced words, phrases, or names. The custom models can be used to improve the transcription quality of content with industry specific terminology. To learn more, see [Customize speech model overview](customize-speech-model-overview.md).
-
-### Observed people quality improvements
-
-Observed people now supports people who are sitting. This is in addition to existing support of people who are standing or walking. This improvement makes the observed people model more versatile and suitable for a wider range of use cases. We have also improved the model's re-identification and grouping algorithms by 50%. The model can now more accurately track and group people across multiple camera views.
-
-### Observed people indexing duration optimization
-
-We have optimized the memory usage of the observed people model, resulting in a 60% reduction in indexing duration when using the advanced video analysis preset. You can now process your video footage more efficiently and get results faster.
-
-## February 2023
-
-### Pricing
-
-On January 01, 2023 we introduced the Advanced Audio and Video SKU for Advanced presets. This was done in order to report the use of each preset, Basic, Standard & Advanced, with their own distinct meter on the Azure Billing statement. This can also be seen on Azure Cost Analysis reports.
-
-Starting February 1st, we're excited to announce a 40% price reduction on the Basic Audio Analysis, Audio Analysis and Video Analysis SKUs. We took into consideration feedback from our customers and market trends to make changes that will benefit them. By reducing prices and introducing a new Advanced SKU, we are providing competitive pricing and more options for customers to balance costs and features. Additionally, as we continue to improve and add more AI capabilities, customers will be able to take advantage of these cost savings when performing new or re-indexing operations.
-
-This change will be implemented automatically, and customers who already have Azure discounts will continue to receive them in addition to the new pricing.
-
-|**Charge** | **Basic Audio Analysis** | **Standard Audio Analysis** | **Advanced Audio Analysis** | **Standard Video Analysis** | **Advanced Video Analysis** |
-| | | - | | | |
-| Per input minute | $0.0126 | $0.024 | $0.04 | $0.09 | $0.15 |
-
-### Network Service Tag
-
-Video Indexer supports the use of Network Security Tag to allow network traffic from Video Indexer IPs into your network. Starting 22 January, we renamed our Network Security Service tag from `AzureVideoAnalyzerForMedia` to `VideoIndexer`. This change will require you to update your deployment scripts and/or existing configuration. See our [Network Security Documentation](network-security.md) for more info.
-
-## January 2023
-
-### Notification experience
-
-The [Azure AI Video Indexer website](https://www.videoindexer.ai/) now has a notification panel where you can stay informed of important product updates, such as service impacting events, new releases, and more.
-
-### Textual logo detection
-
-**Textual logo detection** enables you to customize text logos to be detected within videos. For more information, see [Detect textual logo](detect-textual-logo.md).
-
-### Switching directories
-
-You can now switch Azure AD directories and manage Azure AI Video Indexer accounts across tenants using the [Azure AI Video Indexer website](https://www.videoindexer.ai/).
-
-### Language support
-
-* New languages are now supported: Irish, Bulgarian, Catalan, Greek, Estonian, Croatian, Latvian, Romanian, Slovak, Slovenian, Telugu, Malayalam, Kannada, Icelandic, Armenian, Gujarati, Malay, and Tamil.
-* Use an API to get all supported languages: [Get Supported Languages](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Supported-Languages).
-
-For more information, see [supported languages](language-support.md).
-
-### Face grouping
-
-We significantly reduced the number of low-quality face detection occurrences in the UI and [insights.json](video-indexer-output-json-v2.md#insights), enhancing quality and usability through an improved grouping algorithm.
-
-## November 2022
-
-### Speakers' names can now be edited from the Azure AI Video Indexer website
-
-You can now add new speakers, rename identified speakers and modify speakers assigned to a particular transcript line using the [Azure AI Video Indexer website](https://www.videoindexer.ai/). For details on how to edit speakers from the **Timeline** pane, see [Edit speakers with the Azure AI Video Indexer website](edit-speakers.md).
-
-The same capabilities are available from the Azure AI Video Indexer [upload video index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Index) API.
-
-## October 2022
-
-### A new built-in role: Video Indexer Restricted Viewer
-
-The limited access **Video Indexer Restricted Viewer** role is intended for the [Azure AI Video Indexer website](https://www.videoindexer.ai/) users. The role's permitted actions relate to the [Azure AI Video Indexer website](https://www.videoindexer.ai/) experience.
-
-For more information, see [Manage access with the Video Indexer Restricted Viewer role](restricted-viewer-role.md).
-
-### Slate detection insights (preview)
-
-The following slate detection (a movie post-production) insights are automatically identified when indexing a video using the advanced indexing option:
-
-* Clapperboard detection with metadata extraction.
-* Digital patterns detection, including color bars.
-* Textless slate detection, including scene matching.
-
-For details, see [Slate detection](slate-detection-insight.md).
-
-### New source languages support for STT, translation, and search
-
-Ukrainian and Vietnamese are now supported as source languages for STT (speech-to-text), translation, and search. This means transcription, translation, and search features are also supported for these languages in the [Azure AI Video Indexer](https://www.videoindexer.ai/) website, widgets, and APIs.
-
-For more information, see [supported languages](language-support.md).
-
-### Edit a speaker's name in the transcription through the API
-
-You can now edit the name of the speakers in the transcription using the Azure AI Video Indexer API.
-
-### Word level time annotation with confidence score
-
-Now supporting word level time annotation with confidence score.
-
-An annotation is any type of additional information that is added to an already existing text, be it a transcription of an audio file or an original text file.
-
-For more information, see [Examine word-level transcription information](edit-transcript-lines-portal.md#examine-word-level-transcription-information).
-
-### Azure Monitor integration enabling indexing logs
-
-The new set of logs, described below, enables you to better monitor your indexing pipeline.
-
-Azure AI Video Indexer now supports Diagnostics settings for indexing events. You can now export logs monitoring upload, and re-indexing of media files through diagnostics settings to Azure Log Analytics, Storage, Event Hubs, or a third-party solution.
-
-### Expanded supported languages in LID and MLID through Azure AI Video Indexer API
-
-Expanded the languages supported in LID (language identification) and MLID (multi language Identification) using the Azure AI Video Indexer API.
-
-The following languages are now supported through the API: Arabic (United Arab Emirates), Arabic Modern Standard, Arabic Egypt, Arabic (Iraq), Arabic (Jordan), Arabic (Kuwait), Arabic (Oman), Arabic (Qatar), Arabic (Saudi Arabia), Arabic Syrian Arab Republic, Czech, Danish, German, English Australia, English United Kingdom, English United States, Spanish, Spanish (Mexico), Finnish, French (Canada), French, Hebrew, Hindi, Italian, Japanese, Korean, Norwegian, Dutch, Polish, Portuguese, Portuguese (Portugal), Russian, Swedish, Thai, Turkish, Ukrainian, Vietnamese, Chinese (Simplified), Chinese (Cantonese, Traditional).
-
-To specify the list of languages to be identified by LID or MLID when auto-detecting, call the [upload a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) API and set the `customLanguages` parameter to include up to 10 languages from the supported languages above. Note that the languages specified in `customLanguages` are compared at the language level, and thus the list should include only one locale per language.
-
-For more information, see [supported languages](language-support.md).
-
-### Configure confidence level in a person model with an API
-
-Use the [Patch person model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Patch-Person-Model) API to configure the confidence level for face recognition within a person model.
-
-### View speakers in closed captions
-
-You can now view speakers in closed captions of the Azure AI Video Indexer media player. For more information, see [View closed captions in the Azure AI Video Indexer website](view-closed-captions.md).
-
-### Control face and people bounding boxes using parameters
-
-The new `boundingBoxes` URL parameter controls the option to set bounding boxes on/off when embedding a player. For more information, see [Embed widgets](video-indexer-embed-widgets.md#player-widget).
-
-### Control autoplay from the account settings
-
-Control whether a media file autoplays when opened in the web app through the user settings. Navigate to the [Azure AI Video Indexer website](https://www.videoindexer.ai/) -> the **Gear** icon (the top-right corner) -> **User settings** -> **Auto-play media files**.
-
-### Copy video ID from the player view
-
-**Copy video ID** is available when you select the video in the [Azure AI Video Indexer website](https://www.videoindexer.ai/).
-
-### New dark theme in native Azure colors
-
-Select the desired theme in the [Azure AI Video Indexer website](https://www.videoindexer.ai/). Select the **Gear** icon (the top-right corner) -> **User settings**.
-
-### Search or filter the account list
-
-You can search or filter the account list using the account name or region. Select **User accounts** in the top-right corner of the [Azure AI Video Indexer website](https://www.videoindexer.ai/).
-
-## September 2022
-
-### General availability of ARM-based accounts
-
-With an Azure Resource Manager (ARM) based [paid (unlimited)](accounts-overview.md) account, you are able to use:
-
-- [Azure role-based access control (RBAC)](../role-based-access-control/overview.md).
-- Managed Identity to better secure the communication between your Azure Media Services and Azure AI Video Indexer account, Network Service Tags, and native integration with Azure Monitor to monitor your account (audit and indexing logs).
-- Scale and automate your [deployment with ARM-template](deploy-with-arm-template.md), [bicep](deploy-with-bicep.md) or terraform.
-- [Create logic apps connector for ARM-based accounts](logic-apps-connector-arm-accounts.md).
-
-To create an ARM-based account, see [create an account](create-account-portal.md).
-
-## August 2022
-
-### Update topic inferencing model
-
-The Azure AI Video Indexer topic inferencing model was updated and now extracts more than 6.5 million topics (for example, covering topics such as the Covid virus). To benefit from recent model updates, you need to re-index your video files.
-
-### Topic inferencing model is now available on Azure Government
-
-You can now leverage the topic inferencing model in your Azure AI Video Indexer paid account on [Azure Government](../azure-government/documentation-government-welcome.md) in the Virginia and Arizona regions. With this release, we completed the AI parity between Azure global and Azure Government.
-To benefit from the model updates you need to re-index your video files.
-
-### Session length is now 30 days in the Azure AI Video Indexer website
-
-The [Azure AI Video Indexer website](https://vi.microsoft.com) session length was extended to 30 days. You can preserve your session without having to sign in again every hour.
-
-## July 2022
-
-### The featured clothing insight (preview)
-
-The featured clothing insight enables more targeted ads placement.
-
-The insight provides information about key items worn by individuals within a video and the timestamp at which the clothing appears. This allows high-quality in-video contextual advertising, where relevant clothing ads are matched with the specific time within the video in which they are viewed.
-
-To view the featured clothing of an observed person, you have to index the video using Azure AI Video Indexer advanced video settings. For details on how featured clothing images are ranked and how to view this insight, see [featured clothing](observed-people-featured-clothing.md).
-
-## June 2022
-
-### Create Video Indexer blade improvements in Azure portal
-
-Azure AI Video Indexer now supports the creation of a new resource using a system-assigned managed identity, or both system-assigned and user-assigned managed identities, for the same resource.
-
-You can also change the primary managed identity using the **Identity** tab in the [Azure portal](https://portal.azure.com/#home).
-
-### Limited access of celebrity recognition and face identification features
-
-As part of Microsoft's commitment to responsible AI, we are designing and releasing the Azure AI Video Indexer identification and celebrity recognition features. These features are designed to protect the rights of individuals and society and to foster transparent human-computer interaction. Thus, access to and use of the Azure AI Video Indexer identification and celebrity recognition features is limited.
-
-Identification and celebrity recognition features require registration and are only available to Microsoft managed customers and partners.
-Customers who wish to use this feature are required to apply and submit an [intake form](https://aka.ms/facerecognition). For more information, read [Azure AI Video Indexer limited access](limited-access-features.md).
-
-Also see the [announcement blog post](https://aka.ms/AAh91ff) and [investment and safeguard for facial recognition](https://aka.ms/AAh9oye).
-
-## May 2022
-
-### Line breaking in transcripts
-
-Improved line break logic to better split the transcript into sentences. New editing capabilities are now available through the Azure AI Video Indexer website, such as adding a new line and editing the line's timestamp. For more information, see [Insert or remove transcript lines](edit-transcript-lines-portal.md).
-
-### Azure Monitor integration
-
-Azure AI Video Indexer now supports Diagnostics settings for Audit events. Logs of Audit events can now be exported through diagnostics settings to Azure Log Analytics, Storage, Event Hubs, or a third-party solution.
-
-The additions enable easier access to analyze the data, monitor resource operation, and automatically create flows to act on an event. For more information, see [Monitor Azure AI Video Indexer](monitor-video-indexer.md).
-
-### Video Insights improvements
-
-Optical character recognition (OCR) is improved by 60%. Face Detection is improved by 20%. Label accuracy is improved by 30% over a wide variety of videos. These improvements are available immediately in all regions and do not require any changes by the customer.
-
-### Service tag
-
-Azure AI Video Indexer is now part of [Network Service Tags](network-security.md). Video Indexer often needs to access other Azure resources (for example, Storage). If you secure the inbound traffic to your resources with a Network Security Group, you can now select Video Indexer as one of the built-in Service Tags. This simplifies security management because we populate the Service Tag with our public IPs.
-
-### Celebrity recognition toggle
-
-You can now enable or disable the celebrity recognition model at the account level (on classic accounts only). To turn the model on or off, go to **Model customization** and toggle the model on or off. Once you disable the model, Video Indexer insights will not include the output of the celebrity model and will not run the celebrity model pipeline.
--
-### Azure AI Video Indexer repository name
-
-As of May 1st, the Azure AI Video Indexer widget repository was renamed. Use https://www.npmjs.com/package/@azure/video-indexer-widgets instead.
-
-## April 2022
-
-### Renamed **Azure Video Analyzer for Media** back to **Azure AI Video Indexer**
-
-As of today, the Azure Video Analyzer for Media product name is **Azure AI Video Indexer**, and the rename applies to all product-related assets (web portal, marketing materials). It is a backward-compatible change that has no implications for APIs and links. **Azure AI Video Indexer**'s new logo:
--
-## March 2022
-
-### Closed Captioning files now support including speakers' attributes
-
-Azure AI Video Indexer enables you to include speakers' characteristics in a closed captioning file that you choose to download. To include the speakers' attributes, select **Downloads** -> **Closed Captions**, choose the closed captioning downloadable file format (SRT, VTT, TTML, TXT, or CSV), and check the **Include speakers** checkbox.
-
-### Improvements to the widget offering
-
-The following improvements were made:
-
-* Azure AI Video Indexer widgets support more than 1 locale in a widget's parameter.
-* The Insights widgets support initial search parameters and multiple sorting options.
-* The Insights widgets also include a confirmation step before deleting a face to avoid mistakes.
-* The widget customization now supports width as strings (for example 100%, 100vw).
-
-## February 2022
-
-### Public preview of Azure AI Video Indexer account management based on ARM in Government cloud
-
-The Azure AI Video Indexer website now supports account management based on ARM in public preview (see the [November 2021 release note](#november-2021)).
-
-### Leverage open-source code to create ARM based account
-
-Added new code samples, including HTTP calls, that show solution developers how to use the Azure AI Video Indexer create, read, update, and delete (CRUD) ARM API.
-
-## January 2022
-
-### Improved audio effects detection
-
-The audio effects detection capability was improved to have a better detection rate over the following classes:
-
-* Crowd reactions (cheering, clapping, and booing),
-* Gunshot or explosion,
-* Laughter
-
-For more information, see [Audio effects detection](audio-effects-detection.md).
-
-### New source languages support for STT, translation, and search on the website
-
-Azure AI Video Indexer introduces source languages support for STT (speech-to-text), translation, and search in Hebrew (he-IL), Portuguese (pt-PT), and Persian (fa-IR) on the [Azure AI Video Indexer](https://www.videoindexer.ai/) website.
-This means transcription, translation, and search features are also supported for these languages in the [Azure AI Video Indexer](https://www.videoindexer.ai/) website and widgets.
-
-## December 2021
-
-### The projects feature is now GA
-
-The projects feature is now GA and ready for productive use. There is no pricing impact related to the "Preview to GA" transition. See [Add video clips to your projects](use-editor-create-project.md).
-
-### New source languages support for STT, translation, and search on API level
-
-Azure AI Video Indexer introduces source languages support for STT (speech-to-text), translation, and search in Hebrew (he-IL), Portuguese (pt-PT), and Persian (fa-IR) on the API level.
-
-### Matched person detection capability
-
-When indexing a video with Azure AI Video Indexer advanced video settings, you can view the new matched person detection capability. If there are people observed in your media file, you can now view the specific person who matched each of them through the media player.
-
-## November 2021
-
-### Public preview of Azure AI Video Indexer account management based on ARM
-
-Azure AI Video Indexer introduces a public preview of Azure Resource Manager (ARM) based account management. You can leverage ARM-based Azure AI Video Indexer APIs to create, edit, and delete an account from the [Azure portal](https://portal.azure.com/#home).
-
-> [!NOTE]
-> The Government cloud includes support for CRUD ARM based accounts from Azure AI Video Indexer API and from the Azure portal.
->
-> There is currently no support from the Azure AI Video Indexer [website](https://www.videoindexer.ai).
-
-For more information go to [create an Azure AI Video Indexer account](https://techcommunity.microsoft.com/t5/azure-ai/azure-video-analyzer-for-media-is-now-available-as-an-azure/ba-p/2912422).
-
-### People's clothing detection
-
-When indexing a video with Azure AI Video Indexer advanced video settings, you can view the new people's clothing detection capability. If there are people detected in your media file, you can now view the clothing type they are wearing through the media player.
-
-### Face bounding box (preview)
-
-You can now turn on a bounding box for detected faces during indexing of the media file. The face bounding box feature is available when indexing your file by choosing the **standard**, **basic**, or **advanced** indexing presets.
-
-You can enable the bounding boxes through the player.
-
-## October 2021
-
-### Embed widgets in your app using Azure AI Video Indexer package
-
-Use the new Azure AI Video Indexer (AVAM) `@azure/video-analyzer-for-media-widgets` npm package to add `insights` widgets to your app and customize it according to your needs.
-
-The new AVAM package enables you to easily embed and communicate between our widgets and your app, instead of adding an `iframe` element to embed the insights widget. Learn more in [Embed and customize Azure AI Video Indexer widgets in your app](https://techcommunity.microsoft.com/t5/azure-media-services/embed-and-customize-azure-video-analyzer-for-media-widgets-in/ba-p/2847063). 
-
-## August 2021
-
-### Re-index video or audio files
-
-There is now an option to re-index video or audio files that have failed during the indexing process.
-
-### Improve accessibility support
-
-Fixed bugs related to CSS, theming and accessibility:
-
-* high contrast
-* account settings and insights views in the [portal](https://www.videoindexer.ai).
-
-## July 2021
-
-### Automatic Scaling of Media Reserved Units
-
-Starting August 1st 2021, Azure AI Video Indexer enabled [Media Reserved Units (MRUs)](/azure/media-services/latest/concept-media-reserved-units) auto scaling by [Azure Media Services](/azure/media-services/latest/media-services-overview). As a result, you do not need to manage them through Azure AI Video Indexer. This allows price optimization, for example price reduction in many cases, because MRUs are scaled automatically based on your business needs.
-
-## June 2021
-
-### Azure AI Video Indexer deployed in six new regions
-
-You can now create an Azure AI Video Indexer paid account in France Central, Central US, Brazil South, West Central US, Korea Central, and Japan West regions.
-
-## May 2021
-
-### New source languages support for speech-to-text (STT), translation, and search
-
-Azure AI Video Indexer now supports STT, translation, and search in Chinese (Cantonese) ('zh-HK'), Dutch (Netherlands) ('nl-NL'), Czech ('cs-CZ'), Polish ('pl-PL'), Swedish (Sweden) ('sv-SE'), Norwegian ('nb-NO'), Finnish ('fi-FI'), Canadian French ('fr-CA'), Thai ('th-TH'),
-Arabic: (United Arab Emirates) ('ar-AE', 'ar-EG'), (Iraq) ('ar-IQ'), (Jordan) ('ar-JO'), (Kuwait) ('ar-KW'), (Lebanon) ('ar-LB'), (Oman) ('ar-OM'), (Qatar) ('ar-QA'), (Palestinian Authority) ('ar-PS'), (Syria) ('ar-SY'), and Turkish ('tr-TR').
-
-These languages are available in both API and Azure AI Video Indexer website. Select the language from the combobox under **Video source language**.
-
-### New theme for Azure AI Video Indexer
-
-A new 'Azure' theme is available, along with the 'light' and 'dark' themes. To select a theme, click the gear icon in the top-right corner of the website and find themes under **User settings**.
-
-### New open-source code you can leverage
-
-Three new GitHub projects are available at our [GitHub repository](https://github.com/Azure-Samples/media-services-video-indexer):
-
-* Code to help you leverage the newly added [widget customization](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/Embedding%20widgets).
-* Solution to help you add [custom search](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/VideoSearchWithAutoMLVision) to your video libraries.
-* Solution to help you add [de-duplication](https://github.com/Azure-Samples/media-services-video-indexer/commit/6b828f598f5bf61ce1b6dbcbea9e8b87ba11c7b1) to your video libraries.
-
-### New option to toggle bounding boxes (for observed people) on the player
-
-When indexing a video through our advanced video settings, you can view our new observed people capabilities. If there are people detected in your media file, you can enable a bounding box on the detected person through the media player.
-
-## April 2021
-
-The Video Indexer service was renamed to Azure AI Video Indexer.
-
-### Improved upload experience in the portal
-
-Azure AI Video Indexer has a new upload experience in the [website](https://www.videoindexer.ai). To upload your media file, press the **Upload** button from the **Media files** tab.
-
-### New developer portal available in gov-cloud
-
-The [Azure AI Video Indexer API developer portal](https://api-portal.videoindexer.ai) is now also available in Azure for US Government.
-
-### Observed people tracing (preview)
-
-Azure AI Video Indexer now detects observed people in videos and provides information such as the location of the person in the video frame and the exact timestamp (start, end) when a person appears. The API returns the bounding box coordinates (in pixels) for each person instance detected, including its confidence.
-
-For example, if a video contains a person, the detect operation will list the person appearances together with their coordinates in the video frames. You can use this functionality to determine the person path in a video. It also lets you determine whether there are multiple instances of the same person in a video.
-
-The newly added observed people tracing feature is available when indexing your file by choosing the **Advanced option** -> **Advanced video** or **Advanced video + audio** preset (under Video + audio indexing). Standard and basic indexing presets will not include this new advanced model.
-
-When you choose to see Insights of your video on the Azure AI Video Indexer website, the Observed People Tracing will show up on the page with all detected people thumbnails. You can choose a thumbnail of a person and see where the person appears in the video player.
-
-The feature is also available in the JSON file generated by Azure AI Video Indexer. For more information, see [Trace observed people in a video](observed-people-tracing.md).
-
-### Detected acoustic events with **Audio Effects Detection** (preview)
-
-You can now see the detected acoustic events in the closed captions file. The file can be downloaded from the Azure AI Video Indexer website and is available as an artifact in the GetArtifact API.
-
-The **Audio Effects Detection** (preview) component detects various acoustic events and classifies them into different acoustic categories (such as gunshot, screaming, crowd reaction, and more). For more information, see [Audio effects detection](audio-effects-detection.md).
-
-## March 2021
-
-### Audio analysis
-
-Audio analysis is now available in an additional bundle of audio features at a different price point. The new **Basic Audio** analysis preset provides a low-cost option to extract only speech transcription, translation, and formatting of output captions and subtitles. The **Basic Audio** preset will produce two separate meters on your bill, including a line for transcription and a separate line for caption and subtitle formatting. For more information on pricing, see the [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/) page.
-
-The newly added bundle is available when indexing or re-indexing your file by choosing the **Advanced option** -> **Basic Audio** preset (under the **Video + audio indexing** drop-down box).
-
-### New developer portal
-
-Azure AI Video Indexer has a new [developer portal](https://api-portal.videoindexer.ai/), try out the new Azure AI Video Indexer APIs and find all the relevant resources in one place: [GitHub repository](https://github.com/Azure-Samples/media-services-video-indexer), [Stack overflow](https://stackoverflow.com/questions/tagged/video-indexer), [Azure AI Video Indexer tech community](https://techcommunity.microsoft.com/t5/azure-media-services/bg-p/AzureMediaServices/label-name/Video%20Indexer) with relevant blog posts, [Azure AI Video Indexer FAQs](faq.yml), [User Voice](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858) to provide your feedback and suggest features, and ['CodePen' link](https://codepen.io/videoindexer) with widgets code samples.
-
-### Advanced customization capabilities for insight widget
-
-An SDK is now available to embed Azure AI Video Indexer's insights widget in your own service and customize its style and data. The SDK supports the standard Azure AI Video Indexer insights widget and a fully customizable insights widget. A code sample is available in the [Azure AI Video Indexer GitHub repository](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/Embedding%20widgets/widget-customization). With these advanced customization capabilities, solution developers can apply custom styling, bring the customer's own AI data, and present it in the insights widget (with or without Azure AI Video Indexer insights).
-
-### Azure AI Video Indexer deployed in the US North Central, US West and Canada Central
-
-You can now create an Azure AI Video Indexer paid account in the US North Central, US West and Canada Central regions.
-
-### New source languages support for speech-to-text (STT), translation and search
-
-Azure AI Video Indexer now supports STT, translation and search in Danish ('da-DK'), Norwegian ('nb-NO'), Swedish ('sv-SE'), Finnish ('fi-FI'), Canadian French ('fr-CA'), Thai ('th-TH'), Arabic ('ar-BH', 'ar-EG', 'ar-IQ', 'ar-JO', 'ar-KW', 'ar-LB', 'ar-OM', 'ar-QA', 'ar-S', and 'ar-SY'), and Turkish ('tr-TR'). These languages are available in both the API and the Azure AI Video Indexer website.
-
-### Search by Topic in Azure AI Video Indexer Website
-
-You can now use the search feature, at the top of the [Azure AI Video Indexer website](https://www.videoindexer.ai/account/login) page, to search for videos with specific topics.
-
-## February 2021
-
-### Multiple account owners
-
-The account owner role was added to Azure AI Video Indexer. You can add and remove users and change their roles. For details on how to share an account, see [Invite users](restricted-viewer-role.md#share-the-account).
-
-### Audio event detection (public preview)
-
-> [!NOTE]
-> This feature is only available in trial accounts.
-
-Azure AI Video Indexer now detects the following audio effects in the non-speech segments of the content: gunshot, glass shatter, alarm, siren, explosion, dog bark, screaming, laughter, crowd reactions (cheering, clapping, and booing), and silence.
-
-The newly added audio effects feature is available when indexing your file by choosing the **Advanced option** -> **Advanced audio** preset (under Video + audio indexing). Standard indexing will only include **silence** and **crowd reaction**.
-
-The **clapping** event type, which was included in the previous audio effects model, is now extracted as part of the **crowd reaction** event type.
-
-When you choose to see **Insights** of your video on the [Azure AI Video Indexer](https://www.videoindexer.ai/) website, the Audio Effects show up on the page.
--
-### Named entities enhancement
-
-The extracted list of people and locations was extended and updated in general.
-
-In addition, the model now includes people and locations in context that are not famous, like a 'Sam' or 'Home' in the video.
-
-## January 2021
-
-### Azure AI Video Indexer is deployed on US Government cloud
-
-You can now create an Azure AI Video Indexer paid account on the US government cloud in the Virginia and Arizona regions.
-The Azure AI Video Indexer trial offering isn't available in these regions. For more information, go to the Azure AI Video Indexer documentation.
-
-### Azure AI Video Indexer deployed in the India Central region
-
-You can now create an Azure AI Video Indexer paid account in the India Central region.
-
-### New Dark Mode for the Azure AI Video Indexer website experience
-
-The Azure AI Video Indexer website experience is now available in dark mode.
-To enable dark mode, open the settings panel and toggle on the **Dark Mode** option.
--
-## December 2020
-
-### Azure AI Video Indexer deployed in the Switzerland West and Switzerland North
-
-You can now create an Azure AI Video Indexer paid account in the Switzerland West and Switzerland North regions.
-
-## October 2020
-
-### Planned Azure AI Video Indexer website authentication changes
-
-Starting March 1st 2021, you will no longer be able to sign up and sign in to the [Azure AI Video Indexer website](https://www.videoindexer.ai/) [developer portal](video-indexer-use-apis.md) using Facebook or LinkedIn.
-
-You will be able to sign up and sign in using one of these providers: Azure AD, Microsoft, and Google.
-
-> [!NOTE]
-> The Azure AI Video Indexer accounts connected to LinkedIn and Facebook will not be accessible after March 1st 2021.
->
-> You should [invite](restricted-viewer-role.md#share-the-account) an Azure AD, Microsoft, or Google email you own to the Azure AI Video Indexer account so you will still have access. You can add an additional owner of supported providers, as described in [invite](restricted-viewer-role.md#share-the-account). <br/>
-> Alternatively, you can create a paid account and migrate the data.
-
-## August 2020
-
-### Mobile design for the Azure AI Video Indexer website
-
-The Azure AI Video Indexer website experience is now supporting mobile devices. The user experience is responsive to adapt to your mobile screen size (excluding customization UIs).
-
-### Accessibility improvements and bug fixes
-
-As part of WCAG (Web Content Accessibility Guidelines), the Azure AI Video Indexer website experience is aligned with grade C, as part of the Microsoft accessibility standards. Several bugs and improvements related to keyboard navigation, programmatic access, and screen readers were solved.
-
-## July 2020
-
-### GA for multi-language identification
-
-Multi-language identification is moved from preview to GA and ready for productive use.
-
-There is no pricing impact related to the "Preview to GA" transition.
-
-### Azure AI Video Indexer website improvements
-
-#### Adjustments in the video gallery
-
-New search bar for deep insights search with additional filtering capabilities was added. Search results were also enhanced.
-
-New list view with ability to sort and manage video archive with multiple files.
-
-#### New panel for easy selection and configuration
-
-Side panel for easy selection and user configuration was added, allowing simple and quick account creation and sharing as well as setting configuration.
-
-Side panel is also used for user preferences and help.
-
-## June 2020
-
-### Search by topics
-
-You can now use the search API to search for videos with specific topics (API only).
-
-Topics is added as part of the `textScope` (optional) parameter. See the [API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Search-Videos) for details and the sketch below.
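
For illustration, here's a minimal Python sketch of a topic-scoped search. The `/Videos/Search` route and the `query`/`textScope` parameter names are assumptions based on the linked operation; confirm them in the API portal. The account ID and access token are placeholders.

```python
# Minimal sketch: search an account's videos by topic using the textScope parameter.
# The route and parameter names are assumptions; confirm against the Search Videos API.
import requests

location = "trial"                  # or your Azure region
account_id = "<your-account-id>"    # placeholder
access_token = "<access-token>"     # placeholder

response = requests.get(
    f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Videos/Search",
    params={
        "accessToken": access_token,
        "query": "climate change",      # free-text query
        "textScope": "Topics",          # restrict the text search to the topics insight
    },
)
response.raise_for_status()
for video in response.json().get("results", []):
    print(video.get("id"), video.get("name"))
```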
-
-### Labels enhancement
-
-The label tagger was upgraded and now includes more visual labels that can be identified.
-
-## May 2020
-
-### Azure AI Video Indexer deployed in the East US
-
-You can now create an Azure AI Video Indexer paid account in the East US region.
-
-### Azure AI Video Indexer URL
-
-Azure AI Video Indexer regional endpoints were all unified to start only with www. No action item is required.
-
-From now on, you reach www.videoindexer.ai whether it is for embedding widgets or logging into the [Azure AI Video Indexer](https://www.videoindexer.ai/) website.
-
-Also, wus.videoindexer.ai is redirected to www. More information is available in [Embed Azure AI Video Indexer widgets in your apps](video-indexer-embed-widgets.md).
-
-## April 2020
-
-### New widget parameters capabilities
-
-The **Insights** widget includes new parameters: `language` and `control`.
-
-The **Player** widget has a new `locale` parameter. Both `locale` and `language` parameters control the player's language.
-
-For more information, see the [widget types](video-indexer-embed-widgets.md#widget-types) section.
-
-### New player skin
-
-A new player skin launched with updated design.
-
-### Prepare for upcoming changes
-
-* Today, the following APIs return an account object:
-
- * [Create-Paid-Account](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Paid-Account)
- * [Get-Account](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account)
- * [Get-Accounts-Authorization](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Accounts-Authorization)
- * [Get-Accounts-With-Token](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Accounts-With-Token)
-
- The Account object has a `Url` field pointing to the location of the [Azure AI Video Indexer website](https://www.videoindexer.ai/).
-For paid accounts the `Url` field is currently pointing to an internal URL instead of the public website.
-In the coming weeks we will change it and return the [Azure AI Video Indexer website](https://www.videoindexer.ai/) URL for all accounts (trial and paid).
-
- Do not use the internal URLs, you should be using the [Azure AI Video Indexer public APIs](https://api-portal.videoindexer.ai/).
-* If you are embedding Azure AI Video Indexer URLs in your applications and the URLs are not pointing to the [Azure AI Video Indexer website](https://www.videoindexer.ai/) or the Azure AI Video Indexer API endpoint (`https://api.videoindexer.ai`) but rather to a regional endpoint (for example, `https://wus2.videoindexer.ai`), regenerate the URLs.
-
- You can do it by either:
-
- * Replacing the URL with a URL pointing to the Azure AI Video Indexer widget APIs (for example, the [insights widget](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Insights-Widget))
- * Using the Azure AI Video Indexer website to generate a new embedded URL:
-
- Press **Play** to get to your video's page -> click the **&lt;/&gt; Embed** button -> copy the URL into your application:
-
- The regional URLs are not supported and will be blocked in the coming weeks.
-
-## January 2020
-
-### Custom language support for additional languages
-
-Azure AI Video Indexer now supports custom language models for `ar-SY`, `en-UK`, and `en-AU` (API only).
-
-### Delete account timeframe action update
-
-Delete account action now deletes the account within 90 days instead of 48 hours.
-
-### New Azure AI Video Indexer GitHub repository
-
-A new Azure AI Video Indexer GitHub repository with different projects, getting started guides, and code samples is now available:
-https://github.com/Azure-Samples/media-services-video-indexer
-
-### Swagger update
-
-Azure AI Video Indexer unified **authentications** and **operations** into a single [Azure AI Video Indexer OpenAPI Specification (swagger)](https://api-portal.videoindexer.ai/api-details#api=Operations&operation). Developers can find the APIs in the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
-
-## December 2019
-
-### Update transcript with the new API
-
-Update a specific section in the transcript using the [Update-Video-Index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Index) API.
-
-### Fix account configuration from the Azure AI Video Indexer website
-
-You can now update Media Services connection configuration in order to self-help with issues like:
-
-* incorrect Azure Media Services resource
-* password changes
-* Media Services resources were moved between subscriptions
-
-To fix the account configuration, in the Azure AI Video Indexer website, navigate to Settings > Account tab (as owner).
-
-### Configure the custom vision account
-
-Configure the custom vision account on paid accounts using the Azure AI Video Indexer website (previously, this was only supported by API). To do that, sign in to the Azure AI Video Indexer website, choose Model Customization > <*model*> > Configure.
-
-### Scenes, shots and keyframes – now in one insight pane
-
-Scenes, shots, and keyframes are now merged into one insight for easier consumption and navigation. When you select the desired scene you can see what shots and keyframes it consists of.
-
-### Notification about a long video name
-
-When a video name is longer than 80 characters, Azure AI Video Indexer shows a descriptive error on upload.
-
-### Streaming endpoint is disabled notification
-
-When the streaming endpoint is disabled, Azure AI Video Indexer will show a descriptive error on the player page.
-
-### Error handling improvement
-
-Status code 409 will now be returned from the [Re-Index Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) and [Update Video Index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Index) APIs when a video is actively being indexed, to prevent accidentally overriding the current re-index changes.
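
For illustration, here's a minimal Python sketch that backs off and retries when the service returns 409 because the video is still being indexed. The `ReIndex` route is an assumption; check the Re-Index Video operation linked above. The account, video ID, and token values are placeholders.

```python
# Minimal sketch: retry a re-index request while the video is actively indexing (HTTP 409).
# The ReIndex route is an assumption; verify it against the Re-Index Video operation.
import time
import requests

location = "trial"                  # or your Azure region
account_id = "<your-account-id>"    # placeholder
video_id = "<your-video-id>"        # placeholder
access_token = "<access-token-with-write-permission>"  # placeholder

url = f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Videos/{video_id}/ReIndex"

for attempt in range(5):
    response = requests.put(url, params={"accessToken": access_token})
    if response.status_code != 409:
        response.raise_for_status()   # fail fast on any other error
        print("Re-index accepted")
        break
    # 409: the video is still being indexed; wait and try again
    time.sleep(60 * (attempt + 1))
```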
-
-## November 2019
-
-* Korean custom language models support
-
- Azure AI Video Indexer now supports custom language models in Korean (`ko-KR`) in both the API and portal.
-* New languages supported for speech-to-text (STT)
-
- Azure AI Video Indexer APIs now support STT in Arabic Levantine (ar-SY), English UK regional language (en-GB), and English Australian regional language (en-AU).
-
- For video upload, we replaced zh-HANS with zh-CN; both are supported, but zh-CN is recommended and more accurate.
-
-## October 2019
-
-* Search for animated characters in the gallery
-
- When indexing animated characters, you can now search for them in the account's video gallery.
-
-## September 2019
-
-Multiple advancements announced at IBC 2019:
-
-* Animated character recognition (public preview)
-
- Ability to detect, group, and recognize characters in animated content, via integration with custom vision.
-* Multi-language identification (public preview)
-
- Detect segments in multiple languages in the audio track and create a multilingual transcript based on them. Initial support: English, Spanish, German and French. For more information, see [Automatically identify and transcribe multi-language content](multi-language-identification-transcription.md).
-* Named entity extraction for People and Location
-
- Extracts brands, locations, and people from speech and visual text via natural language processing (NLP).
-* Editorial shot type classification
-
- Tagging of shots with editorial types such as close up, medium shot, two shot, indoor, outdoor etc. For more information, see [Editorial shot type detection](scenes-shots-keyframes.md#editorial-shot-type-detection).
-* Topic inferencing enhancement - now covering level 2
-
- The topic inferencing model now supports deeper granularity of the IPTC taxonomy. Read full details at [Azure Media Services new AI-powered innovation](https://azure.microsoft.com/blog/azure-media-services-new-ai-powered-innovation/).
-
-## August 2019 updates
-
-### Azure AI Video Indexer deployed in UK South
-
-You can now create an Azure AI Video Indexer paid account in the UK south region.
-
-### New Editorial Shot Type insights available
-
-New tags added to video shots provide editorial "shot types" to identify them with common editorial phrases used in the content creation workflow, such as: extreme closeup, closeup, wide, medium, two shot, outdoor, indoor, left face, and right face (available in the JSON).
-
-### New People and Locations entities extraction available
-
-Azure AI Video Indexer identifies named locations and people via natural language processing (NLP) from the video's OCR and transcription. Azure AI Video Indexer uses a machine learning algorithm to recognize when specific locations (for example, the Eiffel Tower) or people (for example, John Doe) are being called out in a video.
-
-### Keyframes extraction in native resolution
-
-Keyframes extracted by Azure AI Video Indexer are available in the original resolution of the video.
-
-### GA for training custom face models from images
-
-Training faces from images moved from Preview mode to GA (available via API and in the portal).
-
-> [!NOTE]
-> There is no pricing impact related to the "Preview to GA" transition.
-
-### Hide gallery toggle option
-
-Users can choose to hide the gallery tab from the portal (similar to hiding the samples tab).
-
-### Maximum URL size increased
-
-Support for a URL query string of up to 4096 characters (instead of 2048) when indexing a video.
-
-### Support for multi-lingual projects
-
-Projects can now be created based on videos indexed in different languages (API only).
-
-## July 2019
-
-### Editor as a widget
-
-The Azure AI Video Indexer AI-editor is now available as a widget to be embedded in customer applications.
-
-### Update custom language model from closed caption file from the portal
-
-Customers can provide VTT, SRT, and TTML file formats as input for language models in the customization page of the portal.
-
-## June 2019
-
-### Azure AI Video Indexer deployed to Japan East
-
-You can now create an Azure AI Video Indexer paid account in the Japan East region.
-
-### Create and repair account API (Preview)
-
-Added a new API that enables you to [update the Azure Media Service connection endpoint or key](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Paid-Account-Azure-Media-Services).
-
-### Improve error handling on upload
-
-A descriptive message is returned in case of misconfiguration of the underlying Azure Media Services account.
-
-### Player timeline Keyframes preview
-
-You can now see an image preview for each point in time on the player's timeline.
-
-### Editor semi-select
-
-You can now see a preview of all the insights that are selected as a result of choosing a specific insight timeframe in the editor.
-
-## May 2019
-
-### Update custom language model from closed caption file
-
-[Create custom language model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Language-Model) and [Update custom language models](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Language-Model) APIs now support VTT, SRT, and TTML file formats as input for language models.
-
-When calling the [Update Video transcript API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Transcript), the transcript is added automatically. The training model associated with the video is updated automatically as well. For information on how to customize and train your language models, see [Customize a Language model with Azure AI Video Indexer](customize-language-model-overview.md).
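
As a rough illustration, the sketch below sends a VTT file as the body of an Update Video Transcript request. The route, the `language` parameter, and the `text/vtt` content type are assumptions; the operation pages linked above are authoritative. The account, video ID, token, and file name are placeholders.

```python
# Minimal sketch: update a video transcript from a local VTT file.
# The route, query parameters, and content type are assumptions; verify them
# against the Update Video Transcript operation in the API portal.
import requests

location = "trial"                  # or your Azure region
account_id = "<your-account-id>"    # placeholder
video_id = "<your-video-id>"        # placeholder
access_token = "<access-token-with-write-permission>"  # placeholder

with open("corrected-captions.vtt", "rb") as vtt_file:
    response = requests.put(
        f"https://api.videoindexer.ai/{location}/Accounts/{account_id}"
        f"/Videos/{video_id}/Index/Transcript",
        params={"accessToken": access_token, "language": "en-US"},
        data=vtt_file.read(),
        headers={"Content-Type": "text/vtt"},
    )
response.raise_for_status()
```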
-
-### New download transcript formats ΓÇô TXT and CSV
-
-In addition to the closed captioning format already supported (SRT, VTT, and TTML), Azure AI Video Indexer now supports downloading the transcript in TXT and CSV formats.
-
-## Next steps
-
-[Overview](video-indexer-overview.md)
azure-video-indexer Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/resource-health.md
- Title: Diagnose Video Indexer resource issues with Azure Resource Health
-description: Learn how to diagnose Video Indexer resource issues with Azure Resource Health.
- Previously updated : 05/12/2023----
-# Diagnose Video Indexer resource issues with Azure Resource Health
--
-[Azure Resource Health](../service-health/resource-health-overview.md) can help you diagnose and get support for service problems that affect your Azure AI Video Indexer resources. Resource health is updated every 1-2 minutes and reports the current and past health of your resources. For additional details on how health is assessed, review the [full list of resource types and health checks](../service-health/resource-health-checks-resource-types.md#microsoftnetworkapplicationgateways) in Azure Resource Health.
-
-## Get started
-
-To open Resource Health for your Video Indexer resource:
-
-1. Sign in to the Azure portal.
-1. Browse to your Video Indexer account.
-1. On the resource menu in the left pane, in the Support and Troubleshooting section, select Resource health.
-
-The health status is displayed as one of the following statuses:
-
-### Available
-
-An **Available** status means the service hasn't detected any events that affect the health of the resource. You see the **Recently resolved** notification in cases where the resource has recovered from unplanned downtime during the last 24 hours.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/resource-health/available-status.png" alt-text="Diagram of Azure AI Video Indexer resource health." :::
-
-### Unavailable
-
-An Unavailable status means the service has detected an ongoing platform or non-platform event that affects the health of the resource.
-
-#### Platform events
-
-Platform events are triggered by multiple components of the Azure infrastructure. They include both scheduled actions (for example, planned maintenance) and unexpected incidents (for example, an unplanned host reboot).
-
-Resource Health provides additional details on the event and the recovery process. It also enables you to contact support even if you don't have an active Microsoft support agreement.
-
-### Unknown
-
-The Unknown health status indicates Resource Health hasn't received information about the resource for more than 10 minutes. Although this status isn't a definitive indication of the state of the resource, it can be an important data point for troubleshooting.
-
-If the resource is running as expected, the status of the resource will change to **Available** after a few minutes.
-
-If you experience problems with the resource, the **Unknown** health status might mean that an event in the platform is affecting the resource.
-
-### Degraded
-
-The Degraded health status indicates your Video Indexer resource has detected a loss in performance, although it's still available for usage.
-
-## Next steps
-
-- [Configuring Resource Health alerts](../service-health/resource-health-alert-arm-template-guide.md)
-- [Monitor Video Indexer](monitor-video-indexer.md)
-
-
-
azure-video-indexer Restricted Viewer Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/restricted-viewer-role.md
- Title: Manage access to an Azure AI Video Indexer account
-description: This article talks about Video Indexer restricted viewer built-in role. This role is an account level permission, which allows users to grant restricted access to a specific user or security group.
- Previously updated : 12/14/2022----
-# Manage access to an Azure AI Video Indexer account
--
-In this article, you'll learn how to manage access (authorization) to an Azure AI Video Indexer account. As Azure AI Video Indexer's role management differs depending on the Video Indexer account type, this document will first cover access management of regular accounts (ARM-based) and then of Classic and Trial accounts.
-
-To see your accounts, select **User Accounts** at the top-right of the [Azure AI Video Indexer website](https://videoindexer.ai/). Classic and Trial accounts will have a label with the account type to the right of the account name.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/restricted-viewer-role/accounts.png" alt-text="Image of accounts.":::
-
-## User management of ARM accounts
-
-[Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) is used to manage access to Azure resources, such as the ability to create new resources or use existing ones. Using Azure RBAC, you can segregate duties within your team and users by granting only the amount of access that is appropriate. Users in your Microsoft Entra ID are assigned specific roles, which grant access to resources.
-
-Users with owner or administrator Microsoft Entra permissions can assign roles to Microsoft Entra users or security groups for an account. For information on how to assign roles, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-
-Azure AI Video Indexer provides three built-in roles. You can learn more about [Azure built-in roles](../role-based-access-control/built-in-roles.md). Azure AI Video Indexer doesn't support the creation of custom roles.
-
-**Owner** - This role grants full access to manage all resources, including the ability to assign roles to determine who has access to resources.
-**Contributor** - This role has permissions to everything an owner does except it can't control who has access to resources.
-**Video Indexer Restricted Viewer** - This role is unique to Azure AI Video Indexer and has permissions to view videos and their insights but can't perform edits or changes or user management operations. This role enables collaboration and user access to insights through the Video Indexer website while limiting their ability to make changes to the environment.
-
-Users with this role can perform the following tasks:
-
-- View and play videos.
-- View and search insights and translate a video's insights and transcript.
-
-Users with this role are unable to perform the following tasks:
-
-- Upload/index/re-index a video.
-- Download/embed video/insights.
-- Change account settings.
-- Edit insights.
-- Create/update customized models.
-- Assign roles.
-- Generate an access token.
-
-Disabled features appear greyed out to users with **Restricted Viewer** access. When a user navigates to an unauthorized page, they receive a pop-up message that they don't have access.
-
-> [!Important]
-> The Restricted Viewer role is only available in Azure AI Video Indexer ARM accounts.
->
-
-### Manage account access (for account owners)
-
-If you're an account owner, you can add and remove roles for the account. You can also assign roles to users. Use the following links to discover how to manage access:
-
-- [Azure portal UI](../role-based-access-control/role-assignments-portal.md)
-- [PowerShell](../role-based-access-control/role-assignments-powershell.md)
-- [Azure CLI](../role-based-access-control/role-assignments-cli.md)
-- [REST API](../role-based-access-control/role-assignments-rest.md)
-- [Azure Resource Manager templates](../role-based-access-control/role-assignments-template.md)
-
-## User management of classic and trial accounts
-
-User management of classic accounts, including the creation of new users, is performed in the Account settings section of the Video Indexer website. This can be accessed by either:
-
-- Selecting the **User accounts** icon at the top-right of the website and then settings.
-- Selecting the **Account settings** icon on the left of the website.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/restricted-viewer-role/settings.png" alt-text="Image of account settings.":::
-
-### Share the account
-
-In the **Account setting** section, select **Manage Roles** to view all the account users and people with pending invites.
-
-To add users, click **Invite more people to this account**. They'll receive an invitation but you also have the option to copy the invite link to share it directly. Once they've accepted the invitation, you can define their role as either **Owner** or **Contributor**. See above in the [ARM Account user management](#user-management-of-arm-accounts) section for a description of the **Owner** and **Contributor** roles.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/restricted-viewer-role/share-account.png" alt-text="Image of invited users.":::
-
-## Next steps
-
-[Overview](video-indexer-overview.md)
azure-video-indexer Scenes Shots Keyframes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/scenes-shots-keyframes.md
- Title: Azure AI Video Indexer scenes, shots, and keyframes
-description: This topic gives an overview of the Azure AI Video Indexer scenes, shots, and keyframes.
- Previously updated : 06/07/2022----
-# Scenes, shots, and keyframes
--
-Azure AI Video Indexer supports segmenting videos into temporal units based on structural and semantic properties. This capability enables customers to easily browse, manage, and edit their video content based on varying granularities. For example, based on scenes, shots, and keyframes, described in this topic.
-
-![Scenes, shots, and keyframes](./media/scenes-shots-keyframes/scenes-shots-keyframes.png)
-
-## Scene detection
-
-Azure AI Video Indexer determines when a scene changes in video based on visual cues. A scene depicts a single event and it is composed of a series of consecutive shots, which are semantically related. A scene thumbnail is the first keyframe of its underlying shot. Azure AI Video Indexer segments a video into scenes based on color coherence across consecutive shots and retrieves the beginning and end time of each scene. Scene detection is considered a challenging task as it involves quantifying semantic aspects of videos.
-
-> [!NOTE]
-> Applicable to videos that contain at least 3 scenes.
-
-## Shot detection
-
-Azure AI Video Indexer determines when a shot changes in the video based on visual cues, by tracking both abrupt and gradual transitions in the color scheme of adjacent frames. The shot's metadata includes a start and end time, as well as the list of keyframes included in that shot. The shots are consecutive frames taken from the same camera at the same time.
-
-## Keyframe detection
-
-Azure AI Video Indexer selects the frame(s) that best represent each shot. Keyframes are the representative frames selected from the entire video based on aesthetic properties (for example, contrast and stableness). Azure AI Video Indexer retrieves a list of keyframe IDs as part of the shot's metadata, based on which customers can extract the keyframe as a high resolution image.
-
-### Extracting Keyframes
-
-To extract high-resolution keyframes for your video, you must first upload and index the video.
-
-![Keyframes](./media/scenes-shots-keyframes/extracting-keyframes.png)
-
-#### With the Azure AI Video Indexer website
-
-To extract keyframes using the Azure AI Video Indexer website, upload and index your video. Once the indexing job is complete, click the **Download** button and select **Artifacts (ZIP)**. This downloads the artifacts folder to your computer (make sure to view the warning regarding artifacts below). Unzip and open the folder. In the *_KeyframeThumbnail* folder, you will find all of the keyframes that were extracted from your video.
-
-![Screenshot that shows the "Download" drop-down with "Artifacts" selected.](./media/scenes-shots-keyframes/extracting-keyframes2.png)
-
-
-#### With the Azure AI Video Indexer API
-
-To get keyframes using the Video Indexer API, upload and index your video using the [Upload Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) call. Once the indexing job is complete, call [Get Video Index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index). This will give you all of the insights that Video Indexer extracted from your content in a JSON file.
-
-You will get a list of keyframe IDs as part of each shot's metadata.
-
-```json
-"shots":[
- {
- "id":0,
- "keyFrames":[
- {
- "id":0,
- "instances":[
- {
- "thumbnailId":"00000000-0000-0000-0000-000000000000",
- "start":"0:00:00.209",
- "end":"0:00:00.251",
- "duration":"0:00:00.042"
- }
- ]
- },
- {
- "id":1,
- "instances":[
- {
- "thumbnailId":"00000000-0000-0000-0000-000000000000",
- "start":"0:00:04.755",
- "end":"0:00:04.797",
- "duration":"0:00:00.042"
- }
- ]
- }
- ],
- "instances":[
- {
- "start":"0:00:00",
- "end":"0:00:06.34",
- "duration":"0:00:06.34"
- }
- ]
- },
-
-]
-```
-
-You will now need to pass each of these keyframe IDs to the [Get Thumbnails](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Thumbnail) call, which downloads each of the keyframe images to your computer, as in the sketch below.
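
Here's a minimal Python sketch of that flow. It assumes the index JSON places shots under `videos[0].insights.shots` (the snippet above shows only the `shots` array itself) and that the thumbnail route is `/Videos/{videoId}/Thumbnails/{thumbnailId}`; verify both against the Get Video Index and Get Thumbnails operations. The account, video ID, and token values are placeholders.

```python
# Minimal sketch: collect keyframe thumbnail IDs from the index JSON and download each image.
# The insights path and the Thumbnails route are assumptions; verify them in the API portal.
import requests

location = "trial"                  # or your Azure region
account_id = "<your-account-id>"    # placeholder
video_id = "<your-video-id>"        # placeholder
access_token = "<access-token>"     # placeholder
base = f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Videos/{video_id}"

# 1. Get the index JSON (its "shots" entries look like the sample above).
index = requests.get(f"{base}/Index", params={"accessToken": access_token}).json()

# 2. Collect every keyframe thumbnail ID from every shot.
thumbnail_ids = [
    instance["thumbnailId"]
    for shot in index["videos"][0]["insights"].get("shots", [])
    for key_frame in shot.get("keyFrames", [])
    for instance in key_frame.get("instances", [])
]

# 3. Download each keyframe as a JPEG file named after its thumbnail ID.
for thumbnail_id in thumbnail_ids:
    image = requests.get(
        f"{base}/Thumbnails/{thumbnail_id}",
        params={"accessToken": access_token, "format": "Jpeg"},
    )
    image.raise_for_status()
    with open(f"{thumbnail_id}.jpg", "wb") as jpg:
        jpg.write(image.content)
```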
-
-## Editorial shot type detection
-
-Keyframes are associated with shots in the output JSON.
-
-The shot type associated with an individual shot in the insights JSON represents its editorial type. You may find these shot type characteristics useful when editing videos into clips, trailers, or when searching for a specific style of keyframe for artistic purposes. The different types are determined based on analysis of the first keyframe of each shot. Shots are identified by the scale, size, and location of the faces appearing in their first keyframe.
-
-The shot size and scale are determined based on the distance between the camera and the faces appearing in the frame. Using these properties, Azure AI Video Indexer detects the following shot types:
-
-* Wide: shows an entire person's body.
-* Medium: shows a person's upper-body and face.
-* Close up: mainly shows a person's face.
-* Extreme close-up: shows a person's face filling the screen.
-
-Shot types can also be determined by location of the subject characters with respect to the center of the frame. This property defines the following shot types in Azure AI Video Indexer:
-
-* Left face: a person appears in the left side of the frame.
-* Center face: a person appears in the central region of the frame.
-* Right face: a person appears in the right side of the frame.
-* Outdoor: a person appears in an outdoor setting.
-* Indoor: a person appears in an indoor setting.
-
-Additional characteristics:
-
-* Two shots: shows two persons' faces of medium size.
-* Multiple faces: more than two persons.
--
-## Next steps
-
-[Examine the Azure AI Video Indexer output produced by the API](video-indexer-output-json-v2.md#scenes)
azure-video-indexer Slate Detection Insight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/slate-detection-insight.md
- Title: Slate detection insights
-description: Learn about slate detection insights.
- Previously updated : 09/20/2022----
-# The slate detection insights
--
-The following slate detection insights are automatically identified when indexing a video using the advanced indexing option. These insights are most useful to customers involved in the movie post-production process.
-
-* [Clapperboard](https://en.wikipedia.org/wiki/Clapperboard) detection with metadata extraction. This insight is used to detect clapperboard instances and the information written on each (for example, *production*, *roll*, *scene*, *take*, etc.).
-* Digital patterns detection, including [color bars](https://en.wikipedia.org/wiki/SMPTE_color_bars).
-* Textless slate detection, including scene matching.
-
-## View post-production insights
-
-### The Insight tab
-
-In order to set the indexing process to include the slate metadata, select the **Video + audio indexing** -> **Advanced** presets.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/slate-detection-process/advanced-setting.png" alt-text="This image shows the advanced setting in order to view post-production insights.":::
-
-### The Timeline tab
-
-After the file has been uploaded and indexed, if you want to view the timeline of the insight, select the **Post-production** checkmark from the list of insights.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/slate-detection-process/post-production-checkmark.png" alt-text="This image shows the post-production checkmark.":::
-
-For details about viewing each slate insight, see:
-
-- [How to enable and view clapper board with extracted metadata](clapperboard-metadata.md).
-- [How to enable and view digital patterns with color bars](digital-patterns-color-bars.md)
-- [How to enable and view textless slate with scene matching](textless-slate-scene-matching.md).
-
-## Next steps
-
-[Overview](video-indexer-overview.md)
azure-video-indexer Storage Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/storage-behind-firewall.md
- Title: Use Video Indexer with storage behind firewall
-description: This article gives an overview how to configure Azure AI Video Indexer to use storage behind firewall.
- Previously updated : 03/21/2023----
-# Configure Video Indexer to work with storage accounts behind firewall
--
-When you create a Video Indexer account, you must associate it with a Media Services and Storage account. Video Indexer can access Media Services and Storage using system authentication or Managed Identity authentication. Video Indexer validates that the user adding the association has access to the Media Services and Storage account with Azure Resource Manager Role Based Access Control (RBAC).
-
-If you want to use a firewall to secure your storage account and enable trusted storage, [Managed Identities](/azure/media-services/latest/concept-managed-identities) authentication that allows Video Indexer access through the firewall is the preferred option. It allows Video Indexer and Media Services to access the storage account that has been configured without needing public access for [trusted storage access.](../storage/common/storage-network-security.md?tabs=azure-portal#grant-access-to-trusted-azure-services)
-
-> [!IMPORTANT]
-> When you lock down your storage account without public access, be aware that the client device you use to download the video source file through the Video Indexer portal is the source IP that the storage account sees and allows or denies, depending on your storage account's network configuration. For example, if you access the Video Indexer portal from your home network and download the video source file, a SAS URL to the storage account is created and your device initiates the request, so the storage account sees your home IP as the source IP. If you didn't add an exception for that IP, you can't access the SAS URL to the source video. Work with your network or storage administrator on a network strategy, for example, using your corporate network, a VPN, or Private Link.
-
-Follow these steps to enable managed identity for Media Services and Storage and then lock down your storage account. It's assumed that you already created a Video Indexer account and associated it with a Media Services and Storage account.
-
-## Assign the Managed Identity and role
-
-1. When you navigate to your Video Indexer account for the first time, Video Indexer validates whether you have the correct role assignments for Media Services and Storage. If not, banners appear that let you assign the correct role automatically. If you don't see the banner for the Storage account, your Storage account isn't behind a firewall, or everything is already set.
-
- :::image type="content" source="./media/storage-behind-firewall/trusted-service-assign-role-banner.png" alt-text="Screenshot shows how to assign role to Media Services and Storage accounts from the Azure portal.":::
-1. When you select **Assign Role**, the following roles are assigned: `Azure Media Services : Contributor` and `Azure Storage : Storage Blob Data Owner`. You can verify or manually set assignments by navigating to the **Identity** menu of your Video Indexer account and selecting **Azure Role Assignments**.
-
- :::image type="content" source="./media/storage-behind-firewall/trusted-service-verify-assigned-roles.png" alt-text="Screenshot of assigned roles from the Azure portal.":::
-1. Navigate to your Media Services account and select **Storage accounts**.
-
- :::image type="content" source="./media/storage-behind-firewall/trusted-service-media-services-managed-identity-menu.png" alt-text="Screenshot of Assigned Managed Identity role on the connected storage account for Media Services from the Azure portal.":::
-1. Select **Managed identity**. A warning that you have no managed identities will appear. Select **Click here** to configure one.
-
- :::image type="content" source="./media/storage-behind-firewall/trusted-service-media-services-managed-identity-selection.png" alt-text="Screenshot of enable System Managed Identity role on the connected storage account for Media Services from the Azure portal.":::
-1. Select **User** or **System-assigned** identity. In this case, choose **System-assigned**.
-1. Select **Save**.
-1. Select **Storage accounts** in the menu and select **Managed identity** again. This time, the banner that you don't have a managed identity shouldn't appear. Instead, you can now select the managed identity in the dropdown menu.
-1. Select **System-assigned**.
-
- :::image type="content" source="./media/storage-behind-firewall/trusted-service-media-services-managed-identity-system-assigned-selection.png" alt-text="Screenshot of Azure portal to select System Managed Identity role on the connected storage account for Media Services from the Azure portal.":::
-1. Select **Save**.
-1. Navigate to your Storage account. Select **Networking** from the menu and select **Enabled from selected virtual networks and IP addresses** in the **Public network access** section.
-
- :::image type="content" source="./media/storage-behind-firewall/trusted-service-storage-lock-select-exceptions.png" alt-text="Screenshot of how to disable public access for your storage account and enable exception for trusted services from the Azure portal.":::
-1. Under **Exceptions**, make sure that **Allow Azure services on the trusted services list to access this storage account** is selected.
--
-## Upload from locked storage account
-
-When you upload a file to Video Indexer, you can provide a link to a video by using a SAS locator. If the storage account that hosts the video isn't publicly accessible, you need to use the managed identity and trusted service approach. Because there's no way for Video Indexer to know whether a SAS URL points to a locked storage account (this also applies to the storage account connected to Media Services), you need to explicitly set the query parameter `useManagedIdentityToDownloadVideo` to `true` in the [upload-video API call](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video). In addition, you need to set the `Azure Storage : Storage Blob Data Owner` role on this storage account, as you did with the storage account connected to Media Services in the previous section.
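For example, a minimal JavaScript sketch of such an upload call might look like the following. The endpoint shape and the placeholder values (region, account ID, access token, and SAS URL) are illustrative assumptions; confirm the exact parameter names in the upload-video API reference before using it.

```javascript
// Hypothetical placeholder values; replace with your own and verify the
// parameter names against the upload-video API reference.
const region = "<your-region>";                 // for example, "westeurope"
const accountId = "<your-account-id>";
const accessToken = "<account-access-token>";
const sasUrl = "<sas-url-to-the-source-video>"; // points to the locked storage account

const uploadUrl =
  `https://api.videoindexer.ai/${region}/Accounts/${accountId}/Videos` +
  `?name=${encodeURIComponent("my-video")}` +
  `&videoUrl=${encodeURIComponent(sasUrl)}` +
  `&useManagedIdentityToDownloadVideo=true` +   // lets Video Indexer download through the firewall
  `&accessToken=${accessToken}`;

const response = await fetch(uploadUrl, { method: "POST" });
console.log(await response.json());             // metadata of the created video, including its ID
```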
-
-## Summary
-
-This concludes the tutorial. With these steps, you've completed the following activities:
-
-1. Assigned the Video Indexer managed identity the necessary roles on Media Services (Contributor) and Storage (Storage Blob Data Owner).
-1. Assigned the Media Services managed identity access to the storage account.
-1. Locked down your storage account behind a firewall and allowed Azure trusted services to access the storage account by using managed identity.
-
-## Next steps
-
-[Disaster recovery](video-indexer-disaster-recovery.md)
azure-video-indexer Switch Tenants Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/switch-tenants-portal.md
- Title: Switch between tenants on the Azure AI Video Indexer website
-description: This article shows how to switch between tenants in the Azure AI Video Indexer website.
- Previously updated : 01/24/2023----
-# Switch between multiple tenants
--
-When you work with multiple tenants (directories) in the Azure environment, you might need to switch between directories.
-
-When you sign in to the Azure AI Video Indexer website, a default directory loads and its relevant accounts are listed in the **Account list**.
-
-> [!Note]
-> Trial accounts and Classic accounts are global and not tenant-specific. Hence, the tenant switching described in this article only applies to your ARM accounts.
->
-> The option to switch directories is available only for users using Microsoft Entra ID to log in.
-
-This article shows two options to solve the same problem - how to switch tenants:
- When starting [from within the Azure AI Video Indexer website](#switch-tenants-from-within-the-azure-ai-video-indexer-website).
- When starting [from outside of the Azure AI Video Indexer website](#switch-tenants-from-outside-the-azure-ai-video-indexer-website).
-## Switch tenants from within the Azure AI Video Indexer website
-
-1. To switch between directories in the [Azure AI Video Indexer](https://www.videoindexer.ai/), open the **User menu** > select **Switch directory**.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of a user name.](./media/switch-directory/avi-user-switch.png)
-
    Here, all detected directories are listed, and the current directory is marked. Once you select a different directory, the **Switch directory** button becomes available.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of a tenant list.](./media/switch-directory/tenants.png)
-
    After you select **Switch directory**, your authenticated credentials are used to sign in again to the Azure AI Video Indexer website with the new directory.
-
-## Switch tenants from outside the Azure AI Video Indexer website
-
-This section shows how to get the domain name from the Azure portal. You can then use it to sign in to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website.
-
-### Get the domain name
-
-1. Sign in to the [Azure portal](https://portal.azure.com) using the same subscription tenant in which your Azure AI Video Indexer Azure Resource Manager (ARM) account was created.
-1. Hover over your account name (in the right-top corner).
-
- > [!div class="mx-imgBorder"]
- > ![Hover over your account name.](./media/switch-directory/account-attributes.png)
-1. Get the domain name of the current Azure subscription. You'll need it in the last step of the following section.
-
-If you want to see domains for all of your directories and switch between them, see [Switch and manage directories with the Azure portal](../azure-portal/set-preferences.md#switch-and-manage-directories).
-
-### Sign in with the correct domain name on the AVI website
-
-1. Go to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website.
-1. Select the button in the top-right corner, and then press **Sign out**.
-1. On the AVI website, press **Sign in** and choose the Microsoft Entra account.
-
- > [!div class="mx-imgBorder"]
- > ![Sign in with the Microsoft Entra account.](./media/switch-directory/choose-account.png)
-1. Press **Use another account**.
-
- > [!div class="mx-imgBorder"]
- > ![Choose another account.](./media/switch-directory/use-another-account.png)
-1. Choose **Sign-in with other options**.
-
- > [!div class="mx-imgBorder"]
- > ![Sign in with other options.](./media/switch-directory/sign-in-options.png)
-1. Press **Sign in to an organization**.
-
- > [!div class="mx-imgBorder"]
- > ![Sign in to an organization.](./media/switch-directory/sign-in-organization.png)
-1. Enter the domain name you copied in the [Get the domain name from the Azure portal](#get-the-domain-name) section.
-
- > [!div class="mx-imgBorder"]
- > ![Find the organization.](./media/switch-directory/find-your-organization.png)
-
-## Next steps
-
-[FAQ](faq.yml)
azure-video-indexer Textless Slate Scene Matching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/textless-slate-scene-matching.md
- Title: Enable and view a textless slate with matching scene
-description: Learn about how to enable and view a textless slate with matching scene.
- Previously updated : 09/20/2022----
-# Enable and view a textless slate with matching scene
--
-This article shows how to enable and view a textless slate with matching scene (preview).
-
-This insight is most useful to customers involved in the movie post-production process.
-
-## View post-production insights
-
-In order to set the indexing process to include the slate metadata, select the **Video + audio indexing** -> **Advanced** presets.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/slate-detection-process/advanced-setting.png" alt-text="This image shows the advanced setting in order to view post-production clapperboards insights.":::
-
-### Insight
-
-This insight can be viewed only in the downloaded JSON file.
-
-## Next steps
-
-* [Slate detection overview](slate-detection-insight.md)
-* [How to enable and view clapper board with extracted metadata](clapperboard-metadata.md).
-* [How to enable and view digital patterns with color bars](digital-patterns-color-bars.md).
azure-video-indexer Topics Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/topics-inference.md
- Title: Azure AI Video Indexer topics inference overview
-description: An introduction to Azure AI Video Indexer topics inference component responsibly.
- Previously updated : 06/15/2022-----
-# Topics inference
--
-Topics inference is an Azure AI Video Indexer AI feature that automatically creates inferred insights derived from the transcribed audio, OCR content in visual text, and celebrities recognized in the video by using the Video Indexer facial recognition model. The extracted topics and categories (when available) are listed in the Insights tab. To jump to a topic in the media file, select a topic and then select **Play Previous** or **Play Next**.
-
-The resulting insights are also generated in a categorized list in a JSON file which includes the topic name, timeframe and confidence score.
-
-## Prerequisites
-
-Review [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
-
-## General principles
-
-This article discusses topics and the key considerations for making use of this technology responsibly. There are many things you need to consider when deciding how to use and implement an AI-powered feature:
- Will this feature perform well in my scenario? Before deploying topics inference into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
- Are we equipped to identify and respond to errors? AI-powered products and features won't be 100% accurate, so consider how you'll identify and respond to any errors that may occur.
-## View the insight
-
-To display Topics Inference insights on the website:
-
-1. Go to Insights and scroll to Topics.
-
-To display the instances in a JSON file, do the following:
-
-1. Click Download -> Insight (JSON).
-1. Copy the `topics` text and paste it into your JSON viewer.
-
- ```json
- "topics": [
- {
- "id": 1,
- "name": "Pens",
- "referenceId": "Category:Pens",
- "referenceUrl": "https://en.wikipedia.org/wiki/Category:Pens",
- "referenceType": "Wikipedia",
- "confidence": 0.6833,
- "iabName": null,
- "language": "en-US",
- "instances": [
- {
- "adjustedStart": "0:00:30",
- "adjustedEnd": "0:01:17.5",
- "start": "0:00:30",
- "end": "0:01:17.5"
- }
- ]
- },
- {
- "id": 2,
- "name": "Musical groups",
- "referenceId": "Category:Musical_groups",
- "referenceUrl": "https://en.wikipedia.org/wiki/Category:Musical_groups",
- "referenceType": "Wikipedia",
- "confidence": 0.6812,
- "iabName": null,
- "language": "en-US",
- "instances": [
- {
- "adjustedStart": "0:01:10",
- "adjustedEnd": "0:01:17.5",
- "start": "0:01:10",
- "end": "0:01:17.5"
- }
- ]
- },
- ```
-
-To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
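If you prefer to script the download instead of using the portal, a hedged sketch of fetching the index JSON might look like the following. The endpoint shape and placeholder values are assumptions for illustration; verify them in the developer portal.

```javascript
// Placeholder values; verify the endpoint and parameters in the developer portal.
const region = "<your-region>";
const accountId = "<your-account-id>";
const videoId = "<your-video-id>";
const accessToken = "<video-access-token>";

const indexUrl =
  `https://api.videoindexer.ai/${region}/Accounts/${accountId}/Videos/${videoId}/Index` +
  `?accessToken=${accessToken}`;

const index = await (await fetch(indexUrl)).json();

// The topics appear under each video's insights in the returned JSON.
console.log(index.videos?.[0]?.insights?.topics);
```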
-
-For more information, see [about topics](https://azure.microsoft.com/blog/multi-modal-topic-inferencing-from-videos/).
-
-## Topics components
-
-During the topics indexing procedure, topics are extracted, as follows:
-
-|Component|Definition|
-|||
-|Source language |The user uploads the source file for indexing.|
-|Pre-processing|Transcription, OCR and facial recognition AIs extract insights from the media file.|
-|Insights processing| Topics AI analyzes the transcription, OCR and facial recognition insights extracted during pre-processing: <br/>- Transcribed text, each line of transcribed text insight is examined using ontology-based AI technologies. <br/>- OCR and Facial Recognition insights are examined together using ontology-based AI technologies. |
-|Post-processing |- Transcribed text, insights are extracted and tied to a Topic category together with the line number of the transcribed text. For example, Politics in line 7.<br/>- OCR and Facial Recognition, each insight is tied to a Topic category together with the time of the topic's instance in the media file. For example, Freddie Mercury in the People and Music categories at 20.00. |
-|Confidence value |The estimated confidence level of each topic is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as an 0.82 score.|
-
-## Example use cases
- Personalization using topics inference to match customer interests, for example websites about England posting promotions about English movies or festivals.
- Deep-searching archives for insights on specific topics to create feature stories about companies, personas, or technologies, for example by a news agency.
- Monetization, increasing the worth of extracted insights. For example, industries like the news or social media that rely on ad revenue can deliver relevant ads by using the extracted insights as additional signals to the ad server.
-## Considerations and limitations when choosing a use case
-
-Below are some considerations to keep in mind when using topics:
- When uploading a file, always use high-quality video content. The recommended maximum frame size is HD and the frame rate is 30 FPS. A frame should contain no more than 10 people. When outputting frames from videos to AI models, only send around 2 or 3 frames per second. Processing 10 or more frames might delay the AI result.
- When uploading a file, always use high-quality audio and video content. At least 1 minute of spontaneous conversational speech is required to perform analysis. Audio effects are detected in non-speech segments only. The minimal duration of a non-speech section is 2 seconds. Voice commands and singing aren't supported.
- Typically, small people or objects under 200 pixels and people who are seated may not be detected. People wearing similar clothes or uniforms might be detected as being the same person and will be given the same ID number. People or objects that are obstructed may not be detected. Tracks of people with front and back poses may be split into different instances.
-When used responsibly and carefully, Azure AI Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
- Always respect an individual's right to privacy, and only ingest videos for lawful and justifiable purposes.
- Don't purposely disclose inappropriate media showing young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual's personal freedom.
- Commit to respecting and promoting human rights in the design and deployment of your analyzed media.
- When using third-party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
- Always seek legal advice when using media from unknown sources.
- Always obtain appropriate legal and professional advice to ensure that your uploaded videos are secured and have adequate controls to preserve the integrity of your content and to prevent unauthorized access.
- Provide a feedback channel that allows users and individuals to report issues with the service.
- Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing media containing people.
- Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision-making.
- Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
-## Next steps
-
-### Learn More about Responsible AI
- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
-### Contact us
-
-`visupport@microsoft.com`
-
-## Azure AI Video Indexer insights
- [Audio effects detection](audio-effects-detection.md)
- [Face detection](face-detection.md)
- [Keywords extraction](keywords.md)
- [Transcription, translation & language identification](transcription-translation-lid.md)
- [Labels identification](labels-identification.md)
- [Named entities](named-entities.md)
- [Observed people tracking & matched faces](observed-matched-people.md)
azure-video-indexer Transcription Translation Lid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/transcription-translation-lid.md
- Title: Azure AI Video Indexer media transcription, translation and language identification overview
-description: An introduction to Azure AI Video Indexer media transcription, translation and language identification components responsibly.
- Previously updated : 06/15/2022-----
-# Media transcription, translation and language identification
--
-Azure AI Video Indexer transcription, translation and language identification automatically detects, transcribes, and translates the speech in media files into over 50 languages.
- Azure AI Video Indexer processes the speech in the audio file to extract the transcription, which is then translated into many languages. When you select to translate into a specific language, both the transcription and the insights like keywords, topics, labels, or OCR are translated into the specified language. Transcription can be used as is or be combined with speaker insights that map and assign the transcripts to speakers. Multiple speakers can be detected in an audio file. An ID is assigned to each speaker and is displayed under their transcribed speech.
- Azure AI Video Indexer language identification (LID) automatically recognizes the supported dominant spoken language in the video file. For more information, see [Applying LID](/azure/azure-video-indexer/language-identification-model).
- Azure AI Video Indexer multi-language identification (MLID) automatically recognizes the spoken languages in different segments in the audio file and sends each segment to be transcribed in the identified languages. At the end of this process, all transcriptions are combined into the same file. For more information, see [Applying MLID](/azure/azure-video-indexer/multi-language-identification-transcription).

The resulting insights are generated in a categorized list in a JSON file that includes the ID, language, transcribed text, duration, and confidence score.

- When indexing media files with multiple speakers, Azure AI Video Indexer performs speaker diarization, which identifies each speaker in a video and attributes each transcribed line to a speaker. The speakers are given a unique identity such as Speaker #1 and Speaker #2. This allows for the identification of speakers during conversations and can be useful in a variety of scenarios such as doctor-patient conversations, agent-customer interactions, and court proceedings.
-## Prerequisites
-
-Review [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
-
-## General principles
-
-This article discusses transcription, translation and language identification and the key considerations for making use of this technology responsibly. There are many things you need to consider when deciding how to use and implement an AI-powered feature:
- Will this feature perform well in my scenario? Before deploying transcription, translation, and language identification in your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
- Are we equipped to identify and respond to errors? AI-powered products and features won't be 100% accurate, so consider how you'll identify and respond to any errors that may occur.
-## View the insight
-
-To view the insights on the website:
-
-1. Go to Insight and scroll to Transcription and Translation.
-
-To view language insights in `insights.json`, do the following:
-
-1. Select Download -> Insights (JSON).
-1. Copy the desired element, under `insights`, and paste it into your online JSON viewer.
-
- ```json
- "insights": {
- "version": "1.0.0.0",
- "duration": "0:01:50.486",
- "sourceLanguage": "en-US",
- "sourceLanguages": [
- "en-US"
- ],
- "language": "en-US",
- "languages": [
- "en-US"
- ],
- "transcript": [
- {
- "id": 1,
- "text": "Hi, I'm Doug from office. We're talking about new features that office insiders will see first and I have a program manager,",
- "confidence": 0.8879,
- "speakerId": 1,
- "language": "en-US",
- "instances": [
- {
- "adjustedStart": "0:00:00",
- "adjustedEnd": "0:00:05.75",
- "start": "0:00:00",
- "end": "0:00:05.75"
- }
- ]
- },
- {
- "id": 2,
- "text": "Emily Tran, with office graphics.",
- "confidence": 0.8879,
- "speakerId": 1,
- "language": "en-US",
- "instances": [
- {
- "adjustedStart": "0:00:05.75",
- "adjustedEnd": "0:00:07.01",
- "start": "0:00:05.75",
- "end": "0:00:07.01"
- }
- ]
- },
- ```
-
-To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
-
-## Transcription, translation and language identification components
-
-During the transcription, translation and language identification procedure, speech in a media file is processed, as follows:
-
-|Component|Definition|
-|||
-|Source language | The user uploads the source file for indexing, and either:<br/>- Specifies the video source language.<br/>- Selects auto detect single language (LID) to identify the language of the file. The output is saved separately.<br/>- Selects auto detect multi language (MLID) to identify multiple languages in the file. The output of each language is saved separately.|
-|Transcription API| The audio file is sent to Azure AI services to get the transcribed and translated output. If a language has been specified, it's processed accordingly. If no language is specified, a LID or MLID process is run to identify the language after which the file is processed. |
-|Output unification |The transcribed and translated files are unified into the same file. The outputted data includes the speaker ID of each extracted sentence together with its confidence level.|
-|Confidence value |The estimated confidence level of each sentence is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as an 0.82 score.|
-
-## Example use cases
- Promoting accessibility by making content available for people with hearing disabilities, using Azure AI Video Indexer to generate speech-to-text transcription and translation into multiple languages.
- Improving content distribution to a diverse audience in different regions and languages by delivering content in multiple languages using Azure AI Video Indexer's transcription and translation capabilities.
- Enhancing and improving manual closed captioning and subtitles generation by leveraging Azure AI Video Indexer's transcription and translation capabilities and by using the closed captions generated by Azure AI Video Indexer in one of the supported formats.
- Using language identification (LID) or multi-language identification (MLID) to transcribe videos in unknown languages, allowing Azure AI Video Indexer to automatically identify the languages appearing in the video and generate the transcription accordingly.
-## Considerations and limitations when choosing a use case
-
-When used responsibly and carefully, Azure AI Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
- Carefully consider the accuracy of the results. To promote more accurate data, check the quality of the audio; low-quality audio might impact the detected insights.
- Always respect an individual's right to privacy, and only ingest videos for lawful and justifiable purposes.
- Don't purposely disclose inappropriate media showing young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual's personal freedom.
- Commit to respecting and promoting human rights in the design and deployment of your analyzed media.
- When using third-party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
- Always seek legal advice when using media from unknown sources.
- Always obtain appropriate legal and professional advice to ensure that your uploaded videos are secured and have adequate controls to preserve the integrity of your content and to prevent unauthorized access.
- Provide a feedback channel that allows users and individuals to report issues with the service.
- Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing media containing people.
- Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision-making.
- Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
- Video Indexer doesn't perform speaker recognition, so speakers aren't assigned an identifier across multiple files. You can't search for an individual speaker in multiple files or transcripts.
- Speaker identifiers are assigned randomly and can only be used to distinguish different speakers in a single file.
- Cross-talk and overlapping speech: When multiple speakers talk simultaneously or interrupt each other, it becomes challenging for the model to accurately distinguish and assign the correct text to the corresponding speakers.
- Speaker overlaps: Sometimes, speakers may have similar speech patterns, accents, or use similar vocabulary, making it difficult for the model to differentiate between them.
- Noisy audio: Poor audio quality, background noise, or low-quality recordings can hinder the model's ability to correctly identify and transcribe speakers.
- Emotional speech: Emotional variations in speech, such as shouting, crying, or extreme excitement, can affect the model's ability to accurately diarize speakers.
- Speaker disguise or impersonation: If a speaker intentionally tries to imitate or disguise their voice, the model might misidentify the speaker.
- Ambiguous speaker identification: Some segments of speech may not have enough unique characteristics for the model to confidently attribute to a specific speaker.
-For more information, see: guidelines and limitations in [language detection and transcription](/azure/azure-video-indexer/multi-language-identification-transcription).
-
-## Next steps
-
-### Learn More about Responsible AI
- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
-### Contact us
-
-`visupport@microsoft.com`
-
-## Azure AI Video Indexer insights
- [Audio effects detection](audio-effects-detection.md)
- [Face detection](face-detection.md)
- [OCR](ocr.md)
- [Keywords extraction](keywords.md)
- [Labels identification](labels-identification.md)
- [Named entities](named-entities.md)
- [Observed people tracking & matched faces](observed-matched-people.md)
- [Topics inference](topics-inference.md)
azure-video-indexer Upload Index Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/upload-index-videos.md
- Title: Upload and index videos with Azure AI Video Indexer using the Video Indexer website
-description: Learn how to upload videos by using Azure AI Video Indexer.
- Previously updated : 05/10/2023----
-# Upload media files using the Video Indexer website
--
-You can upload media files from your file system or from a URL. You can also configure basic or advanced settings for indexing, such as privacy, streaming quality, language, presets, people and brands models, custom logos and metadata.
-
-This article shows how to upload and index media files (audio or video) using the [Azure AI Video Indexer website](https://aka.ms/vi-portal-link).
-
-You can also view a video that shows [how to upload and index media files](https://www.youtube.com/watch?v=H-SHX8N65vM&t=34s&ab_channel=AzureVideoIndexer).
-
-## Prerequisites
- To upload media files, you need an active Azure AI Video Indexer account. If you don't have one, [sign up](https://aka.ms/vi-portal-link) for a free trial account, or create an [unlimited paid account](https://aka.ms/avam-arm-docs).
- To upload media files, you need at least contributor-level permission for your account. To manage permissions, see [Manage users and groups](restricted-viewer-role.md).
- To upload media files from a URL, you need a publicly accessible URL for the media file. For example, if the file is hosted in an Azure storage account, you need to [generate a SAS token URL](../ai-services/document-intelligence/create-sas-tokens.md?view=form-recog-3.0.0&preserve-view=true) and paste it in the input box. You can't use URLs from streaming services such as YouTube.
-## Quick upload
-
-Follow the steps below to upload and index a media file by using the quick upload option.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/upload-index-videos/file-system-basic.png" alt-text="Screenshot that shows file system basic.":::
-
-1. Sign in to the [Video Indexer website](https://aka.ms/vi-portal-link).
-1. Select **Upload**.
-1. Select the file source. You can upload up to 10 files at a time.
-
- - To upload from your file system, select **Browse files** and choose the files you want to upload.
- - To upload from a URL, select **Enter URL**, paste the source file URL, and select **Add**.
-
- Make sure the URL is valid and the file is accessible.
-
- > [!NOTE]
- > If the file name is marked in red, it means the file has an issue and can't be uploaded.
-1. Configure the basic settings for indexing or use the default configuration. You need to specify the following settings for each file:
-
- - **Privacy**: Choose whether the video URL will be publicly available or private after indexing.
- - **Streaming quality**: Choose the streaming quality for the video. You can select **No streaming**, **Single bitrate**, or **Adaptive bitrate**. For more information, see [the streaming options](indexing-configuration-guide.md#streaming-quality-options)
- - **Video source language**: Choose the spoken language of the video to ensure high quality transcript and insights extraction. If you don't know the language or there's more than one spoken language, select **Auto-detect single language** or **Auto-detect multi language**. For more information, see [Language detection](multi-language-identification-transcription.md).
-1. If this is the first time you upload a media file, you need to check the consent checkbox to agree to the terms and conditions.
-1. Select **Upload+index**.
-1. Review the summary page that shows the indexing settings and the upload progress.
-1. After the indexing is done, you can view the insights by selecting the video.
-
-## Advanced upload
-
-Follow the steps below to upload and index a media file by using the advanced upload option.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/upload-index-videos/advanced-settings.png" alt-text="Screenshot that shows advanced settings.":::
-
-1. Sign in to the [Video Indexer website](https://aka.ms/vi-portal-link).
-1. Select **Upload**.
-1. Select the file source. You can upload up to 10 files at a time.
-
- - To upload from your file system, select **Browse files** and choose the files you want to upload. To add more files, select **Add file**. To remove a file, select **Remove** on the file name.
- - To upload from a URL, select **Enter URL**, paste the source file URL, and select **Add**.
-
- Make sure the URL is valid and the file is accessible.
-
- > [!Note]
- > If the file name is marked in red, it means the file has an issue and can't be uploaded. You can add URLs from different storages for each file.
-1. Configure the basic settings, for more information, see the [quick upload](#quick-upload) section above.
-1. Configure the general settings for indexing. You can rename a file by rewriting its file name. The updated name is reflected as the file name in Video Indexer.
-1. Configure the advanced settings for indexing. The selection of the following settings is for all files in the batch:
-
- - **Indexing preset**: [Choose the preset](indexing-configuration-guide.md#indexing-options) that fits your scenario. You can also exclude sensitive AI by selecting the checkbox.
- - **People model**: If you're using a customized people model, choose it from the dropdown list.
- - **Brand categories**: If you're using a customized brand model, choose it from the dropdown list.
- - **File information**: If you want to add metadata, enter the free text in the input box. The metadata is shared between all files in the same upload batch. When uploading a single file, you can also add a description.
-1. Select **Upload+index**.
-1. Review the summary page that shows the indexing settings and the upload progress.
-1. After the indexing is done, you can view the insights by selecting the video.
-
-## Troubleshoot upload issues
-
-If you encounter any issues while uploading media files, try the following solutions:
- If the **Upload** button is disabled, hover over the button and check for the indication of the problem. Try to refresh the page.

    If you're using a trial account, check whether you have reached the account quota for daily count, daily duration, or total duration. To view your quota and usage, see the account settings.

- If the upload from URL failed, make sure that the URL is valid and accessible by Video Indexer. Make sure that the URL isn't from a streaming service such as YouTube. Make sure that the media file isn't encrypted, protected by DRM, corrupted, or damaged. Make sure that the media file format is supported by Video Indexer. For a list of supported formats, see [supported media formats](/azure/media-services/latest/encode-media-encoder-standard-formats-reference).
- If the upload from the file system failed, make sure that the file size isn't larger than 2 GB. Make sure that you have a stable internet connection.
-## Next steps
-
-[Supported media formats](/azure/azure-video-indexer/upload-index-videos?tabs=with-arm-account-account#supported-file-formats)
azure-video-indexer Use Editor Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/use-editor-create-project.md
- Title: Use the Azure AI Video Indexer editor to create projects and add video clips
-description: This topic demonstrates how to use the Azure AI Video Indexer editor to create projects and add video clips.
- Previously updated : 11/28/2020----
-# Add video clips to your projects
--
-The [Azure AI Video Indexer](https://www.videoindexer.ai/) website enables you to use your video's deep insights to find the right media content, locate the parts that you're interested in, and use the results to create an entirely new project.
-
-Once created, the project can be rendered and downloaded from Azure AI Video Indexer and be used in your own editing applications or downstream workflows.
-
-Some scenarios where you may find this feature useful are:
-
-* Creating movie highlights for trailers.
-* Using old clips of videos in news casts.
-* Creating shorter content for social media.
-
-This article shows how to create a project and add selected clips from the videos to the project.
-
-## Create new project and manage videos
-
-1. Browse to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website and sign in.
-1. Select the **Projects** tab. If you have created projects before, you will see all of your other projects here.
-1. Click **Create new project**.
-
- :::image type="content" source="./media/video-indexer-view-edit/new-project.png" alt-text="Create a new project":::
-1. Give your project a name by clicking on the pencil icon. Replace the text that says "Untitled project" with your project name and click on the check.
-
- :::image type="content" source="./media/video-indexer-view-edit/new-project-edit-name.png" alt-text="A new project":::
-
-### Add videos to the project
-
-> [!NOTE]
-> Currently, projects may only contain videos indexed in the same language. <br/>Once you select a video in one language, you can't add videos from your account that are in a different language; videos in other languages are grayed out and disabled.
-
-1. Add videos that you want to work with in this project by selecting **Add videos**.
-
- You will see all the videos in your account and a search box that says "Search for text, keywords, or visual content". You can search for videos that have a specified person, label, brand, keyword, or occurrence in the transcript and OCR.
-
- For example, in the image below, we were looking for videos that mention "custom vision" in transcript only (use **Filter** if you want to filter your search results).
-
- :::image type="content" source="./media/video-indexer-view-edit/custom-vision.png" alt-text="Screenshot shows searching for videos that mention custom vision":::
-1. Click **Add** to add videos to the project.
-1. Now, you will see all of the videos you chose. These are the videos from which you are going to select clips for your project.
-
- You can rearrange the order of the videos by dragging and dropping or by selecting the list menu button and selecting **Move down** or **Move up**. From the list menu, you will also be able to remove the video from this project.
-
- You can add more videos to this project at any time by selecting **Add videos**. You can also add multiple occurrences of the same video to your project. You might want to do this if you want to show a clip from one video and then a clip from another and then another clip from the first video.
-
-### Select clips to use in your project
-
-If you click on the downward arrow on the right side of each video, you will open up the insights in the video based on time stamps (clips of the video).
-
-1. To create queries for specific clips, use the search box that says "Search in transcript, visual text, people, and labels".
-1. Select **View Insights** to customize which insights you want to see and which you don't want to see.
-
- :::image type="content" source="./media/video-indexer-view-edit/search-try-cognitive-services.png" alt-text="Screenshot shows searching for videos that say Try Azure AI services":::
-1. Add filters to further specify details on what scenes you are looking for by selecting **Filter options**.
-
- You can add multiple filters.
-1. Once you're happy with your results, add a clip to this project by selecting the segment you want to add. To unselect a clip, select the segment again.
-
- Add all segments of a video (or, all that were returned after your search) by clicking on the list menu option next to the video and selecting **Select all**.
-
-As you are selecting and ordering your clips, you can preview the video in the player on the right side of the page.
-
-> [!IMPORTANT]
-> Remember to save your project when you make changes by selecting **Save project** at the top of the page.
-
-### Render and download the project
-
-> [!NOTE]
-> For Azure AI Video Indexer paid accounts, rendering your project has encoding costs. Azure AI Video Indexer trial accounts are limited to 5 hours of rendering.
-
-1. Once you're done, make sure that your project has been saved. You can now render the project. Select **Render**. A popup dialog tells you that Azure AI Video Indexer will render a file and send the download link to your email. Select **Proceed**.
-
- :::image type="content" source="./media/video-indexer-view-edit/render-download.png" alt-text="Screenshot shows Azure AI Video Indexer with the option to Render and download your project":::
-
    You'll also see a notification at the top of the page that the project is being rendered. Once rendering is done, you'll see a new notification that the project has been successfully rendered. Select the notification to download the project. The project is downloaded in MP4 format.
-1. You can access saved projects from the **Projects** tab.
-
- If you select this project, you see all the insights and the timeline of this project. If you select **Video editor**, you can continue making edits to this project. Edits include adding or removing videos and clips or renaming the project.
-
-## Create a project from your video
-
-You can create a new project directly from a video in your account.
-
-1. Go to the **Library** tab of the Azure AI Video Indexer website.
-1. Open the video that you want to use to create your project. On the insights and timeline page, select the **Video editor** button.
-
    This takes you to the same page that you used to create a new project. Unlike a new project, you see the timestamped insights segments of the video that you previously started editing.
-
-## See also
-
-[Azure AI Video Indexer overview](video-indexer-overview.md)
-
azure-video-indexer Video Indexer Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-disaster-recovery.md
- Title: Azure AI Video Indexer failover and disaster recovery
-description: Learn how to fail over to a secondary Azure AI Video Indexer account if a regional datacenter failure or disaster occurs.
- Previously updated : 07/29/2019----
-# Azure AI Video Indexer failover and disaster recovery
--
-Azure AI Video Indexer doesn't provide instant failover of the service if there's a regional datacenter outage or failure. This article explains how to configure your environment for a failover to ensure optimal availability for apps and minimized recovery time if a disaster occurs.
-
-We recommend that you configure business continuity disaster recovery (BCDR) across regional pairs to benefit from Azure's isolation and availability policies. For more information, see [Azure paired regions](../availability-zones/cross-region-replication-azure.md).
-
-## Prerequisites
-
-An Azure subscription. If you don't have an Azure subscription yet, sign up for [Azure free trial](https://azure.microsoft.com/free/).
-
-## Fail over to a secondary account
-
-To implement BCDR, you need to have two Azure AI Video Indexer accounts to handle redundancy.
-
-1. Create two Azure AI Video Indexer accounts connected to Azure (see [Create an Azure AI Video Indexer account](connect-to-azure.md)). Create one account for your primary region and the other for the paired Azure region.
-1. If there's a failure in your primary region, switch to indexing using the secondary account.
-
-> [!TIP]
-> You can automate BCDR by setting up activity log alerts for service health notifications as per [Create activity log alerts on service notifications](../service-health/alerts-activity-log-service-notifications-portal.md).
-
-For information about using multiple tenants, see [Manage multiple tenants](manage-multiple-tenants.md). To implement BCDR, choose one of these two options: [Azure AI Video Indexer account per tenant](./manage-multiple-tenants.md#azure-ai-video-indexer-account-per-tenant) or [Azure subscription per tenant](./manage-multiple-tenants.md#azure-subscription-per-tenant).
-
-## Next steps
-
-[Manage an Azure AI Video Indexer account connected to Azure](manage-account-connected-to-azure.md).
azure-video-indexer Video Indexer Embed Widgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-embed-widgets.md
- Title: Embed Azure AI Video Indexer widgets in your apps
-description: Learn how to embed Azure AI Video Indexer widgets in your apps.
- Previously updated : 01/10/2023----
-# Embed Azure AI Video Indexer widgets in your apps
--
-This article shows how you can embed Azure AI Video Indexer widgets in your apps. Azure AI Video Indexer supports embedding three types of widgets into your apps: *Cognitive Insights*, *Player*, and *Editor*.
-
-Starting with version 2, the widget base URL includes the region of the specified account. For example, an account in the West US region generates: `https://www.videoindexer.ai/embed/insights/.../?location=westus2`.
-
-## Widget types
-
-### Cognitive Insights widget
-
-A Cognitive Insights widget includes all visual insights that were extracted from your video indexing process. The Cognitive Insights widget supports the following optional URL parameters:
-
-|Name|Definition|Description|
-||||
-|`widgets` | Strings separated by comma | Allows you to control the insights that you want to render.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?widgets=people,keywords` renders only people and keywords UI insights.<br/>Available options: `people`, `keywords`, `audioEffects`, `labels`, `sentiments`, `emotions`, `topics`, `keyframes`, `transcript`, `ocr`, `speakers`, `scenes`, `spokenLanguage`, `observedPeople`, `namedEntities`, `detectedObjects`.|
-|`controls`|Strings separated by comma|Allows you to control the controls that you want to render.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?controls=search,download` renders only search option and download button.<br/>Available options: `search`, `download`, `presets`, `language`.|
-|`language`|A short language code (language name)|Controls insights language.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?language=es-es` <br/>or `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?language=spanish`|
-|`locale` | A short language code | Controls the language of the UI. The default value is `en`. <br/>Example: `locale=de`.|
-|`tab` | The default selected tab | Controls the **Insights** tab that's rendered by default. <br/>Example: `tab=timeline` renders the insights with the **Timeline** tab selected.|
-|`search` | String | Allows you to control the initial search term.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?search=azure` renders the insights filtered by the word "Azure". |
-|`sort` | Strings separated by comma | Allows you to control the sorting of an insight.<br/>Each sort consists of 3 values: widget name, property and order, connected with '_' `sort=name_property_order`<br/>Available options:<br/>widgets: `keywords`, `audioEffects`, `labels`, `sentiments`, `emotions`, `keyframes`, `scenes`, `namedEntities` and `spokenLanguage`.<br/>property: `startTime`, `endTime`, `seenDuration`, `name` and `ID`.<br/>order: asc and desc.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?sort=labels_id_asc,keywords_name_desc` renders the labels sorted by ID in ascending order and keywords sorted by name in descending order.|
-|`location` ||The `location` parameter must be included in the embedded links, see [how to get the name of your region](regions.md). If your account is in preview, the `trial` should be used for the location value. `trial` is the default value for the `location` parameter.|
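As an illustration, the following sketch builds a Cognitive Insights widget URL from a few of the parameters above and loads it into an iframe. The account and video IDs are placeholders, and the iframe dimensions are arbitrary choices for the example.

```javascript
// Placeholder IDs; replace with your own account and video.
const accountId = "<your-account-id>";
const videoId = "<your-video-id>";

const params = new URLSearchParams({
  widgets: "people,keywords",   // render only the people and keywords insights
  controls: "search,download",  // show only the search box and the download button
  locale: "en",                 // UI language
  location: "trial"             // replace with your region name for paid accounts
});

const insightsFrame = document.createElement("iframe");
insightsFrame.width = "580";
insightsFrame.height = "780";
insightsFrame.allowFullscreen = true;
insightsFrame.src =
  `https://www.videoindexer.ai/embed/insights/${accountId}/${videoId}/?${params}`;
document.body.appendChild(insightsFrame);
```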
-
-### Player widget
-
-You can use the Player widget to stream video by using adaptive bit rate. The Player widget supports the following optional URL parameters.
-
-|Name|Definition|Description|
-||||
-|`t` | Seconds from the start | Makes the player start playing from the specified time point.<br/> Example: `t=60`. |
-|`captions` | A language code / A language code array | Fetches the caption in the specified language during the widget loading to be available on the **Captions** menu.<br/> Example: `captions=en-US`, `captions=en-US,es-ES` |
-|`showCaptions` | A Boolean value | Makes the player load with the captions already enabled.<br/> Example: `showCaptions=true`. |
-|`type`| | Activates an audio player skin (the video part is removed).<br/> Example: `type=audio`. |
-|`autoplay` | A Boolean value | Indicates if the player should start playing the video when loaded. The default value is `true`.<br/> Example: `autoplay=false`. |
-|`language`/`locale` | A language code | Controls the player language. The default value is `en-US`.<br/>Example: `language=de-DE`.|
-|`location` ||The `location` parameter must be included in the embedded links, see [how to get the name of your region](regions.md). If your account is in preview, the `trial` should be used for the location value. `trial` is the default value for the `location` parameter.|
-|`boundingBoxes`|Array of bounding boxes. Options: people (faces), observed people, and detected objects. <br/>Separate values with a comma (",").|Controls which bounding boxes are turned on when embedding the player.<br/>All listed options are turned on.<br/><br/>Example: `boundingBoxes=observedPeople,people,detectedObjects`<br/>Default value is `boundingBoxes=observedPeople,detectedObjects` (only the observed people and detected objects bounding boxes are turned on).|
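Similarly, a Player widget URL can be assembled from the parameters above. This is a sketch with placeholder IDs and arbitrary iframe dimensions.

```javascript
// Placeholder IDs; replace with your own account and video.
const accountId = "<your-account-id>";
const videoId = "<your-video-id>";

const params = new URLSearchParams({
  t: "60",              // start playback at 60 seconds
  captions: "en-US",    // fetch English captions
  showCaptions: "true", // load with captions enabled
  autoplay: "false",
  location: "trial"     // replace with your region name for paid accounts
});

const playerFrame = document.createElement("iframe");
playerFrame.width = "640";
playerFrame.height = "360";
playerFrame.allowFullscreen = true;
playerFrame.src =
  `https://www.videoindexer.ai/embed/player/${accountId}/${videoId}/?${params}`;
document.body.appendChild(playerFrame);
```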
-
-### Editor widget
-
-You can use the Editor widget to create new projects and manage a video's insights. The Editor widget supports the following optional URL parameters.
-
-|Name|Definition|Description|
-||||
-|`accessToken`<sup>*</sup> | String | Provides access to videos that are only in the account that's used to embed the widget.<br> The Editor widget requires the `accessToken` parameter. |
-|`language` | A language code | Controls the player language. The default value is `en-US`.<br/>Example: `language=de-DE`. |
-|`locale` | A short language code | Controls the insights language. The default value is `en`.<br/>Example: `locale=de`. |
-|`location` ||The `location` parameter must be included in the embedded links, see [how to get the name of your region](regions.md). If your account is in preview, the `trial` should be used for the location value. `trial` is the default value for the `location` parameter.|
-
-<sup>*</sup>The owner should provide `accessToken` with caution.
-
-## Embed videos
-
-This section discusses embedding videos by [using the website](#the-website-experience) or by [assembling the URL manually](#assemble-the-url-manually) into apps.
-
-The `location` parameter must be included in the embedded links, see [how to get the name of your region](regions.md). If your account is in preview, the `trial` should be used for the location value. `trial` is the default value for the `location` parameter. For example: `https://www.videoindexer.ai/accounts/00000000-0000-0000-0000-000000000000/videos/b2b2c74b8e/?location=trial`.
-
-### The website experience
-
-To embed a video, use the website as described below:
-
-1. Sign in to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website.
-1. Select the video that you want to work with and press **Play**.
-1. Select the type of widget that you want (**Insights**, **Player**, or **Editor**).
-1. Click **&lt;/&gt; Embed**.
-5. Copy the embed code (appears in **Copy the embedded code** in the **Share & Embed** dialog).
-6. Add the code to your app.
-
-> [!NOTE]
-> Sharing a link for the **Player** or **Insights** widget will include the access token and grant the read-only permissions to your account.
-
-### Assemble the URL manually
-
-#### Public videos
-
-You can embed public videos by assembling the URL as follows:
-
-`https://www.videoindexer.ai/embed/[insights | player]/<accountId>/<videoId>`
-
-
-#### Private videos
-
-To embed a private video, you must pass an access token (use [Get Video Access Token](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Access-Token)) in the `src` attribute of the iframe:
-
-`https://www.videoindexer.ai/embed/[insights | player]/<accountId>/<videoId>/?accessToken=<accessToken>`
-
-### Provide editing insights capabilities
-
-To provide editing insights capabilities in your embedded widget, you must pass an access token that includes editing permissions. Use [Get Video Access Token](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Access-Token) with `&allowEdit=true`.
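A sketch of that flow, assuming the classic (non-ARM) Get Video Access Token endpoint and an API subscription key, might look like the following. The endpoint shape, header name, and response format are assumptions to verify in the API portal; all values are placeholders.

```javascript
// Assumed endpoint and header for the classic Get Video Access Token operation;
// confirm them in the Video Indexer API portal. All values are placeholders.
const region = "<your-region>";
const accountId = "<your-account-id>";
const videoId = "<your-video-id>";

const tokenUrl =
  `https://api.videoindexer.ai/Auth/${region}/Accounts/${accountId}/Videos/${videoId}/AccessToken` +
  `?allowEdit=true`;

const accessToken = await (await fetch(tokenUrl, {
  headers: { "Ocp-Apim-Subscription-Key": "<your-api-key>" }
})).json(); // the response body is the token string

// Pass the token to the Insights (or Editor) widget URL to enable editing.
const editableInsightsUrl =
  `https://www.videoindexer.ai/embed/insights/${accountId}/${videoId}/` +
  `?accessToken=${accessToken}&location=${region}`;
console.log(editableInsightsUrl);
```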
-
-## Widgets interaction
-
-The Cognitive Insights widget can interact with a video on your app. This section shows how to achieve this interaction.
-
-![Cognitive Insights widget](./media/video-indexer-embed-widgets/video-indexer-widget03.png)
-
-### Flow overview
-
-When you edit the transcripts, the following flow occurs:
-
-1. You edit the transcript in the timeline.
-1. Azure AI Video Indexer gets these updates and saves them in the [from transcript edits](customize-language-model-with-website.md#customize-language-models-by-correcting-transcripts) in the language model.
-1. The captions are updated:
-
    * If you're using the Azure AI Video Indexer player widget, it's automatically updated.
    * If you're using an external player, you get a new captions file by using the **Get video captions** call.
-
-### Cross-origin communications
-
-To get Azure AI Video Indexer widgets to communicate with other components:
-
-- Uses the cross-origin communication HTML5 method `postMessage`.
-- Validates the message across VideoIndexer.ai origin.
-
-If you implement your own player code and integrate with Cognitive Insights widgets, it's your responsibility to validate the origin of the message that comes from VideoIndexer.ai.
-
-### Embed widgets in your app or blog (recommended)
-
-This section shows how to achieve interaction between two Azure AI Video Indexer widgets so that when a user selects the insight control on your app, the player jumps to the relevant moment.
-
-1. Copy the Player widget embed code.
-2. Copy the Cognitive Insights embed code.
-3. Add the [Mediator file](https://breakdown.blob.core.windows.net/public/vb.widgets.mediator.js) to handle the communication between the two widgets:<br/>
-`<script src="https://breakdown.blob.core.windows.net/public/vb.widgets.mediator.js"></script>`
-
-Now when a user selects the insight control on your app, the player jumps to the relevant moment.
-
-For more information, see the [Azure AI Video Indexer - Embed both Widgets demo](https://codepen.io/videoindexer/pen/NzJeOb).
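-
-As a rough sketch of what the resulting page setup can look like (the account and video IDs are placeholders, and the iframe sizes and `location=trial` value are assumptions), the following script loads the mediator and adds both widgets:
-
-```javascript
-// Placeholders: replace the account ID and video ID with your own values.
-const accountId = "00000000-0000-0000-0000-000000000000";
-const videoId = "<videoId>";
-
-// Load the mediator script that relays postMessage events between the widgets.
-const mediator = document.createElement("script");
-mediator.src = "https://breakdown.blob.core.windows.net/public/vb.widgets.mediator.js";
-document.head.appendChild(mediator);
-
-// Add the Player and Cognitive Insights iframes (normally copied from the embed codes).
-for (const widget of ["player", "insights"]) {
-  const iframe = document.createElement("iframe");
-  iframe.src = `https://www.videoindexer.ai/embed/${widget}/${accountId}/${videoId}/?location=trial`;
-  iframe.width = "580";
-  iframe.height = "780";
-  iframe.setAttribute("allowfullscreen", "");
-  document.body.appendChild(iframe);
-}
-```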
-
-### Embed the Cognitive Insights widget and use Azure Media Player to play the content
-
-This section shows how to achieve interaction between a Cognitive Insights widget and an Azure Media Player instance by using the [AMP plug-in](https://breakdown.blob.core.windows.net/public/amp-vb.plugin.js).
-
-1. Add an Azure AI Video Indexer plug-in for the AMP player:<br/> `<script src="https://breakdown.blob.core.windows.net/public/amp-vb.plugin.js"></script>`
-2. Instantiate Azure Media Player with the Azure AI Video Indexer plug-in.
-
- ```javascript
- // Init the source.
- function initSource() {
- var tracks = [{
- kind: 'captions',
- // To load vtt from VI, replace it with your vtt URL.
- src: this.getSubtitlesUrl("c4c1ad4c9a", "English"),
- srclang: 'en',
- label: 'English'
- }];
- myPlayer.src([
- {
- "src": "//amssamples.streaming.mediaservices.windows.net/91492735-c523-432b-ba01-faba6c2206a2/AzureMediaServicesPromo.ism/manifest",
- "type": "application/vnd.ms-sstr+xml"
- }
- ], tracks);
- }
-
- // Init your AMP instance.
- var myPlayer = amp('vid1', { /* Options */
- "nativeControlsForTouch": false,
- autoplay: true,
- controls: true,
- width: "640",
- height: "400",
- poster: "",
- plugins: {
- videobreakedown: {}
- }
- }, function () {
- // Activate the plug-in.
- this.videobreakdown({
- videoId: "c4c1ad4c9a",
- syncTranscript: true,
- syncLanguage: true,
- location: "trial" /* location option for paid accounts (default is trial) */
- });
-
- // Set the source dynamically.
- initSource.call(this);
- });
- ```
-
-3. Copy the Cognitive Insights embed code.
-
-You can now communicate with Azure Media Player.
-
-For more information, see the [Azure Media Player + VI Insights demo](https://codepen.io/videoindexer/pen/rYONrO).
-
-### Embed the Azure AI Video Indexer Cognitive Insights widget and use a different video player
-
-If you use a video player other than Azure Media Player, you must manually manipulate the video player to achieve the communication.
-
-1. Insert your video player.
-
- For example, a standard HTML5 player:
-
- ```html
- <video id="vid1" width="640" height="360" controls autoplay preload>
- <source src="//breakdown.blob.core.windows.net/public/Microsoft%20HoloLens-%20RoboRaid.mp4" type="video/mp4" />
- Your browser does not support the video tag.
- </video>
- ```
-
-2. Embed the Cognitive Insights widget.
-3. Implement communication for your player by listening to the "message" event. For example:
-
- ```javascript
- <script>
-
- (function(){
- // Reference your player instance.
- var playerInstance = document.getElementById('vid1');
-
- function jumpTo(evt) {
- var origin = evt.origin || evt.originalEvent.origin;
-
- // Validate that the event comes from the videoindexer domain.
- if ((origin === "https://www.videoindexer.ai") && evt.data.time !== undefined){
-
- // Call your player's "jumpTo" implementation.
- playerInstance.currentTime = evt.data.time;
-
- // Confirm the arrival to us.
- if ('postMessage' in window) {
- evt.source.postMessage({confirm: true, time: evt.data.time}, origin);
- }
- }
- }
-
- // Listen to the message event.
- window.addEventListener("message", jumpTo, false);
-
- }())
-
- </script>
- ```
-
-For more information, see the [Azure Media Player + VI Insights demo](https://codepen.io/videoindexer/pen/YEyPLd).
-
-## Adding subtitles
-
-If you embed Azure AI Video Indexer insights with your own [Azure Media Player](https://aka.ms/azuremediaplayer), you can use the `GetVttUrl` method to get closed captions (subtitles). You can also call the Azure AI Video Indexer AMP plug-in's JavaScript method `getSubtitlesUrl` (as shown earlier).
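-
-If you use your own HTML5 player instead, a minimal sketch of attaching a captions file looks like the following. It assumes you already retrieved a VTT URL (for example, through `getSubtitlesUrl` or the captions API) and that your video element has the ID `vid1`; both are placeholders.
-
-```javascript
-// Assumes an existing <video id="vid1"> element and a VTT captions URL you
-// obtained separately (the URL below is a placeholder).
-const video = document.getElementById("vid1");
-
-const track = document.createElement("track");
-track.kind = "captions";
-track.src = "<your VTT captions URL>";
-track.srclang = "en";
-track.label = "English";
-track.default = true;
-
-video.appendChild(track);
-```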
-
-## Customizing embeddable widgets
-
-### Cognitive Insights widget
-
-You can choose the types of insights that you want. To do this, specify them as a value to the following URL parameter that's added to the embed code that you get (from the [API](https://aka.ms/avam-dev-portal) or from the [Azure AI Video Indexer](https://www.videoindexer.ai/) website): `&widgets=<list of wanted widgets>`.
-
-The possible values are listed [here](#cognitive-insights-widget).
-
-For example, if you want to embed a widget that contains only people and keywords insights, the iframe embed URL will look like this:
-
-`https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?widgets=people,keywords`
-
-The title of the iframe window can also be customized by providing `&title=<YourTitle>` to the iframe URL. (It customizes the HTML `<title>` value).
-
-For example, if you want to give your iframe window the title "MyInsights", the URL will look like this:
-
-`https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?title=MyInsights`
-
-Notice that this option is relevant only in cases when you need to open the insights in a new window.
-
-### Player widget
-
-If you embed Azure AI Video Indexer player, you can choose the size of the player by specifying the size of the iframe.
-
-For example:
-
-`<iframe width="640" height="360" src="https://www.videoindexer.ai/embed/player/<accountId>/<videoId>/" frameborder="0" allowfullscreen />`
-
-By default, Azure AI Video Indexer player has autogenerated closed captions that are based on the transcript of the video. The transcript is extracted from the video with the source language that was selected when the video was uploaded.
-
-If you want to embed captions in a different language, you can add `&captions=<Language Code>` to the embed player URL. If you want the captions to be displayed by default, you can pass `&showCaptions=true`.
-
-The embed URL then will look like this:
-
-`https://www.videoindexer.ai/embed/player/<accountId>/<videoId>/?captions=en-us`
-
-#### Autoplay
-
-By default, the player starts playing the video. You can choose not to autoplay by passing `&autoplay=false` to the preceding embed URL.
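-
-These parameters can be combined. As a small illustrative sketch (the account and video IDs are placeholders), the following builds a player URL with English captions shown by default and autoplay disabled:
-
-```javascript
-// Placeholders for the account ID and video ID; adjust the parameter values as needed.
-const playerUrl =
-  "https://www.videoindexer.ai/embed/player/<accountId>/<videoId>/" +
-  "?captions=en-us&showCaptions=true&autoplay=false";
-console.log(playerUrl);
-```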
-
-## Code samples
-
-See the [code samples](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/Embedding%20widgets) repo that contains samples for Azure AI Video Indexer API and widgets:
-
-| File/folder | Description |
-|--|--|
-| `azure-media-player` | Load an Azure AI Video Indexer video in a custom Azure Media Player. |
-| `azure-media-player-vi-insights` | Embed VI Insights with a custom Azure Media Player. |
-| `control-vi-embedded-player` | Embed VI Player and control it from outside. |
-| `custom-index-location` | Embed VI Insights from a custom external location (for example, a customer's blob storage). |
-| `embed-both-insights` | Basic usage of both the VI Player and Insights widgets. |
-| `embed-insights-with-AMP` | Embed VI Insights widget with a custom Azure Media Player. |
-| `customize-the-widgets` | Embed VI widgets with customized options. |
-| `embed-both-widgets` | Embed VI Player and Insights and communicate between them. |
-| `url-generator` | Generates a custom widget embed URL based on user-specified options. |
-| `html5-player` | Embed VI Insights with a default HTML5 video player. |
-
-## Supported browsers
-
-For more information, see [supported browsers](video-indexer-get-started.md#supported-browsers).
-
-## Embed and customize Azure AI Video Indexer widgets in your app using npmjs package
-
-Using our [@azure/video-analyzer-for-media-widgets](https://www.npmjs.com/package/@azure/video-analyzer-for-media-widgets) package, you can add the insights widgets to your app and customize them according to your needs.
-
-Instead of adding an iframe element to embed the insights widget, this package lets you easily embed and communicate between the widgets. Customizing your widget is supported only in this package, all in one place.
-
-For more information, see our official [GitHub](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/Embedding%20widgets/widget-customization#readme).
-
-## Next steps
-
-For information about how to view and edit Azure AI Video Indexer insights, see [View and edit Azure AI Video Indexer insights](video-indexer-view-edit.md).
-
-Also, check out [Azure AI Video Indexer CodePen](https://codepen.io/videoindexer/pen/eGxebZ).
azure-video-indexer Video Indexer Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-get-started.md
- Title: Sign up for Azure AI Video Indexer and upload your first video - Azure
-description: Learn how to sign up and upload your first video using the Azure AI Video Indexer website.
- Previously updated : 08/24/2022
-# Quickstart: How to sign up and upload your first video
---
-You can access Azure AI Video Indexer capabilities in three ways:
-
-* The [Azure AI Video Indexer website](https://www.videoindexer.ai/): An easy-to-use solution that lets you evaluate the product, manage the account, and customize models (as described in this article).
-* API integration: All of Azure AI Video Indexer's capabilities are available through a REST API, which lets you integrate the solution into your apps and infrastructure. To get started, see [Use Azure AI Video Indexer REST API](video-indexer-use-apis.md).
-* Embeddable widget: Lets you embed the Azure AI Video Indexer insights, player, and editor experiences into your app. For more information, see [Embed visual widgets in your application](video-indexer-embed-widgets.md).
-
-Once you start using Azure AI Video Indexer, all your stored data and uploaded content are encrypted at rest with a Microsoft managed key.
-
-> [!NOTE]
-> Review [planned Azure AI Video Indexer website authentication changes](./release-notes.md#planned-azure-ai-video-indexer-website-authenticatication-changes).
-
-This quickstart shows you how to sign in to the Azure AI Video Indexer [website](https://www.videoindexer.ai/) and how to upload your first video.
--
-## Sign up and upload a video
-
-### Supported browsers
-
-The following list shows the supported browsers that you can use for the Azure AI Video Indexer website and for your apps that embed the widgets. The list also shows the minimum supported browser version:
-
-- Edge, version: 16
-- Firefox, version: 54
-- Chrome, version: 58
-- Safari, version: 11
-- Opera, version: 44
-- Opera Mobile, version: 59
-- Android Browser, version: 81
-- Samsung Browser, version: 7
-- Chrome for Android, version: 87
-- Firefox for Android, version: 83
-
-### Supported file formats for Azure AI Video Indexer
-
-See the [input container/file formats](/azure/media-services/latest/encode-media-encoder-standard-formats-reference) article for a list of file formats that you can use with Azure AI Video Indexer.
-
-### Upload
-
-1. Sign in on the [Azure AI Video Indexer](https://www.videoindexer.ai/) website.
-1. To upload a video, press the **Upload** button or link.
-
- > [!NOTE]
- > The name of the video must be no greater than 80 characters.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/video-indexer-get-started/video-indexer-upload.png" alt-text="Upload":::
-1. Once your video has been uploaded, Azure AI Video Indexer starts indexing and analyzing the video. As a result, a JSON output with insights is produced.
-
- You see the progress.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/video-indexer-get-started/progress.png" alt-text="Progress of the upload":::
-
- The produced JSON output contains `Insights` and `SummarizedInsights` elements. We highly recommend using `Insights` and not using `SummarizedInsights` (which is present for backward compatibility).
-
-1. Once Azure AI Video Indexer is done analyzing, you'll get an email with a link to your video and a short description of what was found in your video. For example: people, spoken and written words, topics, and named entities.
-1. You can later find your video in the library list and perform different operations. For example: search, reindex, edit.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/video-indexer-get-started/uploaded.png" alt-text="Uploaded video":::
-
-After you upload and index a video, you can continue using [Azure AI Video Indexer website](video-indexer-view-edit.md) or [Azure AI Video Indexer API developer portal](video-indexer-use-apis.md) to see the insights of the video (see [Examine the Azure AI Video Indexer output](video-indexer-output-json-v2.md)).
-
-## Start using insights
-
-For more details, see [Upload and index videos](upload-index-videos.md) and check out other **How to guides**.
-
-## Next steps
-
-* To embed widgets, seeΓÇ»[Embed visual widgets in your application](video-indexer-embed-widgets.md).
-* For the API integration, seeΓÇ»[Use Azure AI Video Indexer REST API](video-indexer-use-apis.md).
-* Check out our [introduction lab](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/IntroToVideoIndexer.md).
-
- At the end of the workshop, you'll have a good understanding of the kind of information that can be extracted from video and audio content, and you'll be better prepared to identify opportunities related to content intelligence, pitch video AI on Azure, and demo several scenarios on Azure AI Video Indexer.
azure-video-indexer Video Indexer Output Json V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-output-json-v2.md
- Title: Examine the Azure AI Video Indexer output
-description: This article examines the Azure AI Video Indexer output produced by the Get Video Index API.
- Previously updated : 08/02/2023
-# Examine the Azure AI Video Indexer output
--
-When a video is indexed, Azure AI Video Indexer produces the JSON content that contains details of the specified video insights. The insights include transcripts, optical character recognition elements (OCRs), faces, topics, and similar details. Each insight type includes instances of time ranges that show when the insight appears in the video.
-
-For information, see [Azure AI Video Indexer insights](insights-overview.md).
-
-## Root elements of the insights
-
-| Name | Description |
-|--|--|
-| `accountId` | The playlist's VI account ID. |
-| `id` | The playlist's ID. |
-| `name` | The playlist's name. |
-| `description` | The playlist's description. |
-| `userName` | The name of the user who created the playlist. |
-| `created` | The playlist's creation time. |
-| `privacyMode` | The playlist's privacy mode (`Private` or `Public`). |
-| `state` | The playlist's state (`Uploaded`, `Processing`, `Processed`, `Failed`, or `Quarantined`). |
-| `isOwned` | Indicates whether the current user created the playlist. |
-| `isEditable` | Indicates whether the current user is authorized to edit the playlist. |
-| `isBase` | Indicates whether the playlist is a base playlist (a video) or a playlist made of other videos (derived). |
-| `durationInSeconds` | The total duration of the playlist. |
-| `summarizedInsights` | The produced JSON output contains `Insights` and `SummarizedInsights` elements. We recommend using `Insights` and not using `SummarizedInsights` (which is present for backward compatibility). |
-| `videos` | A list of [videos](#videos) that construct the playlist.<br/>If this playlist is constructed of time ranges of other videos (derived), the videos in this list contain only data from the included time ranges. |
-
-```json
-{
- ...
- "accountId": "00000000-0000-0000-0000-000000000000",
- "id": "abc3454321",
- "name": "My first video",
- "description": "I am trying VI",
- "userName": "Some name",
- "created": "2018/2/2 18:00:00.000",
- "privacyMode": "Private",
- "state": "Processed",
- "isOwned": true,
- "isEditable": false,
- "isBase": false,
- "durationInSeconds": 120,
- "summarizedInsights" : null,
- "videos": [{ . . . }]
-}
-```
-
-> [!TIP]
-> The produced JSON output contains `Insights` and `SummarizedInsights` elements. We highly recommend using `Insights` and not using `SummarizedInsights` (which is present for backward compatibility).
--
-## Summary of the insights
-
-This section shows a summary of the insights.
--
-| Attribute | Description |
-|--|--|
-| `name` | The name of the video. For example: `Azure Monitor`. |
-| `id` | The ID of the video. For example: `63c6d532ff`. |
-| `privacyMode` | Your breakdown can have one of the following modes: A `Public` video is visible to everyone in your account and anyone who has a link to the video. A `Private` video is visible to everyone in your account. |
-| `duration` | The time when an insight occurred, in seconds. |
-| `thumbnailVideoId` | The ID of the video from which the thumbnail was taken. |
-| `thumbnailId` | The video's thumbnail ID. To get the actual thumbnail, call [Get-Thumbnail](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Thumbnail) and pass it `thumbnailVideoId` and `thumbnailId`. |
-| `faces` | Contains zero or more faces. For more information, see [faces](#faces). |
-| `keywords` | Contains zero or more keywords. For more information, see [keywords](#keywords). |
-| `sentiments` | Contains zero or more sentiments. For more information, see [sentiments](#sentiments). |
-| `audioEffects` | Contains zero or more audio effects. For more information, see [audioEffects](#audioeffects-preview). |
-| `labels` | Contains zero or more labels. For more information, see [labels](#labels). |
-| `brands` | Contains zero or more brands. For more information, see [brands](#brands). |
-| `statistics` | For more information, see [statistics](#statistics). |
-| `emotions` | Contains zero or more emotions. For more information, see [emotions](#emotions). |
-| `topics` | Contains zero or more topics. For more information, see [topics](#topics). |
-
-## videos
-
-| Name | Description |
-|--|--|
-| `accountId` | The video's VI account ID. |
-| `id` | The video's ID. |
-| `name` | The video's name. |
-| `state` | The video's state (`Uploaded`, `Processing`, `Processed`, `Failed`, or `Quarantined`). |
-| `processingProgress` | The progress during processing. For example: `20%`. |
-| `failureCode` | The failure code if the video failed to process. For example: `UnsupportedFileType`. |
-| `failureMessage` | The failure message if the video failed to process. |
-| `externalId` | The video's external ID (if the user specifies one). |
-| `externalUrl` | The video's external URL (if the user specifies one). |
-| `metadata` | The video's external metadata (if the user specifies one). |
-| `isAdult` | Indicates whether the video was manually reviewed and identified as an adult video. |
-| `insights` | The insights object. For more information, see [insights](#insights). |
-| `thumbnailId` | The video's thumbnail ID. To get the actual thumbnail, call [Get-Thumbnail](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Thumbnail) and pass it the video ID and thumbnail ID. |
-| `publishedUrl` | A URL to stream the video. |
-| `publishedUrlProxy` | A URL to stream the video on Apple devices. |
-| `viewToken` | A short-lived view token for streaming the video. |
-| `sourceLanguage` | The video's source language. |
-| `language` | The video's actual language (translation). |
-| `indexingPreset` | The preset used to index the video. |
-| `streamingPreset` | The preset used to publish the video. |
-| `linguisticModelId` | The transcript customization (CRIS) model used to transcribe the video. |
-| `statistics` | For more information, see [statistics](#statistics). |
-
-```json
-{
- "videos": [{
- "accountId": "2cbbed36-1972-4506-9bc7-55367912df2d",
- "id": "142a356aa6",
- "state": "Processed",
- "privacyMode": "Private",
- "processingProgress": "100%",
- "failureCode": "General",
- "failureMessage": "",
- "externalId": null,
- "externalUrl": null,
- "metadata": null,
- "insights": {. . . },
- "thumbnailId": "89d7192c-1dab-4377-9872-473eac723845",
- "publishedUrl": "https://videvmediaservices.streaming.mediaservices.windows.net:443/d88a652d-334b-4a66-a294-3826402100cd/Xamarine.ism/manifest",
- "publishedProxyUrl": null,
- "viewToken": "Bearer=<token>",
- "sourceLanguage": "En-US",
- "language": "En-US",
- "indexingPreset": "Default",
- "linguisticModelId": "00000000-0000-0000-0000-000000000000"
- }],
-}
-```
-### insights
-
-Each insight (for example, transcript lines, faces, or brands) contains a list of unique elements (for example, `face1`, `face2`, `face3`). Each element has its own metadata and a list of its instances, which are time ranges with additional metadata.
-
-A face might have an ID, a name, a thumbnail, other metadata, and a list of its temporal instances (for example, `00:00:05 - 00:00:10`, `00:01:00 - 00:02:30`, and `00:41:21 - 00:41:49`). Each temporal instance can have additional metadata. For example, the metadata can include the face's rectangle coordinates (`20,230,60,60`).
-
-| Name | Description |
-|--|--|
-| `version` | The code version. |
-| `sourceLanguage` | The video's source language (assuming one master language), in the form of a [BCP-47](https://tools.ietf.org/html/bcp47) string. |
-| `language` | The insights language (translated from the source language), in the form of a [BCP-47](https://tools.ietf.org/html/bcp47) string. |
-| `transcript` | The [transcript](#transcript) insight. |
-| `ocr` | The [OCR](#ocr) insight. |
-| `keywords` | The [keywords](#keywords) insight. |
-| `transcripts` | Might contain one or more [transcript](#transcript). |
-| `faces` | The [faces](#faces) insight. |
-| `labels` | The [labels](#labels) insight. |
-| `shots` | The [shots](#shots) insight. |
-| `brands` | The [brands](#brands) insight. |
-| `audioEffects` | The [audioEffects](#audioeffects-preview) insight. |
-| `sentiments` | The [sentiments](#sentiments) insight. |
-| `visualContentModeration` | The [visualContentModeration](#visualcontentmoderation) insight. |
-| `textualContentModeration` | The [textualContentModeration](#textualcontentmoderation) insight. |
-| `emotions` | The [emotions](#emotions) insight. |
-| `topics` | The [topics](#topics) insight. |
-| `speakers` | The [speakers](#speakers) insight. |
-
-Example:
-
-```json
-{
- "version": "0.9.0.0",
- "sourceLanguage": "en-US",
- "language": "es-ES",
- "transcript": ...,
- "ocr": ...,
- "keywords": ...,
- "faces": ...,
- "labels": ...,
- "shots": ...,
- "brands": ...,
- "audioEffects": ...,
- "sentiments": ...,
- "visualContentModeration": ...,
- "textualContentModeration": ...
-}
-```
-
-#### transcript
-
-| Name | Description |
-|--|--|
-| `id` | The line ID. |
-| `text` | The transcript itself. |
-| `confidence` | The confidence level for transcript accuracy. |
-| `speakerId` | The ID of the speaker. |
-| `language` | The transcript language. It's intended to support transcripts where each line can have a different language. |
-| `instances` | A list of time ranges where this line appeared. If the instance is in a transcript, it has only one instance. |
-
-Example:
-
-```json
-"transcript":[
-{
- "id":1,
- "text":"Well, good morning everyone and welcome to",
- "confidence":0.8839,
- "speakerId":1,
- "language":"en-US",
- "instances":[
- {
- "adjustedStart":"0:00:10.21",
- "adjustedEnd":"0:00:12.81",
- "start":"0:00:10.21",
- "end":"0:00:12.81"
- }
- ]
-},
-{
- "id":2,
- "text":"ignite 2016. Your mission at Microsoft is to empower every",
- "confidence":0.8944,
- "speakerId":2,
- "language":"en-US",
- "instances":[
- {
- "adjustedStart":"0:00:12.81",
- "adjustedEnd":"0:00:17.03",
- "start":"0:00:12.81",
- "end":"0:00:17.03"
- }
- ]
-}
-```
-
-#### ocr
-
-| Name | Description |
-|--|--|
-| `id` | The OCR's line ID. |
-| `text` | The OCR's text. |
-| `confidence` | The recognition confidence. |
-| `language` | The OCR's language. |
-| `instances` | A list of time ranges where this OCR appeared. (The same OCR can appear multiple times.) |
-| `height` | The height of the OCR rectangle. |
-| `top` | The top location, in pixels. |
-| `left` | The left location, in pixels. |
-| `width` | The width of the OCR rectangle. |
-| `angle` | The angle of the OCR rectangle, from `-180` to `180`. A value of `0` means left-to-right horizontal. A value of `90` means top-to-bottom vertical. A value of `180` means right-to-left horizontal. A value of `-90` means bottom-to-top vertical. A value of `30` means from top left to bottom right. |
-
-```json
-"ocr": [
- {
- "id": 0,
- "text": "LIVE FROM NEW YORK",
- "confidence": 675.971,
- "height": 35,
- "language": "en-US",
- "left": 31,
- "top": 97,
- "width": 400,
- "angle": 30,
- "instances": [
- {
- "start": "00:00:26",
- "end": "00:00:52"
- }
- ]
- }
- ],
-```
-
-#### keywords
-
-| Name | Description |
-|--|--|
-| `id` | The keyword's ID. |
-| `text` | The keyword's text. |
-| `confidence` | Recognition confidence in the keyword. |
-| `language` | The keyword language (when translated). |
-| `instances` | A list of time ranges where this keyword appeared. (A keyword can appear multiple times.) |
-
-```json
-{
-    "id": 0,
-    "text": "technology",
-    "confidence": 1,
-    "language": "en-US",
-    "instances": [{
-        "adjustedStart": "0:05:15.782",
-        "adjustedEnd": "0:05:16.249",
-        "start": "0:05:15.782",
-        "end": "0:05:16.249"
-    },
-    {
-        "adjustedStart": "0:04:54.761",
-        "adjustedEnd": "0:04:55.228",
-        "start": "0:04:54.761",
-        "end": "0:04:55.228"
-    }]
-}
-```
-
-#### faces
-
-If faces are present, Azure AI Video Indexer uses the Face API on all the video's frames to detect faces and celebrities.
-
-| Name | Description |
-|--|--|
-| `id` | The face's ID. |
-| `name` | The name of the face. It can be `Unknown #0`, an identified celebrity, or a customer-trained person. |
-| `confidence` | The level of confidence in face identification. |
-| `description` | A description of the celebrity. |
-| `thumbnailId` | The ID of the thumbnail of the face. |
-| `knownPersonId` | If it's a known person, the internal ID. |
-| `referenceId` | If it's a Bing celebrity, the Bing ID. |
-| `referenceType` | Currently, just Bing. |
-| `title` | If it's a celebrity, the person's title. For example: `Microsoft's CEO`. |
-| `imageUrl` | If it's a celebrity, the image URL. |
-| `instances` | Instances of where the face appeared in the time range. Each instance also has a `thumbnailsIds` value. |
-
-```json
-"faces": [{
- "id": 2002,
- "name": "Xam 007",
- "confidence": 0.93844,
- "description": null,
- "thumbnailId": "00000000-aee4-4be2-a4d5-d01817c07955",
- "knownPersonId": "8340004b-5cf5-4611-9cc4-3b13cca10634",
- "referenceId": null,
- "title": null,
- "imageUrl": null,
- "instances": [{
- "thumbnailsIds": ["00000000-9f68-4bb2-ab27-3b4d9f2d998e",
- "cef03f24-b0c7-4145-94d4-a84f81bb588c"],
- "adjustedStart": "00:00:07.2400000",
- "adjustedEnd": "00:00:45.6780000",
- "start": "00:00:07.2400000",
- "end": "00:00:45.6780000"
- },
- {
- "thumbnailsIds": ["00000000-51e5-4260-91a5-890fa05c68b0"],
- "adjustedStart": "00:10:23.9570000",
- "adjustedEnd": "00:10:39.2390000",
- "start": "00:10:23.9570000",
- "end": "00:10:39.2390000"
- }]
-}]
-```
-
-#### labels
-
-| Name | Description |
-|--|--|
-| `id` | The label's ID. |
-| `name` | The label's name. For example: `Computer` or `TV`. |
-| `language` | The language of the label's name (when translated), in the form of a [BCP-47](https://tools.ietf.org/html/bcp47) string. |
-| `instances` | A list of time ranges where this label appeared. (A label can appear multiple times.) Each instance has a confidence field. |
--
-```json
-"labels": [
- {
- "id": 0,
- "name": "person",
- "language": "en-US",
- "instances": [
- {
- "confidence": 1.0,
- "start": "00: 00: 00.0000000",
- "end": "00: 00: 25.6000000"
- },
- {
- "confidence": 1.0,
- "start": "00: 01: 33.8670000",
- "end": "00: 01: 39.2000000"
- }
- ]
- },
- {
- "name": "indoor",
- "language": "en-US",
- "id": 1,
- "instances": [
- {
- "confidence": 1.0,
- "start": "00: 00: 06.4000000",
- "end": "00: 00: 07.4670000"
- },
- {
- "confidence": 1.0,
- "start": "00: 00: 09.6000000",
- "end": "00: 00: 10.6670000"
- },
- {
- "confidence": 1.0,
- "start": "00: 00: 11.7330000",
- "end": "00: 00: 20.2670000"
- },
- {
- "confidence": 1.0,
- "start": "00: 00: 21.3330000",
- "end": "00: 00: 25.6000000"
- }
- ]
- }
- ]
-```
-
-#### scenes
-
-| Name | Description |
-|--|--|
-| `id` | The scene's ID. |
-| `instances` | A list of time ranges for this scene. (A scene can have only one instance.) |
-
-```json
-"scenes":[
- {
- "id":0,
- "instances":[
- {
- "start":"0:00:00",
- "end":"0:00:06.34",
- "duration":"0:00:06.34"
- }
- ]
- },
- {
- "id":1,
- "instances":[
- {
- "start":"0:00:06.34",
- "end":"0:00:47.047",
- "duration":"0:00:40.707"
- }
- ]
- },
-
-]
-```
-
-#### shots
-
-| Name | Description |
-|--|--|
-| `id` | The shot's ID. |
-| `keyFrames` | A list of keyframes within the shot. Each has an ID and a list of instance time ranges. Each keyframe instance has a `thumbnailId` field, which holds the keyframe's thumbnail ID. |
-| `instances` | A list of time ranges for this shot. (A shot can have only one instance.) |
-
-```json
-"shots":[
- {
- "id":0,
- "keyFrames":[
- {
- "id":0,
- "instances":[
- {
- "thumbnailId":"00000000-0000-0000-0000-000000000000",
- "start":"0:00:00.209",
- "end":"0:00:00.251",
- "duration":"0:00:00.042"
- }
- ]
- },
- {
- "id":1,
- "instances":[
- {
- "thumbnailId":"00000000-0000-0000-0000-000000000000",
- "start":"0:00:04.755",
- "end":"0:00:04.797",
- "duration":"0:00:00.042"
- }
- ]
- }
- ],
- "instances":[
- {
- "start":"0:00:00",
- "end":"0:00:06.34",
- "duration":"0:00:06.34"
- }
- ]
- },
-
-]
-```
-
-#### brands
-
-Azure AI Video Indexer detects business and product brand names in the speech-to-text transcript and/or video OCR. This information doesn't include visual recognition of brands or logo detection.
-
-| Name | Description |
-|--|--|
-| `id` | The brand's ID. |
-| `name` | The brand's name. |
-| `referenceId` | The suffix of the brand's Wikipedia URL. For example, `Target_Corporation` is the suffix of [https://en.wikipedia.org/wiki/Target_Corporation](https://en.wikipedia.org/wiki/Target_Corporation). |
-| `referenceUrl` | The brand's Wikipedia URL, if one exists. For example: [https://en.wikipedia.org/wiki/Target_Corporation](https://en.wikipedia.org/wiki/Target_Corporation). |
-| `description` | The brand's description. |
-| `tags` | A list of predefined tags that were associated with this brand. |
-| `confidence` | The confidence value of the Azure AI Video Indexer brand detector (`0`-`1`). |
-| `instances` | A list of time ranges for this brand. Each instance has a `brandType` value, which indicates whether this brand appeared in the transcript or in an OCR. |
-
-```json
-"brands": [
-{
- "id": 0,
- "name": "MicrosoftExcel",
- "referenceId": "Microsoft_Excel",
- "referenceUrl": "http: //en.wikipedia.org/wiki/Microsoft_Excel",
- "referenceType": "Wiki",
- "description": "Microsoft Excel is a sprea..",
- "tags": [],
- "confidence": 0.975,
- "instances": [
- {
- "brandType": "Transcript",
- "start": "00: 00: 31.3000000",
- "end": "00: 00: 39.0600000"
- }
- ]
-},
-{
- "id": 1,
- "name": "Microsoft",
- "referenceId": "Microsoft",
- "referenceUrl": "http: //en.wikipedia.org/wiki/Microsoft",
- "description": "Microsoft Corporation is...",
- "tags": [
- "competitors",
- "technology"
- ],
- "confidence": 1.0,
- "instances": [
- {
- "brandType": "Transcript",
- "start": "00: 01: 44",
- "end": "00: 01: 45.3670000"
- },
- {
- "brandType": "Ocr",
- "start": "00: 01: 54",
- "end": "00: 02: 45.3670000"
- }
- ]
-}
-]
-```
-
-#### statistics
-
-| Name | Description |
-|--|--|
-| `CorrespondenceCount` | The number of correspondences in the video. |
-| `SpeakerWordCount` | The number of words per speaker. |
-| `SpeakerNumberOfFragments` | The number of fragments that the speaker has in a video. |
-| `SpeakerLongestMonolog` | The speaker's longest monolog. If the speaker has silence inside the monolog, it's included. Silence at the beginning and the end of the monolog is removed. |
-| `SpeakerTalkToListenRatio` | The calculation is based on the time spent on the speaker's monolog (without the silence in between) divided by the total time of the video. The time is rounded to the third decimal point. |
-
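-As a purely illustrative sketch (the camelCase field names and the per-speaker keying by speaker ID are assumptions based on the descriptions above; verify against your own index output), the `statistics` element might look like this:
-
-```json
-"statistics": {
-    "correspondenceCount": 4,
-    "speakerTalkToListenRatio": { "1": 0.756, "2": 0.244 },
-    "speakerLongestMonolog": { "1": 26, "2": 4 },
-    "speakerNumberOfFragments": { "1": 5, "2": 2 },
-    "speakerWordCount": { "1": 80, "2": 12 }
-}
-```
-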
-#### audioEffects (preview)
-
-| Name | Description |
-|--|--|
-| `id` | The audio effect's ID. |
-| `type` | The audio effect's type. |
-| `name` | The audio effect's type in the language in which the JSON was indexed. |
-| `instances` | A list of time ranges where this audio effect appeared. Each instance has a confidence field. |
-| `start` + `end` | The time range in the original video. |
-| `adjustedStart` + `adjustedEnd` | [Time range versus adjusted time range](concepts-overview.md#time-range-vs-adjusted-time-range). |
-
-```json
-"audioEffects": [
-    {
-        "id": 0,
-        "type": "Laughter",
-        "name": "Laughter",
-        "instances": [{
-            "confidence": 0.8815,
-            "adjustedStart": "0:00:10.2",
-            "adjustedEnd": "0:00:11.2",
-            "start": "0:00:10.2",
-            "end": "0:00:11.2"
-        }, {
-            "confidence": 0.8554,
-            "adjustedStart": "0:00:48.26",
-            "adjustedEnd": "0:00:49.56",
-            "start": "0:00:48.26",
-            "end": "0:00:49.56"
-        }, {
-            "confidence": 0.8492,
-            "adjustedStart": "0:00:59.66",
-            "adjustedEnd": "0:01:00.66",
-            "start": "0:00:59.66",
-            "end": "0:01:00.66"
-        }]
-    }
-],
-```
-
-#### sentiments
-
-Sentiments get aggregated by their `sentimentType` field (`Positive`, `Neutral`, or `Negative`). For example: `0-0.1`, `0.1-0.2`.
-
-| Name | Description |
-|--|--|
-| `id` | The sentiment's ID. |
-| `averageScore` | The average of all scores of all instances of that sentiment type. |
-| `instances` | A list of time ranges where this sentiment appeared. |
-| `sentimentType` | The type can be `Positive`, `Neutral`, or `Negative`. |
-
-```json
-"sentiments": [
-{
- "id": 0,
- "averageScore": 0.87,
- "sentimentType": "Positive",
- "instances": [
- {
- "start": "00:00:23",
- "end": "00:00:41"
- }
- ]
-}, {
- "id": 1,
- "averageScore": 0.11,
- "sentimentType": "Positive",
- "instances": [
- {
- "start": "00:00:13",
- "end": "00:00:21"
- }
- ]
-}
-]
-```
-
-#### visualContentModeration
-
-The `visualContentModeration` transcript contains time ranges that Azure AI Video Indexer found to potentially have adult content. If `visualContentModeration` is empty, no adult content was identified.
-
-Videos that contain adult or racy content might be available for private view only. Users can submit a request for a human review of the content. In that case, the `IsAdult` attribute contains the result of the human review.
-
-| Name | Description |
-|--|--|
-| `id` | The ID of the visual content moderation. |
-| `adultScore` | The adult score (from content moderation). |
-| `racyScore` | The racy score (from content moderation). |
-| `instances` | A list of time ranges where this visual content moderation appeared. |
-
-##### Learn more about visualContentModeration
-
-- [Azure AI services documentation](/azure/ai-services/computer-vision/concept-detecting-adult-content)
-- [Transparency note](/legal/cognitive-services/computer-vision/imageanalysis-transparency-note?context=%2Fazure%2Fai-services%2Fcomputer-vision%2Fcontext%2Fcontext#features)
-- [Use cases](/legal/cognitive-services/computer-vision/imageanalysis-transparency-note?context=%2Fazure%2Fai-services%2Fcomputer-vision%2Fcontext%2Fcontext#use-cases)
-- [Capabilities and limitations](/legal/cognitive-services/computer-vision/imageanalysis-transparency-note?context=%2Fazure%2Fai-services%2Fcomputer-vision%2Fcontext%2Fcontext#system-performance-and-limitations-for-image-analysis)
-- [Guidance for integration and responsible use](/legal/cognitive-services/computer-vision/imageanalysis-transparency-note?context=%2Fazure%2Fai-services%2Fcomputer-vision%2Fcontext%2Fcontext#general-guidelines-for-integration-and-responsible-use)
-- [Data, privacy, and security](/legal/cognitive-services/computer-vision/imageanalysis-transparency-note?context=%2Fazure%2Fai-services%2Fcomputer-vision%2Fcontext%2Fcontext#recommendations-for-preserving-privacy)
-
-```json
-"visualContentModeration": [
-{
- "id": 0,
- "adultScore": 0.00069,
- "racyScore": 0.91129,
- "instances": [
- {
- "start": "00:00:25.4840000",
- "end": "00:00:25.5260000"
- }
- ]
-},
-{
- "id": 1,
- "adultScore": 0.99231,
- "racyScore": 0.99912,
- "instances": [
- {
- "start": "00:00:35.5360000",
- "end": "00:00:35.5780000"
- }
- ]
-}
-]
-```
-
-#### textualContentModeration
-
-| Name | Description |
-|--|--|
-| `id` | The ID of the textual content moderation. |
-| `bannedWordsCount` | The number of banned words. |
-| `bannedWordsRatio` | The ratio of banned words to the total number of words. |
-
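-As a minimal illustrative sketch based only on the fields in the table above (the values are made up), the element might look like this:
-
-```json
-"textualContentModeration": {
-    "id": 0,
-    "bannedWordsCount": 1,
-    "bannedWordsRatio": 0.003
-}
-```
-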
-##### Learn more about textualContentModeration
-
-- [Azure AI services documentation](/azure/ai-services/content-moderator/text-moderation-api)
-- [Supported languages](/azure/ai-services/content-moderator/language-support)
-- [Capabilities and limitations](/azure/ai-services/content-moderator/text-moderation-api)
-- [Data, privacy and security](/azure/ai-services/content-moderator/overview#data-privacy-and-security)
-
-#### emotions
-
-Azure AI Video Indexer identifies emotions based on speech and audio cues.
-
-| Name | Description |
-|--|--|
-| `id` | The emotion's ID. |
-| `type` | The type of an identified emotion: `Joy`, `Sadness`, `Anger`, or `Fear`. |
-| `instances` | A list of time ranges where this emotion appeared. |
-
-```json
-"emotions": [{
- "id": 0,
- "type": "Fear",
- "instances": [{
- "adjustedStart": "0:00:39.47",
- "adjustedEnd": "0:00:45.56",
- "start": "0:00:39.47",
- "end": "0:00:45.56"
- },
- {
- "adjustedStart": "0:07:19.57",
- "adjustedEnd": "0:07:23.25",
- "start": "0:07:19.57",
- "end": "0:07:23.25"
- }]
- },
- {
- "id": 1,
- "type": "Anger",
- "instances": [{
- "adjustedStart": "0:03:55.99",
- "adjustedEnd": "0:04:05.06",
- "start": "0:03:55.99",
- "end": "0:04:05.06"
- },
- {
- "adjustedStart": "0:04:56.5",
- "adjustedEnd": "0:05:04.35",
- "start": "0:04:56.5",
- "end": "0:05:04.35"
- }]
- },
- {
- "id": 2,
- "type": "Joy",
- "instances": [{
- "adjustedStart": "0:12:23.68",
- "adjustedEnd": "0:12:34.76",
- "start": "0:12:23.68",
- "end": "0:12:34.76"
- },
- {
- "adjustedStart": "0:12:46.73",
- "adjustedEnd": "0:12:52.8",
- "start": "0:12:46.73",
- "end": "0:12:52.8"
- },
- {
- "adjustedStart": "0:30:11.29",
- "adjustedEnd": "0:30:16.43",
- "start": "0:30:11.29",
- "end": "0:30:16.43"
- },
- {
- "adjustedStart": "0:41:37.23",
- "adjustedEnd": "0:41:39.85",
- "start": "0:41:37.23",
- "end": "0:41:39.85"
- }]
- },
- {
- "id": 3,
- "type": "Sad",
- "instances": [{
- "adjustedStart": "0:13:38.67",
- "adjustedEnd": "0:13:41.3",
- "start": "0:13:38.67",
- "end": "0:13:41.3"
- },
- {
- "adjustedStart": "0:28:08.88",
- "adjustedEnd": "0:28:18.16",
- "start": "0:28:08.88",
- "end": "0:28:18.16"
- }]
- }
-],
-```
-
-#### topics
-
-Azure AI Video Indexer makes an inference of main topics from transcripts. When possible, the second-level [IPTC](https://iptc.org/standards/media-topics/) taxonomy is included.
-
-| Name | Description |
-|--|--|
-| `id` | The topic's ID. |
-| `name` | The topic's name. For example: `Pharmaceuticals`. |
-| `referenceId` | Breadcrumbs that reflect the topic's hierarchy. For example: `HEALTH AND WELLBEING/MEDICINE AND HEALTHCARE/PHARMACEUTICALS`. |
-| `confidence` | The confidence score in the range `0`-`1`. Higher is more confident. |
-| `language` | The language used in the topic. |
-| `iptcName` | The IPTC media code name, if detected. |
-| `instances` | Currently, Azure AI Video Indexer doesn't index a topic to time intervals. The whole video is used as the interval. |
-
-```json
-"topics": [{
- "id": 0,
- "name": "INTERNATIONAL RELATIONS",
- "referenceId": "POLITICS AND GOVERNMENT/FOREIGN POLICY/INTERNATIONAL RELATIONS",
- "referenceType": "VideoIndexer",
- "confidence": 1,
- "language": "en-US",
- "instances": [{
- "adjustedStart": "0:00:00",
- "adjustedEnd": "0:03:36.25",
- "start": "0:00:00",
- "end": "0:03:36.25"
- }]
-}, {
- "id": 1,
- "name": "Politics and Government",
- "referenceType": "VideoIndexer",
- "iptcName": "Politics",
- "confidence": 0.9041,
- "language": "en-US",
- "instances": [{
- "adjustedStart": "0:00:00",
- "adjustedEnd": "0:03:36.25",
- "start": "0:00:00",
- "end": "0:03:36.25"
- }]
-}]
-. . .
-```
-
-#### speakers
-
-| Name | Description |
-|--|--|
-| `id` | The speaker's ID. |
-| `name` | The speaker's name in the form of `Speaker #<number>`. For example: `Speaker #1`. |
-| `instances` | A list of time ranges where this speaker appeared. |
-
-```json
-"speakers":[
-{
- "id":1,
- "name":"Speaker #1",
- "instances":[
- {
- "adjustedStart":"0:00:10.21",
- "adjustedEnd":"0:00:12.81",
- "start":"0:00:10.21",
- "end":"0:00:12.81"
- }
- ]
-},
-{
- "id":2,
- "name":"Speaker #2",
- "instances":[
- {
- "adjustedStart":"0:00:12.81",
- "adjustedEnd":"0:00:17.03",
- "start":"0:00:12.81",
- "end":"0:00:17.03"
- }
- ]
-},
-
-```
-
-## Next steps
-
-Explore the [Azure AI Video Indexer API developer portal](https://api-portal.videoindexer.ai).
-
-For information about how to embed widgets in your application, see [Embed Azure AI Video Indexer widgets into your applications](video-indexer-embed-widgets.md).
azure-video-indexer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-overview.md
- Title: What is Azure AI Video Indexer?
-description: This article gives an overview of the Azure AI Video Indexer service.
- Previously updated : 08/02/2023
-# Azure AI Video Indexer overview
--
-Azure AI Video Indexer is a cloud application, part of Azure AI services, built on Azure Media Services and Azure AI services (such as Face, Translator, Azure AI Vision, and Speech). It enables you to extract insights from your videos by using Azure AI Video Indexer video and audio models.
-
-Azure AI Video Indexer analyzes the video and audio content by running 30+ AI models, generating rich insights. Here is an illustration of the audio and video analysis performed by Azure AI Video Indexer in the background:
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/video-indexer-overview/model-chart.png" alt-text="Diagram of Azure AI Video Indexer flow." lightbox="./media/video-indexer-overview/model-chart.png":::
-
-To start extracting insights with Azure AI Video Indexer, see the [how can I get started](#how-can-i-get-started-with-azure-ai-video-indexer) section.
-
-## What can I do with Azure AI Video Indexer?
-
-Azure AI Video Indexer's insights can be applied to many scenarios, including:
-
-* Deep search: Use the insights extracted from the video to enhance the search experience across a video library. For example, indexing spoken words and faces can enable the search experience of finding moments in a video where a person spoke certain words or when two people were seen together. Search based on such insights from videos is applicable to news agencies, educational institutes, broadcasters, entertainment content owners, enterprise LOB apps, and in general to any industry that has a video library that users need to search against.
-* Content creation: Create trailers, highlight reels, social media content, or news clips based on the insights Azure AI Video Indexer extracts from your content. Keyframes, scene markers, and timestamps of people and label appearances make the creation process smoother and easier, enabling you to easily get to the parts of the video you need when creating content.
-* Accessibility: Whether you want to make your content available for people with disabilities or if you want your content to be distributed to different regions using different languages, you can use the transcription and translation provided by Azure AI Video Indexer in multiple languages.
-* Monetization: Azure AI Video Indexer can help increase the value of videos. For example, industries that rely on ad revenue (news media, social media, and so on) can deliver relevant ads by using the extracted insights as additional signals to the ad server.
-* Content moderation: Use textual and visual content moderation models to keep your users safe from inappropriate content and validate that the content you publish matches your organization's values. You can automatically block certain videos or alert your users about the content.
-* Recommendations: Video insights can be used to improve user engagement by highlighting the relevant video moments to users. By tagging each video with additional metadata, you can recommend to users the most relevant videos and highlight the parts of the video that matches their needs.
-
-## Video/audio AI features
-
-The following list shows the insights you can retrieve from your video/audio files using Azure AI Video Indexer video and audio AI features (models).
-
-Unless specified otherwise, a model is generally available.
-
-### Video models
-
-* **Face detection**: Detects and groups faces appearing in the video.
-* **Celebrity identification**: Identifies over 1 million celebrities, such as world leaders, actors, artists, athletes, researchers, and business and tech leaders across the globe. The data about these celebrities can also be found on various websites (IMDB, Wikipedia, and so on).
-* **Account-based face identification**: Trains a model for a specific account. It then recognizes faces in the video based on the trained model. For more information, see [Customize a Person model from the Azure AI Video Indexer website](customize-person-model-with-website.md) and [Customize a Person model with the Azure AI Video Indexer API](customize-person-model-with-api.md).
-* **Thumbnail extraction for faces**: Identifies the best captured face in each group of faces (based on quality, size, and frontal position) and extracts it as an image asset.
-* **Optical character recognition (OCR)**: Extracts text from images like pictures, street signs and products in media files to create insights.
-* **Visual content moderation**: Detects adult and/or racy visuals.
-* **Labels identification**: Identifies visual objects and actions displayed.
-* **Scene segmentation**: Determines when a scene changes in the video based on visual cues. A scene depicts a single event and is composed of a series of consecutive shots that are semantically related.
-* **Shot detection**: Determines when a shot changes in video based on visual cues. A shot is a series of frames taken from the same motion-picture camera. For more information, see [Scenes, shots, and keyframes](scenes-shots-keyframes.md).
-* **Black frame detection**: Identifies black frames presented in the video.
-* **Keyframe extraction**: Detects stable keyframes in a video.
-* **Rolling credits**: Identifies the beginning and end of the rolling credits in the end of TV shows and movies.
-* **Editorial shot type detection**: Tags shots based on their type (like wide shot, medium shot, close up, extreme close up, two shot, multiple people, outdoor and indoor, and so on). For more information, see [Editorial shot type detection](scenes-shots-keyframes.md#editorial-shot-type-detection).
-* **Observed people tracking** (preview): Detects observed people in videos and provides information such as the location of the person in the video frame (using bounding boxes) and the exact timestamp (start, end) and confidence when a person appears. For more information, see [Trace observed people in a video](observed-people-tracing.md).
- * **People's detected clothing** (preview): Detects the clothing types of people appearing in the video and provides information such as long or short sleeves, long or short pants and skirt or dress. The detected clothing is associated with the people wearing it and the exact timestamp (start, end) along with a confidence level for the detection are provided. For more information, see [detected clothing](detected-clothing.md).
- * **Featured clothing** (preview): captures featured clothing images appearing in a video. You can improve your targeted ads by using the featured clothing insight. For information on how the featured clothing images are ranked and how to get the insights, see [featured clothing](observed-people-featured-clothing.md).
-* **Matched person** (preview): Matches people that were observed in the video with the corresponding faces detected. The matching between the observed people and the faces contain a confidence level.
-* **Slate detection** (preview): identifies the following movie post-production insights when indexing a video using the advanced indexing option:
-
- * Clapperboard detection with metadata extraction.
- * Digital patterns detection, including color bars.
- * Textless slate detection, including scene matching.
-
- For details, see [Slate detection](slate-detection-insight.md).
-* **Textual logo detection** (preview): Matches a specific predefined text using Azure AI Video Indexer OCR. For example, if a user created a textual logo: "Microsoft", different appearances of the word *Microsoft* will be detected as the "Microsoft" logo. For more information, see [Detect textual logo](detect-textual-logo.md).
-
-### Audio models
-
-* **Audio transcription**: Converts speech to text in over 50 languages and allows extensions. For more information, see [Azure AI Video Indexer language support](language-support.md).
-* **Automatic language detection**: Identifies the dominant spoken language. For more information, see [Azure AI Video Indexer language support](language-support.md). If the language can't be identified with confidence, Azure AI Video Indexer assumes the spoken language is English. For more information, see [Language identification model](language-identification-model.md).
-* **Multi-language speech identification and transcription**: Identifies the spoken language in different segments from audio. It sends each segment of the media file to be transcribed and then combines the transcription back to one unified transcription. For more information, see [Automatically identify and transcribe multi-language content](multi-language-identification-transcription.md).
-* **Closed captioning**: Creates closed captioning in three formats: VTT, TTML, SRT.
-* **Two channel processing**: Automatically detects the separate channels, transcribes each one, and merges the transcripts into a single timeline.
-* **Noise reduction**: Clears up telephony audio or noisy recordings (based on Skype filters).
-* **Transcript customization** (CRIS): Trains custom speech to text models to create industry-specific transcripts. For more information, see [Customize a Language model from the Azure AI Video Indexer website](customize-language-model-with-website.md) and [Customize a Language model with the Azure AI Video Indexer APIs](customize-language-model-with-api.md).
-* **Speaker enumeration**: Maps and understands which speaker spoke which words and when. Sixteen speakers can be detected in a single audio-file.
-* **Speaker statistics**: Provides statistics for speakers' speech ratios.
-* **Textual content moderation**: Detects explicit text in the audio transcript.
-* **Text-based emotion detection**: Emotions such as joy, sadness, anger, and fear that were detected via transcript analysis.
-* **Translation**: Creates translations of the audio transcript to many different languages. For more information, see [Azure AI Video Indexer language support](language-support.md).
-* **Audio effects detection** (preview): Detects the following audio effects in the non-speech segments of the content: alarm or siren, dog barking, crowd reactions (cheering, clapping, and booing), gunshot or explosion, laughter, breaking glass, and silence.
-
- The detected acoustic events are in the closed captions file. The file can be downloaded from the Azure AI Video Indexer website. For more information, see [Audio effects detection](audio-effects-detection.md).
-
- > [!NOTE]
- > The full set of events is available only when you choose **Advanced Audio Analysis** when uploading a file, in upload preset. By default, only silence is detected.
-
-### Audio and video models (multi-channels)
-
-When indexing by one channel, partial results for those models are available.
-
-* **Keywords extraction**: Extracts keywords from speech and visual text.
-* **Named entities extraction**: Extracts brands, locations, and people from speech and visual text via natural language processing (NLP).
-* **Topic inference**: Extracts topics based on various keywords (for example, the keywords 'Stock Exchange' and 'Wall Street' produce the topic 'Economics'). The model uses three different ontologies ([IPTC](https://iptc.org/standards/media-topics/), [Wikipedia](https://www.wikipedia.org/) and the Video Indexer hierarchical topic ontology). The model uses transcription (spoken words), OCR content (visual text), and celebrities recognized in the video using the Video Indexer facial recognition model.
-* **Artifacts**: Extracts rich set of "next level of details" artifacts for each of the models.
-* **Sentiment analysis**: Identifies positive, negative, and neutral sentiments from speech and visual text.
-
-## How can I get started with Azure AI Video Indexer?
-
-Learn how to [get started with Azure AI Video Indexer](video-indexer-get-started.md).
-
-Once you set up, start using [insights](video-indexer-output-json-v2.md) and check out other **How to guides**.
-
-## Compliance, privacy and security
---
-As an important reminder, you must comply with all applicable laws in your use of Azure AI Video Indexer, and you may not use Azure AI Video Indexer or any Azure service in a manner that violates the rights of others, or that may be harmful to others.
-
-Before uploading any video/image to Azure AI Video Indexer, You must have all the proper rights to use the video/image, including, where required by law, all the necessary consents from individuals (if any) in the video/image, for the use, processing, and storage of their data in Azure AI Video Indexer and Azure. Some jurisdictions may impose special legal requirements for the collection, online processing and storage of certain categories of data, such as biometric data. Before using Azure AI Video Indexer and Azure for the processing and storage of any data subject to special legal requirements, You must ensure compliance with any such legal requirements that may apply to You.
-
-To learn about compliance, privacy and security in Azure AI Video Indexer please visit the Microsoft [Trust Center](https://www.microsoft.com/TrustCenter/CloudServices/Azure/default.aspx). For Microsoft's privacy obligations, data handling and retention practices, including how to delete your data, please review Microsoft's [Privacy Statement](https://privacy.microsoft.com/PrivacyStatement), the [Online Services Terms](https://www.microsoft.com/licensing/product-licensing/products?rtc=1) ("OST") and [Data Processing Addendum](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=67) ("DPA"). By using Azure AI Video Indexer, you agree to be bound by the OST, DPA and the Privacy Statement.
-
-## Next steps
-
-You're ready to get started with Azure AI Video Indexer. For more information, see the following articles:
-
-- [Indexing and configuration guide](indexing-configuration-guide.md)
-- [Pricing](https://azure.microsoft.com/pricing/details/video-indexer/)
-- [Get started with the Azure AI Video Indexer website](video-indexer-get-started.md).
-- [Process content with Azure AI Video Indexer REST API](video-indexer-use-apis.md).
-- [Embed visual widgets in your application](video-indexer-embed-widgets.md).
-
-For the latest updates, see [Azure AI Video Indexer release notes](release-notes.md).
azure-video-indexer Video Indexer Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-search.md
- Title: Search for exact moments in videos with Azure AI Video Indexer
-description: Learn how to search for exact moments in videos using Azure AI Video Indexer.
- Previously updated : 11/23/2019
-# Search for exact moments in videos with Azure AI Video Indexer
--
-This topic shows you how to use the Azure AI Video Indexer website to search for exact moments in videos.
-
-1. Go to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website and sign in.
-1. Specify the search keywords and the search will be performed among all videos in your account's library.
-
-   You can filter your search by selecting **Filters**. In the following example, we search for "Microsoft" appearing as on-screen text only (OCR).
-
- :::image type="content" source="./media/video-indexer-search/filter.png" alt-text="Filter, text only":::
-1. Press **Search** to see the result.
-
- :::image type="content" source="./media/video-indexer-search/results.png" alt-text="Video search result":::
-
- If you select one of the results, the player brings you to that exact moment in the video.
-1. View and search the summarized insights of the video by clicking **Play** on the video or selecting one of your original search results.
-
-   You can view, search, and edit the **insights**. When you select one of the insights, the player brings you to that exact moment in the video.
-
- :::image type="content" source="./media/video-indexer-search/insights.png" alt-text="View, search and edit the insights of the video":::
-
- If you embed the video through Azure AI Video Indexer widgets, you can achieve the player/insights view and synchronization in your app. For more information, see [Embed Azure AI Video Indexer widgets into your app](video-indexer-embed-widgets.md).
-1. You can view, search, and edit the transcripts by clicking on the **Timeline** tab.
-
- :::image type="content" source="./media/video-indexer-search/timeline.png" alt-text="View, search and edit the transcripts of the video":::
-
- To edit the text, select **Edit** from the top-right corner and change the text as you need.
-
- You can also translate and download the transcripts by selecting the appropriate option from the top-right corner.
-
-## Embed, download, create projects
-
-You can embed your video by selecting **</>Embed** under your video. For details, see [Embed visual widgets in your application](video-indexer-embed-widgets.md).
-
-You can download the source video, the video's insights, and transcripts by selecting **Download** under your video.
-
-You can create a clip of specific lines and moments from your video by selecting **Open in editor**, editing the video, and saving the project. For details, see [Use your videos' deep insights](use-editor-create-project.md).
--
-## Next steps
-
-[Process content with Azure AI Video Indexer REST API](video-indexer-use-apis.md)
azure-video-indexer Video Indexer Use Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-use-apis.md
- Title: Use the Azure AI Video Indexer API
-description: This article describes how to get started with Azure AI Video Indexer API.
Previously updated : 07/03/2023------
-# Tutorial: Use the Azure AI Video Indexer API
--
-Azure AI Video Indexer consolidates various audio and video artificial intelligence (AI) technologies offered by Microsoft into one integrated service, making development simpler. The APIs are designed to enable developers to focus on consuming Media AI technologies without worrying about scale, global reach, availability, and reliability of cloud platforms. You can use the API to upload your files, get detailed video insights, get URLs of embeddable insight and player widgets, and more.
--
-This article shows how developers can take advantage of the [Azure AI Video Indexer API](https://api-portal.videoindexer.ai/).
-
-## Prerequisite
-
-Before you start, see the [Recommendations](#recommendations) section later in this article.
-
-## Subscribe to the API
-
-1. Sign in to the [Azure AI Video Indexer API developer portal](https://api-portal.videoindexer.ai/).
-
- > [!Important]
- > * You must use the same provider you used when you signed up for Azure AI Video Indexer.
- > * Personal Google and Microsoft (Outlook/Live) accounts can only be used for trial accounts. Accounts connected to Azure require Microsoft Entra ID.
- > * There can be only one active account per email. If a user tries to sign in with user@gmail.com for LinkedIn and later with user@gmail.com for Google, the latter will display an error page, saying the user already exists.
-
- ![Sign in to the Azure AI Video Indexer API developer portal](./media/video-indexer-use-apis/sign-in.png)
-1. Subscribe.
-
- Select the [Products](https://api-portal.videoindexer.ai/products) tab. Then, select **Authorization** and subscribe.
-
- > [!NOTE]
- > New users are automatically subscribed to Authorization.
-
-   After you subscribe, you can find your subscription under **[Products](https://api-portal.videoindexer.ai/products)** -> **Profile**. In the subscriptions section, you'll find the primary and secondary keys. Protect the keys; they should only be used by your server code and shouldn't be available on the client side (.js, .html, and so on).
-
- ![Subscription and keys in the Azure AI Video Indexer API developer portal](./media/video-indexer-use-apis/subscriptions.png)
-
-An Azure AI Video Indexer user can use a single subscription key to connect to multiple Azure AI Video Indexer accounts. You can then link these Azure AI Video Indexer accounts to different Media Services accounts.
-
-## Obtain access token using the Authorization API
-
-Once you subscribe to the Authorization API, you can obtain access tokens. These access tokens are used to authenticate against the Operations API.
-
-Each call to the Operations API should be associated with an access token, matching the authorization scope of the call.
-
-- User level: User level access tokens let you perform operations on the **user** level. For example, get associated accounts.
-- Account level: Account level access tokens let you perform operations on the **account** level or the **video** level. For example, upload video, list all videos, get video insights, and so on.
-- Video level: Video level access tokens let you perform operations on a specific **video**. For example, get video insights, download captions, get widgets, and so on.
-
-You can control the permission level of tokens in two ways:
-
-* For **Account** tokens, you can use the **Get Account Access Token With Permission** API and specify the permission type (**Reader**/**Contributor**/**MyAccessManager**/**Owner**).
-* For all types of tokens (including **Account** tokens), you can specify **allowEdit=true/false**. **false** is the equivalent of a **Reader** permission (read-only) and **true** is the equivalent of a **Contributor** permission (read-write).
-
-For most server-to-server scenarios, you'll probably use the same **account** token since it covers both **account** operations and **video** operations. However, if you're planning to make client side calls to Azure AI Video Indexer (for example, from JavaScript), you would want to use a **video** access token to prevent clients from getting access to the entire account. That's also the reason that when embedding Azure AI Video Indexer client code in your client (for example, using **Get Insights Widget** or **Get Player Widget**), you must provide a **video** access token.
-
-To make things easier, you can use the **Authorization** API > **GetAccounts** to get your accounts without obtaining a user token first. You can also ask to get the accounts with valid tokens, enabling you to skip an additional call to get an account token.
-
-Access tokens expire after 1 hour. Make sure your access token is valid before using the Operations API. If it expires, call the Authorization API again to get a new access token.
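-For example, here's a minimal sketch of a helper that requests a fresh account access token and caches it for slightly less than an hour. It reuses the endpoint pattern from the code sample later in this article; `apiUrl`, `location`, `accountId`, and `apiKey` are placeholders you supply.
-
-```csharp
-using System;
-using System.Net.Http;
-using System.Threading.Tasks;
-
-public class AccountTokenCache
-{
-    private readonly HttpClient _client = new HttpClient();
-    private string _token;
-    private DateTimeOffset _expiresAt = DateTimeOffset.MinValue;
-
-    public async Task<string> GetTokenAsync(string apiUrl, string location, string accountId, string apiKey)
-    {
-        // Reuse the cached token while it's still comfortably inside the 1-hour lifetime.
-        if (_token != null && DateTimeOffset.UtcNow < _expiresAt)
-        {
-            return _token;
-        }
-
-        var request = new HttpRequestMessage(
-            HttpMethod.Get,
-            $"{apiUrl}/auth/{location}/Accounts/{accountId}/AccessToken?allowEdit=true");
-        // The subscription key is only needed for the Authorization API call.
-        request.Headers.Add("Ocp-Apim-Subscription-Key", apiKey);
-
-        var response = await _client.SendAsync(request);
-        response.EnsureSuccessStatusCode();
-
-        // The token is returned as a JSON string, so trim the surrounding quotes.
-        _token = (await response.Content.ReadAsStringAsync()).Trim('"');
-        _expiresAt = DateTimeOffset.UtcNow.AddMinutes(55);
-        return _token;
-    }
-}
-```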
-
-You're ready to start integrating with the API. Find [the detailed description of each Azure AI Video Indexer REST API](https://api-portal.videoindexer.ai/).
-
-## Operational API calls
-
-The Account ID parameter is required in all operational API calls. Account ID is a GUID that can be obtained in one of the following ways:
-
-* Use the **Azure AI Video Indexer website** to get the Account ID:
-
- 1. Browse to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website and sign in.
- 2. Browse to the **Settings** page.
- 3. Copy the account ID.
-
- ![Azure AI Video Indexer settings and account ID](./media/video-indexer-use-apis/account-id.png)
-
-* Use **Azure AI Video Indexer Developer Portal** to programmatically get the Account ID.
-
- Use the [Get account](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account) API.
-
- > [!TIP]
- > You can generate access tokens for the accounts by defining `generateAccessTokens=true`.
-
-* Get the account ID from the URL of a player page in your account.
-
- When you watch a video, the ID appears after the `accounts` section and before the `videos` section.
-
- ```
- https://www.videoindexer.ai/accounts/00000000-f324-4385-b142-f77dacb0a368/videos/d45bf160b5/
- ```
-
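-If you're automating account setup, here's a minimal sketch of pulling the account ID (a GUID) out of a player-page URL like the one above. The `ExtractAccountId` helper is hypothetical and not part of the API.
-
-```csharp
-using System.Text.RegularExpressions;
-
-public static class AccountIdHelper
-{
-    // Matches the GUID that appears between the "accounts" and "videos" URL segments.
-    private static readonly Regex AccountIdPattern =
-        new Regex(@"accounts/([0-9a-fA-F-]{36})/", RegexOptions.Compiled);
-
-    public static string ExtractAccountId(string playerPageUrl)
-    {
-        var match = AccountIdPattern.Match(playerPageUrl);
-        return match.Success ? match.Groups[1].Value : null;
-    }
-}
-```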
-## Recommendations
-
-This section lists some recommendations when using Azure AI Video Indexer API.
-
-### Uploading
-
-- If you're planning to upload a video, it's recommended to place the file in a publicly accessible network location (for example, an Azure Blob Storage account). Get the link to the video and provide the URL as the upload file parameter.
-
-   The URL provided to Azure AI Video Indexer must point to a media (audio or video) file. An easy way to verify the URL (or SAS URL) is to paste it into a browser: if the file starts playing or downloading, it's likely a valid URL. If the browser renders a visualization instead, it's likely not a link to a file but to an HTML page.
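-As a sketch of that recommendation, assuming the file sits in Azure Blob Storage, you could generate a read-only SAS URL with the Azure.Storage.Blobs client library and pass it as the upload URL. The connection string, container, and blob names below are placeholders.
-
-```csharp
-using System;
-using Azure.Storage.Blobs;
-using Azure.Storage.Sas;
-
-// Placeholders: supply your own storage connection string, container, and blob name.
-var blobClient = new BlobClient("<storage-connection-string>", "videos", "my-video.mp4");
-
-// Create a read-only SAS URL that stays valid long enough for indexing to fetch the file.
-Uri videoSasUrl = blobClient.GenerateSasUri(BlobSasPermissions.Read, DateTimeOffset.UtcNow.AddHours(4));
-
-// Pass videoSasUrl.ToString() as the videoUrl parameter of the upload call shown later in this article.
-Console.WriteLine(videoSasUrl);
-```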
-When you're uploading videos by using the API, you have the following options:
-
-* Upload your video from a URL (preferred).
-* Send the video file as a byte array in the request body.
-* Use an existing Azure Media Services asset by providing the [asset ID](/azure/media-services/latest/assets-concept). This option is supported in paid accounts only.
-* There is an API request limit of 10 requests per second and up to 120 requests per minute.
-
-### Getting JSON output
-
-- When you call the API that gets video insights for the specified video, you get a detailed JSON output as the response content. [See details about the returned JSON in this article](video-indexer-output-json-v2.md).
-- The JSON output produced by the API contains `Insights` and `SummarizedInsights` elements. We highly recommend using `Insights` and not using `SummarizedInsights` (which is present for backward compatibility).
-- We don't recommend that you use data directly from the artifacts folder for production purposes. Artifacts are intermediate outputs of the indexing process. They're essentially raw outputs of the various AI engines that analyze the videos; the artifacts schema may change over time.
-
- It's recommended that you use the [Get Video Index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index) API, as described in [Get insights and artifacts produced by the API](insights-overview.md#get-insights-produced-by-the-api) and **not** [Get-Video-Artifact-Download-Url](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Artifact-Download-Url).
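-For example, a minimal sketch of reading the per-video `insights` element from the Get Video Index response, reusing the `client`, `apiUrl`, `location`, `accountId`, `videoId`, and `videoAccessToken` variables defined in the code sample that follows. The property paths below are an assumption; see the output JSON article for the full schema.
-
-```csharp
-// Assumes the HttpClient and token variables from the code sample in the next section.
-var videoGetIndexRequestResult = client.GetAsync($"{apiUrl}/{location}/Accounts/{accountId}/Videos/{videoId}/Index?accessToken={videoAccessToken}").Result;
-var videoGetIndexResult = videoGetIndexRequestResult.Content.ReadAsStringAsync().Result;
-
-// Read from the "insights" element of each video rather than "summarizedInsights" or artifact files.
-var index = JsonConvert.DeserializeObject<dynamic>(videoGetIndexResult);
-var insights = index["videos"][0]["insights"];
-Debug.WriteLine(insights["transcript"]);
-```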
-
-## Code sample
-
-The following C# code snippet demonstrates the usage of all the Azure AI Video Indexer APIs together.
-
-> [!NOTE]
-> The following sample is intended for classic accounts only and not compatible with ARM-based accounts. For an updated sample for ARM (recommended), see [this ARM sample repo](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/API-Samples/C%23/ArmBased/Program.cs).
-
-```csharp
-var apiUrl = "https://api.videoindexer.ai";
-var accountId = "...";
-var location = "westus2"; // replace with the account's location, or with ΓÇ£trialΓÇ¥ if this is a trial account
-var apiKey = "...";
-
-System.Net.ServicePointManager.SecurityProtocol = System.Net.ServicePointManager.SecurityProtocol | System.Net.SecurityProtocolType.Tls12;
-
-// create the http client
-var handler = new HttpClientHandler();
-handler.AllowAutoRedirect = false;
-var client = new HttpClient(handler);
-client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", apiKey);
-
-// obtain account access token
-var accountAccessTokenRequestResult = client.GetAsync($"{apiUrl}/auth/{location}/Accounts/{accountId}/AccessToken?allowEdit=true").Result;
-var accountAccessToken = accountAccessTokenRequestResult.Content.ReadAsStringAsync().Result.Replace("\"", "");
-
-client.DefaultRequestHeaders.Remove("Ocp-Apim-Subscription-Key");
-
-// upload a video
-var content = new MultipartFormDataContent();
-Debug.WriteLine("Uploading...");
-// get the video from URL
-var videoUrl = "VIDEO_URL"; // replace with the video URL
-
-// as an alternative to specifying video URL, you can upload a file.
-// remove the videoUrl parameter from the query string below and add the following lines:
- //FileStream video =File.OpenRead(Globals.VIDEOFILE_PATH);
- //byte[] buffer = new byte[video.Length];
- //video.Read(buffer, 0, buffer.Length);
- //content.Add(new ByteArrayContent(buffer));
-
-var uploadRequestResult = client.PostAsync($"{apiUrl}/{location}/Accounts/{accountId}/Videos?accessToken={accountAccessToken}&name=some_name&description=some_description&privacy=private&partition=some_partition&videoUrl={videoUrl}", content).Result;
-var uploadResult = uploadRequestResult.Content.ReadAsStringAsync().Result;
-
-// get the video id from the upload result
-var videoId = JsonConvert.DeserializeObject<dynamic>(uploadResult)["id"];
-Debug.WriteLine("Uploaded");
-Debug.WriteLine("Video ID: " + videoId);
-
-// obtain video access token
-client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", apiKey);
-var videoTokenRequestResult = client.GetAsync($"{apiUrl}/auth/{location}/Accounts/{accountId}/Videos/{videoId}/AccessToken?allowEdit=true").Result;
-var videoAccessToken = videoTokenRequestResult.Content.ReadAsStringAsync().Result.Replace("\"", "");
-
-client.DefaultRequestHeaders.Remove("Ocp-Apim-Subscription-Key");
-
-// wait for the video index to finish
-while (true)
-{
- Thread.Sleep(10000);
-
- var videoGetIndexRequestResult = client.GetAsync($"{apiUrl}/{location}/Accounts/{accountId}/Videos/{videoId}/Index?accessToken={videoAccessToken}&language=English").Result;
- var videoGetIndexResult = videoGetIndexRequestResult.Content.ReadAsStringAsync().Result;
-
- var processingState = JsonConvert.DeserializeObject<dynamic>(videoGetIndexResult)["state"];
-
- Debug.WriteLine("");
- Debug.WriteLine("State:");
- Debug.WriteLine(processingState);
-
- // job is finished
- if (processingState != "Uploaded" && processingState != "Processing")
- {
- Debug.WriteLine("");
- Debug.WriteLine("Full JSON:");
- Debug.WriteLine(videoGetIndexResult);
- break;
- }
-}
-
-// search for the video
-var searchRequestResult = client.GetAsync($"{apiUrl}/{location}/Accounts/{accountId}/Videos/Search?accessToken={accountAccessToken}&id={videoId}").Result;
-var searchResult = searchRequestResult.Content.ReadAsStringAsync().Result;
-Debug.WriteLine("");
-Debug.WriteLine("Search:");
-Debug.WriteLine(searchResult);
-
-// get insights widget url
-var insightsWidgetRequestResult = client.GetAsync($"{apiUrl}/{location}/Accounts/{accountId}/Videos/{videoId}/InsightsWidget?accessToken={videoAccessToken}&widgetType=Keywords&allowEdit=true").Result;
-var insightsWidgetLink = insightsWidgetRequestResult.Headers.Location;
-Debug.WriteLine("Insights Widget url:");
-Debug.WriteLine(insightsWidgetLink);
-
-// get player widget url
-var playerWidgetRequestResult = client.GetAsync($"{apiUrl}/{location}/Accounts/{accountId}/Videos/{videoId}/PlayerWidget?accessToken={videoAccessToken}").Result;
-var playerWidgetLink = playerWidgetRequestResult.Headers.Location;
-Debug.WriteLine("");
-Debug.WriteLine("Player Widget url:");
-Debug.WriteLine(playerWidgetLink);
-```
-
-## Clean up resources
-
-After you're done with this tutorial, delete resources that you aren't planning to use.
-
-## See also
-
-- [Azure AI Video Indexer overview](video-indexer-overview.md)
-- [Regions](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services)
-
-## Next steps
-
-- [Examine details of the output JSON](video-indexer-output-json-v2.md)
-- Check out the [sample code](https://github.com/Azure-Samples/media-services-video-indexer) that demonstrates important aspects of uploading and indexing a video. Following the code will give you a good idea of how to use our API for basic functionalities. Make sure to read the inline comments and notice our best practices advice.
azure-video-indexer Video Indexer View Edit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-view-edit.md
- Title: View Azure AI Video Indexer insights
-description: This article demonstrates how to view Azure AI Video Indexer insights.
- Previously updated : 04/12/2023----
-# View Azure AI Video Indexer insights
--
-This article shows you how to view the Azure AI Video Indexer insights of a video.
-
-1. Browse to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website and sign in.
-2. Find a video from which you want to create your Azure AI Video Indexer insights. For more information, see [Find exact moments within videos](video-indexer-search.md).
-3. Press **Play**.
-
- The page shows the video's insights.
-
- ![Insights](./media/video-indexer-view-edit/video-indexer-summarized-insights.png)
-4. Select which insights you want to view. For example, faces, keywords, or sentiments. You can see the faces of people, the time ranges each face appears in, and the percentage of time it's shown.
-
- The **Timeline** tab shows transcripts with timelines and other information that you can choose from the **View** drop-down.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/video-indexer-view-edit/timeline.png" alt-text="Screenshot that shows how to select the Insights." lightbox="./media/video-indexer-view-edit/timeline.png":::
-
- The player and the insights are synchronized. For example, if you click a keyword or the transcript line, the player brings you to that moment in the video. You can achieve the player/insights view and synchronization in your application. For more information, see [Embed Azure Indexer widgets into your application](video-indexer-embed-widgets.md).
-
- For more information, see [Insights output](video-indexer-output-json-v2.md).
-
-## Considerations
-
-- [!INCLUDE [insights](./includes/insights.md)]
-- If you plan to download artifact files, beware of the following:
-
- [!INCLUDE [artifacts](./includes/artifacts.md)]
-
-## Next steps
-
-[Use your videos' deep insights](use-editor-create-project.md)
-
-## See also
-
-[Azure AI Video Indexer overview](video-indexer-overview.md)
-
azure-video-indexer View Closed Captions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/view-closed-captions.md
- Title: View closed captions
-description: Learn how to view captions using the Azure AI Video Indexer website.
- Previously updated : 10/24/2022----
-# View closed captions in the Azure AI Video Indexer website
--
-This article shows how to view closed captions in the [Azure AI Video Indexer video player](https://www.videoindexer.ai).
-
-## View closed captions
-
-1. Go to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website and sign in.
-1. Select a video for which you want to view captions.
-1. On the bottom of the Azure AI Video Indexer video player, select **Closed Captioning** (in some browsers it's located under the **Captions** menu; in others, under the **gear** icon).
-1. Under **Closed Captioning**, select a language in which you want to view captions. For example, **English**. Once checked, you see the captions in English.
-1. To see a speaker in front of the caption, select **Settings** under **Closed Captioning**, check **Show speakers** (under **Configurations**), and then press **Done**.
-
-## Next steps
-
-See how to [Insert or remove transcript lines in the Azure AI Video Indexer website](edit-transcript-lines-portal.md) and other how to articles that demonstrate how to navigate in the Azure AI Video Indexer website.
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
Now that you've attached a datastore on Azure NetApp Files-based NFS volume to y
- **Can a single Azure NetApp Files datastore be added to multiple clusters within the same Azure VMware Solution SDDC?**
- Yes, you can select multiple clusters at the time of creating the datastore. Additional clusters may be added or removed after the initial creation as well.
+ Yes, you can connect an Azure NetApp Files volume as a datastore to multiple clusters in different SDDCs. Each SDDC will need connectivity via the ExpressRoute gateway in the Azure NetApp Files virtual network.
azure-vmware Configure Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-vmware-hcx.md
For an end-to-end overview of this procedure, view the [Azure VMware Solution: C
The selections define the resources where VMs can consume VMware HCX services.
+ > [!NOTE]
+ > If you have a mixed mode SDDC with a fleet cluster, deployment of service mesh appliances for the fleet cluster isn't supported.
+ :::image type="content" source="media/tutorial-vmware-hcx/select-compute-profile-source.png" alt-text="Screenshot that shows selecting the source compute profile." lightbox="media/tutorial-vmware-hcx/select-compute-profile-source.png"::: :::image type="content" source="media/tutorial-vmware-hcx/select-compute-profile-remote.png" alt-text="Screenshot that shows selecting the remote compute profile." lightbox="media/tutorial-vmware-hcx/select-compute-profile-remote.png":::
azure-web-pubsub Howto Develop Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-create-instance.md
zone_pivot_groups: azure-web-pubsub-create-resource-methods
## Create a resource from Azure portal
-1. Select the New button found on the upper left-hand corner of the Azure portal. In the New screen, type **Web PubSub** in the search box and press enter.
+1. Select the New button found on the upper left-hand corner of the Azure portal. In the New screen, type **Web PubSub** in the search box and then press Enter.
:::image type="content" source="./media/create-instance-portal/search-web-pubsub-in-portal.png" alt-text="Screenshot of searching the Azure Web PubSub in portal.":::
databox-online Azure Stack Edge Gpu Clustering Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-clustering-overview.md
Previously updated : 05/16/2023 Last updated : 10/05/2023
A quorum is always maintained on your Azure Stack Edge cluster to remain online
For an Azure Stack Edge cluster with two nodes, if a node fails, then a cluster witness provides the third vote so that the cluster stays online (since the cluster is left with two out of three votes - a majority). A cluster witness is required on your Azure Stack Edge cluster. You can set up the witness in the cloud or in a local fileshare using the local UI of your device.
-For more information on cluster witness, see [Cluster witness on Azure Stack Edge](azure-stack-edge-gpu-cluster-witness-overview.md).
-
+ - For more information about the cluster witness, see [Cluster witness on Azure Stack Edge](azure-stack-edge-gpu-cluster-witness-overview.md).
+ - For more information about the witness in the cloud, see [Configure cloud witness](azure-stack-edge-gpu-manage-cluster.md#configure-cloud-witness).
## Infrastructure cluster
databox-online Azure Stack Edge Pro 2 Deploy Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-install.md
Previously updated : 11/08/2022 Last updated : 10/06/2023 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to install Azure Stack Edge Pro 2 in datacenter so I can use it to transfer data to Azure.
Before you start cabling your device, you need the following things:
![Example of a QSFP28 DAC connector](./media/azure-stack-edge-pro-2-deploy-install/qsfp28-dac-connector.png)
- For a full list of supported cables, modules, and switches, see [Connect-X6 DX adapter card compatible firmware](https://docs.nvidia.com/networking/display/ConnectX6DxFirmwarev22271016/Firmware+Compatible+Products).
+ For a full list of supported cables, modules, and switches, see [Firmware compatible products](https://docs.nvidia.com/networking/display/connectx6dxfirmwarev22361010/firmware+compatible+products).
- Access to one power distribution unit.
- At least one 100-GbE network switch to connect a 10/1-GbE or a 100-GbE network interface to the internet for data. At least one data network interface from among Port 2, Port 3, and Port 4 needs to be connected to the Internet (with connectivity to Azure).
- A pair of Wi-Fi antennas (included in the accessory box).
Before you start cabling your device, you need the following things:
![Example of a QSFP28 DAC connector](./media/azure-stack-edge-pro-2-deploy-install/qsfp28-dac-connector.png)
- For a full list of supported cables, modules, and switches, see [Connect-X6 DX adapter card compatible firmware](https://docs.nvidia.com/networking/display/ConnectX6DxFirmwarev22271016/Firmware+Compatible+Products).
+ For a full list of supported cables, modules, and switches, see [Firmware compatible products](https://docs.nvidia.com/networking/display/connectx6dxfirmwarev22361010/firmware+compatible+products).
- At least one 100-GbE network switch to connect a 1-GbE or a 100-GbE network interface to the internet for data for each device.
- A pair of Wi-Fi antennas (included in the accessory box).

> [!NOTE]
databox-online Azure Stack Edge Reset Reactivate Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-reset-reactivate-device.md
Previously updated : 10/09/2023 Last updated : 10/13/2023
Before you reset, create a copy of the local data on the device if needed. You c
>[!IMPORTANT]
> - Resetting your device will erase all local data and workloads from your device, and that can't be reversed. Reset your device only if you want to start afresh with the device.
> - If running AP5GC/SAP Kubernetes workload profiles and you updated your Azure Stack Edge to 2309, and reset your Azure Stack Edge device, you see the following behavior:
-> -- In the local web UI, if you go to Software updates page, you see that the Kubernetes version is unavailable.
-> -- In Azure portal, you are prompted to apply a Kubernetes update.
-> Go ahead and apply the Kubernetes update.
-> -- After device reset, you must select a Kubernetes workload profile again. Otherwise, the default "Other workloads" profile will be applied. For more information, see [Configure compute IPs](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md?pivots=two-node#configure-compute-ips-1).
+> - In the local web UI, if you go to Software updates page, you see that the Kubernetes version is unavailable.
+> - In Azure portal, you are prompted to apply a Kubernetes update. Go ahead and apply the Kubernetes update.
+> - After device reset, you must select a Kubernetes workload profile again. Otherwise, the default "Other workloads" profile will be applied. For more information, see [Configure compute IPs](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md?pivots=two-node#configure-compute-ips-1).
You can reset your device in the local web UI or in PowerShell. For PowerShell instructions, see [Reset your device](./azure-stack-edge-connect-powershell-interface.md#reset-your-device).
ddos-protection Test Through Simulations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/test-through-simulations.md
Title: Azure DDoS Protection simulation testing
-description: Learn about how to test through simulations.
+ Title: 'Tutorial: Azure DDoS Protection simulation testing'
+description: Learn about how to test Azure DDoS Protection through simulations.
-+ Previously updated : 02/06/2023 Last updated : 10/06/2023
-# Test with simulation partners
+# Tutorial: Azure DDoS Protection simulation testing
-It's a good practice to test your assumptions about how your services will respond to an attack by conducting periodic simulations. During testing, validate that your services or applications continue to function as expected and there's no disruption to the user experience. Identify gaps from both a technology and process standpoint and incorporate them in the DDoS response strategy. We recommend that you perform such tests in staging environments or during non-peak hours to minimize the impact to the production environment.
+It's a good practice to test your assumptions about how your services respond to an attack by conducting periodic simulations. During testing, validate that your services or applications continue to function as expected and there's no disruption to the user experience. Identify gaps from both a technology and process standpoint and incorporate them in the DDoS response strategy. We recommend that you perform such tests in staging environments or during non-peak hours to minimize the impact to the production environment.
Simulations help you:
- Validate how Azure DDoS Protection helps protect your Azure resources from DDoS attacks.
You may only simulate attacks using our approved testing partners:
Our testing partners' simulation environments are built within Azure. You can only simulate against Azure-hosted public IP addresses that belong to an Azure subscription of your own, which will be validated by our partners before testing. Additionally, these target public IP addresses must be protected under Azure DDoS Protection. Simulation testing allows you to assess your current state of readiness, identify gaps in your incident response procedures, and guide you in developing a proper [DDoS response strategy](ddos-response-strategy.md).
-
-
> [!NOTE]
> BreakingPoint Cloud and Red Button are only available for the Public cloud.
+For this tutorial, you'll create a test environment that includes:
+- A DDoS protection plan
+- A virtual network
+- An Azure Bastion host
+- A load balancer
+- Two virtual machines.
+
+You'll then configure diagnostic logs and alerts to monitor for attacks and traffic patterns. Finally, you'll configure a DDoS attack simulation using one of our approved testing partners.
+
+
## Prerequisites
-
-- Before you can complete the steps in this tutorial, you must first create a [Azure DDoS Protection plan](manage-ddos-protection.md) with protected public IP addresses.
-- For BreakingPoint Cloud, you must first [create an account](https://www.ixiacom.com/products/breakingpoint-cloud).
+- An Azure account with an active subscription.
+- In order to use diagnostic logging, you must first create a [Log Analytics workspace with diagnostic settings enabled](ddos-configure-log-analytics-workspace.md).
+
-## BreakingPoint Cloud
-### Configure a DDoS test attack
+## Prepare test environment
+### Create a DDoS protection plan
-1. Enter or select the following values, then select **Start test**:
+1. Select **Create a resource** in the upper left corner of the Azure portal.
+1. Search the term *DDoS*. When **DDoS protection plan** appears in the search results, select it.
+1. Select **Create**.
+1. Enter or select the following values.
+
+ :::image type="content" source="./media/ddos-attack-simulation/create-ddos-plan.png" alt-text="Screenshot of creating a DDoS protection plan.":::
|Setting |Value |
| | |
-|Target IP address | Enter one of the public IP addresses you want to test. |
- |Port Number | Enter _443_. |
- |DDoS Profile | Possible values include `DNS Flood`, `NTPv2 Flood`, `SSDP Flood`, `TCP SYN Flood`, `UDP 64B Flood`, `UDP 128B Flood`, `UDP 256B Flood`, `UDP 512B Flood`, `UDP 1024B Flood`, `UDP 1514B Flood`, `UDP Fragmentation`, `UDP Memcached`.|
- |Test Size | Possible values include `100K pps, 50 Mbps and 4 source IPs`, `200K pps, 100 Mbps and 8 source IPs`, `400K pps, 200Mbps and 16 source IPs`, `800K pps, 400 Mbps and 32 source IPs`. |
- |Test Duration | Possible values include `10 Minutes`, `15 Minutes`, `20 Minutes`, `25 Minutes`, `30 Minutes`.|
+ |Subscription | Select your subscription. |
+ |Resource group | Select **Create new** and enter **MyResourceGroup**.|
+ |Name | Enter **MyDDoSProtectionPlan**. |
+ |Region | Enter **East US**. |
+
+1. Select **Review + create**, then select **Create**.
+
+### Create the virtual network
+
+In this section, you'll create a virtual network, a subnet, and an Azure Bastion host, and associate the DDoS Protection plan. The virtual network and subnet contain the load balancer and virtual machines. The bastion host is used to securely manage the virtual machines and install IIS to test the load balancer. The DDoS Protection plan protects all public IP resources in the virtual network.
+
+> [!IMPORTANT]
+> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+>
+
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual Networks** in the search results.
+
+1. In **Virtual networks**, select **+ Create**.
+
+1. In **Create virtual network**, enter or select the following information in the **Basics** tab:
+
+ | **Setting** | **Value** |
+ |||
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource Group | Select **MyResourceGroup** |
+ | **Instance details** | |
+ | Name | Enter **myVNet** |
+ | Region | Select **East US** |
+
+1. Select the **Security** tab.
+
+1. Under **BastionHost**, select **Enable**. Enter this information:
+
+ | Setting | Value |
+ |--|-|
+ | Bastion name | Enter **myBastionHost** |
+    | Azure Bastion Public IP Address | Select **myVNet-bastion-publicIpAddress**. Select **OK**. |
+
+1. Under **DDoS Network Protection**, select **Enable**. Then from the drop-down menu, select **MyDDoSProtectionPlan**.
+
+ :::image type="content" source="./media/ddos-attack-simulation/enable-ddos.png" alt-text="Screenshot of enabling DDoS during virtual network creation.":::
+
+1. Select the **IP Addresses** tab or select **Next: IP Addresses** at the bottom of the page.
+
+1. In the **IP Addresses** tab, enter this information:
+
+ | Setting | Value |
+ |--|-|
+ | IPv4 address space | Enter **10.1.0.0/16** |
+
+1. Under **Subnets**, select the word **default**. If a subnet isn't present, select **+ Add subnet**.
+
+1. In **Edit subnet**, enter this information, then select **Save**:
+
+ | Setting | Value |
+ |--|-|
+ | Name | Enter **myBackendSubnet** |
+ | Starting Address | Enter **10.1.0.0/24** |
+
+1. Under **Subnets**, select **AzureBastionSubnet**. In **Edit subnet**, enter this information, then select **Save**:
+
+ | Setting | Value |
+ |--|-|
+ | Starting Address | Enter **10.1.1.0/26** |
+
+1. Select the **Review + create** tab or select the **Review + create** button, then select **Create**.
+
+ > [!NOTE]
+ > The virtual network and subnet are created immediately. The Bastion host creation is submitted as a job and will complete within 10 minutes. You can proceed to the next steps while the Bastion host is created.
+
+### Create load balancer
+
+In this section, you'll create a zone redundant load balancer that load balances virtual machines. With zone-redundancy, one or more availability zones can fail and the data path survives as long as one zone in the region remains healthy.
+
+During the creation of the load balancer, you'll configure:
+
+* Frontend IP address
+* Backend pool
+* Inbound load-balancing rules
+* Health probe
+
+1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results. In the **Load balancer** page, select **+ Create**.
+
+1. In the **Basics** tab of the **Create load balancer** page, enter or select the following information:
+
+ | Setting | Value |
+ | | |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **MyResourceGroup**. |
+ | **Instance details** | |
+ | Name | Enter **myLoadBalancer** |
+ | Region | Select **East US**. |
+ | SKU | Leave the default **Standard**. |
+ | Type | Select **Public**. |
+ | Tier | Leave the default **Regional**. |
+
+ :::image type="content" source="./media/ddos-attack-simulation/create-standard-load-balancer.png" alt-text="Screenshot of create standard load balancer basics tab." border="true":::
+
+1. Select **Next: Frontend IP configuration** at the bottom of the page.
+
+1. In **Frontend IP configuration**, select **+ Add a frontend IP configuration**, then enter the following information. Leave the rest of the defaults and select **Add**.
+
+ | Setting | Value |
+ | --| -- |
+ | **Name** | Enter **myFrontend**. |
+ | **IP Type** | Select *Create new*. In *Add a public IP address*, enter **myPublicIP** for Name |
+ | **Availability zone** | Select **Zone-redundant**. |
+
+ > [!NOTE]
+ > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md).
++
+1. Select **Next: Backend pools** at the bottom of the page.
+
+1. In the **Backend pools** tab, select **+ Add a backend pool**, then enter the following information. Leave the rest of the defaults and select **Save**.
+
+ | Setting | Value |
+ | --| -- |
+ | **Name** | Enter **myBackendPool**. |
+ | **Backend Pool Configuration** | Select **IP Address**. |
+
+
+1. Select **Save**, then select **Next: Inbound rules** at the bottom of the page.
+
+1. Under **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
+
+1. In **Add load balancing rule**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myHTTPRule** |
+ | IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
+ | Frontend IP address | Select **myFrontend (To be created)**. |
+ | Backend pool | Select **myBackendPool**. |
+ | Protocol | Select **TCP**. |
+ | Port | Enter **80**. |
+ | Backend port | Enter **80**. |
+ | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
+ | Session persistence | Select **None**. |
+ | Idle timeout (minutes) | Enter or select **15**. |
+ | TCP reset | Select the *Enabled* radio. |
+ | Floating IP | Select the *Disabled* radio. |
+ | Outbound source network address translation (SNAT) | Leave the default of **(Recommended) Use outbound rules to provide backend pool members access to the internet.** |
+
+1. Select **Save**.
+
+1. Select the blue **Review + create** button at the bottom of the page.
+
+1. Select **Create**.
+
+### Create virtual machines
+
+In this section, you'll create two virtual machines that will be load balanced by the load balancer. You'll also install IIS on the virtual machines to test the load balancer.
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. In the **Virtual machines** page, select **+ Create**.
+
+1. In **Create a virtual machine**, enter or select the following values in the **Basics** tab:
+
+ | Setting | Value |
+ |--|-|
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription |
+ | Resource Group | Select **MyResourceGroup** |
+ | **Instance details** | |
+ | Virtual machine name | Enter **myVM1** |
+ | Region | Select **((US) East US)** |
+ | Availability Options | Select **Availability zones** |
+ | Availability zone | Select **Zone 1** |
+ | Security type | Select **Standard**. |
+ | Image | Select **Windows Server 2022 Datacenter: Azure Edition - Gen2** |
+ | Azure Spot instance | Leave the default of unchecked. |
+ | Size | Choose VM size or take default setting |
+ | **Administrator account** | |
+ | Username | Enter a username |
+ | Password | Enter a password |
+ | Confirm password | Reenter password |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None** |
+
+1. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
+
+1. In the Networking tab, select or enter the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Network interface** | |
+ | Virtual network | Select **myVNet** |
+ | Subnet | Select **myBackendSubnet** |
+ | Public IP | Select **None**. |
+ | NIC network security group | Select **Advanced** |
+ | Configure network security group | Skip this setting until the rest of the settings are completed. Complete after **Select a backend pool**.|
+ | Delete NIC when VM is deleted | Leave the default of **unselected**. |
+ | Accelerated networking | Leave the default of **selected**. |
+ | **Load balancing** |
+ | **Load balancing options** |
+ | Load-balancing options | Select **Azure load balancer** |
+ | Select a load balancer | Select **myLoadBalancer** |
+ | Select a backend pool | Select **myBackendPool** |
+ | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Under **Inbound rules**, select **+Add an inbound rule**. </br> Under **Service**, select **HTTP**. </br> Under **Priority**, enter **100**. </br> In **Name**, enter **myNSGRule** </br> Select **Add** </br> Select **OK** |
+
+1. Select **Review + create**.
+
+1. Review the settings, and then select **Create**.
+
+1. Follow the steps 1 through 7 to create another VM with the following values and all the other settings the same as **myVM1**:
+
+ | Setting | VM 2
+ | - | -- |
+ | Name | **myVM2** |
+ | Availability zone | **Zone 2** |
+ | Network security group | Select the existing **myNSG** |
+
-It should now appear like this:
+### Install IIS
-![DDoS Attack Simulation Example: BreakingPoint Cloud](./media/ddos-attack-simulation/ddos-attack-simulation-example-1.png)
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-### Monitor and validate
+1. Select **myVM1**.
-1. Log in to the [Azure portal](https://portal.azure.com) and go to your subscription.
-1. Select the Public IP address you tested the attack on.
-1. Under **Monitoring**, select **Metrics**.
-1. For **Metric**, select _Under DDoS attack or not_.
+1. On the **Overview** page, select **Connect**, then **Bastion**.
-Once the resource is under attack, you should see that the value changes from **0** to **1**, like the following picture:
+1. Enter the username and password entered during VM creation.
-![DDoS Attack Simulation Example: Portal](./media/ddos-attack-simulation/ddos-attack-simulation-example-2.png)
+1. Select **Connect**.
-### BreakingPoint Cloud API Script
+1. On the server desktop, navigate to **Start** > **Windows PowerShell** > **Windows PowerShell**.
-This [API script](https://aka.ms/ddosbreakingpoint) can be used to automate DDoS testing by running once or using cron to schedule regular tests. This is useful to validate that your logging is configured properly and that detection and response procedures are effective. The scripts require a Linux OS (tested with Ubuntu 18.04 LTS) and Python 3. Install prerequisites and API client using the included script or by using the documentation on the [BreakingPoint Cloud](https://www.ixiacom.com/products/breakingpoint-cloud) website.
+1. In the PowerShell Window, run the following commands to:
-## Red Button
+ * Install the IIS server
+ * Remove the default iisstart.htm file
+ * Add a new iisstart.htm file that displays the name of the VM:
+
+ ```powershell
+ # Install IIS server role
+ Install-WindowsFeature -name Web-Server -IncludeManagementTools
+
+ # Remove default htm file
+ Remove-Item C:\inetpub\wwwroot\iisstart.htm
+
+ # Add a new htm file that displays server name
+ Add-Content -Path "C:\inetpub\wwwroot\iisstart.htm" -Value $("Hello World from " + $env:computername)
+
+ ```
+
+1. Close the Bastion session with **myVM1**.
+
+1. Repeat steps 1 to 8 to install IIS and the updated iisstart.htm file on **myVM2**.
+
+## Configure DDoS Protection metrics and alerts
+
+Now we'll configure metrics and alerts to monitor for attacks and traffic patterns.
+
+### Configure diagnostic logs
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. In the search box at the top of the portal, enter **Monitor**. Select **Monitor** in the search results.
+1. Select **Diagnostic Settings** under **Settings** in the left pane, then select the following information in the **Diagnostic settings** page. Next, select **Add diagnostic setting**.
+
+ :::image type="content" source="./media/ddos-attack-simulation/ddos-monitor-diagnostic-settings.png" alt-text="Screenshot of Monitor diagnostic settings.":::
+
+ | Setting | Value |
+ |--|--|
+ |Subscription | Select the **Subscription** that contains the public IP address you want to log. |
+ | Resource group | Select the **Resource group** that contains the public IP address you want to log. |
+ |Resource type | Select **Public IP Addresses**.|
+ |Resource | Select the specific **Public IP address** you want to log metrics for. |
+
+1. On the *Diagnostic setting* page, under *Destination details*, select **Send to Log Analytics workspace**, then enter the following information, then select **Save**.
+
+ :::image type="content" source="./media/ddos-attack-simulation/ddos-public-ip-diagnostic-setting.png" alt-text="Screenshot of DDoS diagnostic settings.":::
+
+ | Setting | Value |
+ |--|--|
+ | Diagnostic setting name | Enter **myDiagnosticSettings**. |
+ |**Logs**| Select **allLogs**.|
+ |**Metrics**| Select **AllMetrics**. |
+ |**Destination details**| Select **Send to Log Analytics workspace**.|
+ | Subscription | Select your Azure subscription. |
+ | Log Analytics Workspace | Select **myLogAnalyticsWorkspace**. |
+
+### Configure metric alerts
++
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. In the search box at the top of the portal, enter **Alerts**. Select **Alerts** in the search results.
+
+1. Select **+ Create** on the navigation bar, then select **Alert rule**.
+
+ :::image type="content" source="./media/ddos-attack-simulation/ddos-protection-alert-page.png" alt-text="Screenshot of creating Alerts." lightbox="./media/ddos-attack-simulation/ddos-protection-alert-page.png":::
+
+1. On the **Create an alert rule** page, select **+ Select scope**, then select the following information in the **Select a resource** page.
+
+ :::image type="content" source="./media/ddos-attack-simulation/ddos-protection-alert-scope.png" alt-text="Screenshot of selecting DDoS Protection attack alert scope." lightbox="./media/ddos-attack-simulation/ddos-protection-alert-scope.png":::
++
+ | Setting | Value |
+ |--|--|
+ |Filter by subscription | Select the **Subscription** that contains the public IP address you want to log. |
+ |Filter by resource type | Select **Public IP Addresses**.|
+ |Resource | Select the specific **Public IP address** you want to log metrics for. |
+
+1. Select **Done**, then select **Next: Condition**.
+1. On the **Condition** page, select **+ Add Condition**, then in the *Search by signal name* search box, search and select **Under DDoS attack or not**.
+
+ :::image type="content" source="./media/ddos-attack-simulation/ddos-protection-alert-add-condition.png" alt-text="Screenshot of adding DDoS Protection attack alert condition." lightbox="./media/ddos-attack-simulation/ddos-protection-alert-add-condition.png":::
+
+1. In the **Create an alert rule** page, enter or select the following information.
+ :::image type="content" source="./media/ddos-attack-simulation/ddos-protection-alert-signal.png" alt-text="Screenshot of adding DDoS Protection attack alert signal." lightbox="./media/ddos-attack-simulation/ddos-protection-alert-signal.png":::
+
+ | Setting | Value |
+ |--|--|
+ | Threshold | Leave as default. |
+ | Aggregation type | Leave as default. |
+ | Operator | Select **Greater than or equal to**. |
+ | Unit | Leave as default. |
+ | Threshold value | Enter **1**. For the *Under DDoS attack or not metric*, **0** means you're not under attack while **1** means you are under attack. |
+
+
+
+1. Select **Next: Actions** then select **+ Create action group**.
+
+#### Create action group
+
+1. In the **Create action group** page, enter the following information, then select **Next: Notifications**.
+
+ | Setting | Value |
+ |--|--|
+ | Subscription | Select your Azure subscription that contains the public IP address you want to log. |
+ | Resource Group | Select your Resource group. |
+ | Region | Leave as default. |
+ | Action Group | Enter **myDDoSAlertsActionGroup**. |
+ | Display name | Enter **myDDoSAlerts**. |
+
+
+1. On the *Notifications* tab, under *Notification type*, select **Email/SMS message/Push/Voice**. Under *Name*, enter **myUnderAttackEmailAlert**.
+
+ :::image type="content" source="./media/ddos-attack-simulation/ddos-protection-alert-action-group-notification.png" alt-text="Screenshot of adding DDoS Protection attack alert notification type." lightbox="./media/ddos-attack-simulation/ddos-protection-alert-action-group-notification.png":::
++
+1. On the *Email/SMS message/Push/Voice* page, select the **Email** check box, then enter the required email. Select **OK**.
+
+ :::image type="content" source="./media/ddos-attack-simulation/ddos-protection-alert-notification.png" alt-text="Screenshot of adding DDoS Protection attack alert notification page." lightbox="./media/ddos-attack-simulation/ddos-protection-alert-notification.png":::
+
+1. Select **Review + create** and then select **Create**.
+
+#### Continue configuring alerts through portal
+
+1. Select **Next: Details**.
+
+ :::image type="content" source="./media/ddos-attack-simulation/ddos-protection-alert-details.png" alt-text="Screenshot of adding DDoS Protection attack alert details page." lightbox="./media/ddos-attack-simulation/ddos-protection-alert-details.png":::
+
+1. On the *Details* tab, under *Alert rule details*, enter the following information.
+
+ | Setting | Value |
+ |--|--|
+ | Severity | Select **2 - Warning**. |
+ | Alert rule name | Enter **myDDoSAlert**. |
+
+1. Select **Review + create** and then select **Create** after validation passes.
+
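+If you also want to check the attack signal programmatically (for example, from a test harness), here's a minimal sketch that queries the public IP's metric with the Azure.Monitor.Query and Azure.Identity client libraries. The metric name `IfUnderDDoSAttack` and the resource ID format are assumptions to verify against your environment; in the portal the metric is displayed as *Under DDoS attack or not*.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Azure.Identity;
+using Azure.Monitor.Query;
+using Azure.Monitor.Query.Models;
+
+class UnderAttackCheck
+{
+    static async Task Main()
+    {
+        // Placeholder resource ID of the protected public IP address.
+        var publicIpResourceId =
+            "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup" +
+            "/providers/Microsoft.Network/publicIPAddresses/myPublicIP";
+
+        var client = new MetricsQueryClient(new DefaultAzureCredential());
+
+        // Request the Maximum aggregation so each data point reports 0 (not under attack) or 1 (under attack).
+        var options = new MetricsQueryOptions();
+        options.Aggregations.Add(MetricAggregationType.Maximum);
+
+        // "IfUnderDDoSAttack" is assumed to be the metric behind "Under DDoS attack or not".
+        var response = await client.QueryResourceAsync(
+            publicIpResourceId,
+            new[] { "IfUnderDDoSAttack" },
+            options);
+
+        foreach (var metric in response.Value.Metrics)
+        {
+            foreach (var timeSeries in metric.TimeSeries)
+            {
+                foreach (var point in timeSeries.Values)
+                {
+                    // 1 means the public IP is currently under attack; 0 means it isn't.
+                    Console.WriteLine($"{point.TimeStamp}: {point.Maximum}");
+                }
+            }
+        }
+    }
+}
+```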
+## Configure a DDoS attack simulation
+
+### BreakingPoint Cloud
+
+BreakingPoint Cloud is a self-service traffic generator where you can generate traffic against DDoS Protection-enabled public endpoints for simulations.
+
+BreakingPoint Cloud offers:
+
+- A simplified user interface and an "out-of-the-box" experience.
+- A pay-per-use model.
+- Predefined DDoS test sizing and test duration profiles enable safer validations by eliminating the potential for configuration errors.
+
+> [!NOTE]
+> For BreakingPoint Cloud, you must first [create a BreakingPoint Cloud account](https://www.ixiacom.com/products/breakingpoint-cloud).
+
+Example attack values:
++
+|Setting |Value |
+| | |
+|Target IP address | Enter one of the public IP addresses you want to test. |
+|Port Number | Enter _443_. |
+|DDoS Profile | Possible values include `DNS Flood`, `NTPv2 Flood`, `SSDP Flood`, `TCP SYN Flood`, `UDP 64B Flood`, `UDP 128B Flood`, `UDP 256B Flood`, `UDP 512B Flood`, `UDP 1024B Flood`, `UDP 1514B Flood`, `UDP Fragmentation`, `UDP Memcached`.|
+|Test Size | Possible values include `100K pps, 50 Mbps and 4 source IPs`, `200K pps, 100 Mbps and 8 source IPs`, `400K pps, 200Mbps and 16 source IPs`, `800K pps, 400 Mbps and 32 source IPs`. |
+|Test Duration | Possible values include `10 Minutes`, `15 Minutes`, `20 Minutes`, `25 Minutes`, `30 Minutes`.|
+
+> [!NOTE]
+> - For more information on using BreakingPoint Cloud with your Azure environment, see this [BreakingPoint Cloud blog](https://www.keysight.com/blogs/tech/nwvs/2020/11/17/six-simple-steps-to-understand-how-microsoft-azure-ddos-protection-works).
+> - For a video demonstration of utilizing BreakingPoint Cloud, see [DDoS Attack Simulation](https://www.youtube.com/watch?v=xFJS7RnX-Sw).
+++
+### Red Button
Red Button's [DDoS Testing](https://www.red-button.net/ddos-testing/) service suite includes three stages:
Red Button's [DDoS Testing](https://www.red-button.net/ddos-testing/) service
Here's an example of a [DDoS Test Report](https://www.red-button.net/wp-content/uploads/2021/06/DDoS-Test-Report-Example-with-Analysis.pdf) from Red Button:
-![DDoS Test Report Example](./media/ddos-attack-simulation/red-button-test-report-example.png)
In addition, Red Button offers two other service suites, [DDoS 360](https://www.red-button.net/prevent-ddos-attacks-with-ddos360/) and [DDoS Incident Response](https://www.red-button.net/ddos-incident-response/), that can complement the DDoS Testing service suite.
RedWolf offers an easy-to-use testing system that is either self-serve or guided
RedWolf's [DDoS Testing](https://www.redwolfsecurity.com/services/) service suite includes:
- - **Attack Vectors**: Unique cloud attacks designed by RedWolf.
- - **Guided and self service**: Leverage RedWolf's team or run tests yourself.
-
+ - **Attack Vectors**: Unique cloud attacks designed by RedWolf. For more information about RedWolf attack vectors, see [Technical Details](https://www.redwolfsecurity.com/redwolf-technical-details/).
+ - **Guided Service**: Leverage RedWolf's team to run tests. For more information about RedWolf's guided service, see [Guided Service](https://www.redwolfsecurity.com/managed-testing-explained/).
+ - **Self Service**: Leverage RedWolf to run tests yourself. For more information about RedWolf's self-service, see [Self Service](https://www.redwolfsecurity.com/self-serve-testing/).
## Next steps
-
-- Learn how to [view and configure DDoS protection telemetry](telemetry.md).
-- Learn how to [view and configure DDoS diagnostic logging](diagnostic-logging.md).
-- Learn how to [engage DDoS rapid response](ddos-rapid-response.md).
+To view attack metrics and alerts after an attack, continue to these next tutorials.
+
+> [!div class="nextstepaction"]
+> [View alerts in Defender for Cloud](ddos-view-alerts-defender-for-cloud.md)
+> [View diagnostic logs in a Log Analytics workspace](ddos-view-diagnostic-logs.md)
+> [Engage with Azure DDoS Rapid Response](ddos-rapid-response.md)
deployment-environments How To Configure Deployment Environments User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-deployment-environments-user.md
Based on the scope of access that you allow, a developer who has the Deployment
* View the project environment types. * Create an environment. * Read, write, delete, or perform actions (like deploy or reset) on their own environment.
-* Read or perform actions (like deploy or reset) on environments that other users created.
+* Read environments that other users created.
When you assign the role at the project level, the user can perform the preceding actions on all environment types enabled at the project level. When you assign the role to specific environment types, the user can perform the actions only on the respective environment types.
event-hubs Dynamically Add Partitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/dynamically-add-partitions.md
Event Hubs provides direct receivers and an easy consumer library called the [Ev
- **Event processor host** – This client doesn't automatically refresh the entity metadata. So, it wouldn't pick up on partition count increase. Recreating an event processor instance will cause an entity metadata fetch, which in turn will create new blobs for the newly added partitions. Pre-existing blobs won't be affected. Restarting all event processor instances is recommended to ensure that all instances are aware of the newly added partitions, and load-balancing is handled correctly among consumers. If you're using the old version of .NET SDK ([WindowsAzure.ServiceBus](https://www.nuget.org/packages/WindowsAzure.ServiceBus/)), the event processor host removes an existing checkpoint upon restart if partition count in the checkpoint doesn't match the partition count fetched from the service. This behavior may have an impact on your application.
+
+ [!INCLUDE [service-bus-track-0-and-1-sdk-support-retirement](../../includes/service-bus-track-0-and-1-sdk-support-retirement.md)]
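If you're on the current `azure-eventhub` Python SDK, one quick way to confirm that clients can see newly added partitions is to query the partition IDs after the update. This is only an illustrative sketch, not part of the official guidance; the connection string and event hub name are placeholders.

```python
from azure.eventhub import EventHubConsumerClient

# Placeholders: replace with your Event Hubs namespace connection string and event hub name.
CONNECTION_STR = "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<key-name>;SharedAccessKey=<key>"
EVENTHUB_NAME = "<event-hub-name>"

client = EventHubConsumerClient.from_connection_string(
    CONNECTION_STR,
    consumer_group="$Default",
    eventhub_name=EVENTHUB_NAME,
)

with client:
    # The client fetches entity metadata on demand, so this reflects the current partition count.
    partition_ids = client.get_partition_ids()
    print(f"The event hub currently has {len(partition_ids)} partitions: {partition_ids}")
```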
## Apache Kafka clients This section describes how Apache Kafka clients that use the Kafka endpoint of Azure Event Hubs behave when the partition count is updated for an event hub.
event-hubs Event Hubs Exchange Events Different Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-exchange-events-different-protocols.md
The advice in this article specifically covers these clients, with the listed ve
* Kafka Java client (version 1.1.1 from https://www.mvnrepository.com/artifact/org.apache.kafka/kafka-clients) * Microsoft Azure Event Hubs Client for Java (version 1.1.0 from https://github.com/Azure/azure-event-hubs-java) * Microsoft Azure Event Hubs Client for .NET (version 2.1.0 from https://github.com/Azure/azure-event-hubs-dotnet)
-* Microsoft Azure Service Bus (version 5.0.0 from https://www.nuget.org/packages/WindowsAzure.ServiceBus)
* HTTPS (supports producers only) Other AMQP clients may behave slightly differently. AMQP has a well-defined type system, but the specifics of serializing language-specific types to and from that type system depends on the client, as does how the client provides access to the parts of an AMQP message.
event-hubs Event Hubs Messaging Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-messaging-exceptions.md
This section lists the .NET exceptions generated by .NET Framework APIs.
> > For information about the EventHubsException raised by the new .NET library, see [EventHubsException - .NET](exceptions-dotnet.md) + ## Exception categories The Event Hubs .NET APIs generate exceptions that can fall into the following categories, along with the associated action you can take to try to fix them:
event-hubs Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/sdks.md
The following table describes all the latest available Azure Event Hubs runtime
The following table lists older Azure Event Hubs runtime clients. While these packages may receive critical bug fixes, they aren't in active development. We recommend using the latest SDKs listed in the above table instead. + | Language | Package | Reference | | -- | - | | | . NET Standard | [Microsoft.Azure.EventHubs](https://www.nuget.org/packages/Microsoft.Azure.EventHubs/) (**legacy**) | <ul><li>[GitHub location](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Microsoft.Azure.EventHubs)</li><li>[Tutorial](event-hubs-dotnet-standard-getstarted-send.md)</li></ul> |
expressroute About Upgrade Circuit Bandwidth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/about-upgrade-circuit-bandwidth.md
Previously updated : 06/30/2023 Last updated : 10/16/2023
ExpressRoute is a dedicated and private connection to Microsoft's global network
### Insufficient capacity for physical connection
-An ExpressRoute circuit is created on a physical connection between Microsoft and a ExpressRoute Partner. The physical connection has a fixed capacity. If you're unable to increase your circuit size that means that the underlying physical connection for your existing circuit doesn't have capacity for the upgrade. You need to create a new circuit if you want to change the circuit size.
+An ExpressRoute circuit is created on a physical connection between Microsoft and an ExpressRoute partner. The physical connection has a fixed capacity. If you're unable to increase your circuit size, it means that the underlying physical connection for your existing circuit doesn't have capacity for the upgrade. You need to create a new circuit if you want to change the circuit size. For more information, see [Migrate to a new ExpressRoute circuit](circuit-migration.md).
After you've successfully created the new ExpressRoute circuit, you should link your existing virtual networks to this circuit. You can then test and validate the connectivity of the new ExpressRoute circuit before you deprovision the old circuit. These recommended migration steps minimize down time and disruption to your production work load.
frontdoor Front Door Cdn Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-cdn-comparison.md
+
+ Title: Comparison between Azure Front Door and Azure CDN services
+description: This article provides a comparison between the different Azure Front Door tiers and Azure CDN services.
+++++ Last updated : 10/13/2023+++
+# Comparison between Azure Front Door and Azure CDN services
+
+Azure Front Door and Azure CDN are both Azure services that offer global content delivery with intelligent routing and caching capabilities at the application layer. Both services can be used to optimize and accelerate your applications by providing a globally distributed network of points of presence (POP) close to your users. Both services also offer a variety of features to help you secure your applications from malicious attacks and to help you monitor your application's health and performance.
++
+> [!NOTE]
+> To switch between tiers, you will need to recreate the Azure Front Door profile. You can use the [**migration capability**](migrate-tier.md) to move your existing Azure Front Door profile to the new tier. For more information about upgrading from Standard to Premium, see [**upgrade capability**](tier-upgrade.md).
+>
+
+## Service comparison
+
+The following table provides a comparison between Azure Front Door and Azure CDN services.
+
+| Features and optimizations | Front Door Standard | Front Door Premium | Azure CDN Classic | Azure CDN Standard Microsoft | Azure CDN Standard Edgio | Azure CDN Premium Edgio |
+| | | | | | | |
+| **Delivery and acceleration** | | | | | | |
+| Static file delivery | Yes | Yes | Yes | Yes | Yes | Yes |
+| Dynamic site delivery | Yes | Yes | Yes | No | Yes | Yes |
+| **Domains and Certs** | | | | | | |
+| Custom domains | Yes - DNS TXT record based domain validation | Yes - DNS TXT record based domain validation | Yes - CNAME based validation | Yes - CNAME based validation | Yes - CNAME based validation | Yes - CNAME based validation |
+| HTTPS support | Yes | Yes | Yes | Yes | Yes | Yes |
+| Custom domain HTTPS | Yes | Yes | Yes | Yes | Yes | Yes |
+| Bring your own certificate | Yes | Yes | Yes | Yes | Yes | Yes |
+| Supported TLS Versions | TLS 1.2, TLS 1.0 | TLS 1.2, TLS 1.0 | TLS 1.2, TLS 1.0 | TLS 1.2, TLS 1.0/1.1 | TLS 1.2, TLS 1.3 | TLS 1.2, TLS 1.3 |
+| **Caching** | | | | | | |
+| Query string caching | Yes | Yes | Yes | Yes | Yes | Yes |
+| Cache manage (purge, rules, and compression) | Yes | Yes | Yes | Yes | Yes | Yes |
+| Fast purge | No | No | No | No | Yes | Yes |
+| Asset pre-loading | No | No | No | No | Yes | Yes |
+| Cache behavior settings | Yes using Standard rules engine | Yes using Standard rules engine | Yes using Standard rules engine | Yes using Standard rules engine | Yes | Yes |
+| **Routing** | | | | | | |
+| Origin load balancing | Yes | Yes | Yes | Yes | Yes | Yes |
+| Path based routing | Yes | Yes | Yes | Yes | Yes | Yes |
+| Rules engine | Yes | Yes | Yes | Yes | Yes | Yes |
+| Server variable | Yes | Yes | No | No | No | No |
+| Regular expression in rules engine | Yes | Yes | No | No | No | Yes |
+| URL redirect/rewrite | Yes | Yes | Yes | Yes | No | Yes |
+| IPv4/IPv6 dual-stack | Yes | Yes | Yes | Yes | Yes | Yes |
+| HTTP/2 support | Yes | Yes | Yes | Yes | Yes | Yes |
+| Routing preference unmetered | Not required as data transfer from an Azure origin to AFD is free and the path is directly connected | Not required as data transfer from an Azure origin to AFD is free and the path is directly connected | Not required as data transfer from an Azure origin to AFD is free and the path is directly connected | Not required as data transfer from an Azure origin to CDN is free and the path is directly connected | Yes | Yes |
+| Origin Port | All TCP ports | All TCP ports | All TCP ports | All TCP ports | All TCP ports | All TCP ports |
+| Customizable, rules based content delivery engine | Yes | Yes | Yes | Yes using Standard rules engine | No | Yes using Premium rules engine |
+| Mobile device rules | Yes | Yes | Yes | Yes using Standard rules engine | No | Yes using Premium rules engine |
+| **Security** | | | | | | |
+| Custom Web Application Firewall (WAF) rules | Yes | Yes | Yes | No | No | No |
+| Microsoft managed rule set | No | Yes | Yes - Only default rule set 1.1 or below | No | No | No |
+| Bot protection | No | Yes | Yes - Only bot manager rule set 1.0 | No | No | No |
+| Private link connection to origin | No | Yes | No | No | No | No |
+| Geo-filtering | Yes | Yes | Yes | Yes | Yes | Yes |
+| Token authentication | No | No | No | No | No | Yes |
+| DDoS protection | Yes | Yes | Yes | Yes | Yes | Yes |
+| **Analytics and reporting** | | | | | | |
+| Monitoring Metrics | Yes (more metrics than Classic) | Yes (more metrics than Classic) | Yes | Yes | Yes | Yes |
+| Advanced analytics/built-in reports | Yes | Yes - includes WAF report | No | No | No | Yes |
+| Raw logs - access logs and WAF logs | Yes | Yes | Yes | Yes | Yes | Yes |
+| Health probe log | Yes | Yes | No | No | No | No |
+| **Ease of use** | | | | | | |
+| Easy integration with Azure services, such as Storage and Web Apps | Yes | Yes | Yes | Yes | Yes | Yes |
+| Management via REST API, .NET, Node.js, or PowerShell | Yes | Yes | Yes | Yes | Yes | Yes |
+| Compression MIME types | Configurable | Configurable | Configurable | Configurable | Configurable | Configurable |
+| Compression encodings | gzip, brotli | gzip, brotli | gzip, brotli | gzip, brotli | gzip, deflate, bzip2 | gzip, deflate, bzip2 |
+| Azure Policy integration | No | No | Yes | No | No | No |
+| Azure Advisory integration | Yes | Yes | No | No | Yes | Yes |
+| Managed Identities with Azure Key Vault | Yes | Yes | No | No | No | No |
+| **Pricing** | | | | | | |
+| Simplified pricing | Yes | Yes | No | Yes | Yes | Yes |
+
+## Next steps
+
+* Learn how to [create an Azure Front Door](create-front-door-portal.md).
+* Learn about the [Azure Front Door architecture](front-door-routing-architecture.md).
frontdoor Tier Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/tier-comparison.md
- Title: Azure Front Door tier comparison
-description: This article provides a comparison between the different Azure Front Door tiers and their features.
----- Previously updated : 04/04/2023---
-# Azure Front Door tier comparison
-
-Azure Front Door offers two different tiers, Standard and Premium. Both Azure Front Door tier combines capabilities of Azure Front Door (classic), Azure CDN Standard from Microsoft (classic), and Azure WAF into a single secure cloud CDN platform with intelligent threat protection.
--
-> [!NOTE]
-> To switch between tiers, you will need to recreate the Azure Front Door profile. Currently in Public Preview, you can use the [**migration capability**](../migrate-tier.md) to move your existing Azure Front Door profile to the new tier. For more information about upgrading from Standard to Premium, see [**upgrade capability**](../tier-upgrade.md).
->
-
-## Feature comparison between tiers
-
-| Features and optimization | Standard | Premium | Classic |
-|--|--|--|--|
-| Static file delivery | Yes | Yes | Yes |
-| Dynamic site delivery | Yes | Yes | Yes |
-| Custom domains | Yes - DNS TXT record based domain validation | Yes - DNS TXT record based domain validation | Yes - CNAME based validation |
-| Cache manage (purge, rules, and compression) | Yes | Yes | Yes |
-| Origin load balancing | Yes | Yes | Yes |
-| Path based routing | Yes | Yes | Yes |
-| Rules engine | Yes | Yes | Yes |
-| Server variable | Yes | Yes | No |
-| Regular expression in rules engine | Yes | Yes | No |
-| Expanded metrics | Yes | Yes | No |
-| Advanced analytics/built-in reports | Yes | Yes - includes WAF report | No |
-| Raw logs - access logs and WAF logs | Yes | Yes | Yes |
-| Health probe log | Yes | Yes | No |
-| Custom Web Application Firewall (WAF) rules | Yes | Yes | Yes |
-| Microsoft managed rule set | No | Yes | Yes - Only default rule set 1.1 or below |
-| Bot protection | No | Yes | Yes - Only bot manager rule set 1.0 |
-| Private link connection to origin | No | Yes | No |
-| Simplified price (base + usage) | Yes | Yes | No |
-| Azure Policy integration | Yes | Yes | No |
-| Azure Advisory integration | Yes | Yes | No |
-
-## Next steps
-
-* Learn how to [create an Azure Front Door](create-front-door-portal.md)
-* Learn how about the [Azure Front Door architecture](../front-door-routing-architecture.md)
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/overview.md
Azure Policy operations can have a significant effect on your Azure environment.
> permissions to create or update targeted resources. For more information, see > [Configure policy definitions for remediation](./how-to/remediate-resources.md#configure-the-policy-definition).
-### Special permissions requirement for Azure Policy with Azure Virtual Network Manager (preview)
+### Special permissions requirement for Azure Policy with Azure Virtual Network Manager
[Azure Virtual Network Manager (preview)](../../virtual-network-manager/overview.md) enables you to apply consistent management and security policies to multiple Azure virtual networks (VNets) throughout your cloud infrastructure. Azure Virtual Network Manager (AVNM) dynamic groups use Azure Policy definitions to evaluate VNet membership in those groups.
hdinsight Benefits Of Migrating To Hdinsight 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/benefits-of-migrating-to-hdinsight-40.md
Title: Benefits of migrating to Azure HDInsight 4.0.
description: Learn the benefits of migrating to Azure HDInsight 4.0. Previously updated : 09/23/2022 Last updated : 10/16/2023 # Significant version changes in HDInsight 4.0 and advantages
HDInsight 4.0 has several advantages over HDInsight 3.6. Here's an overview of w
- Information schema. - Performance advantage - Result caching - Caching query results allow a previously computed query result to be reused
- - Dynamic materialized views - Pre-computation of summaries
+ - Dynamic materialized views - Precomputation of summaries
- ACID V2 performance improvements in both storage format and execution engine - Security - GDPR compliance enabled on Apache Hive transactions
HDInsight 4.0 has several advantages over HDInsight 3.6. Here's an overview of w
- Spark Cruise - an automatic computation reuse system for Spark. - Performance advantage - Result caching - Caching query results allow a previously computed query result to be reused
- - Dynamic materialized views - Pre-computation of summaries
+ - Dynamic materialized views - Precomputation of summaries
- Security - GDPR compliance enabled for Spark transactions
Added timestamp column for Parquet vectorization and format under LLAP.
1. You can enable and disable the query result cache from command line. You might want to do so to debug a query. 1. Disable the query result cache by setting the following parameter to false: `hive.query.results.cache.enabled=false` 1. Hive stores the query result cache in `/tmp/hive/__resultcache__/`. By default, Hive allocates 2 GB for the query result cache. You can change this setting by configuring the following parameter in bytes: `hive.query.results.cache.max.size`
-1. Changes to query processing: During query compilation, check the results cache to see if it already has the query results. If there's a cache hit, then the query plan will be set to a `FetchTask` that will read from the cached location.
+1. Changes to query processing: During query compilation, check the results cache to see if it already has the query results. If there's a cache hit, then the query plan is set to a `FetchTask` that reads from the cached location.
During query execution: Parquet `DataWriteableWriter` relies on `NanoTimeUtils` to convert a timestamp object into a binary value. This query calls `toString()` on the timestamp object, and then parses the String. 1. If the results cache can be used for this query
- 1. The query will be the `FetchTask` reading from the cached results directory.
- 1. No cluster tasks will be required.
+ 1. The query is a `FetchTask` reading from the cached results directory.
+ 1. No cluster tasks are required.
1. If the results cache can't be used, run the cluster tasks as normal 1. Check if the query results that have been computed are eligible to add to the results cache.
- 1. If results can be cached, the temporary results generated for the query will be saved to the results cache. Steps may need to be done here to ensure the query results directory isn't deleted by query clean-up.
+ 1. If results can be cached, the temporary results generated for the query are saved to the results cache. You might need to perform steps here to ensure that the query clean-up does not delete the query results directory.
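As an illustration of the settings called out in the list above, the following sketch toggles the result cache for a single HiveServer2 session. It assumes the PyHive client and placeholder connection details; the sample table is the one HDInsight ships by default, so substitute your own if needed.

```python
from pyhive import hive  # pip install "pyhive[hive]"

# Placeholder connection details for a HiveServer2 endpoint.
conn = hive.Connection(host="<hiveserver2-host>", port=10000, username="admin")
cursor = conn.cursor()

# Disable the query result cache for this session, for example while debugging a query.
cursor.execute("SET hive.query.results.cache.enabled=false")

# Raise the cache size to 4 GB (the value is in bytes); the default is 2 GB.
cursor.execute("SET hive.query.results.cache.max.size=4294967296")

# Re-enable the cache; a repeated query can then be served from /tmp/hive/__resultcache__/.
cursor.execute("SET hive.query.results.cache.enabled=true")
cursor.execute("SELECT COUNT(*) FROM hivesampletable")  # hivesampletable ships with HDInsight clusters
print(cursor.fetchall())
```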
## SQL features
For more information, see https://cwiki.apache.org/confluence/display/Hive/Suppo
### Metastore `CachedStore`
-Hive metastore operation takes much time and thus slow down Hive compilation. In some extreme case, it takes much longer than the actual query run time. Especially, we find the latency of cloud db is high and 90% of total query runtime is waiting for metastore SQL database operations. Based on this observation, the metastore operation performance will be greatly enhanced, if we have a memory structure which cache the database query result.
+Hive metastore operations take a long time and thus slow down Hive compilation. In some extreme cases, they take longer than the actual query run time. In particular, the latency of a cloud database is high, and 90% of total query runtime can be spent waiting for metastore SQL database operations. Based on this observation, metastore operation performance is enhanced if there's an in-memory structure that caches the database query results.
`hive.metastore.rawstore.impl=org.apache.hadoop.hive.metastore.cache.CachedStore`
https://hadoop.apache.org/docs/r3.1.1/hadoop-project-dist/hadoop-common/release/
## Further reading * [HDInsight 4.0 Announcement](./hdinsight-version-release.md)
-* [HDInsight 4.0 deep dive](https://azure.microsoft.com/blog/deep-dive-into-azure-hdinsight-4-0/)
+* [HDInsight 4.0 deep dive](https://azure.microsoft.com/blog/deep-dive-into-azure-hdinsight-4-0/)
hdinsight Apache Hbase Provision Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-provision-vnet.md
description: Get started using HBase in Azure HDInsight. Learn how to create HDI
Previously updated : 09/15/2022 Last updated : 10/16/2023 # Create Apache HBase clusters on HDInsight in Azure Virtual Network
In this section, you create a Linux-based Apache HBase cluster with the dependen
|Cluster Login User Name and Password|The default User Name is **admin**. Provide a password.| |Ssh User Name and Password|The default User Name is **sshuser**. Provide a password.|
- Select **I agree to the terms and the conditions stated above**.
+ Select **I agree to the terms and the conditions**.
1. Select **Purchase**. It takes about 20 minutes to create a cluster. Once the cluster is created, you can select the cluster in the portal to open it.
Create an infrastructure as a service (IaaS) virtual machine into the same Azure
> [!IMPORTANT] > Replace `CLUSTERNAME` with the name you used when creating the HDInsight cluster in previous steps.
-Using these values, the virtual machine is placed in the same virtual network and subnet as the HDInsight cluster. This configuration allows them to directly communicate with each other. There is a way to create an HDInsight cluster with an empty edge node. The edge node can be used to manage the cluster. For more information, see [Use empty edge nodes in HDInsight](../hdinsight-apps-use-edge-node.md).
+By using these values, the virtual machine is placed in the same virtual network and subnet as the HDInsight cluster. This configuration allows them to directly communicate with each other. There is a way to create an HDInsight cluster with an empty edge node. The edge node can be used to manage the cluster. For more information, see [Use empty edge nodes in HDInsight](../hdinsight-apps-use-edge-node.md).
### Obtain fully qualified domain name
-When using a Java application to connect to HBase remotely, you must use the fully qualified domain name (FQDN). To determine this, you must get the connection-specific DNS suffix of the HBase cluster. To do that, you can use one of the following methods:
+When you use a Java application to connect to HBase remotely, you must use the fully qualified domain name (FQDN). To determine the FQDN, you must get the connection-specific DNS suffix of the HBase cluster. To do that, you can use one of the following methods:
* Use a Web browser to make an [Apache Ambari](https://ambari.apache.org/) call:
hdinsight Hdinsight Administer Use Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-administer-use-powershell.md
description: Learn how to perform administrative tasks for the Apache Hadoop clu
Previously updated : 09/19/2022 Last updated : 10/16/2023 # Manage Apache Hadoop clusters in HDInsight by using Azure PowerShell
hdinsight Hdinsight Apps Install Hiveserver2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-apps-install-hiveserver2.md
Previously updated : 09/28/2022 Last updated : 10/16/2023 # Scale HiveServer2 on Azure HDInsight Clusters for High Availability
Learn how to deploy an additional HiveServer2 into your cluster to increase avai
## Prerequisites
-To use this guide, you'll need to understand the following article:
+To use this guide, you need to understand the following article:
- [Use empty edge nodes on Apache Hadoop clusters in HDInsight](hdinsight-apps-use-edge-node.md) ## Install HiveServer2
-In this section, you'll install an additional HiveServer2 onto your target hosts.
+In this section, you install an additional HiveServer2 onto your target hosts.
1. Open Ambari in your browser and click on your target host.
In this section, you'll install an additional HiveServer2 onto your target hosts
:::image type="content" source="media/hdinsight-apps-install-hiveserver2/hdinsight-install-hiveserver2-b.png" alt-text="Add HiveServer2 panel of host.":::
-3. Confirm and the process will run. Repeat 1-3 for all desired hosts.
+3. Confirm your selection, and the process runs. Repeat steps 1-3 for all desired hosts.
4. When you have finished installing, restart all services with stale configs and start HiveServer2.
hdinsight Hdinsight Config For Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-config-for-vscode.md
Title: Azure HDInsight configuration settings reference
description: Introduce the configuration of Azure HDInsight extension. Previously updated : 09/19/2023 Last updated : 10/16/2023
For general information about working with settings in VS Code, refer to [User a
| HDInsight: Enable Skip Pyspark Installation | Unchecked | Enable/Disable skipping pyspark installation | | HDInsight: Login Tips Enable | Unchecked | When this option is checked, there is a prompt when logging in to Azure | | HDInsight: Previous Extension Version | Display the version number of the current extension | Show the previous extension version|
-| HDInsight: Results Font Family | -apple-system,BlinkMacSystemFont,Segoe WPC,Segoe UI,HelveticaNeue-Light,Ubuntu,Droid Sans,sans-serif | Set the font family for the results grid; set to blank to use the editor font |
+| HDInsight: Results Font Family | -apple-system, BlinkMacSystemFont, Segoe WPC, Segoe UI, HelveticaNeue-Light, Ubuntu, Droid Sans, sans-serif | Set the font family for the results grid; set to blank to use the editor font |
| HDInsight: Results Font Size | 13 |Set the font size for the results gird; set to blank to use the editor size | | HDInsight Cluster: Linked Cluster | -- | Linked clusters urls. Also can edit the JSON file to set | | HDInsight Hive: Apply Localization | Unchecked | [Optional] Configuration options for localizing into Visual Studio Code's configured locale (must restart Visual Studio Code for settings to take effect)|
For general information about working with settings in VS Code, refer to [User a
| HDInsight Hive › Format: Datatype Casing | none | Should data types be formatted as UPPERCASE, lowercase, or none (not formatted) | | HDInsight Hive › Format: Keyword Casing | none | Should keywords be formatted as UPPERCASE, lowercase, or none (not formatted) | | HDInsight Hive › Format: Place Commas Before Next Statement | Unchecked | Should commas be placed at the beginning of each statement in a list for example ', mycolumn2' instead of at the end 'mycolumn1,'|
-| HDInsight Hive › Format: Place Select Statement References On New Line | Unchecked | Should references to objects in a SELECT statement be split into separate lines? For example, for 'SELECT C1, C2 FROM T1' both C1 and C2 is on separate lines
+| HDInsight Hive › Format: Place Select Statement References On New Line | Unchecked | Should references to objects in a SELECT statement be split into separate lines? For example, for 'SELECT C1, C2 FROM T1', both C1 and C2 are on separate lines
| HDInsight Hive: Log Debug Info | Unchecked | [Optional] Log debug output to the VS Code console (Help -> Toggle Developer Tools) | HDInsight Hive: Messages Default Open | Checked | True for the messages pane to be open by default; false for closed| | HDInsight Hive: Results Font Family | -apple-system, BlinkMacSystemFont, Segoe WPC,Segoe UI, HelveticaNeue-Light, Ubuntu, Droid Sans, sans-serif | Set the font family for the results grid; set to blank to use the editor font |
For general information about working with settings in VS Code, refer to [User a
| HDInsight Job Submission: Livy `Conf` | -- | Livy Configuration. POST/batches | | HDInsight Jupyter: Append Results| Checked | Whether to append the results to the results window or to clear and display them. | | HDInsight Jupyter: Languages | -- | Default settings per language. |
-| HDInsight Jupyter › Log: Verbose | Unchecked | If enable verbose logging. |
+| HDInsight Jupyter › Log: Verbose | Unchecked | Whether to enable verbose logging. |
| HDInsight Jupyter › Notebook: Startup Args | Can add item | `jupyter notebook` command-line arguments. Each argument is a separate item in the array. For a full list, type `jupyter notebook --help` in a terminal window. | | HDInsight Jupyter › Notebook: Startup Folder | ${workspaceRoot} |-- | | HDInsight Jupyter: Python Extension Enabled | Checked | Use Python-Interactive-Window of ms-python extension when submitting pySpark Interactive jobs. Otherwise, use our own `jupyter` window. |
For general information about working with settings in VS Code, refer to [User a
| HDInsight Spark.NET: SPARK_HOME | D:\spark-2.3.3-bin-hadoop2.7\ | Path to Spark Home | | Hive: Persist Query Result Tabs | Unchecked | Hive PersistQueryResultTabs | | Hive: Split Pane Selection | next | [Optional] Configuration options for which column new result panes should open in |
-| Hive Interactive: Copy Executable Folder | Unchecked | If copy the hive interactive service runtime folder to user's tmp folder. |
+| Hive Interactive: Copy Executable Folder | Unchecked | Whether to copy the Hive Interactive Service runtime folder to the user's tmp folder. |
| Hql Interactive Server: Wrapper Port | 13424 | Hive interactive service port | | Hql Language Server: Language Wrapper Port | 12342 | Hive language service port servers listen to. | | Hql Language Server: Max Number Of Problems | 100 | Controls the maximum number of problems produced by the server. |
hdinsight Hdinsight Hadoop Optimize Hive Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-optimize-hive-query.md
description: This article describes how to optimize your Apache Hive queries in
Previously updated : 09/21/2022 Last updated : 10/16/2023 # Optimize Apache Hive queries in Azure HDInsight
For more information on running Hive queries on various HDInsight cluster types,
## Scale out worker nodes
-Increasing the number of worker nodes in an HDInsight cluster allows the work to use more mappers and reducers to be run in parallel. There are two ways you can increase scale out in HDInsight:
+Increasing the number of worker nodes in an HDInsight cluster allows more mappers and reducers to run in parallel. There are two ways you can increase scale out in HDInsight:
* When you create a cluster, you can specify the number of worker nodes using the Azure portal, Azure PowerShell, or command-line interface. For more information, see [Create HDInsight clusters](hdinsight-hadoop-provision-linux-clusters.md). The following screenshot shows the worker node configuration on the Azure portal:
Tez is faster because:
* **Execute Directed Acyclic Graph (DAG) as a single job in the MapReduce engine**. The DAG requires each set of mappers to be followed by one set of reducers. This requirement causes multiple MapReduce jobs to be spun off for each Hive query. Tez doesn't have such constraint and can process complex DAG as one job minimizing job startup overhead. * **Avoids unnecessary writes**. Multiple jobs are used to process the same Hive query in the MapReduce engine. The output of each MapReduce job is written to HDFS for intermediate data. Since Tez minimizes number of jobs for each Hive query, it's able to avoid unnecessary writes. * **Minimizes start-up delays**. Tez is better able to minimize start-up delay by reducing the number of mappers it needs to start and also improving optimization throughout.
-* **Reuses containers**. Whenever possible Tez will reuse containers to ensure that latency from starting up containers is reduced.
+* **Reuses containers**. Whenever possible, Tez reuses containers to ensure that the latency of starting up containers is reduced.
* **Continuous optimization techniques**. Traditionally optimization was done during compilation phase. However more information about the inputs is available that allow for better optimization during runtime. Tez uses continuous optimization techniques that allow it to optimize the plan further into the runtime phase. For more information on these concepts, see [Apache TEZ](https://tez.apache.org/).
set hive.execution.engine=tez;
I/O operations are the major performance bottleneck for running Hive queries. The performance can be improved if the amount of data that needs to be read can be reduced. By default, Hive queries scan entire Hive tables. However for queries that only need to scan a small amount of data (for example, queries with filtering), this behavior creates unnecessary overhead. Hive partitioning allows Hive queries to access only the necessary amount of data in Hive tables.
-Hive partitioning is implemented by reorganizing the raw data into new directories. Each partition has its own file directory. The partitioning is defined by the user. The following diagram illustrates partitioning a Hive table by the column *Year*. A new directory is created for each year.
+Hive partitioning is implemented by reorganizing the raw data into new directories. Each partition has its own file directory. The user defines the partitioning. The following diagram illustrates partitioning a Hive table by the column *Year*. A new directory is created for each year.
:::image type="content" source="./media/hdinsight-hadoop-optimize-hive-query/hdinsight-partitioning.png" alt-text="HDInsight Apache Hive partitioning":::
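To make the directory-per-partition layout concrete, here's a minimal sketch that creates a year-partitioned table and writes one partition. The connection details, table name, and column names are placeholders invented for illustration, not part of the original article.

```python
from pyhive import hive  # pip install "pyhive[hive]"

# Placeholder connection details; table and column names are invented for illustration.
conn = hive.Connection(host="<hiveserver2-host>", port=10000, username="admin")
cursor = conn.cursor()

# Each distinct value of the partition column gets its own directory under the table location,
# for example .../sales_partitioned/year=2023/.
cursor.execute("""
    CREATE TABLE IF NOT EXISTS sales_partitioned (
        order_id BIGINT,
        amount   DOUBLE
    )
    PARTITIONED BY (year INT)
    STORED AS ORC
""")

# Static partition insert: these rows land in the year=2023 directory.
cursor.execute("INSERT INTO TABLE sales_partitioned PARTITION (year=2023) VALUES (1, 19.99), (2, 5.49)")

# A filter on the partition column lets Hive scan only the matching directories.
cursor.execute("SELECT COUNT(*) FROM sales_partitioned WHERE year = 2023")
print(cursor.fetchall())
```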
hdinsight Hdinsight Overview Before You Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-overview-before-you-start.md
Title: Before you start with Azure HDInsight
description: In Azure HDInsight, few points to be considered before starting to create a cluster. Previously updated : 09/22/2022 Last updated : 10/16/2023 # Consider the below points before starting to create a cluster.
HDInsight have two options to configure the databases in the clusters.
1. Bring your own database (external) 1. Default database (internal)
-During cluster creation, default configuration will use internal database. Once the cluster is created, customer can't change the database type. Hence, it's recommended to create and use the external database. You can create custom databases for Ambari, Hive, and Ranger.
+During cluster creation, the default configuration uses the internal database. Once the cluster is created, the customer can't change the database type. Hence, it's recommended to create and use an external database. You can create custom databases for Ambari, Hive, and Ranger.
For more information, see how to [Set up HDInsight clusters with a custom Ambari DB](./hdinsight-custom-ambari-db.md)
For more information, see how to [Migrate HDInsight cluster to a newer version](
* [Create Apache Hadoop cluster in HDInsight](./hadoop/apache-hadoop-linux-create-cluster-get-started-portal.md) * [Create Apache Spark cluster - Portal](./spark/apache-spark-jupyter-spark-sql-use-portal.md)
-* [Enterprise security in Azure HDInsight](./domain-joined/hdinsight-security-overview.md)
+* [Enterprise security in Azure HDInsight](./domain-joined/hdinsight-security-overview.md)
hdinsight Hdinsight Sdk Dotnet Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-sdk-dotnet-samples.md
description: Find C# .NET examples on GitHub for common tasks using the HDInsigh
Previously updated : 08/30/2022 Last updated : 10/16/2023 # Azure HDInsight: .NET samples
hdinsight Hdinsight Troubleshoot Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-troubleshoot-hive.md
description: Get answers to common questions about working with Apache Hive and
keywords: Azure HDInsight, Hive, FAQ, troubleshooting guide, common questions Previously updated : 09/23/2022 Last updated : 10/16/2023 # Troubleshoot Apache Hive by using Azure HDInsight
hdinsight Apache Hadoop Connect Hive Power Bi Directquery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hadoop-connect-hive-power-bi-directquery.md
description: Use Microsoft Power BI to visualize Interactive Query Hive data fro
Previously updated : 09/15/2022 Last updated : 10/16/2023 # Visualize Interactive Query Apache Hive data with Microsoft Power BI using direct query in HDInsight
This article describes how to connect Microsoft Power BI to Azure HDInsight Inte
:::image type="content" source="./media/apache-hadoop-connect-hive-power-bi-directquery/hdinsight-power-bi-visualization.png" alt-text="HDInsight Power BI the map report" border="true":::
-You can leverage the [Apache Hive ODBC driver](../hadoop/apache-hadoop-connect-hive-power-bi.md) to do import via the generic ODBC connector in Power BI Desktop. However it is not recommended for BI workloads given non-interactive nature of the Hive query engine. [HDInsight Interactive Query connector](./apache-hadoop-connect-hive-power-bi-directquery.md) and [HDInsight Apache Spark connector](/power-bi/spark-on-hdinsight-with-direct-connect) are better choices for their performance.
+You can use the [Apache Hive ODBC driver](../hadoop/apache-hadoop-connect-hive-power-bi.md) to import data via the generic ODBC connector in Power BI Desktop. However, it isn't recommended for BI workloads, given the non-interactive nature of the Hive query engine. [HDInsight Interactive Query connector](./apache-hadoop-connect-hive-power-bi-directquery.md) and [HDInsight Apache Spark connector](/power-bi/spark-on-hdinsight-with-direct-connect) are better choices for their performance.
## Prerequisites Before going through this article, you must have the following items:
hdinsight Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/quickstart-bicep.md
Previously updated : 07/19/2022 Last updated : 10/16/2023 #Customer intent: As a developer new to Interactive Query on Azure, I need to see how to create an Interactive Query cluster.
Two Azure resources are defined in the Bicep file:
* Replace **\<cluster-name\>** with the name of the HDInsight cluster to create. * Replace **\<cluster-username\>** with the credentials used to submit jobs to the cluster and to log in to cluster dashboards.
- * Replace **\<ssh-username\>** with the credentials used to remotely access the cluster. The username cannot be admin.
+ * Replace **\<ssh-username\>** with the credentials used to remotely access the cluster. The username can't be the admin username.
- You'll also be prompted to enter the following:
+ You are prompted to enter the following passwords:
* **clusterLoginPassword**, which must be at least 10 characters long and contain one digit, one uppercase letter, one lowercase letter, and one non-alphanumeric character except single-quote, double-quote, backslash, right-bracket, full-stop. It also must not contain three consecutive characters from the cluster username or SSH username. * **sshPassword**, which must be 6-72 characters long and must contain at least one digit, one uppercase letter, and one lowercase letter. It must not contain any three consecutive characters from the cluster login name.
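If you want a rough local sanity check of those password rules before deploying, a sketch like the following can help. It's only an approximation of the constraints described above (the service performs its own validation), and the helper names are invented for illustration.

```python
import re

# Characters that don't count toward the non-alphanumeric requirement, per the description above.
EXCLUDED_SPECIALS = set("'\"\\].")

def contains_three_consecutive_chars(name: str, password: str) -> bool:
    """True if any three consecutive characters of `name` appear in `password` (case-insensitive)."""
    name, password = name.lower(), password.lower()
    return any(name[i:i + 3] in password for i in range(max(len(name) - 2, 0)))

def check_cluster_login_password(password: str, cluster_username: str, ssh_username: str) -> list:
    """Rough local approximation of the clusterLoginPassword rules; the service does its own validation."""
    problems = []
    if len(password) < 10:
        problems.append("must be at least 10 characters long")
    if not re.search(r"\d", password):
        problems.append("must contain a digit")
    if not re.search(r"[A-Z]", password):
        problems.append("must contain an uppercase letter")
    if not re.search(r"[a-z]", password):
        problems.append("must contain a lowercase letter")
    if not {c for c in password if not c.isalnum()} - EXCLUDED_SPECIALS:
        problems.append("must contain a non-alphanumeric character other than ' \" \\ ] .")
    for name in (cluster_username, ssh_username):
        if contains_three_consecutive_chars(name, password):
            problems.append(f"must not contain three consecutive characters from '{name}'")
    return problems

print(check_cluster_login_password("Sup3r!longpass", "admin", "sshuser"))  # [] means the rough check passed
```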
healthcare-apis Dicom Service V2 Api Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-service-v2-api-changes.md
Previously updated : 7/21/2023 Last updated : 10/13/2023 # DICOM Service API v2 Changes
-This reference guide provides you with a summary of the changes in the V2 API of the DICOM service. To see the full set of capabilities in v2, see the [DICOM Conformance Statement v2](dicom-services-conformance-statement-v2.md).
+This reference guide provides you with a summary of the changes in the V2 API of the DICOM&reg; service. To see the full set of capabilities in v2, see the [DICOM Conformance Statement v2](dicom-services-conformance-statement-v2.md).
## Summary of changes in v2 ### Store #### Lenient validation of optional attributes
-In previous versions, a Store request would fail if any of the [required](dicom-services-conformance-statement-v2.md#store-required-attributes) or [searchable attributes](dicom-services-conformance-statement-v2.md#searchable-attributes) failed validation. Beginning with v2, the request fails only if **required attributes** fail validation.
+In previous versions, a Store request fails if any of the [required](dicom-services-conformance-statement-v2.md#store-required-attributes) or [searchable attributes](dicom-services-conformance-statement-v2.md#searchable-attributes) fails validation. Beginning with v2, the request fails only if **required attributes** fail validation.
-Failed validation of attributes not required by the API results in the file being stored with a warning in the response. Warnings result in an HTTP return code of `202 Accepted` and the response payload will contain the `WarningReason` tag (`0008, 1196`).
+Failed validation of attributes not required by the API results in the file being stored with a warning in the response. Warnings result in an HTTP return code of `202 Accepted` and the response payload contains the `WarningReason` tag (`0008, 1196`).
A warning is given about each failing attribute per instance. When a sequence contains an attribute that fails validation, or when there are multiple issues with a single attribute, only the first failing attribute reason is noted. There are some notable behaviors for optional attributes that fail validation:
- * Searches for the attribute that failed validation will not return the study/series/instance.
- * The attributes are not returned when retrieving metadata via WADO `/metadata` endpoints.
+ * Searches for the attribute that failed validation don't return the study/series/instance.
+ * The attributes aren't returned when retrieving metadata via WADO `/metadata` endpoints.
-Retrieving a study/series/instance will always return the original binary files with the original attributes, even if those attributes failed validation.
+Retrieving a study/series/instance always returns the original binary files with the original attributes, even if those attributes failed validation.
If an attribute is padded with nulls, the attribute is indexed when searchable and is stored as is in dicom+json metadata. No validation warning is provided.
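To observe the lenient validation behavior, a hedged sketch using the Python `requests` library follows. The service URL and token are placeholders, and the exact JSON path to the `WarningReason` tag is an assumption; check the conformance statement for the authoritative response shape.

```python
import requests

SERVICE_URL = "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com"  # placeholder
TOKEN = "<access-token>"  # placeholder: acquire a Microsoft Entra ID token for the DICOM service

with open("instance.dcm", "rb") as f:
    dicom_bytes = f.read()

# Build a multipart/related body by hand; requests has no built-in helper for this content type.
boundary = "DICOMBOUNDARY"
body = (
    f"--{boundary}\r\nContent-Type: application/dicom\r\n\r\n".encode()
    + dicom_bytes
    + f"\r\n--{boundary}--\r\n".encode()
)

response = requests.post(
    f"{SERVICE_URL}/v2/studies",
    data=body,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/dicom+json",
        "Content-Type": f'multipart/related; type="application/dicom"; boundary={boundary}',
    },
)

print(response.status_code)  # 200 = stored cleanly, 202 = stored with validation warnings
if response.status_code == 202:
    # Assumed response shape: ReferencedSOPSequence (0008,1199) items may carry WarningReason (0008,1196).
    for item in response.json().get("00081199", {}).get("Value", []):
        print("WarningReason:", item.get("00081196", {}).get("Value"))
```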
Single frame retrieval is supported by adding the following `Accept` header:
### Search
-#### Search results may be incomplete for extended query tags with validation warnings
-In the v1 API and continued for v2, if an [extended query tag](dicom-extended-query-tags-overview.md) has any errors, because one or more of the existing instances had a tag value that couldn't be indexed, then subsequent search queries containing the extended query tag return `erroneous-dicom-attributes` as detailed in the [documentation](dicom-extended-query-tags-overview.md#tag-query-status). However, tags (also known as attributes) with validation warnings from STOW-RS are **not** included in this header. If a store request results in validation warnings on [searchable tags](dicom-services-conformance-statement-v2.md#searchable-attributes), subsequent searches containing these tags won't consider any DICOM SOP instance that produced a warning. This behavior may result in incomplete search results. To correct an attribute, delete the stored instance and upload the corrected data.
+#### Search results might be incomplete for extended query tags with validation warnings
+In the v1 API and continued for v2, if an [extended query tag](dicom-extended-query-tags-overview.md) has any errors, because one or more of the existing instances had a tag value that couldn't be indexed, then subsequent search queries containing the extended query tag return `erroneous-dicom-attributes` as detailed in the [documentation](dicom-extended-query-tags-overview.md#tag-query-status). However, tags (also known as attributes) with validation warnings from STOW-RS are **not** included in this header. If a store request results in validation warnings on [searchable tags](dicom-services-conformance-statement-v2.md#searchable-attributes), subsequent searches containing these tags don't consider any DICOM SOP instance that produced a warning. This behavior might result in incomplete search results. To correct an attribute, delete the stored instance and upload the corrected data.
#### Fewer Study, Series, and Instance attributes are returned by default The set of attributes returned by default has been reduced to improve performance. See the detailed list in the [search response](./dicom-services-conformance-statement-v2.md#search-response) documentation. #### Null padded attributes can be searched for with or without padding
-When an attribute was stored using null padding, it can be searched for with or without the null padding in uri encoding. Results retrieved will be for attributes stored both with and without null padding.
+When an attribute was stored using null padding, it can be searched for with or without the null padding in uri encoding. Results retrieved are for attributes stored both with and without null padding.
### Operations
To align with [Microsoft's REST API guidelines](https://github.com/microsoft/api
#### Change feed now accepts a time range The Change Feed API now accepts optional `startTime` and `endTime` parameters to help scope the results. Changes within a time range can still be paginated using the existing `offset` and `limit` parameters. The offset is relative to the time window defined by `startTime` and `endTime`. For example, the fifth change feed entry starting from 7/24/2023 at 09:00 AM UTC would use the query string `?startTime=2023-07-24T09:00:00Z&offset=5`.
-For v2, it's recommended to always include a time range to improve performance.
+For v2, it's recommended to always include a time range to improve performance.
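As a rough illustration of scoping the change feed with a time range, the sketch below queries the last 24 hours and pages with `offset`/`limit`. The service URL, token, and response field names are placeholders or assumptions; inspect the raw response for the exact shape your service returns.

```python
import requests
from datetime import datetime, timedelta, timezone

SERVICE_URL = "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com"  # placeholder
TOKEN = "<access-token>"  # placeholder

# Scope the change feed to the last 24 hours, then page through it with offset/limit.
end = datetime.now(timezone.utc)
start = end - timedelta(hours=24)

response = requests.get(
    f"{SERVICE_URL}/v2/changefeed",
    params={
        "startTime": start.strftime("%Y-%m-%dT%H:%M:%SZ"),
        "endTime": end.strftime("%Y-%m-%dT%H:%M:%SZ"),
        "offset": 0,
        "limit": 100,
    },
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"},
)

for change in response.json():
    # Field names are assumed; adjust after inspecting an actual response.
    print(change.get("action"), change.get("sopInstanceUid"))
```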
+
healthcare-apis Dicom Services Conformance Statement V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement-v2.md
Previously updated : 4/20/2023 Last updated : 10/13/2023 # DICOM Conformance Statement v2 > [!NOTE]
-> API version 2 is the latest API version. For a list of changes in v2 compared to v1, see [DICOM Service API v2 Changes](dicom-service-v2-api-changes.md)
+> API version 2 is the latest API version. For a list of changes in v2 compared to v1, see [DICOM service API v2 changes](dicom-service-v2-api-changes.md)
-The Medical Imaging Server for DICOM supports a subset of the DICOMweb™ Standard. Support includes:
+The Medical Imaging Server for DICOM&reg; supports a subset of the DICOMweb™ Standard. Support includes:
* [Studies Service](#studies-service) * [Store (STOW-RS)](#store-stow-rs)
Each file stored must have a unique combination of `StudyInstanceUID`, `SeriesIn
Only transfer syntaxes with explicit Value Representations are accepted. > [!NOTE]
-> Requests are limited to 2GB. No single DICOM file or combination of files may exceed this limit.
+> Requests are limited to 2 GB. No single DICOM file or combination of files can exceed this limit.
#### Store changes from v1 In previous versions, a Store request would fail if any of the [required](#store-required-attributes) or [searchable attributes](#searchable-attributes) failed validation. Beginning with V2, the request fails only if **required attributes** fail validation.
If an attribute is padded with nulls, the attribute is indexed when searchable a
| Code | Description | | : |:| | `200 (OK)` | All the SOP instances in the request have been stored. |
-| `202 (Accepted)` | The origin server stored some of the Instances and others have failed or returned warnings. Additional information regarding this error may be found in the response message body. |
+| `202 (Accepted)` | The origin server stored some of the Instances and others have failed or returned warnings. Additional information regarding this error might be found in the response message body. |
| `204 (No Content)` | No content was provided in the store transaction request. | | `400 (Bad Request)` | The request was badly formatted. For example, the provided study instance identifier didn't conform the expected UID format. | | `401 (Unauthorized)` | The client isn't authenticated. |
The following `Accept` headers are supported for retrieving frames:
#### Retrieve transfer syntax
-When the requested transfer syntax is different from original file, the original file is transcoded to requested transfer syntax. The original file needs to be one of the formats below for transcoding to succeed, otherwise transcoding may fail:
+When the requested transfer syntax is different from the original file's, the original file is transcoded to the requested transfer syntax. The original file needs to be in one of these formats for transcoding to succeed; otherwise, transcoding might fail:
* 1.2.840.10008.1.2 (Little Endian Implicit) * 1.2.840.10008.1.2.1 (Little Endian Explicit) * 1.2.840.10008.1.2.2 (Explicit VR Big Endian)
The following `Accept` header is supported for retrieving metadata for a study,
* `application/dicom+json`
-Retrieving metadata won't return attributes with the following value representations:
+Retrieving metadata doesn't return attributes with the following value representations:
| VR Name | Description | | : | : |
The service only supports rendering of a single frame. If rendering is requested
When specifying a particular frame to return, frame indexing starts at 1.
-The `quality` query parameter is also supported. An integer value between `1` and `100` inclusive (1 being worst quality, and 100 being best quality) may be passed as the value for the query parameter. This parameter is used for images rendered as `jpeg`, and is ignored for `png` render requests. If not specified the parameter defaults to `100`.
+The `quality` query parameter is also supported. An integer value between `1` and `100` inclusive (1 being worst quality, and 100 being best quality) can be passed as the value for the query parameter. This parameter is used for images rendered as `jpeg`, and is ignored for `png` render requests. If not specified, the parameter defaults to `100`.
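A minimal sketch of requesting a rendered frame at reduced quality might look like the following; the service URL, token, and UIDs are placeholders.

```python
import requests

SERVICE_URL = "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com"  # placeholder
TOKEN = "<access-token>"  # placeholder
STUDY_UID, SERIES_UID, INSTANCE_UID = "<study-uid>", "<series-uid>", "<instance-uid>"  # placeholders

# Request frame 1 rendered as JPEG at quality 75; the quality parameter is ignored for PNG.
response = requests.get(
    f"{SERVICE_URL}/v2/studies/{STUDY_UID}/series/{SERIES_UID}/instances/{INSTANCE_UID}/frames/1/rendered",
    params={"quality": 75},
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "image/jpeg"},
)

with open("frame1.jpg", "wb") as f:
    f.write(response.content)
```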
### Retrieve response status codes
The following `Accept` header(s) are supported for searching:
* `application/dicom+json` ### Search changes from v1
-In the v1 API and continued for v2, if an [extended query tag](dicom-extended-query-tags-overview.md) has any errors, because one or more of the existing instances had a tag value that couldn't be indexed, then subsequent search queries containing the extended query tag returns `erroneous-dicom-attributes` as detailed in the [documentation](dicom-extended-query-tags-overview.md#tag-query-status). However, tags (also known as attributes) with validation warnings from STOW-RS are **not** included in this header. If a store request results in validation warnings on [searchable tags](#searchable-attributes), subsequent searches containing these tags won't consider any DICOM SOP instance that produced a warning. This behavior may result in incomplete search results.
+In the v1 API and continued for v2, if an [extended query tag](dicom-extended-query-tags-overview.md) has any errors, because one or more of the existing instances had a tag value that couldn't be indexed, then subsequent search queries containing the extended query tag return `erroneous-dicom-attributes` as detailed in the [documentation](dicom-extended-query-tags-overview.md#tag-query-status). However, tags (also known as attributes) with validation warnings from STOW-RS are **not** included in this header. If a store request results in validation warnings on [searchable tags](#searchable-attributes), subsequent searches containing these tags don't consider any DICOM SOP instance that produced a warning. This behavior might result in incomplete search results.
To correct an attribute, delete the stored instance and upload the corrected data. ### Supported search parameters
The following parameters for each query are supported:
| Key | Support Value(s) | Allowed Count | Description | | : | :- | : | :- | | `{attributeID}=` | `{value}` | 0...N | Search for attribute/ value matching in query. |
-| `includefield=` | `{attributeID}`<br/>`all` | 0...N | The other attributes to return in the response. Both, public and private tags are supported.<br/>When `all` is provided, refer to [Search Response](#search-response) for more information about which attributes are returned for each query type.<br/>If a mixture of `{attributeID}` and `all` is provided, the server defaults to using `all`. |
+| `includefield=` | `{attributeID}`<br/>`all` | 0...N | The other attributes to return in the response. Both, public and private tags are supported.<br/>When `all` is provided, refer to [Search Response](#search-response) for more information.<br/>If a mixture of `{attributeID}` and `all` is provided, the server defaults to using `all`. |
| `limit=` | `{value}` | 0..1 | Integer value to limit the number of values returned in the response.<br/>Value can be between the range 1 >= x <= 200. Defaulted to 100. | | `offset=` | `{value}` | 0..1 | Skip `{value}` results.<br/>If an offset is provided larger than the number of search query results, a 204 (no content) response is returned. | | `fuzzymatching=` | `true` / `false` | 0..1 | If true fuzzy matching is applied to PatientName attribute. It does a prefix word match of any name part inside PatientName value. For example, if PatientName is "John^Doe", then "joh", "do", "jo do", "Doe" and "John Doe" all match. However "ohn" doesn't match. |
The response is an array of DICOM datasets. Depending on the resource, by *defau
If `includefield=all`, the following attributes are included along with default attributes. Along with the default attributes, this is the full list of attributes supported at each resource level.
-#### Additional Study tags
+#### Other Study tags
| Tag | Attribute Name | | :-- | :- |
If `includefield=all`, the following attributes are included along with default
| (0010, 0040) | `PatientSex` | | (0020, 0010) | `StudyID` |
-#### Additional Series tags
+#### Other Series tags
| Tag | Attribute Name | | :-- | :- |
If `includefield=all`, the following attributes are included along with default
| (0040, 0245) | PerformedProcedureStepStartTime | | (0040, 0275) | RequestAttributesSequence |
-#### Additional Instance tags
+#### Other Instance tags
| Tag | Attribute Name | | :-- | :- |
The query API returns one of the following status codes in the response:
| `403 (Forbidden)` | The user isn't authorized. | | `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. |
-### Additional notes
+### Notes
* Querying using the `TimezoneOffsetFromUTC (00080201)` isn't supported.
-* The query API doesn't return `413 (request entity too large)`. If the requested query response limit is outside of the acceptable range, a bad request is returned. Anything requested within the acceptable range will be resolved.
+* The query API doesn't return `413 (request entity too large)`. If the requested query response limit is outside of the acceptable range, a bad request is returned. Anything requested within the acceptable range is resolved.
* When target resource is Study/Series, there's a potential for inconsistent study/series level metadata across multiple instances. For example, two instances could have different patientName. In this case, the latest wins and you can search only on the latest data. * Paged results are optimized to return matched _newest_ instance first, possibly resulting in duplicate records in subsequent pages if newer data matching the query was added. * Matching is case in-sensitive and accent in-sensitive for PN VR types. * Matching is case in-sensitive and accent sensitive for other string VR types. * Only the first value is indexed of a single valued data element that incorrectly has multiple values. * Using the default attributes or limiting the number of results requested maximizes performance.
-* When an attribute was stored using null padding, it can be searched for with or without the null padding in uri encoding. Results retrieved will be for attributes stored both with and without null padding.
+* When an attribute was stored using null padding, it can be searched for with or without the null padding in uri encoding. Results retrieved are for attributes stored both with and without null padding.
### Delete
If not specified in the URI, the payload dataset must contain the Workitem in th
The `Accept` and `Content-Type` headers are required in the request, and must both have the value `application/dicom+json`.
-There are several requirements related to DICOM data attributes in the context of a specific transaction. Attributes may be
+There are several requirements related to DICOM data attributes in the context of a specific transaction. Attributes might be
required to be present, required to not be present, required to be empty, or required to not be empty. These requirements can be found [in this table](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.5-3). > [!NOTE]
-> Although the reference table above says that SOP Instance UID shouldn't be present, this guidance is specific to the DIMSE protocol and is handled differently in DICOMWeb™. SOP Instance UID should be present in the dataset if not in the URI.
+> Although the reference table says that SOP Instance UID shouldn't be present, this guidance is specific to the DIMSE protocol and is handled differently in DICOMWeb™. SOP Instance UID should be present in the dataset if not in the URI.
> [!NOTE] > All the conditional requirement codes including 1C and 2C are treated as optional.
found [in this table](https://dicom.nema.org/medical/dicom/current/output/html/p
| Code | Description | | :-- | :- | | `201 (Created)` | The target Workitem was successfully created. |
-| `400 (Bad Request)` | There was a problem with the request. For example, the request payload didn't satisfy the requirements above. |
+| `400 (Bad Request)` | There was a problem with the request. For example, the request payload didn't satisfy the requirements. |
| `401 (Unauthorized)` | The client isn't authenticated. | | `403 (Forbidden)` | The user isn't authorized. | | `409 (Conflict)` | The Workitem already exists. |
A failure response payload contains a message describing the failure.
### Request cancellation
-This transaction enables the user to request cancellation of a non-owned Workitem.
+This transaction enables the user to request cancellation of a nonowned Workitem.
There are [four valid Workitem states](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.1.1-1):
There are [four valid Workitem states](https://dicom.nema.org/medical/dicom/curr
* `CANCELED` * `COMPLETED`
-This transaction only succeeds against Workitems in the `SCHEDULED` state. Any user can claim ownership of a Workitem by setting its Transaction UID and changing its state to `IN PROGRESS`. From then on, a user can only modify the Workitem by providing the correct Transaction UID. While UPS defines Watch and Event SOP classes that allow cancellation requests and other events to be forwarded, this DICOM service doesn't implement these classes, and so cancellation requests on workitems that are `IN PROGRESS` will return failure. An owned Workitem can be canceled via the [Change Workitem State](#change-workitem-state) transaction.
+This transaction only succeeds against Workitems in the `SCHEDULED` state. Any user can claim ownership of a Workitem by setting its Transaction UID and changing its state to `IN PROGRESS`. From then on, a user can only modify the Workitem by providing the correct Transaction UID. While UPS defines Watch and Event SOP classes that allow cancellation requests and other events to be forwarded, this DICOM service doesn't implement these classes, and so cancellation requests on workitems that are `IN PROGRESS` return failure. An owned Workitem can be canceled via the [Change Workitem State](#change-workitem-state) transaction.
| Method | Path | Description | | : | :- | :-- |
This transaction only succeeds against Workitems in the `SCHEDULED` state. Any u
The `Content-Type` header is required, and must have the value `application/dicom+json`.
-The request payload may include Action Information as [defined in the DICOM Standard](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.2-1).
+The request payload might include Action Information as [defined in the DICOM Standard](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.2-1).
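
As a hedged illustration of this transaction, the following C# sketch posts a cancellation request for a Workitem over DICOMweb. The service URL, token, and Workitem UID are placeholder assumptions, and the empty JSON dataset stands in for the optional Action Information described above.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class CancelWorkitemSample
{
    static async Task Main()
    {
        // Placeholder service URL, token, and Workitem UID; replace with your own values.
        var serviceUrl = "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com/v1";
        var workitemUid = "1.2.3.4.5.6.7.8";
        var accessToken = "<access-token>";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // The body can optionally carry Action Information attributes from the table
        // referenced above; an empty DICOM JSON dataset is sent here for brevity.
        var content = new StringContent("{}", Encoding.UTF8, "application/dicom+json");

        // Cancellation only succeeds for Workitems that are still in the SCHEDULED state.
        HttpResponseMessage response = await client.PostAsync(
            $"{serviceUrl}/workitems/{workitemUid}/cancelrequest", content);

        Console.WriteLine($"{(int)response.StatusCode} {response.StatusCode}");
    }
}
```
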
#### Request cancellation response status codes
The `Content-Type` header is required, and must have the value `application/dico
The request payload contains a dataset with the changes to be applied to the target Workitem. When a sequence is modified, the request must include all Items in the sequence, not just the Items to be modified. When multiple Attributes need to be updated as a group, do this as multiple Attributes in a single request, not as multiple requests.
-There are many requirements related to DICOM data attributes in the context of a specific transaction. Attributes may be
+There are many requirements related to DICOM data attributes in the context of a specific transaction. Attributes might be
required to be present, required to not be present, required to be empty, or required to not be empty. These requirements can be found in [this table](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.5-3).
The origin server shall support header fields as required in [Table 11.6.3-2](ht
A success response shall have either no payload or a payload containing a Status Report document.
-A failure response payload may contain a Status Report describing any failures, warnings, or other useful information.
+A failure response payload might contain a Status Report describing any failures, warnings, or other useful information.
### Change Workitem state
The request payload shall contain the Change UPS State Data Elements. These data
* Responses include the header fields specified in [section 11.7.3.2](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#sect_11.7.3.2). * A success response shall have no payload.
-* A failure response payload may contain a Status Report describing any failures, warnings, or other useful information.
+* A failure response payload might contain a Status Report describing any failures, warnings, or other useful information.
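
The following C# sketch illustrates one way to call the Change Workitem State transaction described in this section. The service URL, token, and UIDs are placeholder assumptions, and the dataset is a minimal assumption carrying only the Transaction UID (0008,1195) and Procedure Step State (0074,1000) elements.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class ChangeWorkitemStateSample
{
    static async Task Main()
    {
        // Placeholder service URL, token, and UIDs; replace with your own values.
        var serviceUrl = "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com/v1";
        var workitemUid = "1.2.3.4.5.6.7.8";
        var transactionUid = "1.2.3.4.5.6.7.9";
        var accessToken = "<access-token>";

        // Minimal DICOM JSON dataset carrying the Change UPS State data elements:
        // Transaction UID (0008,1195) and Procedure Step State (0074,1000).
        var dataset =
            "{ \"00081195\": { \"vr\": \"UI\", \"Value\": [\"" + transactionUid + "\"] }," +
            "  \"00741000\": { \"vr\": \"CS\", \"Value\": [\"IN PROGRESS\"] } }";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        var content = new StringContent(dataset, Encoding.UTF8, "application/dicom+json");
        HttpResponseMessage response = await client.PutAsync(
            $"{serviceUrl}/workitems/{workitemUid}/state", content);

        // A success response has no payload; a failure payload might contain a Status Report.
        Console.WriteLine($"{(int)response.StatusCode} {response.StatusCode}");
    }
}
```
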
### Search Workitems
The query API returns one of the following status codes in the response:
| `403 (Forbidden)` | The user isn't authorized. | | `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. |
-#### Additional Notes
+#### Additional notes
The query API won't return `413 (request entity too large)`. If the requested query response limit is outside of the acceptable range, a bad request is returned. Anything requested within the acceptable range will be resolved.
-* Paged results are optimized to return matched newest instance first, this may result in duplicate records in subsequent pages if newer data matching the query was added.
+* Paged results are optimized to return matched newest instance first, which might result in duplicate records in subsequent pages if newer data matching the query was added.
* Matching is case insensitive and accent insensitive for PN VR types. * Matching is case insensitive and accent sensitive for other string VR types. * If there's a scenario where canceling a Workitem and querying the same Workitem happen at the same time, then the query will most likely exclude the Workitem that's getting updated and the response code will be `206 (Partial Content)`.
-### Next Steps
-
-For more information about the DICOM service, see
-
->[!div class="nextstepaction"]
->[Overview of the DICOM service](dicom-services-overview.md)
healthcare-apis Dicom Services Conformance Statement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement.md
Previously updated : 10/13/2022 Last updated : 10/13/2023
> [!NOTE] > API version 2 is the latest API version and should be used in place of v1. See the [DICOM Conformance Statement v2](dicom-services-conformance-statement-v2.md) for details.
-The Medical Imaging Server for DICOM supports a subset of the DICOMweb™ Standard. Support includes:
+The Medical Imaging Server for DICOM&reg; supports a subset of the DICOMweb™ Standard. Support includes:
* [Studies Service](#studies-service) * [Store (STOW-RS)](#store-stow-rs)
The following `Accept` headers are supported for retrieving frames:
#### Retrieve transfer syntax
-When the requested transfer syntax is different from original file, the original file is transcoded to requested transfer syntax. The original file needs to be one of the following formats for transcoding to succeed, otherwise transcoding may fail:
+When the requested transfer syntax is different from the original file's, the original file is transcoded to the requested transfer syntax. The original file needs to be in one of the following formats for transcoding to succeed; otherwise, transcoding might fail:
* 1.2.840.10008.1.2 (Little Endian Implicit) * 1.2.840.10008.1.2.1 (Little Endian Explicit)
The service only supports rendering of a single frame. If rendering is requested
When specifying a particular frame to return, frame indexing starts at 1.
-The `quality` query parameter is also supported. An integer value between `1` and `100` inclusive (1 being worst quality, and 100 being best quality) may be passed as the value for the query parameter. This parameter is used for images rendered as `jpeg`, and is ignored for `png` render requests. If not specified the parameter defaults to `100`.
+The `quality` query parameter is also supported. An integer value between `1` and `100` inclusive (1 being worst quality, and 100 being best quality) can be passed as the value for the query parameter. This parameter is used for images rendered as `jpeg`, and is ignored for `png` render requests. If not specified, the parameter defaults to `100`.
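
As a sketch of how the `quality` parameter might be used, the following C# example requests the first frame of an instance rendered as JPEG at reduced quality. The service URL, token, and UIDs are placeholder assumptions.

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class RenderedFrameSample
{
    static async Task Main()
    {
        // Placeholder service URL, token, and UIDs; replace with your own values.
        var serviceUrl = "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com/v1";
        var study = "1.2.3.4.5.1";
        var series = "1.2.3.4.5.2";
        var instance = "1.2.3.4.5.3";
        var accessToken = "<access-token>";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("image/jpeg"));

        // Frame indexing starts at 1; quality applies to jpeg rendering and is ignored for png.
        var uri = $"{serviceUrl}/studies/{study}/series/{series}/instances/{instance}/frames/1/rendered?quality=75";
        byte[] bytes = await client.GetByteArrayAsync(uri);

        await File.WriteAllBytesAsync("frame1.jpg", bytes);
        Console.WriteLine($"Saved {bytes.Length} bytes to frame1.jpg");
    }
}
```
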
### Retrieve response status codes
The response is an array of DICOM datasets. Depending on the resource, by *defau
If `includefield=all`, the following attributes are included along with default attributes. Along with the default attributes, this is the full list of attributes supported at each resource level.
-#### Additional Study tags
+#### Extra Study tags
| Tag | Attribute Name | | :-- | :- |
If `includefield=all`, the following attributes are included along with default
| (0010, 2180) | `Occupation` | | (0010, 21B0) | `AdditionalPatientHistory` |
-#### Additional Series tags
+#### Other Series tags
| Tag | Attribute Name | | :-- | :- |
The query API returns one of the following status codes in the response:
| `403 (Forbidden)` | The user isn't authorized. | | `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. |
-### Additional notes
+### Other notes
* Querying using the `TimezoneOffsetFromUTC (00080201)` isn't supported.
-* The query API doesn't return `413 (request entity too large)`. If the requested query response limit is outside of the acceptable range, a bad request is returned. Anything requested within the acceptable range will be resolved.
+* The query API doesn't return `413 (request entity too large)`. If the requested query response limit is outside of the acceptable range, a bad request is returned. Anything requested within the acceptable range is resolved.
* When target resource is Study/Series, there's a potential for inconsistent study/series level metadata across multiple instances. For example, two instances could have different patientName. In this case, the latest wins and you can search only on the latest data.
-* Paged results are optimized to return matched _newest_ instance first, this may result in duplicate records in subsequent pages if newer data matching the query was added.
+* Paged results are optimized to return matched _newest_ instance first, which might result in duplicate records in subsequent pages if newer data matching the query was added.
* Matching is case-insensitive and accent-insensitive for PN VR types. * Matching is case-insensitive and accent-sensitive for other string VR types. * Only the first value of a single-valued data element that incorrectly has multiple values is indexed.
If not specified in the URI, the payload dataset must contain the Workitem in th
The `Accept` and `Content-Type` headers are required in the request, and must both have the value `application/dicom+json`.
-There are several requirements related to DICOM data attributes in the context of a specific transaction. Attributes may be
+There are several requirements related to DICOM data attributes in the context of a specific transaction. Attributes might be
required to be present, required to not be present, required to be empty, or required to not be empty. These requirements can be found [in this table](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.5-3). > [!NOTE]
-> Although the reference table above says that SOP Instance UID shouldn't be present, this guidance is specific to the DIMSE protocol and is handled differently in DICOMWeb™. SOP Instance UID should be present in the dataset if not in the URI.
+> Although the reference table says that SOP Instance UID shouldn't be present, this guidance is specific to the DIMSE protocol and is handled differently in DICOMWeb™. SOP Instance UID should be present in the dataset if not in the URI.
> [!NOTE] > All the conditional requirement codes including 1C and 2C are treated as optional.
found [in this table](https://dicom.nema.org/medical/dicom/current/output/html/p
| Code | Description | | :-- | :- | | `201 (Created)` | The target Workitem was successfully created. |
-| `400 (Bad Request)` | There was a problem with the request. For example, the request payload didn't satisfy the requirements above. |
+| `400 (Bad Request)` | There was a problem with the request. For example, the request payload didn't satisfy the requirements. |
| `401 (Unauthorized)` | The client isn't authenticated. | | `403 (Forbidden)` | The user isn't authorized. | | `409 (Conflict)` | The Workitem already exists. |
A failure response payload contains a message describing the failure.
### Request cancellation
-This transaction enables the user to request cancellation of a non-owned Workitem.
+This transaction enables the user to request cancellation of a nonowned Workitem.
There are [four valid Workitem states](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.1.1-1):
This transaction only succeeds against Workitems in the `SCHEDULED` state. Any u
The `Content-Type` header is required, and must have the value `application/dicom+json`.
-The request payload may include Action Information as [defined in the DICOM Standard](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.2-1).
+The request payload might include Action Information as [defined in the DICOM Standard](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.2-1).
#### Request cancellation response status codes
The `Content-Type` header is required, and must have the value `application/dico
The request payload contains a dataset with the changes to be applied to the target Workitem. When a sequence is modified, the request must include all Items in the sequence, not just the Items to be modified. When multiple Attributes need to be updated as a group, do this as multiple Attributes in a single request, not as multiple requests.
-There are many requirements related to DICOM data attributes in the context of a specific transaction. Attributes may be
+There are many requirements related to DICOM data attributes in the context of a specific transaction. Attributes might be
required to be present, required to not be present, required to be empty, or required to not be empty. These requirements can be found in [this table](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.5-3).
The origin server shall support header fields as required in [Table 11.6.3-2](ht
A success response shall have either no payload or a payload containing a Status Report document.
-A failure response payload may contain a Status Report describing any failures, warnings, or other useful information.
+A failure response payload might contain a Status Report describing any failures, warnings, or other useful information.
### Change Workitem state
The request payload shall contain the Change UPS State Data Elements. These data
* Responses include the header fields specified in [section 11.7.3.2](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#sect_11.7.3.2). * A success response shall have no payload.
-* A failure response payload may contain a Status Report describing any failures, warnings, or other useful information.
+* A failure response payload might contain a Status Report describing any failures, warnings, or other useful information.
### Search Workitems
We support these matching types:
| Search Type | Supported Attribute | Example | | :- | : | : |
-| Range Query | `ScheduledProcedureStepStartDateTime` | `{attributeID}={value1}-{value2}`. For date/time values, we support an inclusive range on the tag. This will be mapped to `attributeID >= {value1} AND attributeID <= {value2}`. If `{value1}` isn't specified, all occurrences of dates/times prior to and including `{value2}` will be matched. Likewise, if `{value2}` isn't specified, all occurrences of `{value1}` and subsequent dates/times will be matched. However, one of these values has to be present. `{attributeID}={value1}-` and `{attributeID}=-{value2}` are valid, however, `{attributeID}=-` is invalid. |
+| Range Query | `ScheduledProcedureStepStartDateTime` | `{attributeID}={value1}-{value2}`. For date/time values, we support an inclusive range on the tag. This is mapped to `attributeID >= {value1} AND attributeID <= {value2}`. If `{value1}` isn't specified, all occurrences of dates/times prior to and including `{value2}` will be matched. Likewise, if `{value2}` isn't specified, all occurrences of `{value1}` and subsequent dates/times will be matched. However, one of these values has to be present. `{attributeID}={value1}-` and `{attributeID}=-{value2}` are valid; however, `{attributeID}=-` is invalid. |
| Exact Match | All supported attributes | `{attributeID}={value1}` | | Fuzzy Match | `PatientName` | Matches any component of the name that starts with the value. | > [!NOTE]
-> While we don't support full sequence matching, we do support exact match on the attributes listed above that are contained in a sequence.
+> While we don't support full sequence matching, we do support exact match on the listed attributes that are contained in a sequence.
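
To illustrate the matching types in the preceding table, the following C# sketch combines an inclusive range on `ScheduledProcedureStepStartDateTime` with a fuzzy match on `PatientName`. The service URL, token, and query values are placeholder assumptions.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class WorkitemSearchSample
{
    static async Task Main()
    {
        // Placeholder service URL and token; replace with your own values.
        var serviceUrl = "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com/v1";
        var accessToken = "<access-token>";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);
        client.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/dicom+json"));

        // Inclusive range on ScheduledProcedureStepStartDateTime plus a fuzzy match on
        // PatientName; open-ended ranges ({value1}- or -{value2}) are also valid.
        var uri = $"{serviceUrl}/workitems" +
                  "?ScheduledProcedureStepStartDateTime=20231001000000-20231031235959" +
                  "&PatientName=Doe&limit=10";

        HttpResponseMessage response = await client.GetAsync(uri);
        Console.WriteLine($"{(int)response.StatusCode} {response.StatusCode}");
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```
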
##### Attribute ID
The query API returns one of the following status codes in the response:
| `403 (Forbidden)` | The user isn't authorized. | | `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. |
-#### Additional Notes
+#### Other notes
-The query API won't return `413 (request entity too large)`. If the requested query response limit is outside of the acceptable range, a bad request is returned. Anything requested within the acceptable range, will be resolved.
+The query API doesn't return `413 (request entity too large)`. If the requested query response limit is outside of the acceptable range, a bad request is returned. Anything requested within the acceptable range is resolved.
-* Paged results are optimized to return matched newest instance first, this may result in duplicate records in subsequent pages if newer data matching the query was added.
+* Paged results are optimized to return matched newest instance first, which might result in duplicate records in subsequent pages if newer data matching the query was added.
* Matching is case insensitive and accent insensitive for PN VR types. * Matching is case insensitive and accent sensitive for other string VR types.
-* If there's a scenario where canceling a Workitem and querying the same happens at the same time, then the query will most likely exclude the Workitem that's getting updated and the response code will be `206 (Partial Content)`.
+* If there's a scenario where canceling a Workitem and querying the same happens at the same time, then the query will likely exclude the Workitem that's getting updated and the response code is `206 (Partial Content)`.
-### Next Steps
-For more information about the DICOM service, see
-
->[!div class="nextstepaction"]
->[Overview of the DICOM service](dicom-services-overview.md)
healthcare-apis Enable Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/enable-diagnostic-logging.md
Previously updated : 03/02/2022 Last updated : 10/13/2023
MicrosoftHealthcareApisAuditLogs
| where ResultType == "Failed" ```
-## Conclusion
+## Next steps
-Having access to diagnostic logs is essential for monitoring a service and providing compliance reports. The DICOM service allows you to do these actions through diagnostic logs.
+Having access to diagnostic logs is essential for monitoring a service and providing compliance reports. The DICOM service allows you to do these actions through diagnostic logs. For more information, see [Azure Activity Log event schema](.././../azure-monitor/essentials/activity-log-schema.md).
-## Next steps
-In this article, you learned how to enable audit logs for the DICOM service. For information about the Azure activity log, see
-
->[!div class="nextstepaction"]
->[Azure Activity Log event schema](.././../azure-monitor/essentials/activity-log-schema.md)
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
The DICOM service supports `ModalitiesInStudy` as a [searchable attribute](dicom
**Added support for `NumberOfStudyRelatedInstances` and `NumberOfSeriesRelatedInstances` attributes**
-Two new attributes for returning the count of Instances in a Study or Series are available in Search [responses](dicom/dicom-services-conformance-statement.md#additional-series-tags).
+Two new attributes for returning the count of Instances in a Study or Series are available in Search [responses](dicom/dicom-services-conformance-statement.md#other-series-tags).
key-vault Tutorial Import Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/tutorial-import-certificate.md
Create a key vault using one of these three methods:
- [Create a key vault using Azure PowerShell](../general/quick-create-powershell.md) ## Import a certificate to your key vault
+> [!NOTE]
+> By default, imported certificates have exportable private keys. You can use the SDK, Azure CLI, or PowerShell to define policies that prevent the private key from being exported.
To import a certificate to the vault, you need to have a PEM or PFX certificate file on disk. If the certificate is in PEM format, the PEM file must contain the key as well as x509 certificates. This operation requires the certificates/import permission.
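
As one possible way to apply the preceding note with the .NET SDK, the following sketch imports a PFX file while setting `Exportable = false` on the certificate policy. The vault URI, certificate name, file path, password, and subject are placeholder assumptions; adjust the policy to your own issuer and subject.

```csharp
using System;
using System.IO;
using Azure.Identity;
using Azure.Security.KeyVault.Certificates;

class ImportCertificateSample
{
    static void Main()
    {
        // Placeholder vault URI, file path, and password; replace with your own values.
        var client = new CertificateClient(
            new Uri("https://<your-key-vault-name>.vault.azure.net/"),
            new DefaultAzureCredential());

        byte[] pfxBytes = File.ReadAllBytes("ExampleCertificate.pfx");

        var options = new ImportCertificateOptions("ExampleCertificate", pfxBytes)
        {
            Password = "<pfx-password>",
            // Mark the private key as non-exportable in the certificate policy.
            Policy = new CertificatePolicy(WellKnownIssuerNames.Unknown, "CN=example.com")
            {
                Exportable = false,
                ContentType = CertificateContentType.Pkcs12
            }
        };

        KeyVaultCertificateWithPolicy certificate = client.ImportCertificate(options).Value;
        Console.WriteLine($"Imported {certificate.Name}; exportable: {certificate.Policy.Exportable}");
    }
}
```
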
key-vault Soft Delete Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/soft-delete-overview.md
Last updated 01/25/2022
Key Vault's soft-delete feature allows recovery of the deleted vaults and deleted key vault objects (for example, keys, secrets, certificates), known as soft-delete. Specifically, we address the following scenarios: This safeguard offers the following protections: - Once a secret, key, certificate, or key vault is deleted, it will remain recoverable for a configurable period of 7 to 90 calendar days. If no configuration is specified, the default recovery period will be set to 90 days. This provides users with sufficient time to notice an accidental secret deletion and respond.-- Two operations must be made to permanently delete a secret. First a user must delete the object, which puts it into the soft-deleted state. Second, a user must purge the object in the soft-deleted state. The purge operation requires additional access policy permissions. These additional protections reduce the risk of a user accidentally or maliciously deleting a secret or a key vault. -- To purge a secret in the soft-deleted state, a service principal must be granted an additional "purge" access policy permission. The purge access policy permission isn't granted by default to any service principal including key vault and subscription owners and must be deliberately set. By requiring an elevated access policy permission to purge a soft-deleted secret, it reduces the probability of accidentally deleting a secret.
+- Two operations must be made to permanently delete a secret. First a user must delete the object, which puts it into the soft-deleted state. Second, a user must purge the object in the soft-deleted state. These additional protections reduce the risk of a user accidentally or maliciously deleting a secret or a key vault.
+- To purge a secret, key, or certificate in the soft-deleted state, a security principal must be granted the "purge" operation permission (see the sketch after this list).
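
The following C# sketch illustrates the two-step delete-then-purge flow with the Azure Key Vault secrets SDK. The vault URI and secret name are placeholder assumptions, and the purge call succeeds only if the caller has the separate purge permission.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

class DeleteAndPurgeSecretSample
{
    static async Task Main()
    {
        // Placeholder vault URI and secret name; replace with your own values.
        var client = new SecretClient(
            new Uri("https://<your-key-vault-name>.vault.azure.net/"),
            new DefaultAzureCredential());

        // Step 1: delete the secret, which moves it into the soft-deleted state.
        DeleteSecretOperation deleteOperation = await client.StartDeleteSecretAsync("ExampleSecret");
        await deleteOperation.WaitForCompletionAsync();

        // While soft-deleted, the secret is still recoverable with StartRecoverDeletedSecretAsync.

        // Step 2: purge the soft-deleted secret; this call requires the separate purge
        // permission and permanently removes the secret.
        await client.PurgeDeletedSecretAsync("ExampleSecret");

        Console.WriteLine("Secret deleted and purged.");
    }
}
```
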
## Supporting interfaces
The soft-delete feature is available through the [REST API](/rest/api/keyvault/)
Azure Key Vaults are tracked resources, managed by Azure Resource Manager. Azure Resource Manager also specifies a well-defined behavior for deletion, which requires that a successful DELETE operation must result in that resource not being accessible anymore. The soft-delete feature addresses the recovery of the deleted object, whether the deletion was accidental or intentional.
-1. In the typical scenario, a user may have inadvertently deleted a key vault or a key vault object; if that key vault or key vault object were to be recoverable for a predetermined period, the user may undo the deletion and recover their data.
+1. In the typical scenario, a user might have inadvertently deleted a key vault or a key vault object; if that key vault or key vault object were to be recoverable for a predetermined period, the user can undo the deletion and recover their data.
-2. In a different scenario, a rogue user may attempt to delete a key vault or a key vault object, such as a key inside a vault, to cause a business disruption. Separating the deletion of the key vault or key vault object from the actual deletion of the underlying data can be used as a safety measure by, for instance, restricting permissions on data deletion to a different, trusted role. This approach effectively requires quorum for an operation which might otherwise result in an immediate data loss.
+2. In a different scenario, a rogue user can attempt to delete a key vault or a key vault object, such as a key inside a vault, to cause a business disruption. Separating the deletion of the key vault or key vault object from the actual deletion of the underlying data can be used as a safety measure by, for instance, restricting permissions on data deletion to a different, trusted role. This approach effectively requires quorum for an operation which might otherwise result in an immediate data loss.
### Soft-delete behavior
Purge Protection can be turned on via [CLI](./key-vault-recovery.md?tabs=azure-c
Permanently deleting, purging, a key vault is possible via a POST operation on the proxy resource and requires special privileges. Generally, only the subscription owner will be able to purge a key vault. The POST operation triggers the immediate and irrecoverable deletion of that vault. Exceptions are:-- When the Azure subscription has been marked as *undeletable*. In this case, only the service may then perform the actual deletion, and does so as a scheduled process.
+- When the Azure subscription has been marked as *undeletable*. In this case, only the service can then perform the actual deletion, and does so as a scheduled process.
- When the `--enable-purge-protection` argument is enabled on the vault itself. In this case, Key Vault will wait for 7 to 90 days from when the original secret object was marked for deletion to permanently delete the object. For steps, see [How to use Key Vault soft-delete with CLI: Purging a key vault](./key-vault-recovery.md?tabs=azure-cli#key-vault-cli) or [How to use Key Vault soft-delete with PowerShell: Purging a key vault](./key-vault-recovery.md?tabs=azure-powershell#key-vault-powershell).
At the same time, Key Vault will schedule the deletion of the underlying data co
Soft-deleted resources are retained for a set period of time, 90 days. During the soft-delete retention interval, the following apply: -- You may list all of the key vaults and key vault objects in the soft-delete state for your subscription as well as access deletion and recovery information about them.
+- You can list all of the key vaults and key vault objects in the soft-delete state for your subscription as well as access deletion and recovery information about them.
- Only users with special permissions can list deleted vaults. We recommend that our users create a custom role with these special permissions for handling deleted vaults. - A key vault with the same name can't be created in the same location; correspondingly, a key vault object can't be created in a given vault if that key vault contains an object with the same name and which is in a deleted state.-- Only a specifically privileged user may restore a key vault or key vault object by issuing a recover command on the corresponding proxy resource.
+- Only a specifically privileged user can restore a key vault or key vault object by issuing a recover command on the corresponding proxy resource.
- The user, member of the custom role, who has the privilege to create a key vault under the resource group can restore the vault.-- Only a specifically privileged user may forcibly delete a key vault or key vault object by issuing a delete command on the corresponding proxy resource.
+- Only a specifically privileged user can forcibly delete a key vault or key vault object by issuing a delete command on the corresponding proxy resource.
-Unless a key vault or key vault object is recovered, at the end of the retention interval the service performs a purge of the soft-deleted key vault or key vault object and its content. Resource deletion may not be rescheduled.
+Unless a key vault or key vault object is recovered, at the end of the retention interval the service performs a purge of the soft-deleted key vault or key vault object and its content. Resource deletion cannot be rescheduled.
### Billing implications
load-balancer Upgrade Basic Standard With Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard-with-powershell.md
The PowerShell module performs the following functions:
> Migrating _internal_ Basic Load Balancers where the backend VMs or VMSS instances do not have Public IP Addresses assigned requires additional action post-migration to enable backend pool members to connect to the internet. The recommended approach is to create a NAT Gateway and assign it to the backend pool members' subnet (see: [**Integrate NAT Gateway with Internal Load Balancer**](../virtual-network/nat-gateway/tutorial-nat-gateway-load-balancer-internal-portal.md)). Alternatively, Public IP Addresses can be allocated to each Virtual Machine Scale Set or Virtual Machine instance by adding a Public IP Configuration to the Network Profile (see: [**VMSS Public IPv4 Address Per Virtual Machine**](../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md)) for Virtual Machine Scale Sets or [**Associate a Public IP address with a Virtual Machine**](../virtual-network/ip-services/associate-public-ip-address-vm.md) for Virtual Machines. >[!NOTE]
-> If the Virtual Machine Scale Set in the Load Balancer backend pool has Public IP Addresses in its network configuration, the Public IP Addresses will change during migration when they are upgraded to Standard SKU. The Public IP addresses associated with Virtual Machines will be retained through the migration.
+> If the Virtual Machine Scale Set in the Load Balancer backend pool has Public IP Addresses in its network configuration, the Public IP Addresses associated with each Virtual Machine Scale Set instance will change when they are upgraded to Standard SKU. This is because scale set instance-level Public IP addresses cannot be upgraded, only replaced with a new Standard SKU Public IP. All other Public IP addresses will be retained through the migration.
>[!NOTE] > If the Virtual Machine Scale Set behind the Load Balancer is a **Service Fabric Cluster**, migration with this script will take more time. In testing, a 5-node Bronze cluster was unavailable for about 30 minutes and a 5-node Silver cluster was unavailable for about 45 minutes. For Service Fabric clusters that require minimal / no connectivity downtime, adding a new nodetype with Standard Load Balancer and IP resources is a better solution.
logic-apps Create Run Custom Code Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-run-custom-code-functions.md
ms.suite: integration
Previously updated : 10/10/2023 Last updated : 10/16/2023 # Customer intent: As a logic app workflow developer, I want to write and run my own .NET Framework code to perform custom integration tasks.
-# Create and run .NET Framework code from Standard workflows in Azure Logic Apps (preview)
+# Create and run .NET Framework code from Standard workflows in Azure Logic Apps
[!INCLUDE [logic-apps-sku-standard](../../includes/logic-apps-sku-standard.md)]
-> [!IMPORTANT]
-> This capability is in preview and is subject to the
-> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- For integration solutions where you have to author and run .NET Framework code from your Standard logic app workflow, you can use Visual Studio Code with the Azure Logic Apps (Standard) extension. This extension provides the following capabilities and benefits: -- Create code that has the flexibility and control to solve your most challenging integration problems.
+- Write your own code by creating functions that have the flexibility and control to solve your most challenging integration problems.
- Debug code locally in Visual Studio Code. Step through your code and workflows in the same debugging session. - Deploy code alongside your workflows. No other service plans are necessary. - Support BizTalk Server migration scenarios so you can lift-and shift custom .NET Framework investments from on premises to the cloud.
With the capability to write your own code, you can accomplish scenarios such as
- Message shaping for outbound messages to another system, such as an API - Calculations
-However, custom code isn't suitable for scenarios such as the following:
+This capability isn't suitable for scenarios such as the following:
- Processes that take more than 10 minutes to run - Large message and data transformations
For more information about limitations in Azure Logic Apps, see [Limits and conf
- Visual Studio Code with the Azure Logic Apps (Standard) extension. To meet these requirements, see the prerequisites for [Create Standard workflows in single-tenant Azure Logic Apps with Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#prerequisites).
- - The custom code capability is currently available only in Visual Studio Code, running on a Windows operating system.
+ - The custom functions capability is currently available only in Visual Studio Code, running on a Windows operating system.
- - This capability currently supports calling .NET Framework 4.7.2 assemblies.
+ - The custom functions capability currently supports calling only .NET Framework 4.7.2 assemblies.
- A local folder to use for creating your code project ## Limitations
-Custom code authoring currently isn't available in the Azure portal. However, after you deploy your custom code to Azure, you can use the **Call a local function in this logic app** built-in action and deployed functions to run code and reference the outputs in subsequent actions like in any other workflow. You can view the run history, inputs, and outputs for the built-in action.
+Custom functions authoring currently isn't available in the Azure portal. However, after you deploy your functions from Visual Studio Code to Azure, follow the steps in [Call your code from a workflow](#call-code-from-workflow) for the Azure portal. You can use the built-in action named **Call a local function in this logic app** to select from your deployed custom functions and run your code. Subsequent actions in your workflow can reference the outputs from these functions, as in any other workflow. You can view the built-in action's run history, inputs, and outputs.
## Create a code project
The latest Azure Logic Apps (Standard) extension for Visual Studio Code includes
:::image type="content" source="media/create-run-custom-code-functions/create-workspace.png" alt-text="Screenshot shows Visual Studio Code, Azure window, Workspace section toolbar, and selected option for Create new logic app workspace.":::
-1. In the **Create new logic app workspace** prompt that appears, find and select the local folder that you created for your project.
+1. In the **Select folder** box, browse to and select the local folder that you created for your project.
+
+1. When the **Create new logic app workspace** prompt box appears, provide a name for your workspace:
+
+ :::image type="content" source="media/create-run-custom-code-functions/workspace-name.png" alt-text="Screenshot shows Visual Studio Code with prompt to enter workspace name.":::
+
+ This example continues with **MyLogicAppWorkspace**.
- :::image type="content" source="media/create-run-custom-code-functions/select-local-folder.png" alt-text="Screenshot shows Visual Studio Code with prompt to select a local folder for workflow project.":::
+1. When the **Select a project template for your logic app workspace** prompt box appears, select **Logic app with custom code project**.
-1. Follow the prompts to provide the following example values:
+ :::image type="content" source="media/create-run-custom-code-functions/project-template.png" alt-text="Screenshot shows Visual Studio Code with prompt to select project template for logic app workspace.":::
+
+1. Follow the subsequent prompts to provide the following example values:
| Item | Example value | |||
- | Workspace name | **MyLogicAppWorkspace** |
- | Function name | **WeatherForecast** |
- | Namespace name | **Contoso.Enterprise** |
+ | Function name for functions project | **WeatherForecast** |
+ | Namespace name for functions project | **Contoso.Enterprise** |
| Workflow template: <br>- **Stateful Workflow** <br>- **Stateless Workflow** | **Stateful Workflow** | | Workflow name | **MyWorkflow** | 1. Select **Open in current window**.
- After you finish this step, Visual Studio Code creates your workspace, which includes a function project and a logic app workflow project, by default, for example:
+ After you finish this step, Visual Studio Code creates your workspace, which includes a functions project and a logic app project, by default, for example:
:::image type="content" source="media/create-run-custom-code-functions/created-workspace.png" alt-text="Screenshot shows Visual Studio Code with created workspace.":::
The latest Azure Logic Apps (Standard) extension for Visual Studio Code includes
1. In your workspace, expand the **Functions** node, if not already expanded.
-1. Open the **<*function-name*>.cs** file.
+1. Open the **<*function-name*>.cs** file, which is named **WeatherForecast.cs** in this example.
By default, this file contains sample code that has the following code elements along with the previously provided example values where appropriate:
The latest Azure Logic Apps (Standard) extension for Visual Studio Code includes
using System.Threading.Tasks; using Microsoft.Azure.Functions.Extensions.Workflows; using Microsoft.Azure.WebJobs;
+ using Microsoft.Extensions.Logging;
/// <summary> /// Represents the WeatherForecast flow invoked function. /// </summary>
- public static class WeatherForecast
+ public class WeatherForecast
{+
+ private readonly ILogger<WeatherForecast> logger;
+
+ public WeatherForecast(ILoggerFactory loggerFactory)
+ {
+ logger = loggerFactory.CreateLogger<WeatherForecast>();
+ }
+ /// <summary> /// Executes the logic app workflow. /// </summary> /// <param name="zipCode">The zip code.</param> /// <param name="temperatureScale">The temperature scale (e.g., Celsius or Fahrenheit).</param> [FunctionName("WeatherForecast")]
- public static Task<Weather> Run([WorkflowActionTrigger] int zipCode, string temperatureScale)
+ public Task<Weather> Run([WorkflowActionTrigger] int zipCode, string temperatureScale)
{+
+ this.logger.LogInformation("Starting WeatherForecast with Zip Code: " + zipCode + " and Scale: " + temperatureScale);
+ // Generate random temperature within a range based on the temperature scale Random rnd = new Random(); var currentTemp = temperatureScale == "Celsius" ? rnd.Next(1, 30) : rnd.Next(40, 90);
The latest Azure Logic Apps (Standard) extension for Visual Studio Code includes
} /// <summary>
- /// Represents the weather information.
+ /// Represents the weather information for WeatherForecast.
/// </summary> public class Weather {
The latest Azure Logic Apps (Standard) extension for Visual Studio Code includes
} ```
- The function definition includes a default `Run` method that you can use to get started. This sample `Run` method demonstrates some of the capabilities available with the custom code feature, such as passing different inputs and outputs, including complex .NET types.
+ The function definition includes a default `Run` method that you can use to get started. This sample `Run` method demonstrates some of the capabilities available with the custom functions feature, such as passing different inputs and outputs, including complex .NET types.
+
+ The **<*function-name*>.cs** file also includes the **`ILogger`** interface, which provides support for logging events to an Application Insights resource. You can send tracing information to Application Insights and store that information alongside the trace information from your workflows, for example:
+
+ ```csharp
+ private readonly ILogger<WeatherForecast> logger;
+
+ public WeatherForecast(ILoggerFactory loggerFactory)
+ {
+ logger = loggerFactory.CreateLogger<WeatherForecast>();
+ }
+
+ [FunctionName("WeatherForecast")]
+ public Task<Weather> Run([WorkflowActionTrigger] int zipCode, string temperatureScale)
+ {
+
+ this.logger.LogInformation("Starting WeatherForecast with Zip Code: " + zipCode + " and Scale: " + temperatureScale);
+
+ <...>
+
+ }
+ ```
1. Replace the sample function code with your own, and edit the default `Run` method for your own scenarios. Or, you can copy the function, including the `[FunctionName("<*function-name*>")]` declaration, and then rename the function with a unique name. You can then edit the renamed function to meet your needs.
This example continues with the sample code without any changes.
## Compile and build your code
-After you finish writing your code, compile to make sure that no build errors exist. Your function project automatically includes build tasks, which compile and then add your code to the **lib\custom** folder in your logic app project where workflows look for custom code to run. These tasks put the assemblies in the **lib\custom\net472** folder.
+After you finish writing your code, compile to make sure that no build errors exist. Your function project automatically includes build tasks, which compile and then add your code to the **lib\custom** folder in your logic app project where workflows look for custom functions to run. These tasks put the assemblies in the **lib\custom\net472** folder.
1. In Visual Studio Code, from the **Terminal** menu, select **New Terminal**.
After you finish writing your code, compile to make sure that no build errors ex
:::image type="content" source="media/create-run-custom-code-functions/generated-assemblies.png" alt-text="Screenshot shows Visual Studio Code and logic app workspace with function project and logic app project, now with the generated assemblies and other required files.":::
+<a name="call-code-from-workflow"></a>
+ ## Call your code from a workflow After you confirm that your code compiles and that your logic app project contains the necessary files for your code to run, open the default workflow that's included with your logic app project.
After you confirm that your code compiles and that your logic app project contai
## Deploy your code
-You can deploy your custom code in the same way that you deploy your logic app project. Whether you deploy from Visual Studio Code or use a CI/CD DevOps process, make sure that you build your code and that all dependent assemblies exist in the logic app project's **lib/custom/net472** folder before you deploy. For more information, see [Deploy Standard workflows from Visual Studio Code to Azure](create-single-tenant-workflows-visual-studio-code.md#deploy-azure).
+You can deploy your custom functions in the same way that you deploy your logic app project. Whether you deploy from Visual Studio Code or use a CI/CD DevOps process, make sure that you build your code and that all dependent assemblies exist in the logic app project's **lib/custom/net472** folder before you deploy. For more information, see [Deploy Standard workflows from Visual Studio Code to Azure](create-single-tenant-workflows-visual-studio-code.md#deploy-azure).
## Troubleshoot problems
logic-apps Create Single Tenant Workflows Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-azure-portal.md
In single-tenant Azure Logic Apps, workflows in the same logic app resource and
| Property | Required | Value | Description | |-|-|-|-|
- | **Storage type** | Yes | - **Azure Storage** <br>- **SQL and Azure Storage** | The storage type that you want to use for workflow-related artifacts and data. <br><br>- To deploy only to Azure, select **Azure Storage**. <br><br>- To use SQL as primary storage and Azure Storage as secondary storage, select **SQL and Azure Storage**, and review [Set up SQL database storage for Standard logic apps in single-tenant Azure Logic Apps](set-up-sql-db-storage-single-tenant-standard-workflows.md). <br><br>**Note**: If you're deploying to an Azure region, you still need an Azure storage account, which is used to complete the one-time hosting of the logic app's configuration on the Azure Logic Apps platform. The workflow state, run history, and other runtime artifacts are stored in your SQL database. <br><br>For deployments to a custom location that's hosted on an Azure Arc cluster, you only need SQL as your storage provider. |
+ | **Storage type** | Yes | - **Azure Storage** <br>- **SQL and Azure Storage** | The storage type that you want to use for workflow-related artifacts and data. <br><br>- To deploy only to Azure, select **Azure Storage**. <br><br>- To use SQL as primary storage and Azure Storage as secondary storage, select **SQL and Azure Storage**, and review [Set up SQL database storage for Standard logic apps in single-tenant Azure Logic Apps](set-up-sql-db-storage-single-tenant-standard-workflows.md). <br><br>**Note**: If you're deploying to an Azure region, you still need an Azure storage account, which is used to complete the one-time hosting of the logic app's configuration on the Azure Logic Apps platform. The workflow's state, run history, and other runtime artifacts are stored in your SQL database. <br><br>For deployments to a custom location that's hosted on an Azure Arc cluster, you only need SQL as your storage provider. |
| **Storage account** | Yes | <*Azure-storage-account-name*> | The [Azure Storage account](../storage/common/storage-account-overview.md) to use for storage transactions. <br><br>This resource name must be unique across regions and have 3-24 characters with only numbers and lowercase letters. Either select an existing account or create a new account. <br><br>This example creates a storage account named **mystorageacct**. | 1. On the **Networking** tab, you can leave the default options for this example.
To debug a stateless workflow more easily, you can enable the run history for th
During workflow run, your logic app emits telemetry along with other events. You can use this telemetry to get better visibility into how well your workflow runs and how the Logic Apps runtime works in various ways. You can monitor your workflow by using [Application Insights](../azure-monitor/app/app-insights-overview.md), which provides near real-time telemetry (live metrics). This capability can help you investigate failures and performance problems more easily when you use this data to diagnose issues, set up alerts, and build charts.
-If your logic app's creation and deployment settings support using [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app. You can do so either when you create your logic app in the Azure portal or after deployment. You need to have an Application Insights instance, but you can create this resource either [in advance](../azure-monitor/app/create-workspace-resource.md), when you create your logic app, or after deployment.
+If your logic app's creation and deployment settings support using [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app workflow. You can do so either when you create your logic app resource in the Azure portal or after deployment. You need to have an Application Insights instance, but you can create this resource either [in advance](../azure-monitor/app/create-workspace-resource.md), when you create your logic app, or after deployment. You can also optionally [enable enhanced telemetry in Application Insights for Standard workflows](enable-enhanced-telemetry-standard-workflows.md).
-To enable Application Insights on a deployed logic app or open the Application Insights dashboard if already enabled, follow these steps:
+### Enable Application Insights on a deployed logic app
1. In the Azure portal, find your deployed logic app. 1. On the logic app menu, under **Settings**, select **Application Insights**.
-1. If Application Insights isn't enabled, on the **Application Insights** pane, select **Turn on Application Insights**. After the pane updates, at the bottom, select **Apply** > **Yes**.
+1. On the **Application Insights** pane, select **Turn on Application Insights**.
- If Application Insights is enabled, on the **Application Insights** pane, select **View Application Insights data**.
+1. After the pane updates, at the bottom, select **Apply** > **Yes**.
-After Application Insights opens, you can review various metrics for your logic app. For more information, review these topics:
+1. On the **Application Insights** pane, select **View Application Insights data**.
-* [Azure Logic Apps Running Anywhere - Monitor with Application Insights - part 1](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-monitor-with-application/ba-p/1877849)
-* [Azure Logic Apps Running Anywhere - Monitor with Application Insights - part 2](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-monitor-with-application/ba-p/2003332)
+ After the Application Insights dashboard opens, you can review metrics or logs for your logic app workflow. For example, to chart or query for data, on the Application Insights resource menu, under **Monitoring**, select **Metrics** or **Logs**.
+
+<a name="open-application-insights"></a>
+
+### Open Application Insights
+
+1. In the Azure portal, find your deployed logic app.
+
+1. On the logic app menu, under **Settings**, select **Application Insights**.
+
+1. On the **Application Insights** pane, select **View Application Insights data**.
+
+ After the Application Insights dashboard opens, you can review metrics or logs for your logic app workflow. For example, to chart or query for data, on the Application Insights resource menu, under **Monitoring**, select **Metrics** or **Logs**.
<a name="view-connections"></a>
logic-apps Enable Enhanced Telemetry Standard Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/enable-enhanced-telemetry-standard-workflows.md
+
+ Title: Enable and view enhanced telemetry for Standard workflows
+description: How to enable and view enhanced telemetry in Application Insights for Standard workflows in Azure Logic Apps.
+
+ms.suite: integration
++++ Last updated : 10/16/2023+
+# Customer intent: As a developer, I want to turn on and view enhanced telemetry in Application Insights for Standard logic app workflows.
++
+# Enable and view enhanced telemetry in Application Insights for Standard workflows in Azure Logic Apps
++
+This how-to guide shows how to turn on enhanced telemetry collection in Application Insights for your Standard logic app resource and then view the collected data after your workflow finishes a run.
+
+## Prerequisites
+
+- An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An [Application Insights](../azure-monitor/app/app-insights-overview.md) instance. You create this resource [in advance](../azure-monitor/app/create-workspace-resource.md), when you create your Standard logic app, or after logic app deployment.
+
+- A Standard logic app and workflow, either in the Azure portal or in Visual Studio Code.
+
+ - Your logic app resource or project must use the Azure Functions v4 runtime, which is enabled by default.
+
+ - Your logic app must [have enabled Application Insights](create-single-tenant-workflows-azure-portal.md#enable-open-application-insights) for diagnostics logging and tracing. You can do so either when you create your logic app or after deployment.
+
+## Enable enhanced telemetry in Application Insights
+
+### [Portal](#tab/portal)
+
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource.
+
+1. On the logic app menu, under **Development Tools**, select **Advanced Tools**. On the **Advanced Tools** page, select **Go**, which opens the Kudu tools.
+
+1. On the **Kudu** page, from the **Debug console** menu, select **CMD**. In the folder directory table, browse to the following file and select **Edit**: **site/wwwroot/host.json**
+
+1. In the **host.json** file, add the following JSON code:
+
+ ```json
+ {
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
+ "version": "[1, 2.00]"
+ },
+ "extensions": {
+ "workflow": {
+ "Settings": {
+ "Runtime.ApplicationInsightTelemetryVersion": "v2"
+ }
+ }
+ }
+ }
+ ```
+
+ This configuration enables the default level of verbosity. For other options, see [Apply filtering at the source](#filter-events-source).
+
+### [Visual Studio Code](#tab/visual-studio-code)
+
+1. In Visual Studio Code, open your logic app project, and then open the project's **host.json** file.
+
+1. In the **host.json** file, add the following JSON code:
+
+ ```json
+ {
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
+ "version": "[1, 2.00]"
+ },
+ "extensions": {
+ "workflow": {
+ "Settings": {
+ "Runtime.ApplicationInsightTelemetryVersion": "v2"
+ }
+ }
+ }
+ }
+ ```
+
+ This configuration enables the default level of verbosity. For other options, see [Apply filtering at the source](#filter-events-source).
+++
+<a name="open-application-insights"></a>
+
+## Open Application Insights
+
+After your workflow finishes a run and a few minutes pass, open your Application Insights resource.
+
+1. In the [Azure portal](https://portal.azure.com), on your logic app menu, under **Settings**, select **Application Insights**.
+
+1. On the Application Insights resource menu, under **Monitoring**, select **Logs**.
+
+<a name="view-enhanced-logs"></a>
+
+## View enhanced logs in Application Insights
+
+The following sections describe the tables in Application Insights where you can find and view the enhanced telemetry generated from your workflow run.
+
+| Table name | Description |
+||-|
+| [Requests](#requests-table) | Details about the following events in workflow runs: <br><br>- Trigger and action events <br>- Retry attempts <br>- Connector usage |
+| [Traces](#traces-table) | Details about the following events in workflow runs: <br><br>- Workflow start and end events <br>- Batch send and batch receive events |
+| [Exceptions](#exceptions-table) | Details about exception events in workflow runs |
+| [Dependencies](#dependencies-table) | Details about dependency events in workflow runs |
+
+### Requests table
+
+The Requests table contains fields that track data about the following events in Standard workflow runs:
+
+- Trigger and action events
+- Retry attempts
+- Connector usage
+
+To show how data gets into these fields, suppose you have the following example Standard workflow that starts with the **Request** trigger followed by the **Compose** action and the **Response** action.
+
+![Screenshot shows Azure portal and Standard workflow designer with trigger and actions.](media/enable-enhanced-telemetry-standard-workflows/workflow-overview.png)
+
+The trigger's settings have a parameter named **Custom Tracking Id**. The parameter value is set to an expression that pulls the **orderId** property value from the body of an incoming message:
+
+![Screenshot shows Azure portal, Standard workflow, Request trigger selected, Settings tab, and custom tracking Id.](media/enable-enhanced-telemetry-standard-workflows/requests-table/request-trigger-custom-tracking-id.png)
+
+Next, the workflow's **Compose** action settings have an added tracked property named **solutionName**. The property value is set to the name of the logic app resource.
+
+![Screenshot shows Azure portal, Standard workflow, Compose action selected, Settings tab, and tracked property.](media/enable-enhanced-telemetry-standard-workflows/requests-table/compose-action-tracked-property.png)
+
+ The **Compose** action is followed by a **Response** action that returns a response to the caller.
+
+The following list has example queries that you can create and run against the Requests table:
+
+| Task | Steps |
+||-|
+| View all trigger and action events | [Query for all trigger and action events](#requests-table-view-all-trigger-action-events) |
+| View only trigger events or action events | [Query for only trigger or action events](#requests-table-view-trigger-or-action-events) |
+| View trigger or action events with a specific operation type | [Query trigger or action events by operation type](#requests-table-view-trigger-action-events-type) |
+| View trigger and action events with a specific workflow run ID | [Query trigger and action events by workflow run ID](#requests-table-view-trigger-action-events-workflow-id) |
+| View trigger and action events with a specific client tracking ID | [Query trigger and action events by client tracking ID](#requests-table-view-events-client-tracking-id)
+| View trigger and action events with a specific solution name | [Query trigger and action events by solution name](#requests-table-view-events-solution-name) |
+| View trigger and action events with retry attempts | [Query trigger and action events for retry attempts](#requests-table-view-retries) |
+| View trigger and action events with connector usage | [Query for trigger and action events for connector usage](#requests-table-view-connector-usage) |
+
+<a name="requests-table-view-all-trigger-action-events"></a>
+
+#### Query for all trigger and action events
+
+After the workflow runs and a few minutes pass, you can create a query against the Requests table to view all the operation events.
+
+1. If necessary, select the time range that you want to review. By default, this value is the last 24 hours.
+
+1. To view all trigger and action events, create and run the following query:
+
+ ```kusto
+ requests
+ | sort by timestamp desc
+ | take 10
+ ```
+
+ The following example shows the **Results** tab with the noted columns and data in each row:
+
+ :::image type="content" source="media/enable-enhanced-telemetry-standard-workflows/requests-table/results-table.png" alt-text="Screenshot shows Application Insights, query, Results tab, and operation events from workflow run." lightbox="media/enable-enhanced-telemetry-standard-workflows/requests-table/results-table.png":::
+
+ | Column | Description | Example |
+ |--|-||
+ | **name** | Workflow operation name | For this example, the rows show **manual** (Request trigger), **Compose**, and **Response**. |
+ | **success** | Operation execution status | For this example, all the rows show **True** for a successful execution. If an error happened, the value is **False**. |
+ | **resultCode** | Operation execution status code | For this example, all the rows show **Succeeded** (200). |
+ | **duration** | Operation execution duration | Varies for each operation. |
+
+1. To view the details for a specific operation, expand the row for the trigger or action:
+
+ The following example shows the expanded details for the **Request** trigger:
+
+ :::image type="content" source="media/enable-enhanced-telemetry-standard-workflows/requests-table/request-trigger-details.png" alt-text="Screenshot shows Application Insights, Results tab for Request trigger, and details." lightbox="media/enable-enhanced-telemetry-standard-workflows/requests-table/request-trigger-details.png":::
+
+ | Property | Description | Example |
+ |-|-||
+ | **Category** | Operation category, which is always either **Workflow.Operations.Triggers** or **Workflow.Operations.Actions**, based on the operation | **Workflow.Operations.Triggers**. |
+ | **clientTrackingId** | Custom tracking ID, if specified | **123456** |
+ | **runId** | ID for the workflow run instance | **08585358375819913417237801890CU00** |
+ | **triggerName** | Trigger name | **manual** |
+ | **workflowId** | ID for the workflow that ran the trigger | **c7711d107e6647179c2e15fe2c2720ce** |
+ | **workflowName** | Name for the workflow that ran the trigger | **Request-Response-Workflow** |
+ | **operation_Name** | Name for the operation that ran the trigger. In this case, this name is the same as the workflow name. | **Request-Response-Workflow** |
+ | **operation_Id** | ID for the component or workflow that just ran. This ID is the same as the **runId** value for the workflow run instance. If exceptions or dependencies exist, this value transcends tables so you can link this trigger record to those exceptions or dependencies. | **08585358375819913417237801890CU00** |
+ | **operation_ParentId** | Linkable ID for the workflow that called the trigger | **f95138daff8ab129** |
+
+ The following example shows the expanded details for the **Compose** action:
+
+ :::image type="content" source="media/enable-enhanced-telemetry-standard-workflows/requests-table/compose-action-details.png" alt-text="Screenshot shows Application Insights, Results tab for Compose action, and details." lightbox="media/enable-enhanced-telemetry-standard-workflows/requests-table/compose-action-details.png":::
+
+ | Property | Description | Example |
+ |-|-||
+ | **Category** | Operation category, which is always either **Workflow.Operations.Triggers** or **Workflow.Operations.Actions**, based on the operation | **Workflow.Operations.Actions** |
+ | **clientTrackingId** | Custom tracking ID, if specified | **123456** |
+ | **actionName** | Action name | **Compose** |
+ | **runId** | ID for the workflow run instance | **08585358375819913417237801890CU00** |
+ | **workflowId** | ID for the workflow that ran the action | **c7711d107e6647179c2e15fe2c2720ce** |
+ | **workflowName** | Name for the workflow that ran the action | **Request-Response-Workflow** |
+ | **solutionName** | Tracked property name, if specified | **LA-AppInsights** |
+ | **operation_Name** | Name for the operation that ran the action. In this case, this name is the same as the workflow name. | **Request-Response-Workflow** |
+ | **operation_Id** | ID for the component or workflow that just ran. This ID is the same as the **runId** value for the workflow run instance. If exceptions or dependencies exist, this value transcends tables so you can link this action record to those exceptions or dependencies. | **08585358375819913417237801890CU00** |
+ | **operation_ParentId** | Linkable ID for the workflow that called the action | **f95138daff8ab129** |
+
+<a name="requests-table-view-trigger-or-action-events"></a>
+
+#### Query for only trigger or action events
+
+You can create a query against the Requests table to view a subset of operation events, based on operation category and the workflow name.
+
+1. If necessary, select the time range that you want to review. By default, this value is the last 24 hours.
+
+1. To view all trigger events in a specific workflow, create and run a query with the **customDimensions.Category** property value set to **Workflow.Operations.Triggers** and **operation_Name** set to the workflow name, for example:
+
+ ```kusto
+ requests
+ | where customDimensions.Category == "Workflow.Operations.Triggers" and operation_Name == "Request-Response-Workflow"
+ ```
+
+ ![Screenshot shows Requests table query for triggers only.](media/enable-enhanced-telemetry-standard-workflows/requests-table/triggers-only.png)
+
+1. To view all action events in a specific workflow, create a query with the **customDimensions.Category** property value set to **Workflow.Operations.Actions** and **operation_Name** set to the workflow name, for example:
+
+ ```kusto
+ requests
+ | where customDimensions.Category == "Workflow.Operations.Actions" and operation_Name == "Request-Response-Workflow"
+ ```
+
+ ![Screenshot shows Requests table query for actions only.](media/enable-enhanced-telemetry-standard-workflows/requests-table/actions-only.png)
+
+<a name="requests-table-view-trigger-action-events-type"></a>
+
+#### Query trigger or action events by operation type
+
+You can create a query against the Requests table to view events for a specific trigger or action type.
+
+1. If necessary, select the time range that you want to review. By default, this value is the last 24 hours.
+
+1. To view all operation events with a specific trigger type, create and run a query with the **customDimensions.triggerType** value set to the trigger type you want, for example:
+
+ ```kusto
+ requests
+ | where customDimensions.triggerType == "Request"
+ ```
+
+ ![Screenshot shows Requests table query for Request trigger type.](media/enable-enhanced-telemetry-standard-workflows/requests-table/trigger-type.png)
+
+1. To view all operation events with a specific action type, create and run a query with the **customDimensions.actionType** value set to the action type you want, for example:
+
+ ```kusto
+ requests
+ | where customDimensions.actionType == "Compose"
+ ```
+
+ ![Screenshot shows Requests table query for Compose action type.](media/enable-enhanced-telemetry-standard-workflows/requests-table/action-type.png)
+
+<a name="requests-table-view-trigger-action-events-workflow-id"></a>
+
+#### Query trigger and action events by workflow run ID
+
+You can create a query against the Requests table to view a subset of operation events, based on the workflow run ID. This workflow run ID is the same ID that you can find in the workflow's run history.
+
+1. If necessary, select the time range that you want to review. By default, this value is the last 24 hours.
+
+1. To view all operation events with a specific workflow run ID, create and run a query with the **operation_Id** value set to the workflow run ID, for example:
+
+ ```kusto
+ requests
+ | where operation_Id == "08585287554177334956853859655CU00"
+ ```
+
+ ![Screenshot shows Requests table query based on workflow run ID.](media/enable-enhanced-telemetry-standard-workflows/requests-table/workflow-run-id.png)
+
+<a name="requests-table-view-events-client-tracking-id"></a>
+
+#### Query trigger and action events by client tracking ID
+
+You can create a query against the Requests table to view a subset of operation events, based on the workflow name and client tracking ID.
+
+1. If necessary, select the time range that you want to review. By default, this value is the last 24 hours.
+
+1. To view all operation events with a specific client tracking ID in a specific workflow, create and run a query with the **operation_Name** value set to the workflow name and the **clientTrackingId** property value set to the value you want, for example:
+
+ ```kusto
+ requests
+ | where operation_Name == "Request-Response-Workflow"
+ | extend correlation = todynamic(tostring(customDimensions.correlation))
+ | where correlation.clientTrackingId == "123456"
+ ```
+
+ ![Screenshot shows query results using operation name and client tracking ID.](media/enable-enhanced-telemetry-standard-workflows/requests-table/query-operation-name-client-tracking-id.png)
+
+<a name="requests-table-view-events-solution-name"></a>
+
+#### Query trigger and action events by solution name
+
+You can create a query against the Requests table to view a subset of operation events, based on the workflow name and solution name.
+
+1. If necessary, select the time range that you want to review. By default, this value is the last 24 hours.
+
+1. To view all operation events with a specific solution name in a specific workflow, create and run a query with the **operation_Name** value set to the workflow name and the **solutionName** property value set to the value you want, for example:
+
+ ```kusto
+ requests
+ | where operation_Name == "Request-Response-Workflow" and customDimensions has "trackedProperties"
+ | extend trackedProperties = todynamic(tostring(customDimensions.trackedProperties))
+ | where trackedProperties.solutionName == "LA-AppInsights"
+ ```
+
+ ![Screenshot shows query results using operation name and solution name.](media/enable-enhanced-telemetry-standard-workflows/requests-table/query-operation-name-solution-name.png)
+
+#### Retry attempts
+
+To show how this data gets into the Requests table, the following example Standard workflow uses an **HTTP** action that calls a URL that doesn't resolve. The workflow also has a retry policy that's set to a fixed interval and retries three times, once every 60 seconds.
+
+![Screenshot shows Azure portal, Standard workflow, HTTP action selected, Settings tab, and retry policy.](media/enable-enhanced-telemetry-standard-workflows/requests-table/http-action-retry-policy.png)
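+
+In the underlying workflow definition, a fixed-interval retry policy like this one corresponds to settings similar to the following sketch on the **HTTP** action. The action name, method, and URI here are placeholders, not the sample's actual values:
+
+```json
+{
+  "actions": {
+    "HTTP": {
+      "type": "Http",
+      "inputs": {
+        "method": "GET",
+        "uri": "https://this-url-does-not-resolve.example.com",
+        "retryPolicy": {
+          "type": "fixed",
+          "count": 3,
+          "interval": "PT60S"
+        }
+      },
+      "runAfter": {}
+    }
+  }
+}
+```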
+
+<a name="requests-table-view-retries"></a>
+
+#### Query trigger and action events for retry attempts
+
+You can create a query against the Requests table to view a subset of operation events with retry attempts.
+
+1. If necessary, select the time range that you want to review. By default, this value is the last 24 hours.
+
+1. To view only trigger and action events with retry history, create and run the following query in Application Insights:
+
+ ```kusto
+ requests
+    | extend retryHistory = tostring(customDimensions.retryHistory)
+ | where isnotempty(retryHistory)
+ ```
+
+1. To view the retry attempts for a specific operation with a retry policy, expand the row for that operation.
+
+ The following example shows the expanded details for the **HTTP** action:
+
+ :::image type="content" source="media/enable-enhanced-telemetry-standard-workflows/requests-table/http-action-retry-details.png" alt-text="Screenshot shows Application Insights, Results tab for HTTP action, and details." lightbox="media/enable-enhanced-telemetry-standard-workflows/requests-table/http-action-retry-details.png":::
+
+    The **success** and **resultCode** property values indicate that the **HTTP** action failed. Along with the properties described in [Query the Requests table for all trigger and action events](#requests-table-view-all-trigger-action-events), the record contains the following information, which includes three retry attempts (a query sketch that expands these details into one row per attempt follows this table):
+
+    | Property | Description |
+    |-|-|
+    | **retryHistory** | History details for one or more retry attempts |
+    | **code** | Error type for a specific retry attempt |
+    | **error** | Details about the specific error that happened |
+
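+To break the raw **retryHistory** value into one row per retry attempt, you might extend the earlier query with something like the following sketch, which assumes that **retryHistory** parses as a JSON array:
+
+```kusto
+requests
+// Parse the retry history string into a dynamic array.
+| extend retryHistory = todynamic(tostring(customDimensions.retryHistory))
+| where isnotempty(retryHistory)
+// Expand the array so that each retry attempt becomes its own row.
+| mv-expand attempt = retryHistory
+| project timestamp, name, attemptCode = tostring(attempt.code), attemptError = tostring(attempt.error)
+```
+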
+<a name="requests-table-view-connector-usage"></a>
+
+#### Query trigger and action events for connector usage
+
+You can create a query against the Requests table to view a subset of operation events, based on specific connector usage.
+
+1. If necessary, select the time range that you want to review. By default, this value is the last 24 hours.
+
+1. To view all trigger events using a specific connector type, create and run a query with the following properties and values:
+
+ ```kusto
+ requests
+ | where customDimensions.Category == "Workflow.Operations.Triggers" and customDimensions.triggerType =="ApiConnectionWebhook" and customDimensions.apiName =="commondataservice"
+ ```
+
+ | Property | Example value |
+ |-||
+ | **customDimensions.Category** | **Workflow.Operations.Triggers** |
+ | **customDimensions.triggerType** | The operation type, for example, **ApiConnectionWebhook** |
+ | **customDimensions.apiName** | The connector's API name in JSON format, for example, **commondataservice** for the Microsoft Dataverse connector |
+
+ ![Screenshot shows Application Insights, Results tab for Microsoft Dataverse trigger events with ApiConnectionWebhook connection.](media/enable-enhanced-telemetry-standard-workflows/requests-table/apiconnectionwebhook-connection.png)
+
+1. To view all action events with specific connector usage, create and run a query with the **customDimensions.Category** value set to **Workflow.Operations.Actions**, the **customDimensions.actionType** value set to the operation type, and the **customDimensions.apiName** value set to the connector's API name in JSON format, for example:
+
+    | Property | Example value |
+    |-||
+    | **customDimensions.Category** | **Workflow.Operations.Actions** |
+    | **customDimensions.actionType** | The operation type, for example, **ApiConnection** |
+    | **customDimensions.apiName** | The connector's API name in JSON format, for example, **office365** for the Microsoft Office 365 Outlook connector |
+
+ ```kusto
+ requests
+ | where customDimensions.Category == "Workflow.Operations.Actions" and customDimensions.actionType == "ApiConnection" and customDimensions.apiName == "office365"
+ ```
+
+ ![Screenshot shows Application Insights, Results tab for Microsoft Office 365 Outlook action events with ApiConnection connection.](media/enable-enhanced-telemetry-standard-workflows/requests-table/apiconnection-connection.png)
+
+For both triggers and actions, Application Insights differentiates between the types of connections that exist. You might see different values in the **actionType** and **triggerType** fields based on whether the connection has **ApiConnection**, **ApiConnectionWebhook**, the built-in basic type such as **Request**, or the built-in service provider-based **ServiceProvider** type.
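+
+For a quick breakdown of how often each connection type and connector appears in your telemetry, you might run a summary query similar to the following sketch:
+
+```kusto
+requests
+| where tostring(customDimensions.Category) in ("Workflow.Operations.Triggers", "Workflow.Operations.Actions")
+// Count the events by connection type and connector API name.
+| summarize count() by
+    triggerType = tostring(customDimensions.triggerType),
+    actionType = tostring(customDimensions.actionType),
+    apiName = tostring(customDimensions.apiName)
+```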
+
+### Traces table
+
+The Traces table contains fields that track data about the following events in Standard workflow runs:
+
+- Workflow start and end events
+
+ This information is represented as two distinct events due to the potential for long-running workflow executions.
+
+- Batch send and receive events
+
+  For more information, see [Using Built-In Batch Operations in Azure Logic Apps (Standard)](https://techcommunity.microsoft.com/t5/azure-integration-services-blog/using-built-in-batch-operations-in-azure-logic-apps-standard/ba-p/3650659).
+
+The following list has example queries that you can create and run against the Traces table:
+
+| Task | Steps |
+||-|
+| View start and end events in all workflow runs | [Query for start and end events in all workflow runs](#traces-table-view-all-start-end-events) |
+| View start and end events in a specific workflow run | [Query for start and end events in a workflow run](#traces-table-view-start-end-events-specific-run) |
+| View batch send and receive events in all workflow runs | [Query for batch send and batch receive events in all workflow runs](#traces-table-view-all-batch-send-receive-events) |
+
+<a name="traces-table-view-all-start-end-events"></a>
+
+#### Query for start and end events in all workflow runs
+
+You can create a query against the Traces table to view all the start and end events for all workflow runs.
+
+1. If necessary, select the time range that you want to review. By default, this value is the last 24 hours.
+
+1. Create and run a query with the **customDimensions.Category** value set to **Workflow.Operations.Runs**, for example:
+
+ ```kusto
+ traces
+ | where customDimensions.Category == "Workflow.Operations.Runs"
+ ```
+
+    ![Screenshot shows Application Insights, Results tab for start and end events across all workflow runs.](media/enable-enhanced-telemetry-standard-workflows/traces-table/start-end-events-all-runs.png)
+
+<a name="traces-table-view-start-end-events-specific-run"></a>
+
+#### Query for start and end events in a specific workflow run
+
+You can create a query against the Traces table to view the start and end events for a specific workflow run.
+
+1. If necessary, select the time range that you want to review. By default, this value is the last 24 hours.
+
+1. Create and run a query with the **customDimensions.Category** value set to **Workflow.Operations.Runs** and the **operation_Id** value set to the workflow run ID, for example:
+
+    ```kusto
+    traces
+    | where customDimensions.Category == "Workflow.Operations.Runs"
+    | where operation_Id == "08585287571846573488078100997CU00"
+    ```
+
+    ![Screenshot shows Application Insights, Results tab for start and end events for a specific run.](media/enable-enhanced-telemetry-standard-workflows/traces-table/start-end-events-specific-run.png)
+
+<a name="traces-table-view-all-batch-send-receive-events"></a>
+
+#### Query for batch send and batch receive events in all workflow runs
+
+You can create a query against the Traces table to view the batch send and batch receive events in all workflow runs.
+
+1. If necessary, select the time range that you want to review. By default, this value is the last 24 hours.
+
+1. Create and run a query with the **customDimensions.Category** value set to **Workflow.Operations.Batch**, for example:
+
+ ```kusto
+ traces
+ | where customDimensions.Category == "Workflow.Operations.Batch"
+ ```
+
+ ![Screenshot shows Application Insights, Results tab for batch send and batch receive events in all workflow runs.](media/enable-enhanced-telemetry-standard-workflows/traces-table/batch-events-all-runs.png)
+
+### Exceptions table
+
+The Exceptions table contains fields that track data about exception events in Standard workflow runs. To show how data gets into these fields, suppose you have the following example Standard workflow that starts with the **Request** trigger followed by the **Compose** action and the **Response** action. The **Compose** action uses an expression that divides a value by zero, which generates an exception:
+
+![Screenshot shows Azure portal, Standard workflow designer, Request trigger, Compose action with an exception-generating expression, and Response action.](media/enable-enhanced-telemetry-standard-workflows/exceptions-table/compose-action-exception-expression.png)
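+
+As a rough illustration, an exception-generating **Compose** action like this one might be defined with a divide-by-zero expression similar to the following sketch. The article doesn't show the sample's exact expression, so treat the expression and action name as assumptions:
+
+```json
+{
+  "Compose": {
+    "type": "Compose",
+    "inputs": "@div(1, 0)",
+    "runAfter": {}
+  }
+}
+```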
+
+<a name="exceptions-table-view-exception-events"></a>
+
+#### Query for exception events in all workflow runs
+
+You can create a query against the Exceptions table to view the exception events in all workflow runs.
+
+1. If necessary, select the time range that you want to review. By default, this value is the last 24 hours.
+
+1. To view all exception events, create and run the following query in Application Insights:
+
+ ```kusto
+ exceptions
+ | sort by timestamp desc
+ ```
+
+1. To view the details for a specific exception, expand the row for that exception:
+
+ The following example shows the expanded exception for the **Compose** action and details about the exception:
+
+ :::image type="content" source="media/enable-enhanced-telemetry-standard-workflows/exceptions-table/exception-details.png" alt-text="Screenshot shows Application Insights, Results tab for exception events with the exception event for the Compose action expanded, and exception details." lightbox="media/enable-enhanced-telemetry-standard-workflows/exceptions-table/exception-details.png":::
+
+ | Property | Description |
+ |-|-|
+ | **problemId** | Exception type, or a short description about the exception that happened |
+ | **outerMessage** | More detailed description about the exception |
+ | **details** | Verbose and most complete information about the exception |
+ | **clientTrackingId** | Client tracking ID, if specified |
+ | **workflowId** | ID for the workflow that experienced the exception |
+ | **workflowName** | Name for the workflow that experienced the exception |
+ | **runId** | ID for the workflow run instance |
+ | **actionName** | Name for the action that failed with the exception |
+ | **operation_Name** | Name for the workflow that experienced the exception |
+ | **operation_Id** | ID for the component or workflow that just ran. This ID is the same as the **runId** value for the workflow run instance. This value transcends tables so you can link this exception record with the workflow run instance. |
+ | **operation_ParentId** | ID for the workflow that called the action, which you can link to the action's ID in the Requests table |
+
+1. To view the exceptions for a specific workflow, create and run the following query:
+
+ ```kusto
+ exceptions
+ | where operation_Name contains "Request-Response-Workflow-Exception"
+ ```
+
+### Dependencies table
+
+The Dependencies table contains fields that track data about dependency events in Standard workflow runs. These events are emitted when one resource calls another resource and when both resources use Application Insights. Examples for Azure Logic Apps include a service calling another service over HTTP, a database, or file system. Application Insights measures the duration of dependency calls and whether those calls succeed or fail, along with information, such as the dependency name. You can investigate specific dependency calls and correlate them to requests and exceptions.
+
+To show how data gets into these fields, suppose you have the following example Standard parent workflow that calls a child workflow over HTTP using the **HTTP** action:
+
+![Screenshot shows Azure portal, Standard workflow designer with parent workflow using HTTP action to call a child workflow.](media/enable-enhanced-telemetry-standard-workflows/dependencies-table/parent-child-workflow.png)
+
+<a name="dependencies-table-view-dependency-events"></a>
+
+#### Query for dependency events in a specific workflow
+
+You can create a query against the Dependencies table to view the dependency events in a specific workflow run.
+
+1. If necessary, select the time range that you want to review. By default, this value is the last 24 hours.
+
+1. To view dependency events between the parent workflow and the child workflow, create and run the following query:
+
+ ```kusto
+ union requests, dependencies
+ | where operation_Id contains "<runId>"
+ ```
+
+ This query uses the [**union** operator](/azure/data-explorer/kusto/query/unionoperator) to return records from the Requests table and Dependencies table. The query also uses the **operation_Id** property value to provide the link between records by specifying the workflow **runId** value you want, for example:
+
+ ```kusto
+ union requests, dependencies
+ | where operation_Id contains "08585355753671110236506928546CU00"
+ ```
+
+ The following example shows a dependency event for the specified workflow, including records for the operation events in the parent workflow from the Requests table and then a dependency record from the Dependencies table:
+
+ :::image type="content" source="media/enable-enhanced-telemetry-standard-workflows/dependencies-table/dependency-details.png" alt-text="Screenshot shows Application Insights, Results tab with dependency events for a specific workflow." lightbox="media/enable-enhanced-telemetry-standard-workflows/dependencies-table/dependency-details.png":::
+
+ For the operation event records, the **itemType** column shows their record types as **request**. For the dependency record, the **itemType** column indicates the record type as **dependency**.
+
+ | Property | Description |
+ |-|-|
+ | **runId** | ID for the workflow run instance |
+ | **actionName** | Name for the action where the dependency event happens |
+ | **operation_Id** | ID for the specified workflow. This ID is the same as the **runId** value for the workflow run instance. This value transcends tables so you can link this dependency record with the workflow run instance. |
+ | **operation_ParentId** | ID for the action where the dependency event happens, which also links the operation event record and dependency event record together |
+
+With your query, you can also visualize the dependency call from a parent workflow to a child workflow when you use the application map in Application Insights. The **operation_Id** value in your query provides the link that makes this visualization possible.
+
+To open the application map, on the Application Insights resource menu, under **Investigate**, select **Application map**.
+
+![Screenshot shows Application Insights and application map with dependency between parent workflow and child workflow.](media/enable-enhanced-telemetry-standard-workflows/dependencies-table/application-map.png)
+
+<a name="filter-events"></a>
+
+## Filter events
+
+In Application Insights, you can filter events in the following ways:
+
+- Create and run queries as described in earlier sections.
+
+- Filter at the source by specifying criteria to evaluate before emitting events.
+
+  By applying filters at the source, you can reduce the amount of necessary storage and, as a result, your operating costs.
+
+<a name="filter-events-source"></a>
+
+### Apply filtering at the source
+
+In the Requests table or Traces table, a record has a node named **customDimensions**, which contains a **Category** property. For example, in the Requests table, the request record for a Batch trigger event looks similar to the following sample:
+
+![Screenshot shows Application Insights with Requests table and record for a Batch messages trigger event.](media/enable-enhanced-telemetry-standard-workflows/requests-table-batch-trigger-event.png)
+
+In the Requests table, the following **Category** property values can help you differentiate and associate different verbosity levels:
+
+| Category value | Description |
+|-|-|
+| **Workflow.Operations.Triggers** | Identifies a request record for a trigger event |
+| **Workflow.Operations.Actions** | Identifies a request record for an action event |
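+
+Before you decide which categories to filter, you can check how many records each category currently contributes, for example, with a query like the following sketch:
+
+```kusto
+requests
+| summarize count() by Category = tostring(customDimensions.Category)
+```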
+
+For each **Category** value, you can independently set the verbosity level in the **host.json** file for your logic app resource or project. For example, to return only the records for trigger or action events that have errors, in the **host.json** file, you can add the following **logging** JSON object, which contains a **logLevel** JSON object with the verbosity levels you want:
+
+```json
+{
+ "logging": {
+ "logLevel": {
+ "Workflow.Operations.Actions": "Error",
+ "Workflow.Operations.Triggers": "Error"
+ }
+ }
+}
+```
+
+For Traces table records, the following examples show ways that you can change the verbosity level for events:
+
+```json
+{
+ "logging": {
+ "logLevel": {
+ "Workflow.Host": "Warning",
+ "Workflow.Jobs": "Warning",
+ "Workflow.Runtime": "Warning"
+ }
+ }
+}
+```
+
+The following example sets the log's default verbosity level to **Warning**, but keeps the verbosity level at **Information** for trigger, action, and workflow run events:
+
+```json
+{
+ "logging": {
+ "logLevel": {
+ "default": "Warning",
+ "Workflow.Operations.Actions": "Information",
+ "Workflow.Operations.Runs": "Information",
+ "Workflow.Operations.Triggers": "Information"
+ }
+ }
+}
+```
+
+If you don't specify any **logLevel** values, the default verbosity level is **Information**. For more information, see [Configure log levels](../azure-functions/configure-monitoring.md#configure-log-levels).
+
+### [Portal](#tab/portal)
+
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource.
+
+1. On the logic app menu, under **Development Tools**, select **Advanced Tools**. On the **Advanced Tools** page, select **Go**, which opens the Kudu tools.
+
+1. On the **Kudu** page, from the **Debug console** menu, select **CMD**. In the folder directory table, browse to the following file and select **Edit**: **site/wwwroot/host.json**
+
+1. In the **host.json** file, add the **logging** JSON object with the **logLevel** values set to the verbosity levels that you want:
+
+ ```json
+ {
+ "logging": {
+ "logLevel": {
+ "Workflow.Operations.Actions": "<verbosity-level>",
+ "Workflow.Operations.Triggers": "<verbosity-level>"
+ }
+ }
+ }
+ ```
+
+### [Visual Studio Code](#tab/visual-studio-code)
+
+1. In Visual Studio Code, open your logic app project, and then open the project's **host.json** file.
+
+1. In the **host.json** file, add the **logging** JSON object with the **logLevel** values set to the verbosity levels that you want:
+
+ ```json
+ {
+ "logging": {
+ "logLevel": {
+ "Workflow.Operations.Actions": "<verbosity-level>",
+ "Workflow.Operations.Triggers": "<verbosity-level>"
+ }
+ }
+ }
+ ```
+++
+<a name="view-workflow-metrics"></a>
+
+## View workflow metrics in Application Insights
+
+With the telemetry enhancements in Application Insights, you also get workflow insights in the Metrics dashboard.
+
+<a name="open-metrics-dashboard"></a>
+
+### Open the Metrics dashboard and set up basic filters
+
+1. In the Azure portal, open your Application Insights resource, if not opened already.
+
+1. On your Application Insights resource menu, under **Monitoring**, select **Metrics**.
+
+1. From the **Scope** list, select your Application Insights instance.
+
+1. From the **Metric Namespace** list, select **workflow.operations**.
+
+1. From the **Metric** list, select a metric, for example, **Runs Completed**.
+
+1. From the **Aggregation** list, select a type, for example, **Count** or **Avg**.
+
+ When you're done, the Metrics dashboard shows a chart with your finished workflow executions.
+
+ :::image type="content" source="media/enable-enhanced-telemetry-standard-workflows/metrics-dashboard.png" alt-text="Screenshot shows Application Insights with Metrics dashboard and chart that shows number of finished workflow executions over time." lightbox="media/enable-enhanced-telemetry-standard-workflows/metrics-dashboard.png":::
+
+<a name="filter-by-workflow"></a>
+
+### Filter based on a specific workflow
+
+When you enable multidimensional metrics in the Metrics dashboard, you can target a subset of the overall events captured in Application Insights and filter events based on a specific workflow.
+
+1. On your Application Insights resource, [enable multidimensional metrics](../azure-monitor/app/get-metric.md#enable-multidimensional-metrics).
+
+1. In Application Insights, [open the Metrics dashboard](#open-metrics-dashboard).
+
+1. On the chart toolbar, select **Add filter**.
+
+1. From the **Property** list, select **Workflow**.
+
+1. From the **Operator** list, select the equal sign (**=**).
+
+1. From the **Values** list, select the workflows you want.
+
+ :::image type="content" source="media/enable-enhanced-telemetry-standard-workflows/multidimensional-metrics.png" alt-text="Screenshot shows Application Insights with Metrics dashboard and chart with multidimensional metrics." lightbox="media/enable-enhanced-telemetry-standard-workflows/multidimensional-metrics.png":::
+
+<a name="view-live-metrics"></a>
+
+## View "live" log data and metrics
+
+With Application Insights enhanced telemetry enabled, you can view near real-time log data and other metrics from your Application Insights instance in the Azure portal. You can use this visualization to plot inbound requests, outbound requests, and overall health. You also get a table for trace level diagnostics.
+
+1. In the Azure portal, open your Application Insights resource, if not opened already.
+
+1. On your Application Insights resource menu, under **Investigate**, select **Live metrics**.
+
+ The **Live metrics** page shows the log data and other metrics, for example:
+
+ :::image type="content" source="media/enable-enhanced-telemetry-standard-workflows/live-metrics.png" alt-text="Screenshot shows Azure portal and Application Insights menu with selected item named Live metrics." lightbox="media/enable-enhanced-telemetry-standard-workflows/live-metrics.png":::
+
+For more information, see [Live Metrics: Monitor and diagnose with 1-second latency](../azure-monitor/app/live-stream.md).
+
+> [!NOTE]
+>
+> As Standard logic app workflows are based on Azure Functions,
+> **Live Metrics** supports these logic app workflows.
+
+<a name="view-stream-application-logs"></a>
+
+## Stream and view debug output from application log files
+
+With Application Insights enhanced telemetry enabled, you can stream verbose debugging information in the Azure portal for your application's log files. This information is equivalent to the output generated from debugging your workflow in your local Visual Studio Code environment.
+
+1. In the Azure portal, open your Standard logic app resource.
+
+1. On your logic app resource menu, under **Monitoring**, select **Log stream**.
+
+ The **Log stream** page connects to your Application Insights instance and shows the debugging output. For example, the following output includes request and response calls among other information:
+
+ :::image type="content" source="media/enable-enhanced-telemetry-standard-workflows/log-stream.png" alt-text="Screenshot shows Azure portal and Standard logic app menu with selected item named Log stream." lightbox="media/enable-enhanced-telemetry-standard-workflows/log-stream.png":::
+
+## Next steps
+
+[Enable or open Application Insights](create-single-tenant-workflows-azure-portal.md#enable-open-application-insights)
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md
ms.suite: integration Previously updated : 07/06/2023 Last updated : 10/16/2023
The following table identifies the authentication types that are available on th
| [Client Certificate](#client-certificate-authentication) | Azure API Management, Azure App Services, HTTP, HTTP + Swagger, HTTP Webhook | | [Active Directory OAuth](#azure-active-directory-oauth-authentication) | - **Consumption**: Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP + Swagger, HTTP Webhook <br><br>- **Standard**: Azure Automation, Azure Blob Storage, Azure Event Hubs, Azure Queues, Azure Service Bus, Azure Tables, HTTP, HTTP Webhook, SQL Server | | [Raw](#raw-authentication) | Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP + Swagger, HTTP Webhook |
-| [Managed identity](#managed-identity-authentication) | **Built-in connectors**: <br><br>- **Consumption**: Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP Webhook <br><br>- **Standard**: Azure Automation, Azure Blob Storage, Azure Event Hubs, Azure Queues, Azure Service Bus, Azure Tables, HTTP, HTTP Webhook, SQL Server <br><br>**Note**: Currently, most [built-in, service provider-based connectors](/azure/logic-apps/connectors/built-in/reference/) don't support selecting user-assigned managed identities for authentication. <br><br>**Managed connectors**: Microsoft Entra ID Protection, Azure App Service, Azure Automation, Azure Blob Storage, Azure Container Instance, Azure Cosmos DB, Azure Data Explorer, Azure Data Factory, Azure Data Lake, Azure Event Grid, Azure Event Hubs, Azure IoT Central V2, Azure IoT Central V3, Azure Key Vault, Azure Log Analytics, Azure Queues, Azure Resource Manager, Azure Service Bus, Azure Sentinel, Azure Table Storage, Azure VM, HTTP with Microsoft Entra ID, SQL Server |
+| [Managed identity](#managed-identity-authentication) | **Built-in connectors**: <br><br>- **Consumption**: Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP Webhook <br><br>- **Standard**: Azure Automation, Azure Blob Storage, Azure Event Hubs, Azure Queues, Azure Service Bus, Azure Tables, HTTP, HTTP Webhook, SQL Server <br><br>**Note**: Currently, most [built-in, service provider-based connectors](/azure/logic-apps/connectors/built-in/reference/) don't support selecting user-assigned managed identities for authentication. <br><br>**Managed connectors**: Azure App Service, Azure Automation, Azure Blob Storage, Azure Container Instance, Azure Cosmos DB, Azure Data Explorer, Azure Data Factory, Azure Data Lake, Azure Event Grid, Azure Event Hubs, Azure IoT Central V2, Azure IoT Central V3, Azure Key Vault, Azure Log Analytics, Azure Queues, Azure Resource Manager, Azure Service Bus, Azure Sentinel, Azure Table Storage, Azure VM, HTTP with Microsoft Entra ID, SQL Server |
<a name="secure-inbound-requests"></a>
In the [Azure portal](https://portal.azure.com), add one or more authorization p
| Property | Required | Type | Description | |-|-||-| | **Policy name** | Yes | String | The name that you want to use for the authorization policy |
- | **Policy type** | Yes | String | Either **Microsoft Entra ID** for bearer type tokens or **AADPOP** for Proof-of-Possession type tokens. |
+ | **Policy type** | Yes | String | Either **AAD** for bearer type tokens or **AADPOP** for Proof-of-Possession type tokens. |
| **Claims** | Yes | String | A key-value pair that specifies the claim type and value that the workflow's Request trigger expects in the access token presented by each inbound call to the trigger. You can add any standard claim you want by selecting **Add standard claim**. To add a claim that's specific to a PoP token, select **Add custom claim**. <br><br>Available standard claim types: <br><br>- **Issuer** <br>- **Audience** <br>- **Subject** <br>- **JWT ID** (JSON Web Token identifier) <br><br>Requirements: <br><br>- At a minimum, the **Claims** list must include the **Issuer** claim, which has a value that starts with `https://sts.windows.net/` or `https://login.microsoftonline.com/` as the Microsoft Entra issuer ID. <br><br>- Each claim must be a single string value, not an array of values. For example, you can have a claim with **Role** as the type and **Developer** as the value. You can't have a claim that has **Role** as the type and the values set to **Developer** and **Program Manager**. <br><br>- The claim value is limited to a [maximum number of characters](logic-apps-limits-and-config.md#authentication-limits). <br><br>For more information about these claim types, review [Claims in Microsoft Entra security tokens](../active-directory/develop/security-tokens.md#json-web-tokens-and-claims). You can also specify your own claim type and value. | The following example shows the information for a PoP token:
machine-learning How To Use Openai Models In Azure Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-openai-models-in-azure-ml.md
Title: How to use Azure OpenAI models in Azure Machine Learning description: Use Azure OpenAI models in Azure Machine Learning--++ Previously updated : 06/30/2023 Last updated : 10/12/2023
The model catalog (preview) in Azure Machine Learning studio is your starting po
> [!TIP] >Supported OpenAI models are published to the AzureML Model Catalog. View a complete list of [Azure OpenAI models](../ai-services/openai/concepts/models.md). You can filter the list of models in the model catalog by inference task, or by finetuning task. Select a specific model name and see the model card for the selected model, which lists detailed information about the model. For example:
-> [!NOTE]
->Use of Azure OpenAI models in Azure Machine Learning requires Azure OpenAI services resources. You can request access to Azure OpenAI service [here](https://go.microsoft.com/fwlink/?linkid=2222006&clcid=0x409).
### Connect to Azure OpenAI service
-In order to deploy an Azure OpenAI model, you need to have an [Azure OpenAI resource](https://azure.microsoft.com/products/cognitive-services/openai-service/). Azure Machine Learning creates a default Azure OpenAI resource on behalf of the user when you deploy any Azure OpenAI model.
+In order to deploy an Azure OpenAI model, you need to have an [Azure OpenAI resource](https://azure.microsoft.com/products/cognitive-services/openai-service/). You can create an Azure OpenAI resource following the instructions [here](../ai-services/openai/how-to/create-resource.md).
### Deploying Azure OpenAI models To deploy an Azure OpenAI model from Azure Machine Learning: 1. Select **Model Catalog** in the left pane.
-1. Select on **Azure OpenAI Service** from the options.
-1. Select a model to deploy
+1. Select **View Models** under Azure OpenAI language models. Then select a model to deploy.
1. Select `Deploy` to deploy the model to the Azure OpenAI service.
- :::image type="content" source="./media/how-to-use-openai-models-in-azure-ml/deploy-to-azure-open-ai.png" lightbox="./media/how-to-use-openai-models-in-azure-ml/deploy-to-azure-open-ai.png" alt-text="Screenshot showing the deploy to Azure OpenAI.":::
+ :::image type="content" source="./media/how-to-use-openai-models-in-azure-ml/deploy-to-azure-open-ai-turbo.png" lightbox="./media/how-to-use-openai-models-in-azure-ml/deploy-to-azure-open-ai-turbo.png" alt-text="Screenshot showing the deploy to Azure OpenAI.":::
-1. Provide a name for your deployment in **Deployment Name** and select **Finish**.
+1. Select **Azure OpenAI resource** from the options.
+1. Provide a name for your deployment in **Deployment Name** and select **Deploy**.
1. To find the models deployed to the Azure OpenAI service, go to the **Endpoint** section in your workspace. 1. Select the **Azure OpenAI** tab and find the deployment you created. When you select the deployment, you'll be redirected to the OpenAI resource that is linked to the deployment.
You can invoke the finetune settings form by selecting on the **Finetune** butto
**Finetune Settings:** **Training Data**
-1. Pass in the training data you would like to use to finetune your model. You can choose to either upload a local file (in JSONL format) or select an existing registered dataset from your workspace. The dataset needs to have two fields - prompt and completion.
+1. Pass in the training data you would like to use to finetune your model. You can choose to either upload a local file (in JSONL format) or select an existing registered dataset from your workspace.
+For models with a completion task type, the training data you use must be formatted as a JSON Lines (JSONL) document in which each line represents a single prompt-completion pair.
+ :::image type="content" source="./media/how-to-use-openai-models-in-azure-ml/finetune-training-data.png" lightbox="./media/how-to-use-openai-models-in-azure-ml/finetune-training-data.png" alt-text="Screenshot showing the training data in the finetune UI section.":::
+ For models with a chat task type, each row in the dataset should be a list of JSON objects. Each row corresponds to a conversation, and each object in the row is a turn/utterance in the conversation. Format sketches for both task types follow the next screenshot.
-* Validation data: Pass in the data you would like to use to validate your model. Selecting **Automatic split** reserves an automatic split of training data for validation. Alternatively, you can provide a different validation dataset.
-* Test data: Pass in the test data you would like to use to evaluate your finetuned model. Selecting **Automatic split** reserves an automatic split of training data for test.
+ :::image type="content" source="./media/how-to-use-openai-models-in-azure-ml/finetune-training-data-chat.png" lightbox="./media/how-to-use-openai-models-in-azure-ml/finetune-training-data-chat.png" alt-text="Screenshot showing the training data after the data is uploaded into Azure.":::
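+
+ As a rough illustration only, a completion-style JSONL file has one prompt-completion object per line, and a chat-style file has one conversation per line. The field names shown here (`prompt`, `completion`, `role`, `content`) are common conventions and are assumptions for this sketch, not values taken from this article:
+
+ ```json
+ {"prompt": "Summarize the status of order 123.", "completion": "Order 123 shipped on March 3 and arrives in two days."}
+ {"prompt": "Summarize the status of order 456.", "completion": "Order 456 is delayed and now arrives next week."}
+ ```
+
+ ```json
+ [{"role": "system", "content": "You are a support assistant."}, {"role": "user", "content": "Where is my order?"}, {"role": "assistant", "content": "Your order shipped yesterday and arrives tomorrow."}]
+ ```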
-1. Select **Finish** in the finetune form to submit your finetuning job. Once the job completes, you can view evaluation metrics for the finetuned model. You can then deploy this finetuned model to an endpoint for inferencing.
+ * Validation data: Pass in the data you would like to use to validate your model.
+
+2. Select **Finish** in the finetune form to submit your finetuning job. Once the job completes, you can view evaluation metrics for the finetuned model. You can then deploy this finetuned model to an endpoint for inferencing.
**Customizing finetuning parameters:**
-If you would like to customize the finetuning parameters, you can select on the Customize button in the Finetune wizard to configure parameters such as batch size, number of epochs, learning rate multiplier or another desired parameter. Each of these settings has default values, but can be customized via code based samples, if needed.
+If you would like to customize the finetuning parameters, you can select the **Customize** button in the Finetune wizard to configure parameters such as batch size, number of epochs, and learning rate multiplier. Each of these settings has default values, but can be customized via code-based samples, if needed.
**Deploying finetuned models:** To deploy a finetuned Azure OpenAI model from Azure Machine Learning:
To enable users to quickly get started with code based finetuning, we have publi
### Troubleshooting Here are some steps to help you resolve any of the following issues with your Azure OpenAI in Azure Machine Learning experience.
+Currently, a maximum of 10 workspaces can be designated for a particular subscription. If a user creates more workspaces, they'll get access to the models, but their jobs will fail.
+ You might receive any of the following errors when you try to deploy an Azure OpenAI model. - **Only one deployment can be made per model name and version**
- - **Fix**: You'll need to go to the [Azure OpenAI Studio](https://oai.azure.com/portal) and delete the deployments of the model you're trying to deploy.
+ - **Fix**: Go to the [Azure OpenAI Studio](https://oai.azure.com/portal) and delete the deployments of the model you're trying to deploy.
- **Failed to create deployment** - **Fix**: Azure OpenAI failed to create. This is due to Quota issues, make sure you have enough quota for the deployment.
migrate Discovered Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/discovered-metadata.md
File size| sys.master_files| Recommended SKU size (Storage dimension)
## ASP.NET web apps data
-Azure Migrate appliance used for discovery of VMware VMs can also collect data on ASP.NET web applications.
+The Azure Migrate appliance used for discovery of VMs can also collect data on ASP.NET web applications.
-> [!Note]
-> Currently this feature is only available for servers running in your VMware environment.
-
-Here's the web apps configuration data that the appliance collects from each Windows server discovered in your VMware environment.
+Here's the web apps configuration data that the appliance collects from each Windows server discovered in your environment.
**Entity** | **Data** | Web apps | Application Name <br/>Configuration Path <br/>Frontend Bindings <br/>Enabled Frameworks <br/>Hosting Web Server<br/>Sub-Applications and virtual applications <br/>Application Pool name <br/>Runtime version <br/>Managed pipeline mode Web server | Server Name <br/>Server Type (currently only IIS) <br/>Configuration Location <br/>Version <br/>FQDN <br/>Credentials used for discovery <br/>List of Applications
+## Java web apps data
+
+**Entity** | **Data**
+--- | ---
+Web apps | Application Name <br/> Web Server ID <br/> Web Server Name <br/> Display Name<br/> Directories <br/>Configurations <br/>Bindings <br/>Discovered Frameworks (may contain JVM version) <br/>Requests (CPU requests) <br/>Limits (CPU Limits) <br/> WorkloadType <br/> Application Scratch Path <br/>Static Folders
+Web server | OS Type <br/> OS Name<br/> OS Version <br/> OS Architecture<br/> Host Name <br/> CatalinaHomes <br/> Tomcat Version <br/>JVM Version<br/> User Name <br/> User ID<br/> Group Name<br/> Group ID
+ ## Spring Boot web apps data The Azure Migrate appliance used for discovery can also collect data on Spring Boot web applications.
Here's the web apps configuration data that the appliance collects from each Win
Web apps | Application name <br/>Maven artifact name <br/>JAR file location <br/>JAR file checksum <br/>JAR file size<br/>Spring Boot version<br/>Maven build JDK version <br/> Application property files <br/>Certificates file names <br/> Static content location <br/> Application port <br/> Binding ports (including app port) <br/> Logging configuration <br/> JAR file last modified time OS runtime | OS installed JDK version <br/> JVM options <br/> JVM heap memory <br/> OS name <br/> OS version <br/> Environment variables - ## Application dependency data Azure Migrate appliance can collect data about inter-server dependencies for servers running in your VMware environment/Hyper-V environment/ physical servers or servers running on other clouds like AWS, GCP etc.
migrate Migrate Support Matrix Hyper V Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v-migration.md
ms. Previously updated : 09/01/2023 Last updated : 10/16/2023
You can select up to 10 VMs at once for replication. If you want to migrate more
| **Linux boot** | If /boot is on a dedicated partition, it should reside on the OS disk, and not be spread across multiple disks.<br> If /boot is part of the root (/) partition, then the '/' partition should be on the OS disk, and not span other disks. | | **UEFI boot** | Supported. UEFI-based VMs will be migrated to Azure generation 2 VMs. | | **UEFI - Secure boot** | Not supported for migration.|
-| **Disk size** | Up to 2 TB OS disk for gen 1 VM; up to 4 TB OS disk for gen 2 VM; 32 TB for data disks. </br></br> For existing Azure Migrate projects, you may need to upgrade the replication provider on the Hyper-V host to the latest version to replicate large disks up to 32 TB.|
+| **Disk size** | Up to 2 TB OS disk for gen 1 VM; up to 4 TB OS disk for gen 2 VM; 32 TB for data disks. </br></br> For existing Azure Migrate projects, you might need to upgrade the replication provider on the Hyper-V host to the latest version to replicate large disks up to 32 TB.|
| **Disk number** | A maximum of 16 disks per VM.| | **Encrypted disks/volumes** | Not supported for migration.| | **RDM/passthrough disks** | Not supported for migration.| | **Shared disk** | VMs using shared disks aren't supported for migration.|
+| **Ultra disk** | Ultra disk migration isn't supported from the Azure Migrate portal. You have to do an out-of-band migration for the disks that are recommended as Ultra disks. That is, you can migrate the disk by selecting the premium disk type, and then change it to an Ultra disk after migration.|
| **NFS** | NFS volumes mounted as volumes on the VMs won't be replicated.| | **ReiserFS** | Not supported. | **ISCSI** | VMs with iSCSI targets aren't supported for migration.
time.nist.gov | Verifies time synchronization between system and global time.
>[!Note] > If your Migrate project has **private endpoint connectivity**, the replication provider software on the Hyper-V hosts will need access to these URLs for private link support.
-> - *.blob.core.windows.com - To access storage account that stores replicated data. This is optional and is not required if the storage account has a private endpoint attached.
+> - *.blob.core.windows.com - To access storage account that stores replicated data. This is optional and isn't required if the storage account has a private endpoint attached.
> - login.windows.net for access control and identity management using Active Directory. ## Replication storage account requirements
This table summarizes support for the replication storage account for Hyper-V VM
**Setting** | **Support** | **Details** | |
-General purpose V2 storage accounts (Hot and Cool tier) | Supported | GPv2 storage accounts may incur higher transaction costs than V1 storage accounts.
+General purpose V2 storage accounts (Hot and Cool tier) | Supported | GPv2 storage accounts might incur higher transaction costs than V1 storage accounts.
Premium storage | Supported | However, standard storage accounts are recommended to help optimize costs. Region | Same region as virtual machine | Storage account should be in the same region as the virtual machine being protected. Subscription | Can be different from source virtual machines | The Storage account need not be in the same subscription as the source virtual machine(s).
-Azure Storage firewalls for virtual networks | Supported | If you are using firewall enabled replication storage account or target storage account, ensure you [Allow trusted Microsoft services](../storage/common/storage-network-security.md#exceptions). Also, ensure that you allow access to at least one subnet of source VNet. **You should allow access from All networks for public endpoint connectivity.**
-Soft delete | Not supported | Soft delete is not supported because once it is enabled on replication storage account, it increases cost. Azure Migrate performs very frequent creates/deletes of log files while replicating causing costs to increase.
+Azure Storage firewalls for virtual networks | Supported | If you're using firewall enabled replication storage account or target storage account, ensure you [Allow trusted Microsoft services](../storage/common/storage-network-security.md#exceptions). Also, ensure that you allow access to at least one subnet of source virtual network. **You should allow access from All networks for public endpoint connectivity.**
+Soft delete | Not supported | Soft delete isn't supported because once it's enabled on replication storage account, it increases cost. Azure Migrate performs very frequent creates/deletes of log files while replicating causing costs to increase.
Private endpoint | Supported | Follow the guidance to [set up Azure Migrate with private endpoints](migrate-servers-to-azure-using-private-link.md?pivots=hyperv). ## Azure VM requirements
migrate Migrate Support Matrix Physical Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical-migration.md
ms. Previously updated : 10/11/2023 Last updated : 10/16/2023
The table summarizes support for physical servers, AWS VMs, and GCP VMs that you
**UEFI boot** | Supported. UEFI-based machines will be migrated to Azure generation 2 VMs. <br/><br/> The OS disk should have up to four partitions, and volumes should be formatted with NTFS. **UEFI - Secure boot** | Not supported for migration. **Target disk** | Machines can be migrated only to managed disks (standard HDD, standard SSD, premium SSD) in Azure.
+**Ultra disk** | Ultra disk migration isn't supported from the Azure Migrate portal. You have to do an out-of-band migration for the disks that are recommended as Ultra disks. That is, you can migrate the disk by selecting the premium disk type and then change it to an Ultra disk after migration.
**Disk size** | up to 2-TB OS disk for gen 1 VM; up to 4-TB OS disk for gen 2 VM; 32 TB for data disks. **Disk limits** | Up to 63 disks per machine. **Encrypted disks/volumes** | Machines with encrypted disks/volumes aren't supported for migration.
migrate Migrate Support Matrix Vmware Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware-migration.md
ms. Previously updated : 09/29/2023 Last updated : 10/16/2023
The table summarizes agentless migration requirements for VMware vSphere VMs.
**Linux VMs in Azure** | Some VMs might require changes so that they can run in Azure.<br/><br/> For Linux, Azure Migrate makes the changes automatically for these operating systems:<br/> - Red Hat Enterprise Linux 9.x, 8.x, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x<br> - CentOS 9.x (Release and Stream), 8.x (Release and Stream), 7.9, 7.7, 7.6, 7.5, 7.4, 6.x</br> - SUSE Linux Enterprise Server 15 SP4, 15 SP3, 15 SP2, 15 SP1, 15 SP0, 12, 11 SP4, 11 SP3 <br>- Ubuntu 22.04, 21.04, 20.04, 19.04, 19.10, 18.04LTS, 16.04LTS, 14.04LTS<br> - Debian 11, 10, 9, 8, 7<br> - Oracle Linux 9, 8, 7.7-CI, 7.7, 6<br> - Kali Linux (2016, 2017, 2018, 2019, 2020, 2021, 2022) <br> For other operating systems, you make the [required changes](prepare-for-migration.md#verify-required-changes-before-migrating) manually.<br/> The `SELinux Enforced` setting is currently not fully supported. It causes Dynamic IP setup and Microsoft Azure Linux Guest agent (waagent/WALinuxAgent) installation to fail. You can still migrate and use the VM. **Boot requirements** | **Windows VMs:**<br/>OS Drive (C:\\) and System Reserved Partition (EFI System Partition for UEFI VMs) should reside on the same disk.<br/>If `/boot` is on a dedicated partition, it should reside on the OS disk and not be spread across multiple disks. <br/> If `/boot` is part of the root (/) partition, then the '/' partition should be on the OS disk and not span other disks. <br/><br/> **Linux VMs:**<br/> If `/boot` is on a dedicated partition, it should reside on the OS disk and not be spread across multiple disks.<br/> If `/boot` is part of the root (/) partition, then the '/' partition should be on the OS disk and not span other disks. **UEFI boot** | Supported. UEFI-based VMs will be migrated to Azure generation 2 VMs.
-**Disk size** | Up to 2-TB OS disk for gen 1 VM and gen 2 VMs; 32 TB for data disks. Changing the size of the source disk after initiating replication is supported and will not impact ongoing replication cycle.
-**Dynamic disk** | - An OS disk as a dynamic disk is not supported. <br/> - If a VM with OS disk as dynamic disk is replicating, convert the disk type from dynamic to basic and allow the new cycle to complete, before triggering test migration or migration. Note that you will need help from OS support for conversion of dynamic to basic disk type.
+**Disk size** | Up to 2-TB OS disk for gen 1 VM and gen 2 VMs; 32 TB for data disks. Changing the size of the source disk after initiating replication is supported and won't impact ongoing replication cycle.
+**Dynamic disk** | - An OS disk as a dynamic disk isn't supported. <br/> - If a VM with OS disk as dynamic disk is replicating, convert the disk type from dynamic to basic and allow the new cycle to complete, before triggering test migration or migration. Note that you'll need help from OS support for conversion of dynamic to basic disk type.
+**Ultra disk** | Ultra disk migration isn't supported from the Azure Migrate portal. You have to do an out-of-band migration for the disks that are recommended as Ultra disks. That is, you can migrate the disk by selecting the premium disk type and then change it to an Ultra disk after migration.
**Encrypted disks/volumes** | VMs with encrypted disks/volumes aren't supported for migration. **Shared disk cluster** | Not supported. **Independent disks** | Not supported.
mysql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-monitoring.md
These metrics are available for Azure Database for MySQL:
|Host Network In |network_bytes_ingress|Bytes|Total sum of incoming network traffic on the server for a selected period. This metric includes traffic to your database and to Azure MySQL features like monitoring, logs etc.| |Host Network out|network_bytes_egress|Bytes|Total sum of outgoing network traffic on the server for a selected period. This metric includes traffic from your database and from Azure MySQL features like monitoring, logs etc.| |Active Connections|active_connection|Count|The number of active connections to the server. Active connections are the total number of [threads connected](https://dev.mysql.com/doc/refman/8.0/en/server-status-variables.html#statvar_Threads_connected) to your server, which also includes threads from [azure_superuser](../single-server/how-to-create-users.md).|
-|Backup Storage Used|backup_storage_used|Bytes|The amount of backup storage used.|
|Storage IO percent|io_consumption_percent|Percent|The percentage of IO in use over selected period. IO percent is for both read and write IOPS.| |Storage IO Count|storage_io_count|Count|The total count of I/O operations (both read and write) utilized by server per minute.|
-|Host Memory Percent|memory_percent|Percent|The total percentage of memory in use on the server, including memory utilization from both database workload and other Azure MySQL processes. This metric shows total consumption of memory of underlying host similar to consumption of memory on any virtual machine.|
-|Storage Limit|storage_limit|Bytes|The maximum storage for this server.|
-|Storage Percent|storage_percent|Percent|The percentage of storage used out of the server's maximum.|
-|Storage Used|storage_used|Bytes|The amount of storage in use. The storage used by the service may include the database files, transaction logs, and the server logs.|
+|Host Memory Percent|memory_percent|Percent|The total percentage of memory in use on the server, including memory utilization from both database workload and other Azure MySQL processes. This metric reflects the server's memory utilization, excluding reusable memory such as buffers and cache.|
|Total connections|total_connections|Count|The number of client connections to your Azure Database for MySQL - Flexible Server. Total Connections is sum of connections by clients using TCP/IP protocol over a selected period.| |Aborted Connections|aborted_connections|Count|Total number of failed attempts to connect to your MySQL server, for example, failed connection due to bad credentials. For more information on aborted connections, you can refer to this [documentation](https://dev.mysql.com/doc/refman/5.7/en/communication-errors.html).| |Queries|queries|Count|Total number of queries executed per minute on your server. Total count of queries per minute on your server from your database workload and Azure MySQL processes.|
These metrics are available for Azure Database for MySQL:
|Metric display name|Metric|Unit|Description| |||||
+|InnoDB Row Lock Time|innodb_row_lock_time|Milliseconds|InnoDB row lock time measures the duration, in milliseconds, of InnoDB row-level locks.|
+|InnoDB Row Lock Waits|innodb_row_lock_waits|Count|InnoDB row lock waits counts the number of times a query had to wait for an InnoDB row-level lock.|
|Innodb_buffer_pool_reads|Innodb_buffer_pool_reads|Count|The total count of logical reads that InnoDB engine couldn't satisfy from the Innodb buffer pool, and had to be fetched from the disk.| |Innodb_buffer_pool_read_requests|Innodb_buffer_pool_read_requests|Count|The total count of logical read requests to read from the Innodb Buffer pool.| |Innodb_buffer_pool_pages_free|Innodb_buffer_pool_pages_free|Count|The total count of free pages in InnoDB buffer pool.| |Innodb_buffer_pool_pages_data|Innodb_buffer_pool_pages_data|Count|The total count of pages in the InnoDB buffer pool containing data. The number includes both dirty and clean pages.| |Innodb_buffer_pool_pages_dirty|Innodb_buffer_pool_pages_dirty|Count|The total count of pages in the InnoDB buffer pool containing dirty pages.| +
+## Storage Breakdown Metrics
+
+|Metric display name|Metric|Unit|Description|
+|||||
+|Storage Limit|storage_limit|Bytes|The maximum storage size configured for this server.|
+|Storage Percent|storage_percent|Percent|The percentage of storage used out of the server's maximum storage available.|
+|Storage Used|storage_used|Bytes|The amount of storage in use. The storage used by the service may include the database files, transaction logs, and the server logs.|
+|Data Storage Used|data_storage_used|Bytes|The amount of storage used for storing database files.|
+|ibdata1 Storage Used|ibdata1_storage_used|Bytes|The amount of storage used for storing system tablespace (ibdata1) file.|
+|Binlog Storage Used|binlog_storage_used|Bytes|The amount of storage used for storing binary log files.|
+|Other Storage Used|other_storage_used|Bytes| The amount of storage used for other components and metadata files.|
+|Backup Storage Used|backup_storage_used|Bytes|The amount of backup storage used.|
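As a quick way to inspect any of the metrics listed above, the following is a minimal Azure CLI sketch (not part of the original article); the resource ID placeholders are assumptions you replace with your own subscription, resource group, and server names.

```bash
# Hypothetical resource ID; substitute your subscription, resource group, and server name.
RESOURCE_ID="/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DBforMySQL/flexibleServers/<server-name>"

# Pull the average storage_percent metric for the last hour in 5-minute grains.
az monitor metrics list \
  --resource "$RESOURCE_ID" \
  --metric storage_percent \
  --interval PT5M \
  --aggregation Average \
  --output table
```

The same command works for any metric name in the tables above, for example `memory_percent` or `binlog_storage_used`.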
+++ ## Server logs In Azure Database for MySQL Server - Flexible Server, users can configure and download server logs to assist with troubleshooting efforts. With this feature enabled, a flexible server starts capturing events of the selected log type and writes them to a file. You can then use the Azure portal and Azure CLI to download the files to work with them.
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in Azure Database for MySQL -
- **Metrics computation for Azure Database for MySQL - Flexible Server** "Host Memory Percent" metric will provide more accurate calculations of memory usage. It will now reflect the actual memory consumed by the server, excluding re-usable memory from the calculation. This improvement ensures that you have a more precise understanding of your server's memory utilization. After the completion of the [scheduled maintenance window](./concepts-maintenance.md), existing servers will benefit from this enhancement.
+- **Known Issues**
+When you attempt to modify the user-assigned managed identity and the key identifier in a single request while changing the CMK settings, the operation gets stuck. We're working on a permanent fix in an upcoming deployment. In the meantime, make sure that you update the user-assigned managed identity and the key identifier in separate requests. The sequence of these operations isn't critical, as long as the user-assigned identities have the necessary access to both key vaults.
## September 2023
If you have questions about or suggestions for working with Azure Database for M
+
mysql Concepts Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-compatibility.md
Last updated 06/20/2022
# MySQL drivers and management tools compatible with Azure Database for MySQL [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] [!INCLUDE[azure-database-for-mysql-single-server-deprecation](../includes/azure-database-for-mysql-single-server-deprecation.md)]
nat-gateway Tutorial Hub Spoke Nat Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/tutorial-hub-spoke-nat-firewall.md
A virtual network peering is used to connect the hub to the spoke and the spoke
| - | -- | | **This virtual network** | | | Peering link name | Enter **vnet-hub-to-vnet-spoke**. |
- | Allow access to remote virtual network | Leave the default of **Selected**. |
- | Allow traffic to remote virtual network | **Select** the checkbox. |
- | Allow traffic forwarded from the remote virtual network (allow gateway transit) | **Select** the checkbox.. |
- | Use remote virtual network gateway or route server | Leave the default of **Unselected**. |
+ | Allow 'vnet-hub' to access 'vnet-spoke' | Leave the default of **Selected**. |
+ | Allow 'vnet-hub' to receive forwarded traffic from 'vnet-spoke' | **Select** the checkbox. |
+ | Allow gateway in 'vnet-hub' to forward traffic to 'vnet-spoke' | Leave the default of **Unselected**. |
+ | Enable 'vnet-hub' to use 'vnet-spoke's' remote gateway | Leave the default of **Unselected**. |
| **Remote virtual network** | | | Peering link name | Enter **vnet-spoke-to-vnet-hub**. | | Virtual network deployment model | Leave the default of **Resource manager**. | | Subscription | Select your subscription. | | Virtual network | Select **vnet-spoke**. |
- | Allow access to current virtual network | Leave the default of **Selected**. |
- | Allow traffic to current virtual network | **Select** the checkbox. |
- | Allow traffic forwarded from the current virtual network (allow gateway transit) | **Select** the checkbox. |
- | Use remote virtual network gateway or route server | Leave the default of **Unselected**. |
+ | Allow 'vnet-spoke' to access 'vnet-hub' | Leave the default of **Selected**. |
+ | Allow 'vnet-spoke' to receive forwarded traffic from 'vnet-hub' | **Select** the checkbox. |
+ | Allow gateway in 'vnet-spoke' to forward traffic to 'vnet-hub' | Leave the default of **Unselected**. |
+ | Enable 'vnet-spoke' to use 'vnet-hub's' remote gateway | Leave the default of **Unselected**. |
1. Select **Add**.
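The portal settings above can also be expressed with the Azure CLI. The following is a hedged sketch rather than part of the tutorial; the resource group name is an assumption, and the flags map roughly to the checkboxes in the table (virtual network access and forwarded traffic allowed, gateway options left unset).

```bash
# Hypothetical resource group; replace with the one that contains vnet-hub and vnet-spoke.
RG="test-rg"

# Peering from the hub to the spoke.
az network vnet peering create \
  --resource-group "$RG" \
  --name vnet-hub-to-vnet-spoke \
  --vnet-name vnet-hub \
  --remote-vnet vnet-spoke \
  --allow-vnet-access \
  --allow-forwarded-traffic

# Peering from the spoke back to the hub.
az network vnet peering create \
  --resource-group "$RG" \
  --name vnet-spoke-to-vnet-hub \
  --vnet-name vnet-spoke \
  --remote-vnet vnet-hub \
  --allow-vnet-access \
  --allow-forwarded-traffic
```

The same pattern applies to the hub-and-spoke peerings in the route-based tutorial that follows; only the virtual network and peering names change.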
nat-gateway Tutorial Hub Spoke Route Nat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/tutorial-hub-spoke-route-nat.md
A virtual network peering is used to connect the hub to spoke one and spoke one
| - | -- | | **This virtual network** | | | Peering link name | Enter **vnet-hub-to-vnet-spoke-1**. |
- | Allow access to remote virtual network | Leave the default of **Selected**. |
- | Allow traffic to remote virtual network | **Select** the checkbox. |
- | Allow traffic forwarded from the remote virtual network (allow gateway transit) | **Select** the checkbox.. |
- | Use remote virtual network gateway or route server | Leave the default of **Unselected**. |
+ | Allow 'vnet-hub' to access 'vnet-spoke-1' | Leave the default of **Selected**. |
+ | Allow 'vnet-hub' to receive forwarded traffic from 'vnet-spoke-1' | **Select** the checkbox. |
+ | Allow gateway in 'vnet-hub' to forward traffic to 'vnet-spoke-1' | Leave the default of **Unselected**. |
+ | Enable 'vnet-hub' to use 'vnet-spoke-1's' remote gateway | Leave the default of **Unselected**. |
| **Remote virtual network** | | | Peering link name | Enter **vnet-spoke-1-to-vnet-hub**. | | Virtual network deployment model | Leave the default of **Resource manager**. | | Subscription | Select your subscription. | | Virtual network | Select **vnet-spoke-1**. |
- | Allow access to current virtual network | Leave the default of **Selected**. |
- | Allow traffic to current virtual network | **Select** the checkbox. |
- | Allow traffic forwarded from the current virtual network (allow gateway transit) | **Select** the checkbox. |
- | Use remote virtual network gateway or route server | Leave the default of **Unselected**. |
+ | Allow 'vnet-spoke-1' to access 'vnet-hub' | Leave the default of **Selected**. |
+ | Allow 'vnet-spoke-1' to receive forwarded traffic from 'vnet-hub' | **Select** the checkbox. |
+ | Allow gateway in 'vnet-spoke-1' to forward traffic to 'vnet-hub' | Leave the default of **Unselected**. |
+ | Enable 'vnet-spoke-1' to use 'vnet-hub's' remote gateway | Leave the default of **Unselected**. |
1. Select **Add**.
Create a two-way virtual network peer between the hub and spoke two.
| - | -- | | **This virtual network** | | | Peering link name | Enter **vnet-hub-to-vnet-spoke-2**. |
- | Allow access to remote virtual network | Leave the default of **Selected**. |
- | Allow traffic to remote virtual network | **Select** the checkbox. |
- | Allow traffic forwarded from the remote virtual network (allow gateway transit) | **Select** the checkbox.. |
- | Use remote virtual network gateway or route server | Leave the default of **Unselected**. |
+ | Allow 'vnet-hub' to access 'vnet-spoke-2' | Leave the default of **Selected**. |
+ | Allow 'vnet-hub' to receive forwarded traffic from 'vnet-spoke-2' | **Select** the checkbox. |
+ | Allow gateway in 'vnet-hub' to forward traffic to 'vnet-spoke-2' | Leave the default of **Unselected**. |
+ | Enable 'vnet-hub' to use 'vnet-spoke-2's' remote gateway | Leave the default of **Unselected**. |
| **Remote virtual network** | | | Peering link name | Enter **vnet-spoke-2-to-vnet-hub**. | | Virtual network deployment model | Leave the default of **Resource manager**. | | Subscription | Select your subscription. | | Virtual network | Select **vnet-spoke-2**. |
- | Allow access to current virtual network | Leave the default of **Selected**. |
- | Allow traffic to current virtual network | **Select** the checkbox. |
- | Allow traffic forwarded from the current virtual network (allow gateway transit) | **Select** the checkbox. |
- | Use remote virtual network gateway or route server | Leave the default of **Unselected**. |
+ | Allow 'vnet-spoke-2' to access 'vnet-hub' | Leave the default of **Selected**. |
+ | Allow 'vnet-spoke-2' to receive forwarded traffic from 'vnet-hub' | **Select** the checkbox. |
+ | Allow gateway in 'vnet-spoke-2' to forward traffic to 'vnet-hub' | Leave the default of **Unselected**. |
+ | Enable 'vnet-spoke-2' to use 'vnet-hub's' remote gateway | Leave the default of **Unselected**. |
1. Select **Add**.
notification-hubs Notification Hubs Nodejs Push Notification Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-nodejs-push-notification-tutorial.md
The sample snippets above allow you to easily build service infrastructure to de
[3]: .media/notification-hubs-nodejs-how-to-use-notification-hubs/sb-queues-05.png [4]: .media/notification-hubs-nodejs-how-to-use-notification-hubs/sb-queues-06.png [5]: .media/notification-hubs-nodejs-how-to-use-notification-hubs/sb-queues-07.png
-[SqlFilter.SqlExpression]: /dotnet/api/microsoft.servicebus.messaging.sqlfilter#microsoft_servicebus_messaging_sqlfilter_sqlexpression
[Azure Service Bus Notification Hubs]: /previous-versions/azure/azure-services/jj927170(v=azure.100)
-[SqlFilter]: /dotnet/api/microsoft.servicebus.messaging.sqlfilter#microsoft_servicebus_messaging_sqlfilter
[Web Site with WebMatrix]: /develop/nodejs/tutorials/web-site-with-webmatrix/ [Node.js Cloud Service]: ../cloud-services/cloud-services-nodejs-develop-deploy-app.md [Previous Management Portal]: .media/notification-hubs-nodejs-how-to-use-notification-hubs/previous-portal.png
private-5g-core Configure Access For User Equipment Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/configure-access-for-user-equipment-ip-addresses.md
+
+ Title: Configure Azure Private 5G Core network for accessing UE IP addresses
+
+description: Learn how to configure your Azure Private 5G Core to access UE IP addresses.
++++ Last updated : 10/02/2023++
+# Configure Azure Private 5G Core network for accessing UE IP addresses
+
+Azure Private 5G Core (AP5GC) provides a secure and reliable network for your organization's communication needs. To access user equipment (UE) IP addresses from the data network (DN), you need to configure appropriate firewall rules, routes, and other settings. This article guides you through the required steps and considerations.
+
+## Prerequisites
+
+Before you begin, you must have:
+
+- Access to your Azure Private 5G Core via the Azure portal.
+- Knowledge of your organization's network topology.
+- An AP5GC with network address port translation (NAPT) disabled.
+ > [!IMPORTANT]
+ > A deployment where NAPT is enabled works only if the UE initiates contact with the server and the server is capable of differentiating UE clients using a combination of IP address and port.
+ > If the server tries to make the initial contact or tries to contact a UE after the pinhole has timed out, the connection will fail.
+- Access to any necessary network devices for configuration (for example, routers, firewalls, switches, proxies).
+- Ability to capture packet traces at different points in your network.
+
+## Configure UE IP addresses access
+
+1. Determine the IP addresses of the devices that you wish to access from the data network. These IP addresses belong to the IP pool defined during site creation.
+You can see the IP addresses for devices by:
+ - checking [distributed tracing](distributed-tracing.md),
+ - checking [packet captures of the device attaching and creating a session](data-plane-packet-capture.md),
+ - or using integrated tools for the UE (for example, command line or UI).
+1. Confirm that the client device you are using can reach the UE via the AP5GC N6 (in a 5G deployment) or SGi (in a 4G deployment) network.
+ - If the client is on the same subnet as the AP5GC N6/SGi interface, the client device should have a route to the UE subnet, and the next hop should be to the N6/SGi IP address that belongs to the data network name (DNN) assigned to the UE.
+ - Otherwise, if there is a router or firewall between the client and AP5GC, the route to the UE subnet should have the router or firewall as the next hop.
+1. Ensure the client device traffic destined to the UE reaches the AP5GC N6 network interface.
+ 1. Check each firewall between the N6 address and the client device IP address.
+ 1. Ensure that the type of traffic expected between client device and UE is allowed to pass through the firewall.
+ 1. Repeat for all required TCP/UDP ports, IP addresses, and protocols.
+ 1. Ensure the firewall has routes to forward the traffic destined to the UE IP address to the N6 interface IP address.
+1. Configure appropriate routes in your routers to ensure that traffic from the data network is directed to the correct destination IP addresses in the RAN network.
+1. Test the configuration to ensure that you can successfully access the UE IP addresses from the Data network.
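As an illustration of step 2 above, here's a minimal sketch of adding a route on a Linux client that sits on the same subnet as the N6 interface. The UE subnet and N6 IP address below are hypothetical placeholders, not values from this article.

```bash
# Hypothetical values for illustration only; use the IP pool and N6/SGi address from your deployment.
UE_SUBNET="10.20.0.0/24"   # UE IP pool defined during site creation
N6_IP="192.168.1.10"       # N6 (or SGi) IP address of the DNN assigned to the UE

# Route traffic for the UE subnet via the N6 interface address.
sudo ip route add "$UE_SUBNET" via "$N6_IP"

# Confirm the route is in place and test reachability to one UE.
ip route get 10.20.0.5
ping -c 3 10.20.0.5
```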
+
+## Example
+
+- **UE:** A smart camera that can be accessed using HTTPS. The UE is using AP5GC to send information to an operator's managed server.
+- **Network Topology:** The N6 network has a firewall separating it from the secure corporate network and from the internet.
+- **Requirement:** The operator's IT infrastructure must be able to sign in to the smart camera using HTTPS.
+
+### Solution
+
+1. Deploy AP5GC with NAPT disabled.
+1. Add rules to the enterprise firewall to allow HTTPS traffic from the corporate network to the smart camera IP address.
+1. Add routing configuration to the firewall. Forward traffic destined to the smart camera's IP address to the N6 IP address of the data network name (DNN) assigned to the UE in the AP5GC deployment.
+1. Verify the intended traffic flows for the N3 and N6 interfaces.
+ 1. Take packet captures on the N3 and N6 interface simultaneously.
+ 1. Check traffic on the N3 interface.
+ 1. Check the packet capture for expected traffic reaching the N3 interface from the UE.
+ 1. Check the packet capture for expected traffic leaving the N3 interface towards the UE.
+ 1. Check traffic on the N6 interface.
+ 1. Check the packet capture for expected traffic reaching the N6 interface from the UE.
+ 1. Check the packet capture for expected traffic leaving the N6 interface towards the UE.
+1. Take packet captures to check that the firewall is both receiving and sending traffic destined to the smart camera and to the client device.
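To sanity-check the end result from a corporate client, a quick reachability test might look like the following sketch; the camera IP address is a hypothetical placeholder.

```bash
# Hypothetical UE (smart camera) IP address; replace with the address observed in your deployment.
CAMERA_IP="10.20.0.5"

# Confirm the HTTPS service on the camera answers from the corporate network.
curl -kv --connect-timeout 10 "https://${CAMERA_IP}/"
```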
++
+## Result
+
+Your Azure Private 5G Core network can access UE IP addresses from the data network.
route-server Hub Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/hub-routing-preference.md
Title: Routing preference (preview)
-description: Learn about Azure Route Server routing preference (preview) feature.
+description: Learn about Azure Route Server routing preference (preview) feature to change how it can learn routes.
Previously updated : 07/31/2023 Last updated : 10/16/2023+
+#CustomerIntent: As an Azure administrator, I want to learn about the routing preference feature so that I know how to influence route selection in Azure Route Server.
# Routing preference (preview) Azure Route Server enables dynamic routing between network virtual appliances (NVAs) and virtual networks (VNets). In addition to supporting third-party NVAs, Route Server also seamlessly integrates with ExpressRoute and VPN gateways. Route Server uses built-in route selection algorithms to make routing decisions to set connection preferences.
-When **branch-to-branch** is enabled and Route Server learns multiple routes across site-to-site (S2S) VPN, ExpressRoute and SD-WAN NVAs, for the same on-premises destination route prefix, users can now configure connection preferences to influence Route Server route selection.
+You can configure routing preference to influence how Route Server selects routes that it learns across site-to-site (S2S) VPN, ExpressRoute, and SD-WAN NVA connections for the same on-premises destination route prefix.
> [!IMPORTANT]
-> Routing preference is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Routing preference is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Routing preference configuration
When Route Server has multiple routes to an on-premises destination prefix, Rout
- **ExpressRoute** (default setting): Prefer routes learned through ExpressRoute over routes learned through VPN/SD-WAN connections. - **VPN/NVA**: Prefer routes learned through VPN/NVA connections over routes learned through ExpressRoute. > [!IMPORTANT]
- > Routing preference doesn't allow users to set preference between routes learned over VPN and NVA connections. If the same routes are learned over VPN and NVA connections, Route Server will prefer the route with the shortest BGP AS-PATH.
- - **AS-Path**: Prefer routes with the shortest BGP AS-PATH length, irrespective of the source of the route advertisement.
+ > Routing preference doesn't allow users to set preference between routes learned over VPN and NVA connections. If the same routes are learned over VPN and NVA connections, Route Server will prefer the route with the shortest BGP AS path.
+ - **AS Path**: Prefer routes with the shortest BGP AS path length, irrespective of the source of the route advertisement.
+
+## Next step
-## Next steps
+> [!div class="nextstepaction"]
+> [Configure routing preference](hub-routing-preference-portal.md)
-- Learn how to [configure routing preference](hub-routing-preference-powershell.md)-- Learn how to [create and configure Azure Route Server](quickstart-configure-route-server-portal.md).-- Learn how to [monitor Azure Route Server](monitor-route-server.md).
sap Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/tutorial.md
If you don't assign the User Access Administrator role to the service principal,
Note the Terraform variable file locations for future edits during deployment.
+> [!IMPORTANT]
+> Ensure that the `dns_label` matches your Azure Private DNS.
++ ## Deploy the control plane Use the [deploy_controlplane.sh](bash/deploy-controlplane.md) script to deploy the deployer and library. These deployment pieces make up the control plane for a chosen automation area.
export ARM_TENANT_ID="<tenantId>"
1. Create the deployer and the SAP library. Add the service principal details to the deployment key vault.
- ```bash
+```bash
- export env_code="MGMT"
- export vnet_code="DEP00"
- export region_code="<region_code>"
+export env_code="MGMT"
+export vnet_code="DEP00"
+export region_code="<region_code>"
- export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
- export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
-
+export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
+export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
- cd $CONFIG_REPO_PATH
- deployer_parameter_file="${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars"
- library_parameter_file="${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars"
+cd $CONFIG_REPO_PATH
- ${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \
- --deployer_parameter_file "${deployer_parameter_file}" \
- --library_parameter_file "${library_parameter_file}" \
- --subscription "${ARM_SUBSCRIPTION_ID}" \
- --spn_id "${ARM_CLIENT_ID}" \
- --spn_secret "${ARM_CLIENT_SECRET}" \
- --tenant_id "${ARM_TENANT_ID}"
+deployer_parameter_file="${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars"
+library_parameter_file="${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars"
- ```
+${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \
+ --deployer_parameter_file "${deployer_parameter_file}" \
+ --library_parameter_file "${library_parameter_file}" \
+ --subscription "${ARM_SUBSCRIPTION_ID}" \
+ --spn_id "${ARM_CLIENT_ID}" \
+ --spn_secret "${ARM_CLIENT_SECRET}" \
+ --tenant_id "${ARM_TENANT_ID}"
- If you run into authentication issues, run `az logout` to sign out and clear the `token-cache`. Then run `az login` to reauthenticate.
+```
- Wait for the automation framework to run the Terraform operations `plan` and `apply`.
+If you run into authentication issues, run `az logout` to sign out and clear the `token-cache`. Then run `az login` to reauthenticate.
- The deployment of the deployer might run for about 15 to 20 minutes.
+Wait for the automation framework to run the Terraform operations `plan` and `apply`.
- You need to note some values for upcoming steps. Look for this text block in the output:
+The deployment of the deployer might run for about 15 to 20 minutes.
- ```text
- #########################################################################################
- # #
- # Please save these values: #
- # - Key Vault: MGMTNOEUDEP00user39B #
- # - Deployer IP: x.x.x.x #
- # - Storage Account: mgmtnoeutfstate53e #
- # #
- #########################################################################################
- ```
+You need to note some values for upcoming steps. Look for this text block in the output:
+
+```text
+#########################################################################################
+# #
+# Please save these values: #
+# - Key Vault: MGMTNOEUDEP00user39B #
+# - Deployer IP: x.x.x.x #
+# - Storage Account: mgmtnoeutfstate53e #
+# #
+#########################################################################################
+```
1. Go to the [Azure portal](https://portal.azure.com).
The rest of the tasks must be executed on the deployer.
## Securing the control plane The control plane is the most critical part of the SAP automation framework. It's important to secure the control plane. The following steps help you secure the control plane.+
+To copy the control plane configuration files to the deployer VM, you can use the `sync_deployer.sh` script. Sign in to the deployer VM and run the following commands:
+
+```bash
+
+cd ~/Azure_SAP_Automated_Deployment/WORKSPACES
+
+../sap-automation/deploy/scripts/sync_deployer.sh --storageaccountname mgtneweeutfstate### --state_subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
++
+```
+
+This command copies the tfvars configuration files from the SAP Library's storage account to the deployer VM.
+
+Change the configuration files for the control plane to include the following settings:
+
+```terraform
+
+ # use_private_endpoint defines that the storage accounts and key vaults have private endpoints enabled
+ use_private_endpoint = true
+
+ # enable_firewall_for_keyvaults_and_storage defines that the storage accounts and key vaults have firewall enabled
+ enable_firewall_for_keyvaults_and_storage = true
+
+ # public_network_access_enabled controls if storage account and key vaults have public network access enabled
+ public_network_access_enabled = false
+
+```
+
+Rerun the deployment to apply the changes. Update the storage account name and key vault name in the script.
++
+```bash
+
+export ARM_SUBSCRIPTION_ID="<subscriptionId>"
+export ARM_CLIENT_ID="<appId>"
+export ARM_CLIENT_SECRET="<password>"
+export ARM_TENANT_ID="<tenantId>"
+
+```
+
+1. Create the deployer and the SAP library.
+
+```bash
+
+export env_code="MGMT"
+export vnet_code="DEP00"
+export region_code="<region_code>"
+
+storage_accountname="mgmtneweeutfstate###"
+vault_name="MGMTNOEUDEP00user###"
+
+export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
+export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
+
+cd $CONFIG_REPO_PATH
+
+deployer_parameter_file="${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars"
+library_parameter_file="${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars"
+
+${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \
+ --deployer_parameter_file "${deployer_parameter_file}" \
+ --library_parameter_file "${library_parameter_file}" \
+ --subscription "${ARM_SUBSCRIPTION_ID}" \
+ --storageaccountname "${storage_accountname}" \
+ --vault "${vault_name}"
+```
+++ ## Get SAP software by using the Bill of Materials The automation framework gives you tools to download software from SAP by using the SAP BOM. The software is downloaded to the SAP library, which acts as the archive for all media required to deploy SAP.
Use the [install_workloadzone](bash/install-workloadzone.md) script to deploy th
- Name of the `tfstate` storage account - Name of the deployer key vault
- ```bash
+```bash
- export tfstate_storage_account="<storageaccountName>"
- export deployer_env_code="MGMT"
- export sap_env_code="DEV"
- export region_code="<region_code>"
- export key_vault="<vaultID>"
+export tfstate_storage_account="<storageaccountName>"
+export deployer_env_code="MGMT"
+export sap_env_code="DEV"
+export region_code="<region_code>"
+export key_vault="<vaultID>"
- export deployer_vnet_code="DEP01"
- export vnet_code="SAP02"
+export deployer_vnet_code="DEP01"
+export vnet_code="SAP02"
- export ARM_SUBSCRIPTION_ID="<subscriptionId>"
- export ARM_CLIENT_ID="<appId>"
- export ARM_CLIENT_SECRET="<password>"
- export ARM_TENANT_ID="<tenantId>"
+export ARM_SUBSCRIPTION_ID="<subscriptionId>"
+export ARM_CLIENT_ID="<appId>"
+export ARM_CLIENT_SECRET="<password>"
+export ARM_TENANT_ID="<tenantId>"
- cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/${sap_env_code}-${region_code}-SAP01-INFRASTRUCTURE
+cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/${sap_env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE
- export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
- export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
+export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
+export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
- az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}"
-
- cd "${CONFIG_REPO_PATH}/LANDSCAPE/${sap_env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE"
- parameterFile="${sap_env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars"
- deployerState="${deployer_env_code}-${region_code}-${deployer_vnet_code}-INFRASTRUCTURE.terraform.tfstate"
-
- $SAP_AUTOMATION_REPO_PATH/deploy/scripts/install_workloadzone.sh \
- --parameterfile "${parameterFile}" \
- --deployer_environment "${deployer_env_code}" \
- --deployer_tfstate_key "${deployerState}" \
- --keyvault "${key_vault}" \
- --storageaccountname "${tfstate_storage_account}" \
- --subscription "${ARM_SUBSCRIPTION_ID}" \
- --spn_id "${ARM_CLIENT_ID}" \
- --spn_secret "${ARM_CLIENT_SECRET}" \
- --tenant_id "${ARM_TENANT_ID}"
- ```
+az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}"
+
+cd "${CONFIG_REPO_PATH}/LANDSCAPE/${sap_env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE"
+parameterFile="${sap_env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars"
+deployerState="${deployer_env_code}-${region_code}-${deployer_vnet_code}-INFRASTRUCTURE.terraform.tfstate"
+
+$SAP_AUTOMATION_REPO_PATH/deploy/scripts/install_workloadzone.sh \
+ --parameterfile "${parameterFile}" \
+ --deployer_environment "${deployer_env_code}" \
+ --deployer_tfstate_key "${deployerState}" \
+ --keyvault "${key_vault}" \
+ --storageaccountname "${tfstate_storage_account}" \
+ --subscription "${ARM_SUBSCRIPTION_ID}" \
+ --spn_id "${ARM_CLIENT_ID}" \
+ --spn_secret "${ARM_CLIENT_SECRET}" \
+ --tenant_id "${ARM_TENANT_ID}"
+```
The workload zone deployment should start automatically.
search Vector Search How To Create Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-create-index.md
You can use the Azure portal, REST APIs, or the beta packages of the Azure SDKs
### [**2023-10-01-Preview**](#tab/rest-2023-10-01-Preview)
-In the following REST API example, "title" and "content" contain textual content used in full text search and semantic search, while "titleVector" and "contentVector" contain vector data.
+In the following REST API example, "title" and "content" contain textual content used in full text search and semantic ranking, while "titleVector" and "contentVector" contain vector data.
> [!TIP] > Updating an existing index to include vector fields? Make sure the `allowIndexDowntime` query parameter is set to `true`
search Vector Search How To Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-query.md
Last updated 10/13/2023
> [!IMPORTANT] > Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST APIs, and [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr#readme).
-In Azure Cognitive Search, if you added vector fields to a search index, this article explains how to:
+In Azure Cognitive Search, if you [added vector fields](vector-search-how-to-create-index.md) to a search index, this article explains how to:
> [!div class="checklist"] > + [Query vector fields](#vector-query-request)
Code samples in the [cognitive-search-vector-pr](https://github.com/Azure/cognit
## Prerequisites
-+ Azure Cognitive Search, in any region and on any tier. Most existing services support vector search. For services created prior to January 2019, there's a small subset that won't support vector search. If an index containing vector fields fails to be created or updated, this is an indicator. In this situation, a new service must be created.
++ Azure Cognitive Search, in any region and on any tier. Most existing services support vector search. For services created prior to January 2019, a small subset won't support vector search. If an index containing vector fields fails to be created or updated, that failure is an indicator that the service doesn't support vector search. In this situation, a new service must be created. + A search index containing vector fields. See [Add vector fields to a search index](vector-search-how-to-create-index.md).
-+ Use REST API version **2023-10-01-Preview** if you want pre-filters. Otherwise, you can use **2023-07-01-Preview**, the [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr/tree/main), or Search Explorer in the Azure portal.
++ Use REST API version **2023-10-01-Preview** if you want pre-filters and the latest behaviors. Otherwise, you can continue to use **2023-07-01-Preview**, the [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr/tree/main), or Search Explorer in the Azure portal. ## Limitations
If you aren't sure whether your search index already has vector fields, look for
+ A non-empty `vectorSearch` property containing algorithms and other vector-related configurations embedded in the index schema.
-+ In the fields collection, look for fields of type `Collection(Edm.Single)`, with a `dimensions` attribute and a `vectorSearchConfiguration` set to the name of the `vectorSearch` algorithm configuration used by the field.
++ In the fields collection, look for fields of type `Collection(Edm.Single)` with a `dimensions` attribute, and a `vectorSearch` section in the index. You can also send an empty query (`search=*`) against the index. If the vector field is "retrievable", the response includes a vector field consisting of an array of floating point values.
You can use the Azure portal, REST APIs, or the beta packages of the Azure SDKs
REST API version [**2023-10-01-Preview**](/rest/api/searchservice/search-service-api-versions#2023-10-01-Preview) introduces breaking changes to the vector query definition in [Search Documents](/rest/api/searchservice/2023-10-01-preview/documents/search-post). This version adds: + `vectorQueries` for specifying a vector to search for, vector fields to search in, and the k-number of nearest neighbors to return.
-+ `kind` is a parameter of `vectorQueries` and it can only be set to `vector` in this preview.
-+ `exhaustive` can be set to true or false, and invokes exhaustive KNN at query time.
++ `kind` as a parameter of `vectorQueries`. It can only be set to `vector` in this preview.++ `exhaustive` can be set to true or false, and invokes exhaustive KNN at query time, even if you indexed the field for HNSW. In the following example, the vector is a representation of this query string: `"what Azure services support full text search"`. The query targets the "contentVector" field. The actual vector has 1536 embeddings, so it's trimmed in this example for readability.
api-key: {{admin-api-key}}
### [**2023-07-01-Preview**](#tab/query-vector-query)
-REST API version [**2023-07-01-Preview**](/rest/api/searchservice/index-preview) introduces vector query support to [Search Documents](/rest/api/searchservice/preview-api/search-documents). This version adds:
+REST API version [**2023-07-01-Preview**](/rest/api/searchservice/index-preview) first introduced vector query support to [Search Documents](/rest/api/searchservice/preview-api/search-documents). This version added:
+ `vectors` for specifying a vector to search for, vector fields to search in, and the k-number of nearest neighbors to return.
Be sure to select the **JSON view** and formulate the query in JSON. The search bar in
A query request can include a vector query and a [filter expression](search-filters.md). Filters apply to "filterable" text and numeric fields, and are useful for including or excluding search documents based on filter criteria. Although a vector field isn't filterable itself, a query can include filters on other fields in the same index.
-In contrast with full text search, a filter in a pure vector query is effectively processed as a post-query operation. The set of `"k"` nearest neighbors is retrieved, and then combined with the set of filtered results. As such, the value of `"k"` predetermines the surface over which the filter is applied. For `"k": 10`, the filter is applied to 10 most similar documents. For `"k": 100`, the filter iterates over 100 documents (assuming the index contains 100 documents that are sufficiently similar to the query).
+In **2023-10-01-Preview**, you can apply a filter before or after query execution. The default is pre-query. If you want post-query filtering instead, set the `vectorFilterMode` parameter.
+
+In **2023-07-01-Preview**, a filter in a pure vector query is processed as a post-query operation.
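As a rough sketch only (not taken from the article), a **2023-10-01-Preview** filtered vector query sent with curl might look like the following; the service name, index name, key, category value, and the heavily trimmed vector are all placeholders.

```bash
# Hypothetical service, index, key, filter value, and trimmed vector for illustration only.
SERVICE="my-search-service"
INDEX="my-index"
API_KEY="<admin-api-key>"

curl -X POST "https://${SERVICE}.search.windows.net/indexes/${INDEX}/docs/search?api-version=2023-10-01-Preview" \
  -H "Content-Type: application/json" \
  -H "api-key: ${API_KEY}" \
  -d '{
    "vectorFilterMode": "preFilter",
    "filter": "category eq '\''Databases'\''",
    "vectorQueries": [
      {
        "kind": "vector",
        "vector": [0.012, -0.034, 0.088],
        "fields": "contentVector",
        "k": 10
      }
    ]
  }'
```

Switching `"vectorFilterMode"` to `"postFilter"` applies the same filter after the k-nearest-neighbor results are retrieved instead of before.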
> [!TIP] > If you don't have source fields with text or numeric values, check for document metadata, such as LastModified or CreatedBy properties, that might be useful in a metadata filter.
In contrast with full text search, a filter in a pure vector query is effectivel
REST API version [**2023-10-01-Preview**](/rest/api/searchservice/search-service-api-versions#2023-10-01-Preview) introduces filter options. This version adds:
-+ `vectorFilterMode` for prefiltering (default) or postfiltering during query execution.
-+ `filter` provides the criteria, which is applied to a filterable text field ("category" in this example)
++ `vectorFilterMode` for prefiltering (default) or postfiltering during query execution. Valid values are `preFilter` (default), `postFilter`, and null.++ `filter` provides the criteria. In the following example, the vector is a representation of this query string: `"what Azure services support full text search"`. The query targets the "contentVector" field. The actual vector has 1536 embeddings, so it's trimmed in this example for readability.
-The filter criteria are applied before the search engine executes the vector query.
+The filter criteria are applied to a filterable text field ("category" in this example) before the search engine executes the vector query.
```http POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2023-10-01-Preview
api-key: {{admin-api-key}}
### [**2023-07-01-Preview**](#tab/filter-2023-07-01-Preview)
-REST API version [**2023-07-01-Preview**](/rest/api/searchservice/index-preview) supports post-filtering over query results.
+REST API version [**2023-07-01-Preview**](/rest/api/searchservice/index-preview) supports post-filtering over query results.
In the following example, the vector is a representation of this query string: `"what Azure services support full text search"`. The query targets the "contentVector" field. The actual vector has 1536 embeddings, so it's trimmed in this example for readability.
-The filter criteria are applied after the search engine executes the vector query.
+In this API version, there is no pre-filter support or `vectorFilterMode` parameter. The filter criteria are applied after the search engine executes the vector query. The set of `"k"` nearest neighbors is retrieved, and then combined with the set of filtered results. As such, the value of `"k"` predetermines the surface over which the filter is applied. For `"k": 10`, the filter is applied to 10 most similar documents. For `"k": 100`, the filter iterates over 100 documents (assuming the index contains 100 documents that are sufficiently similar to the query).
```http POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2023-07-01-Preview
api-key: {{admin-api-key}}
You can set the "vectors.fields" property to multiple vector fields. For example, the Postman collection has vector fields named "titleVector" and "contentVector". A single vector query executes over both the "titleVector" and "contentVector" fields, which must have the same embedding space since they share the same query vector. ```http
-POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2023-07-01-Preview
+POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2023-10-01-Preview
Content-Type: application/json api-key: {{admin-api-key}} {
Both "k" and "top" are optional. Unspecified, the default number of results in a
Ranking of results is computed by either:
-+ The similarity metric specified in the index `vectorConfiguration` for a vector-only query. Valid values are `cosine` , `euclidean`, and `dotProduct`.
++ The similarity metric specified in the index `vectorSearch` section for a vector-only query. Valid values are `cosine` , `euclidean`, and `dotProduct`. + Reciprocal Rank Fusion (RRF) if there are multiple sets of search results. Azure OpenAI embedding models use cosine similarity, so if you're using Azure OpenAI embedding models, `cosine` is the recommended metric. Other supported ranking metrics include `euclidean` and `dotProduct`.
-Multiple sets are created if the query targets multiple vector fields, or if the query is a hybrid of vector and full text search, with or without the optional semantic reranking capabilities of [semantic search](semantic-search-overview.md). Within vector search, a vector query can only target one internal vector index. So for [multiple vector fields](#multiple-vector-fields) and [multiple vector queries](#multiple-vector-queries), the search engine generates multiple queries that target the respective vector indexes of each field. Output is a set of ranked results for each query, which are fused using RRF. For more information, see [Vector query execution and scoring](vector-search-ranking.md).
+Multiple sets are created if the query targets multiple vector fields, or if the query is a hybrid of vector and full text search, with or without [semantic ranking](semantic-search-overview.md). Within vector search, a vector query can only target one internal vector index. So for [multiple vector fields](#multiple-vector-fields) and [multiple vector queries](#multiple-vector-queries), the search engine generates multiple queries that target the respective vector indexes of each field. Output is a set of ranked results for each query, which are fused using RRF. For more information, see [Vector query execution and scoring](vector-search-ranking.md).
## Next steps
search Vector Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-overview.md
Scenarios for vector search include:
+ **Multi-lingual search**. Use a multi-lingual embeddings model to represent your document in multiple languages in a single vector space to find documents regardless of the language they are in.
-+ [**Hybrid search**](hybrid-search-overview.md). Vector search is implemented at the field level, which means you can build queries that include both vector fields and searchable text fields. The queries execute in parallel and the results are merged into a single response. Optionally, add [semantic search (preview)](semantic-search-overview.md) for even more accuracy with L2 reranking using the same language models that power Bing.
++ [**Hybrid search**](hybrid-search-overview.md). Vector search is implemented at the field level, which means you can build queries that include both vector fields and searchable text fields. The queries execute in parallel and the results are merged into a single response. Optionally, add [semantic ranking](semantic-search-overview.md) for even more accuracy with L2 reranking using the same language models that power Bing. + **Filtered vector search**. A query request can include a vector query and a [filter expression](search-filters.md). Filters apply to text and numeric fields, and are useful for metadata filters, and including or excluding search documents based on filter criteria. Although a vector field isn't filterable itself, you can set up a filterable text or numeric field. The search engine can process the filter before or after the vector query executes.
service-bus-messaging Authenticate Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/authenticate-application.md
If your application is a console application, you must register a native applica
## Assign Azure roles using the Azure portal Assign one of the [Service Bus roles](#azure-built-in-roles-for-azure-service-bus) to the application's service principal at the desired scope (Service Bus namespace, resource group, subscription). For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
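If you prefer the command line over the portal, a minimal sketch of the same role assignment with the Azure CLI follows; the application ID and namespace resource ID are placeholders you substitute with your own values.

```bash
# Hypothetical IDs; replace with your app registration's appId and your namespace resource ID.
APP_ID="<appId>"
NAMESPACE_ID="/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ServiceBus/namespaces/<namespace-name>"

# Grant the service principal send and receive rights at namespace scope.
az role assignment create --assignee "$APP_ID" --role "Azure Service Bus Data Sender" --scope "$NAMESPACE_ID"
az role assignment create --assignee "$APP_ID" --role "Azure Service Bus Data Receiver" --scope "$NAMESPACE_ID"
```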
-Once you define the role and its scope, you can test this behavior with the [sample on GitHub](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.ServiceBus.Messaging/RoleBasedAccessControl). This sample uses the old Microsoft.Azure.ServiceBus package. For information about migrating this sample to use the newer Azure.Messaging.ServiceBus package, see the [Guide for migrating to Azure.Messaging.ServiceBus from Microsoft.Azure.ServiceBus](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/servicebus/Azure.Messaging.ServiceBus/MigrationGuide.md).
-
+Once you define the role and its scope, you can test this behavior with the [sample on GitHub](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/servicebus/Azure.Messaging.ServiceBus/samples/Sample00_AuthenticateClient.md#authenticate-with-azureidentity).
### Authenticating the Service Bus client Once you've registered your application and granted it permissions to send/receive data in Azure Service Bus, you can authenticate your client with the client secret credential, which will enable you to make requests against Azure Service Bus.
If you're using the older .NET packages, see the RoleBasedAccessControl samples
To learn more about Service Bus messaging, see the following topics. -- [Service Bus Azure RBAC samples](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.ServiceBus.Messaging/RoleBasedAccessControl)
+- [Service Bus Azure RBAC samples](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/servicebus/Azure.Messaging.ServiceBus/samples/Sample00_AuthenticateClient.md)
- [Service Bus queues, topics, and subscriptions](service-bus-queues-topics-subscriptions.md) - [Get started with Service Bus queues](service-bus-dotnet-get-started-with-queues.md) - [How to use Service Bus topics and subscriptions](service-bus-dotnet-how-to-use-topics-subscriptions.md)
service-bus-messaging Duplicate Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/duplicate-detection.md
Try the samples in the language of your choice to explore Azure Service Bus feat
See samples for the older .NET and Java client libraries here: - [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/) - [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus)+
service-bus-messaging Enable Auto Forward https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/enable-auto-forward.md
srcSubscription.ForwardTo = destTopic;
namespaceManager.CreateSubscription(srcSubscription)); ``` + ## Java ### azure-messaging-servicebus (latest)
You can enable the auto forwarding feature by using the [CreateQueueOptions.setF
### azure-servicebus (legacy) You can enable autoforwarding by using the [QueueDescription.setForwardTo(String forwardTo)](/java/api/com.microsoft.azure.servicebus.management.queuedescription.setforwardto#com_microsoft_azure_servicebus_management_QueueDescription_setForwardTo_java_lang_String_) or [SubscriptionDescription.setForwardTo(String forwardTo)](/java/api/com.microsoft.azure.servicebus.management.subscriptiondescription.setforwardto) for the source. ## Next steps Try the samples in the language of your choice to explore Azure Service Bus features.
Try the samples in the language of your choice to explore Azure Service Bus feat
Find samples for the older .NET and Java client libraries below: - [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/) - [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus)+
service-bus-messaging Enable Dead Letter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/enable-dead-letter.md
Try the samples in the language of your choice to explore Azure Service Bus feat
Find samples for the older .NET and Java client libraries below: - [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/) - [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus)+
service-bus-messaging Enable Duplicate Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/enable-duplicate-detection.md
Try the samples in the language of your choice to explore Azure Service Bus feat
Find samples for the older .NET and Java client libraries below: - [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/) - [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus)+
service-bus-messaging Enable Message Sessions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/enable-message-sessions.md
Try the samples in the language of your choice to explore Azure Service Bus feat
Find samples for the older .NET and Java client libraries below: - [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/) - [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus)+
service-bus-messaging Enable Partitions Basic Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/enable-partitions-basic-standard.md
Try the samples in the language of your choice to explore Azure Service Bus feat
Find samples for the older .NET and Java client libraries below: - [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/) - [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus)+
service-bus-messaging Entity Suspend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/entity-suspend.md
Here's how the behavior is based on the status you set on a topic and its subscr
| Disabled or Send Disabled | Disabled or Receive Disabled | You can't send messages to the topic and you can't receive from the subscription either. | ## Other statuses
-The [EntityStatus](/dotnet/api/microsoft.servicebus.messaging.entitystatus) enumeration also defines a set of transitional states that can only be set by the system.
+The [EntityStatus](/dotnet/api/azure.messaging.servicebus.administration.entitystatus) enumeration also defines a set of transitional states that can only be set by the system.
## Next steps
To learn more about Service Bus messaging, see the following topics:
* [Service Bus queues, topics, and subscriptions](service-bus-queues-topics-subscriptions.md) * [Get started with Service Bus queues](service-bus-dotnet-get-started-with-queues.md) * [How to use Service Bus topics and subscriptions](service-bus-dotnet-how-to-use-topics-subscriptions.md)-
-[1]: ./media/entity-suspend/entity-state-change.png
service-bus-messaging Message Browsing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-browsing.md
Find samples for the older .NET and Java client libraries below:
- [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/) - **Message Browsing (Peek)** sample - [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus) - **Message Browse** sample.
service-bus-messaging Message Counters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-counters.md
Try the samples in the language of your choice to explore Azure Service Bus feat
Find samples for the older .NET and Java client libraries below: - [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/) - [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus)+
service-bus-messaging Message Deferral https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-deferral.md
See samples for the older .NET and Java client libraries here:
- [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/) - See the **Deferral** sample. - [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus/MessageBrowse) + ## Related resources - [Tutorial showing the use of message deferral as a part of a workflow, using NServiceBus](https://docs.particular.net/tutorials/nservicebus-sagas/2-timeouts/)
service-bus-messaging Message Transfers Locks Settlement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-transfers-locks-settlement.md
Using any of the supported Service Bus API clients, send operations into Service
If the message is rejected by Service Bus, the rejection contains an error indicator and text with a **tracking-id** in it. The rejection also includes information about whether the operation can be retried with any expectation of success. In the client, this information is turned into an exception and raised to the caller of the send operation. If the message has been accepted, the operation silently completes. Advanced Message Queuing Protocol (AMQP) is the only protocol supported for .NET Standard, Java, JavaScript, Python, and Go clients. For [.NET Framework clients](service-bus-amqp-dotnet.md), you can use Service Bus Messaging Protocol (SBMP) or AMQP. When you use the AMQP protocol, message transfers and settlements are pipelined and asynchronous. We recommend that you use the asynchronous programming model API variants.+ A sender can put several messages on the wire in rapid succession without having to wait for each message to be acknowledged, as would otherwise be the case with the SBMP protocol or with HTTP 1.1. Those asynchronous send operations complete as the respective messages are accepted and stored, on partitioned entities or when send operations to different entities overlap. The completions might also occur out of the original send order.
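To illustrate the pipelined, asynchronous send pattern described above, here's a minimal sketch using the current `Azure.Messaging.ServiceBus` client (connection string and queue name are placeholders); several sends are started before any of them is awaited.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<connection-string>");
ServiceBusSender sender = client.CreateSender("<queue-name>");

// Start several sends without waiting for each acknowledgment in turn;
// the AMQP link pipelines them, and completions can arrive out of send order.
var pending = new List<Task>();
for (int i = 0; i < 10; i++)
{
    pending.Add(sender.SendMessageAsync(new ServiceBusMessage($"message {i}")));
}

// Await all outstanding sends before shutting down.
await Task.WhenAll(pending);
```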
For receive operations, the Service Bus API clients enable two different explici
### ReceiveAndDelete
-The [Receive-and-Delete](/dotnet/api/microsoft.servicebus.messaging.receivemode) mode tells the broker to consider all messages it sends to the receiving client as settled when sent. That means that the message is considered consumed as soon as the broker has put it onto the wire. If the message transfer fails, the message is lost.
+The [Receive-and-Delete](/dotnet/api/azure.messaging.servicebus.servicebusreceivemode) mode tells the broker to consider all messages it sends to the receiving client as settled when sent. That means that the message is considered consumed as soon as the broker has put it onto the wire. If the message transfer fails, the message is lost.
The upside of this mode is that the receiver doesn't need to take further action on the message and is also not slowed by waiting for the outcome of the settlement. If the data contained in the individual messages have low value and/or are only meaningful for a very short time, this mode is a reasonable choice. ### PeekLock
-The [Peek-Lock](/dotnet/api/microsoft.servicebus.messaging.receivemode) mode tells the broker that the receiving client wants to settle received messages explicitly. The message is made available for the receiver to process, while held under an exclusive lock in the service so that other, competing receivers can't see it. The duration of the lock is initially defined at the queue or subscription level and can be extended by the client owning the lock, via the [RenewLock](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver.renewlockasync#Microsoft_Azure_ServiceBus_Core_MessageReceiver_RenewLockAsync_System_String_) operation. For details about renewing locks, see the [Renew locks](#renew-locks) section in this article.
+The [Peek-Lock](/dotnet/api/azure.messaging.servicebus.servicebusreceivemode) mode tells the broker that the receiving client wants to settle received messages explicitly. The message is made available for the receiver to process, while held under an exclusive lock in the service so that other, competing receivers can't see it. The duration of the lock is initially defined at the queue or subscription level and can be extended by the client owning the lock, via the [RenewMessageLockAsync](/dotnet/api/azure.messaging.servicebus.servicebusreceiver.renewmessagelockasync) operation. For details about renewing locks, see the [Renew locks](#renew-locks) section in this article.
When a message is locked, other clients receiving from the same queue or subscription can take on locks and retrieve the next available messages not under active lock. When the lock on a message is explicitly released or when the lock expires, the message pops back up at or near the front of the retrieval order for redelivery. When the message is repeatedly released by receivers or they let the lock elapse for a defined number of times ([Max Delivery Count](service-bus-dead-letter-queues.md#maximum-delivery-count)), the message is automatically removed from the queue or subscription and placed into the associated dead-letter queue.
-The receiving client initiates settlement of a received message with a positive acknowledgment when it calls the [Complete](/dotnet/api/microsoft.servicebus.messaging.queueclient.complete#Microsoft_ServiceBus_Messaging_QueueClient_Complete_System_Guid_) API for the message. It indicates to the broker that the message has been successfully processed and the message is removed from the queue or subscription. The broker replies to the receiver's settlement intent with a reply that indicates whether the settlement could be performed.
+The receiving client initiates settlement of a received message with a positive acknowledgment when it calls the [Complete](/dotnet/api/azure.messaging.servicebus.servicebusreceiver.completemessageasync) API for the message. It indicates to the broker that the message has been successfully processed and the message is removed from the queue or subscription. The broker replies to the receiver's settlement intent with a reply that indicates whether the settlement could be performed.
-When the receiving client fails to process a message but wants the message to be redelivered, it can explicitly ask for the message to be released and unlocked instantly by calling the [Abandon](/dotnet/api/microsoft.servicebus.messaging.queueclient.abandon) API for the message or it can do nothing and let the lock elapse.
+When the receiving client fails to process a message but wants the message to be redelivered, it can explicitly ask for the message to be released and unlocked instantly by calling the [Abandon](/dotnet/api/azure.messaging.servicebus.servicebusreceiver.abandonmessageasync) API for the message or it can do nothing and let the lock elapse.
-If a receiving client fails to process a message and knows that redelivering the message and retrying the operation won't help, it can reject the message, which moves it into the dead-letter queue by calling the [DeadLetter](/dotnet/api/microsoft.servicebus.messaging.queueclient.deadletter) API on the message, which also allows setting a custom property including a reason code that can be retrieved with the message from the dead-letter queue.
+If a receiving client fails to process a message and knows that redelivering the message and retrying the operation won't help, it can reject the message, which moves it into the dead-letter queue by calling the [DeadLetter](/dotnet/api/azure.messaging.servicebus.servicebusreceiver.deadlettermessageasync) API on the message, which also allows setting a custom property including a reason code that can be retrieved with the message from the dead-letter queue.
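A minimal sketch of peek-lock settlement with the current .NET client, assuming a placeholder connection string and queue name; the message is completed on success and abandoned (released for redelivery) on failure.

```csharp
using System;
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<connection-string>");

// PeekLock is the default receive mode; shown explicitly here for clarity.
ServiceBusReceiver receiver = client.CreateReceiver(
    "<queue-name>",
    new ServiceBusReceiverOptions { ReceiveMode = ServiceBusReceiveMode.PeekLock });

ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();
if (message != null)
{
    try
    {
        // ... process the message body here ...

        // Positive acknowledgment: the broker removes the message.
        await receiver.CompleteMessageAsync(message);
    }
    catch (Exception)
    {
        // Release the lock immediately so the message can be redelivered.
        await receiver.AbandonMessageAsync(message);
    }
}
```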
A special case of settlement is deferral, which is discussed in a [separate article](message-deferral.md).
service-bus-messaging Monitor Service Bus Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/monitor-service-bus-reference.md
Counts the number of data and management operations requests.
The following two types of errors are classified as **user errors**: 1. Client-side errors (In HTTP that would be 400 errors).
-2. Errors that occur while processing messages, such as [MessageLockLostException](/dotnet/api/microsoft.azure.servicebus.messagelocklostexception).
+2. Errors that occur while processing messages, such as [MessageLockLostException](/dotnet/api/azure.messaging.servicebus.servicebusfailurereason).
### Message metrics
Resource specific table entry:
} ```++ ## Azure Monitor Logs tables Azure Service Bus uses Kusto tables from Azure Monitor Logs. You can query these tables with Log Analytics. For a list of Kusto tables the service uses, see [Azure Monitor Logs table reference](/azure/azure-monitor/reference/tables/tables-resourcetype#service-bus).
service-bus-messaging Service Bus Amqp Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-amqp-dotnet.md
# Use legacy WindowsAzure.ServiceBus .NET framework library with AMQP 1.0 > [!NOTE]
-> This article is for existing users of the WindowsAzure.ServiceBus package looking to switch to using AMQP within the same package. While this package will continue to receive critical bug fixes, we strongly encourage to upgrade to the new [Azure.Messaging.ServiceBus](https://www.nuget.org/packages/Azure.Messaging.ServiceBus) package instead which is available as of November 2020 and which support AMQP by default.
+> This article is for existing users of the WindowsAzure.ServiceBus package looking to switch to using AMQP within the same package. While this package will continue to receive critical bug fixes until 30 September 2026, we strongly encourage you to upgrade to the new [Azure.Messaging.ServiceBus](https://www.nuget.org/packages/Azure.Messaging.ServiceBus) package instead, which is available as of November 2020 and which supports AMQP by default.
+ By default, the WindowsAzure.ServiceBus package communicates with the Service Bus service using a dedicated SOAP-based protocol called Service Bus Messaging Protocol (SBMP). In version 2.1 support for AMQP 1.0 was added which we recommend using rather than the default protocol.
service-bus-messaging Service Bus Amqp Protocol Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-amqp-protocol-guide.md
A "receive" call at the API level translates into a *flow* performative being se
The lock on a message is released when the transfer is settled into one of the terminal states *accepted*, *rejected*, or *released*. The message is removed from Service Bus when the terminal state is *accepted*. It remains in Service Bus and is delivered to the next receiver when the transfer reaches any of the other states. Service Bus automatically moves the message into the entity's deadletter queue when it reaches the maximum delivery count allowed for the entity due to repeated rejections or releases.
-Even though the Service Bus APIs don't directly expose such an option today, a lower-level AMQP protocol client can use the link-credit model to turn the "pull-style" interaction of issuing one unit of credit for each receive request into a "push-style" model by issuing a large number of link credits and then receive messages as they become available without any further interaction. Push is supported through the [MessagingFactory.PrefetchCount](/dotnet/api/microsoft.servicebus.messaging.messagingfactory) or [MessageReceiver.PrefetchCount](/dotnet/api/microsoft.servicebus.messaging.messagereceiver) property settings. When they're non-zero, the AMQP client uses it as the link credit.
+Even though the Service Bus APIs don't directly expose such an option today, a lower-level AMQP protocol client can use the link-credit model to turn the "pull-style" interaction of issuing one unit of credit for each receive request into a "push-style" model by issuing a large number of link credits and then receive messages as they become available without any further interaction. Push is supported through the [ServiceBusProcessor.PrefetchCount](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) or [ServiceBusReceiver.PrefetchCount](/dotnet/api/azure.messaging.servicebus.servicebusreceiver) property settings. When they're non-zero, the AMQP client uses it as the link credit.
In this context, it's important to understand that the clock for the expiration of the lock on the message inside the entity starts when the message is taken from the entity, not when the message is put on the wire. Whenever the client indicates readiness to receive messages by issuing link credit, it's therefore expected to be actively pulling messages across the network and be ready to handle them. Otherwise the message lock may have expired before the message is even delivered. The use of link-credit flow control should directly reflect the immediate readiness to deal with available messages dispatched to the receiver.
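For reference, a sketch of how the prefetch setting is expressed with the current .NET client; a non-zero `PrefetchCount` is what the AMQP client surfaces as link credit. The names are placeholders and the values are illustrative.

```csharp
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<connection-string>");

var options = new ServiceBusProcessorOptions
{
    PrefetchCount = 50,      // issued to the service as link credit
    MaxConcurrentCalls = 5   // illustrative concurrency setting
};

ServiceBusProcessor processor = client.CreateProcessor("<queue-name>", options);
processor.ProcessMessageAsync += async args => await args.CompleteMessageAsync(args.Message);
processor.ProcessErrorAsync += args => Task.CompletedTask; // log errors in a real handler
await processor.StartProcessingAsync();
```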
Any property that application needs to define should be mapped to AMQP's `applic
| | | | | durable |- |- | | priority |- |- |
-| ttl |Time to live for this message |[TimeToLive](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) |
+| ttl |Time to live for this message |[TimeToLive](/dotnet/api/azure.messaging.servicebus.servicebusmessage) |
| first-acquirer |- |- |
-| delivery-count |- |[DeliveryCount](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) |
+| delivery-count |- |[DeliveryCount](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage) |
#### properties | Field Name | Usage | API name | | | | |
-| message-id |Application-defined, free-form identifier for this message. Used for duplicate detection. |[MessageId](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) |
+| message-id |Application-defined, free-form identifier for this message. Used for duplicate detection. |[MessageId](/dotnet/api/azure.messaging.servicebus.servicebusmessage.messageid) |
| user-id |Application-defined user identifier, not interpreted by Service Bus. |Not accessible through the Service Bus API. |
-| to |Application-defined destination identifier, not interpreted by Service Bus. |[To](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) |
-| subject |Application-defined message purpose identifier, not interpreted by Service Bus. |[Label](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) |
-| reply-to |Application-defined reply-path indicator, not interpreted by Service Bus. |[ReplyTo](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) |
-| correlation-id |Application-defined correlation identifier, not interpreted by Service Bus. |[CorrelationId](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) |
-| content-type |Application-defined content-type indicator for the body, not interpreted by Service Bus. |[ContentType](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) |
+| to |Application-defined destination identifier, not interpreted by Service Bus. |[To](/dotnet/api/azure.messaging.servicebus.servicebusmessage.to) |
+| subject |Application-defined message purpose identifier, not interpreted by Service Bus. |[Subject](/dotnet/api/azure.messaging.servicebus.servicebusmessage.subject) |
+| reply-to |Application-defined reply-path indicator, not interpreted by Service Bus. |[ReplyTo](/dotnet/api/azure.messaging.servicebus.servicebusmessage.replyto) |
+| correlation-id |Application-defined correlation identifier, not interpreted by Service Bus. |[CorrelationId](/dotnet/api/azure.messaging.servicebus.servicebusmessage.correlationid) |
+| content-type |Application-defined content-type indicator for the body, not interpreted by Service Bus. |[ContentType](/dotnet/api/azure.messaging.servicebus.servicebusmessage.contenttype) |
| content-encoding |Application-defined content-encoding indicator for the body, not interpreted by Service Bus. |Not accessible through the Service Bus API. |
-| absolute-expiry-time |Declares at which absolute instant the message expires. Ignored on input (header TTL is observed), authoritative on output. |[ExpiresAtUtc](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) |
+| absolute-expiry-time |Declares at which absolute instant the message expires. Ignored on input (header TTL is observed), authoritative on output. |Not accessible through the Service Bus API. |
| creation-time |Declares at which time the message was created. Not used by Service Bus |Not accessible through the Service Bus API. |
-| group-id |Application-defined identifier for a related set of messages. Used for Service Bus sessions. |[SessionId](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) |
+| group-id |Application-defined identifier for a related set of messages. Used for Service Bus sessions. |[SessionId](/dotnet/api/azure.messaging.servicebus.servicebusmessage.sessionid) |
| group-sequence |Counter identifying the relative sequence number of the message inside a session. Ignored by Service Bus. |Not accessible through the Service Bus API. |
-| reply-to-group-id |- |[ReplyToSessionId](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) |
+| reply-to-group-id |- |[ReplyToSessionId](/dotnet/api/azure.messaging.servicebus.servicebusmessage.replytosessionid) |
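As an illustration of how these AMQP property fields surface on the current .NET client, the sketch below sets the corresponding `ServiceBusMessage` members; all values are placeholders, and `SessionId` only applies to session-enabled entities.

```csharp
using System;
using Azure.Messaging.ServiceBus;

var message = new ServiceBusMessage(BinaryData.FromString("payload"))
{
    MessageId = Guid.NewGuid().ToString(),   // AMQP message-id
    To = "orders",                           // AMQP to
    Subject = "OrderCreated",                // AMQP subject
    ReplyTo = "order-replies",               // AMQP reply-to
    CorrelationId = "order-123",             // AMQP correlation-id
    ContentType = "application/json",        // AMQP content-type
    SessionId = "customer-42",               // AMQP group-id
    ReplyToSessionId = "customer-42",        // AMQP reply-to-group-id
    TimeToLive = TimeSpan.FromMinutes(30)    // AMQP header ttl
};
```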
#### Message annotations
There are few other service bus message properties, which aren't part of AMQP me
| Annotation Map Key | Usage | API name | | | | |
-| x-opt-scheduled-enqueue-time | Declares at which time the message should appear on the entity |[ScheduledEnqueueTime](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.scheduledenqueuetimeutc) |
-| x-opt-partition-key | Application-defined key that dictates which partition the message should land in. | [PartitionKey](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.partitionkey) |
-| x-opt-via-partition-key | Application-defined partition-key value when a transaction is to be used to send messages via a transfer queue. | [ViaPartitionKey](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.viapartitionkey) |
-| x-opt-enqueued-time | Service-defined UTC time representing the actual time of enqueuing the message. Ignored on input. | [EnqueuedTimeUtc](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.enqueuedtimeutc) |
-| x-opt-sequence-number | Service-defined unique number assigned to a message. | [SequenceNumber](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.sequencenumber) |
-| x-opt-offset | Service-defined enqueued sequence number of the message. | [EnqueuedSequenceNumber](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.enqueuedsequencenumber) |
-| x-opt-locked-until | Service-defined. The date and time until which the message will be locked in the queue/subscription. | [LockedUntilUtc](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.lockeduntilutc) |
-| x-opt-deadletter-source | Service-Defined. If the message is received from dead letter queue, it represents the source of the original message. | [DeadLetterSource](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.deadlettersource) |
+| x-opt-scheduled-enqueue-time | Declares at which time the message should appear on the entity |[ScheduledEnqueueTime](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.scheduledenqueuetime) |
+| x-opt-partition-key | Application-defined key that dictates which partition the message should land in. | [PartitionKey](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.partitionkey) |
+| x-opt-via-partition-key | Application-defined partition-key value when a transaction is to be used to send messages via a transfer queue. | [TransactionPartitionKey](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.transactionpartitionkey) |
+| x-opt-enqueued-time | Service-defined UTC time representing the actual time of enqueuing the message. Ignored on input. | [EnqueuedTime](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.enqueuedtime) |
+| x-opt-sequence-number | Service-defined unique number assigned to a message. | [SequenceNumber](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.sequencenumber) |
+| x-opt-offset | Service-defined enqueued sequence number of the message. | [EnqueuedSequenceNumber](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.enqueuedsequencenumber) |
+| x-opt-locked-until | Service-defined. The date and time until which the message will be locked in the queue/subscription. | [LockedUntil](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.lockeduntil) |
+| x-opt-deadletter-source | Service-Defined. If the message is received from dead letter queue, it represents the source of the original message. | [DeadLetterSource](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.deadlettersource) |
### Transaction capability
service-bus-messaging Service Bus Amqp Request Response https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-amqp-request-response.md
For a detailed wire-level AMQP 1.0 protocol guide, which explains how Service Bu
## Concepts
-### Brokered message
+### ServiceBusReceivedMessage / ServiceBusMessage
Represents a message in Service Bus, which is mapped to an AMQP message. The mapping is defined in the [Service Bus AMQP protocol guide](service-bus-amqp-protocol-guide.md).
The **correlation-filter** map must include at least one of the following entrie
|session-id|string|No|| |reply-to-session-id|string|No|| |content-type|string|No||
-|properties|map|No|Maps to Service Bus [BrokeredMessage.Properties](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage).|
+|properties|map|No|Maps to Service Bus [ServiceBusMessage.Properties](/dotnet/api/azure.messaging.servicebus.servicebusmessage.applicationproperties)|
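The same correlation fields are exposed by the .NET administration client as `CorrelationRuleFilter`; the following is a hedged sketch (not the wire-level management map itself), with placeholder entity names and an illustrative application property.

```csharp
using Azure.Messaging.ServiceBus.Administration;

var adminClient = new ServiceBusAdministrationClient("<connection-string>");

// Match on subject plus one application property; both values are illustrative.
var filter = new CorrelationRuleFilter { Subject = "OrderCreated" };
filter.ApplicationProperties["region"] = "EMEA";

await adminClient.CreateRuleAsync(
    "<topic-name>",
    "<subscription-name>",
    new CreateRuleOptions("order-emea-rule", filter));
```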
The **sql-rule-action** map must include the following entries:
service-bus-messaging Service Bus Dead Letter Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dead-letter-queues.md
Azure Service Bus queues and topic subscriptions provide a secondary subqueue, called a *dead-letter queue* (DLQ). The dead-letter queue doesn't need to be explicitly created and can't be deleted or managed independent of the main entity.
-This article describes dead-letter queues in Service Bus. Much of the discussion is illustrated by the [Dead-Letter queues sample](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/DeadletterQueue) on GitHub. This sample uses the deprecated library, not the current `Azure.Messaging.ServiceBus`, but the concepts are the same.
+This article describes dead-letter queues in Service Bus. Much of the discussion is illustrated by the [Dead-Letter queues sample](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/servicebus/Azure.Messaging.ServiceBus/samples/DeadLetterQueue) on GitHub.
## The dead-letter queue
If you enable dead-lettering on filter evaluation exceptions, any errors that oc
In addition to the system-provided dead-lettering features, applications can use the DLQ to explicitly reject unacceptable messages. They can include messages that can't be properly processed because of any sort of system issue, messages that hold malformed payloads, or messages that fail authentication when some message-level security scheme is used.
-This can be done by calling [QueueClient.DeadLetterAsync(Guid lockToken, string deadLetterReason, string deadLetterErrorDescription)](/dotnet/api/microsoft.servicebus.messaging.queueclient.deadletterasync#microsoft-servicebus-messaging-queueclient-deadletterasync(system-guid-system-string-system-string)) method.
+This can be done by calling [ServiceBusReceiver.DeadLetterMessageAsync method](/dotnet/api/azure.messaging.servicebus.servicebusreceiver.deadlettermessageasync).
We recommend that you include the type of the exception in the `DeadLetterReason` and the stack trace of the exception in the `DeadLetterDescription` as it makes it easier to troubleshoot the cause of the problem resulting in messages being dead-lettered. Be aware that this might result in some messages exceeding [the 256 KB quota limit for the Standard Tier of Azure Service Bus](./service-bus-quotas.md), further indicating that the Premium Tier is what should be used for production environments.
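A sketch of that recommendation with `ServiceBusReceiver.DeadLetterMessageAsync`, assuming placeholder entity names; the exception type goes into the reason and the stack trace into the description.

```csharp
using System;
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<connection-string>");
ServiceBusReceiver receiver = client.CreateReceiver("<queue-name>");

ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();
try
{
    // ... attempt to process the message ...
}
catch (Exception ex)
{
    // Reject explicitly; the reason and description travel with the message
    // into the dead-letter queue and help with troubleshooting.
    await receiver.DeadLetterMessageAsync(
        message,
        deadLetterReason: ex.GetType().Name,
        deadLetterErrorDescription: ex.StackTrace);
}
```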
Messages are sent to the dead-letter queue under the following conditions:
## Dead-lettering in send via scenarios -- If the destination queue or topic is disabled, the message is sent to a transfer dead letter queue (TDLQ).
+- If the destination queue or topic is disabled, the message is sent to a transfer dead letter queue (TDLQ) of the source queue.
- If the destination queue or topic is deleted, the 404 exception is raised.-- If the destination queue or entity exceeds the entity size, the message doesn't go to either DLQ or TDLQ.
+- If the destination queue or entity exceeds the entity size, the message is sent to a TDLQ of the source queue.
## Path to the dead-letter queue
service-bus-messaging Service Bus End To End Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-end-to-end-tracing.md
In presence of multiple `DiagnosticSource` listeners for the same source, it's e
# [Microsoft.Azure.ServiceBus SDK](#tab/net-standard-sdk) + | Property Name | Description | |-|-| | Diagnostic-Id | Unique identifier of an external call from producer to the queue. Refer to [Request-Id in HTTP protocol](https://github.com/dotnet/runtime/blob/master/src/libraries/System.Diagnostics.DiagnosticSource/src/HttpCorrelationProtocol.md#request-id) for the rationale, considerations, and format |
service-bus-messaging Service Bus Federation Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-federation-patterns.md
targets, illustrated here in C#:
``` csharp [FunctionName("SBRouter")] public static async Task Run(
- [ServiceBusTrigger("source", Connection = "serviceBusConnectionAppSetting")] Message[] messages,
- [ServiceBus("dest1", Connection = "serviceBusConnectionAppSetting")] QueueClient output1,
- [ServiceBus("dest2", Connection = "serviceBusConnectionAppSetting")] QueueClient output2,
+ [ServiceBusTrigger("source", Connection = "serviceBusConnectionAppSetting")] ServiceBusReceivedMessage[] messages,
+ [ServiceBusOutput("dest1", Connection = "serviceBusConnectionAppSetting")] IAsyncCollector<dynamic> output1,
+ [ServiceBusOutput("dest2", Connection = "serviceBusConnectionAppSetting")] IAsyncCollector<dynamic> output2,
ILogger log) { foreach (Message messageData in messages)
service-bus-messaging Service Bus Filter Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-filter-examples.md
MessageProperty = 'A'
user.SuperHero like 'SuperMan%' ``` + ## Filter on message properties with special characters If the message property name has special characters, use double quotes (`"`) to enclose the property name. For example if the property name is `"http://schemas.microsoft.com/xrm/2011/Claims/EntityLogicalName"`, use the following syntax in the filter.
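A hedged illustration of that quoting rule, wrapped in a `SqlRuleFilter` so it stays in C#; the compared value `'account'` is an assumption for the example, not taken from the article.

```csharp
using Azure.Messaging.ServiceBus.Administration;

// The property name must be enclosed in double quotes inside the SQL expression.
var filter = new SqlRuleFilter(
    "\"http://schemas.microsoft.com/xrm/2011/Claims/EntityLogicalName\" = 'account'");
```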
namespace SendAndReceiveMessages
## Next steps See the following samples: -- [Managing rules](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/servicebus/Azure.Messaging.ServiceBus/samples/Sample12_ManagingRules.md). -- [.NET - Basic send and receive tutorial with filters](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/GettingStarted/BasicSendReceiveTutorialwithFilters/BasicSendReceiveTutorialWithFilters). This sample uses the old `Microsoft.Azure.ServiceBus` package. See the [Migration guide](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/servicebus/Azure.Messaging.ServiceBus/MigrationGuide.md) to learn how to migrate from using the old SDK to new SDK (`Azure.Messaging.ServiceBus`).-- [.NET - Topic filters](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/TopicFilters). This sample uses the old `Microsoft.Azure.ServiceBus` package.
+- [Managing rules](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/servicebus/Azure.Messaging.ServiceBus/samples/Sample12_ManagingRules.md)
+- [.NET - Topic filters](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/servicebus/Azure.Messaging.ServiceBus/samples/TopicFilters)
- [Azure Resource Manager template](/azure/templates/microsoft.servicebus/2017-04-01/namespaces/topics/subscriptions/rules)
service-bus-messaging Service Bus Java How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-java-how-to-use-queues.md
See the following documentation and samples:
[Azure SDK for Java]: /azure/developer/java/sdk/get-started [Azure Toolkit for Eclipse]: /azure/developer/java/toolkit-for-eclipse/installation
-[Queues, topics, and subscriptions]: service-bus-queues-topics-subscriptions.md
-[BrokeredMessage]: /dotnet/api/microsoft.servicebus.messaging.brokeredmessage
service-bus-messaging Service Bus Java How To Use Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-java-how-to-use-topics-subscriptions.md
See the following documentation and samples:
- [Samples on GitHub](/samples/azure/azure-sdk-for-java/servicebus-samples/) - [Java API reference](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-messaging-servicebus/7.0.0/https://docsupdatetracker.net/index.html) -
+<!-- Local links -->
[Azure SDK for Java]: /java/api/overview/azure/ [Azure Toolkit for Eclipse]: /azure/developer/java/toolkit-for-eclipse/installation
-[Service Bus queues, topics, and subscriptions]: service-bus-queues-topics-subscriptions.md
-[SqlFilter]: /dotnet/api/microsoft.azure.servicebus.sqlfilter
-[SqlFilter.SqlExpression]: /dotnet/api/microsoft.azure.servicebus.sqlfilter.sqlexpression
-[BrokeredMessage]: /dotnet/api/microsoft.servicebus.messaging.brokeredmessage
service-bus-messaging Service Bus Management Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-management-libraries.md
Service Bus client libraries that are used for operations like send and receive
|.NET|[Microsoft.Azure.ServiceBus](https://www.nuget.org/packages/Microsoft.Azure.ServiceBus/)|[ManagementClient](/dotnet/api/microsoft.azure.servicebus.management.managementclient)|[.NET](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus)| |Java|[azure-mgmt-servicebus](https://search.maven.org/artifact/com.microsoft.azure/azure-mgmt-servicebus)|[ManagementClientAsync](/java/api/com.microsoft.azure.servicebus.management.managementclientasync), [ManagementClient](/java/api/com.microsoft.azure.servicebus.management.managementclient)|[Java](https://github.com/Azure/azure-service-bus/tree/master/samples/Java)| ## Next steps - Send messages to and receive messages from queue using the latest Service Bus library: [.NET](./service-bus-dotnet-get-started-with-queues.md#send-messages-to-the-queue), [Java](./service-bus-java-how-to-use-queues.md), [JavaScript](./service-bus-nodejs-how-to-use-queues.md), [Python](./service-bus-python-how-to-use-queues.md)
service-bus-messaging Service Bus Messages Payloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-messages-payloads.md
While the following names use pascal casing, note that JavaScript and Python cli
| `EnqueuedTimeUtc` | The UTC instant at which the message has been accepted and stored in the entity. This value can be used as an authoritative and neutral arrival time indicator when the receiver doesn't want to trust the sender's clock. This property is read-only. | | `ExpiresAtUtc` (absolute-expiry-time) | The UTC instant at which the message is marked for removal and no longer available for retrieval from the entity because of its expiration. Expiry is controlled by the **TimeToLive** property and this property is computed from EnqueuedTimeUtc+TimeToLive. This property is read-only. | | `Label` or `Subject` (subject) | This property enables the application to indicate the purpose of the message to the receiver in a standardized fashion, similar to an email subject line. |
-| `LockedUntilUtc` | For messages retrieved under a lock (peek-lock receive mode, not presettled) this property reflects the UTC instant until which the message is held locked in the queue/subscription. When the lock expires, the [DeliveryCount](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.deliverycount) is incremented and the message is again available for retrieval. This property is read-only. |
+| `LockedUntilUtc` | For messages retrieved under a lock (peek-lock receive mode, not presettled) this property reflects the UTC instant until which the message is held locked in the queue/subscription. When the lock expires, the [DeliveryCount](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.deliverycount) is incremented and the message is again available for retrieval. This property is read-only. |
| `LockToken` | The lock token is a reference to the lock that is being held by the broker in *peek-lock* receive mode. The token can be used to pin the lock permanently through the [Deferral](message-deferral.md) API and, with that, take the message out of the regular delivery state flow. This property is read-only. | | `MessageId` (message-id) | The message identifier is an application-defined value that uniquely identifies the message and its payload. The identifier is a free-form string and can reflect a GUID or an identifier derived from the application context. If enabled, the [duplicate detection](duplicate-detection.md) feature identifies and removes second and further submissions of messages with the same **MessageId**. | | `PartitionKey` | For [partitioned entities](service-bus-partitioning.md), setting this value enables assigning related messages to the same internal partition, so that submission sequence order is correctly recorded. The partition is chosen by a hash function over this value and can't be chosen directly. For session-aware entities, the **SessionId** property overrides this value. |
When in transit or stored inside of Service Bus, the payload is always an opaque
Unlike the Java or .NET Standard variants, the .NET Framework version of the Service Bus API supports creating **BrokeredMessage** instances by passing arbitrary .NET objects into the constructor. + When you use the legacy SBMP protocol, those objects are then serialized with the default binary serializer, or with a serializer that is externally supplied. The object is serialized into an AMQP object. The receiver can retrieve those objects with the [GetBody\<T>()](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.getbody#Microsoft_ServiceBus_Messaging_BrokeredMessage_GetBody__1) method, supplying the expected type. With AMQP, the objects are serialized into an AMQP graph of `ArrayList` and `IDictionary<string,object>` objects, and any AMQP client can decode them. + While this hidden serialization magic is convenient, applications should take explicit control of object serialization and turn their object graphs into streams before including them into a message, and do the reverse on the receiver side. This yields interoperable results. While AMQP has a powerful binary encoding model, it's tied to the AMQP messaging ecosystem, and HTTP clients will have trouble decoding such payloads. The .NET Standard and Java API variants only accept byte arrays, which means that the application must handle object serialization control.
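A minimal sketch of taking explicit control of serialization with the current .NET client, as suggested above; the payload shape and the receiver-side type name are hypothetical.

```csharp
using Azure.Messaging.ServiceBus;

// Sender side: serialize the object graph to JSON bytes yourself and tag the content type,
// rather than relying on the legacy BrokeredMessage object serializer.
var order = new { Id = "order-123", Total = 42.50m };
var message = new ServiceBusMessage(BinaryData.FromObjectAsJson(order))
{
    ContentType = "application/json"
};

// Receiver side, given a ServiceBusReceivedMessage named 'received' (hypothetical DTO type):
// var roundTripped = received.Body.ToObjectFromJson<OrderDto>();
```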
service-bus-messaging Service Bus Messaging Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-messaging-overview.md
Subscribers can define which messages they want to receive from a topic. These m
### Auto-delete on idle
-[Auto-delete on idle](/dotnet/api/microsoft.servicebus.messaging.queuedescription.autodeleteonidle) enables you to specify an idle interval after which the queue is automatically deleted. The interval is reset when there's traffic on the queue. The minimum duration is 5 minutes.
+[Auto-delete on idle](/dotnet/api/azure.messaging.servicebus.administration.queueproperties.autodeleteonidle) enables you to specify an idle interval after which the queue is automatically deleted. The interval is reset when there's traffic on the queue. The minimum duration is 5 minutes.
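A sketch of setting that interval when creating a queue with the administration client; the connection string and queue name are placeholders.

```csharp
using System;
using Azure.Messaging.ServiceBus.Administration;

var adminClient = new ServiceBusAdministrationClient("<connection-string>");

// Delete the queue automatically after 30 idle minutes (the minimum interval is 5 minutes).
var options = new CreateQueueOptions("<queue-name>")
{
    AutoDeleteOnIdle = TimeSpan.FromMinutes(30)
};
await adminClient.CreateQueueAsync(options);
```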
### Duplicate detection
service-bus-messaging Service Bus Messaging Sql Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-messaging-sql-filter.md
For examples, see [Service Bus filter examples](service-bus-filter-examples.md).
## Next steps -- [SQLFilter class (.NET Framework)](/dotnet/api/microsoft.servicebus.messaging.sqlfilter)-- [SQLFilter class (.NET Standard)](/dotnet/api/microsoft.azure.servicebus.sqlfilter)-- [SqlFilter class (Java)](/java/api/com.microsoft.azure.servicebus.rules.SqlFilter)
+- [SqlRuleFilter (.NET)](/dotnet/api/azure.messaging.servicebus.administration.sqlrulefilter)
+- [SqlRuleFilter (Java)](/java/api/com.azure.messaging.servicebus.administration.models.sqlrulefilter)
- [SqlRuleFilter (JavaScript)](/javascript/api/@azure/service-bus/sqlrulefilter) - [`az servicebus topic subscription rule`](/cli/azure/servicebus/topic/subscription/rule) - [New-AzServiceBusRule](/powershell/module/az.servicebus/new-azservicebusrule)
service-bus-messaging Service Bus Messaging Sql Rule Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-messaging-sql-rule-action.md
For examples, see [Service Bus filter examples](service-bus-filter-examples.md).
- SET performs implicit conversion if possible when the expression type and the existing property type are different. - Action fails if non-existent system properties were referenced. - Action doesn't fail if non-existent user properties were referenced.-- A non-existent user property is evaluated as "Unknown" internally, following the same semantics as [SQLFilter](/dotnet/api/microsoft.servicebus.messaging.sqlfilter) when evaluating operators.
+- A non-existent user property is evaluated as "Unknown" internally, following the same semantics as [SQLRuleFilter](/dotnet/api/azure.messaging.servicebus.administration.sqlrulefilter) when evaluating operators.
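For context, a sketch of attaching a `SqlRuleAction` to a subscription rule with the current administration client; the filter, action expression, and entity names are illustrative.

```csharp
using Azure.Messaging.ServiceBus.Administration;

var adminClient = new ServiceBusAdministrationClient("<connection-string>");

await adminClient.CreateRuleAsync(
    "<topic-name>",
    "<subscription-name>",
    new CreateRuleOptions("flag-high-value")
    {
        Filter = new SqlRuleFilter("Total > 100"),
        // SET adds or overwrites a user property on matching messages.
        Action = new SqlRuleAction("SET priority = 'high'")
    });
```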
## Next steps -- [SQLRuleAction class (.NET Framework)](/dotnet/api/microsoft.servicebus.messaging.sqlruleaction)-- [SQLRuleAction class (.NET Standard)](/dotnet/api/microsoft.azure.servicebus.sqlruleaction)-- [SqlRuleAction class (Java)](/java/api/com.microsoft.azure.servicebus.rules.sqlruleaction)
+- [SQLRuleAction (.NET)](/dotnet/api/azure.messaging.servicebus.administration.sqlruleaction)
+- [SqlRuleAction (Java)](/java/api/com.azure.messaging.servicebus.administration.models.sqlruleaction)
- [SqlRuleAction (JavaScript)](/javascript/api/@azure/service-bus/sqlruleaction) - [`az servicebus topic subscription rule`](/cli/azure/servicebus/topic/subscription/rule) - [New-AzServiceBusRule](/powershell/module/az.servicebus/new-azservicebusrule)
service-bus-messaging Service Bus Outages Disasters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-outages-disasters.md
If the application doesn't require permanent sender-to-receiver communication, t
### Active replication Active replication uses entities in both namespaces for every operation. Any client that sends a message sends two copies of the same message. The first copy is sent to the primary entity (for example, **contosoPrimary.servicebus.windows.net/sales**), and the second copy of the message is sent to the secondary entity (for example, **contosoSecondary.servicebus.windows.net/sales**).
-A client receives messages from both queues. The receiver processes the first copy of a message, and the second copy is suppressed. To suppress duplicate messages, the sender must tag each message with a unique identifier. Both copies of the message must be tagged with the same identifier. You can use the [BrokeredMessage.MessageId][BrokeredMessage.MessageId] or [BrokeredMessage.Label][BrokeredMessage.Label] properties, or a custom property to tag the message. The receiver must maintain a list of messages that it has already received.
+A client receives messages from both queues. The receiver processes the first copy of a message, and the second copy is suppressed. To suppress duplicate messages, the sender must tag each message with a unique identifier. Both copies of the message must be tagged with the same identifier. You can use the [ServiceBusMessage.MessageId](/dotnet/api/azure.messaging.servicebus.servicebusmessage.messageid) or [ServiceBusMessage.Subject](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.subject) properties, or a custom property to tag the message. The receiver must maintain a list of messages that it has already received.
The [geo-replication with Service Bus standard tier][Geo-replication with Service Bus Standard Tier] sample demonstrates active replication of messaging entities.
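A sketch of the active-replication send path with the current .NET client, reusing the article's example namespaces and the *sales* queue; both copies carry the same `MessageId` so the receiver can suppress the duplicate. The credential choice is an assumption.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Messaging.ServiceBus;

var credential = new DefaultAzureCredential();
await using var primary = new ServiceBusClient("contosoPrimary.servicebus.windows.net", credential);
await using var secondary = new ServiceBusClient("contosoSecondary.servicebus.windows.net", credential);

ServiceBusSender primarySender = primary.CreateSender("sales");
ServiceBusSender secondarySender = secondary.CreateSender("sales");

// Tag both copies with the same identifier so the receiver can detect the duplicate.
string id = Guid.NewGuid().ToString();
await Task.WhenAll(
    primarySender.SendMessageAsync(new ServiceBusMessage("order payload") { MessageId = id }),
    secondarySender.SendMessageAsync(new ServiceBusMessage("order payload") { MessageId = id }));
```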
When using passive replication, in the following scenarios, messages can be lost
* **Message delay or loss**: Assume that the sender successfully sent a message m1 to the primary queue, and then the queue becomes unavailable before the receiver receives m1. The sender sends a subsequent message m2 to the secondary queue. If the primary queue is temporarily unavailable, the receiver receives m1 after the queue becomes available again. In case of a disaster, the receiver may never receive m1. * **Duplicate reception**: Assume that the sender sends a message m to the primary queue. Service Bus successfully processes m but fails to send a response. After the send operation times out, the sender sends an identical copy of m to the secondary queue. If the receiver is able to receive the first copy of m before the primary queue becomes unavailable, the receiver receives both copies of m at approximately the same time. If the receiver isn't able to receive the first copy of m before the primary queue becomes unavailable, the receiver initially receives only the second copy of m, but then receives a second copy of m when the primary queue becomes available.
-The [Geo-replication with Service Bus standard Tier][Geo-replication with Service Bus Standard Tier] sample demonstrates passive replication of messaging entities.
+The [Azure Messaging Replication Tasks with .NET Core][Azure Messaging Replication Tasks with .NET Core] sample demonstrates replication of messages between namespaces.
## Next steps To learn more about disaster recovery, see these articles:
To learn more about disaster recovery, see these articles:
* [Azure SQL Database Business Continuity][Azure SQL Database Business Continuity] * [Designing resilient applications for Azure][Azure resiliency technical guidance]
-[Service Bus Authentication]: service-bus-authentication-and-authorization.md
-[Asynchronous messaging patterns and high availability]: service-bus-async-messaging.md#failure-of-service-bus-within-an-azure-datacenter
-[BrokeredMessage.MessageId]: /dotnet/api/microsoft.servicebus.messaging.brokeredmessage
-[BrokeredMessage.Label]: /dotnet/api/microsoft.servicebus.messaging.brokeredmessage
-[Geo-replication with Service Bus Standard Tier]: https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.ServiceBus.Messaging/GeoReplication
+[Azure Messaging Replication Tasks with .NET Core]: https://github.com/Azure-Samples/azure-messaging-replication-dotnet
[Azure SQL Database Business Continuity]:/azure/azure-sql/database/business-continuity-high-availability-disaster-recover-hadr-overview [Azure resiliency technical guidance]: /azure/architecture/framework/resiliency/app-design-
-[1]: ./media/service-bus-outages-disasters/az.png
service-bus-messaging Service Bus Performance Improvements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-performance-improvements.md
AMQP is the most efficient, because it maintains the connection to Service Bus.
> [!IMPORTANT] > The SBMP protocol is only available for .NET Framework. AMQP is the default for .NET Standard. + ## Choosing the appropriate Service Bus .NET SDK
-The `Azure.Messaging.ServiceBus` package is the latest Azure Service Bus .NET SDK available as of November 2020. There are two older .NET SDKs that will continue to receive critical bug fixes, but we strongly encourage you to use the latest SDK instead. Read the [migration guide](https://aka.ms/azsdk/net/migrate/sb) for details on how to move from the older SDKs.
+The `Azure.Messaging.ServiceBus` package is the latest Azure Service Bus .NET SDK available as of November 2020. There are two older .NET SDKs that will continue to receive critical bug fixes until 30 September 2026, but we strongly encourage you to use the latest SDK instead. Read the [migration guide](https://aka.ms/azsdk/net/migrate/sb) for details on how to move from the older SDKs.
| NuGet Package | Primary Namespace(s) | Minimum Platform(s) | Protocol(s) | ||-||-|
The `Azure.Messaging.ServiceBus` package is the latest Azure Service Bus .NET SD
For more information on minimum .NET Standard platform support, see [.NET implementation support](/dotnet/standard/net-standard#net-implementation-support). + ## Reusing factories and clients # [Azure.Messaging.ServiceBus SDK](#tab/net-standard-sdk-2) The Service Bus clients that interact with the service, such as [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient), [ServiceBusSender](/dotnet/api/azure.messaging.servicebus.servicebussender), [ServiceBusReceiver](/dotnet/api/azure.messaging.servicebus.servicebusreceiver), and [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor), should be registered for dependency injection as singletons (or instantiated once and shared). ServiceBusClient can be registered for dependency injection with the [ServiceBusClientBuilderExtensions](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/servicebus/Azure.Messaging.ServiceBus/src/Compatibility/ServiceBusClientBuilderExtensions.cs).
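A sketch of the singleton registration, assuming the `Microsoft.Extensions.Azure` package alongside `Azure.Messaging.ServiceBus`; the connection string is a placeholder.

```csharp
using Microsoft.Extensions.Azure;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

// Registers ServiceBusClient as a singleton through the builder extensions mentioned above.
services.AddAzureClients(builder =>
{
    builder.AddServiceBusClient("<connection-string>");
});
```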
This guidance doesn't apply to the [ServiceBusSessionReceiver](/dotnet/api/azure
# [Microsoft.Azure.ServiceBus SDK](#tab/net-standard-sdk)
-> Please note, a newer package Azure.Messaging.ServiceBus is available as of November 2020. While the Microsoft.Azure.ServiceBus package will continue to receive critical bug fixes, we strongly encourage you to upgrade. Read the [migration guide](https://aka.ms/azsdk/net/migrate/sb) for more details.
+> Please note, a newer package Azure.Messaging.ServiceBus is available as of November 2020. While the Microsoft.Azure.ServiceBus package will continue to receive critical bug fixes until 30 September 2026, we strongly encourage you to upgrade. Read the [migration guide](https://aka.ms/azsdk/net/migrate/sb) for more details.
-Service Bus client objects, such as implementations of [`IQueueClient`][QueueClient] or [`IMessageSender`][MessageSender], should be registered for dependency injection as singletons (or instantiated once and shared). We recommend that you don't close messaging factories, queue, topic, or subscription clients after you send a message, and then re-create them when you send the next message. Closing a messaging factory deletes the connection to the Service Bus service. A new connection is established when recreating the factory.
+Service Bus client objects, such as implementations of [`IQueueClient`](/dotnet/api/microsoft.azure.servicebus.queueclient) or [`IMessageSender`](/dotnet/api/microsoft.azure.servicebus.core.messagesender), should be registered for dependency injection as singletons (or instantiated once and shared). We recommend that you don't close messaging factories, queue, topic, or subscription clients after you send a message, and then re-create them when you send the next message. Closing a messaging factory deletes the connection to the Service Bus service. A new connection is established when recreating the factory.
Batched store access doesn't affect the number of billable messaging operations.
## Prefetching
-[Prefetching](service-bus-prefetch.md) enables the queue or subscription client to load additional messages from the service when it receives messages. The client stores these messages in a local cache. The size of the cache is determined by the `QueueClient.PrefetchCount` or `SubscriptionClient.PrefetchCount` properties. Each client that enables prefetching maintains its own cache. A cache isn't shared across clients. If the client starts a receive operation and its cache is empty, the service transmits a batch of messages. The size of the batch equals the size of the cache or 256 KB, whichever is smaller. If the client starts a receive operation and the cache contains a message, the message is taken from the cache.
+[Prefetching](service-bus-prefetch.md) enables the queue or subscription client to load additional messages from the service when it receives messages. The client stores these messages in a local cache. The size of the cache is determined by the `ServiceBusReceiver.PrefetchCount` property. Each client that enables prefetching maintains its own cache. A cache isn't shared across clients. If the client starts a receive operation and its cache is empty, the service transmits a batch of messages. The size of the batch equals the size of the cache or 256 KB, whichever is smaller. If the client starts a receive operation and the cache contains a message, the message is taken from the cache.
When a message is prefetched, the service locks the prefetched message. With the lock, the prefetched message can't be received by a different receiver. If the receiver can't complete the message before the lock expires, the message becomes available to other receivers. The prefetched copy of the message remains in the cache. The receiver that consumes the expired cached copy receives an exception when it tries to complete that message. By default, the message lock expires after 60 seconds. This value can be extended to 5 minutes. To prevent the consumption of expired messages, set the cache size smaller than the number of messages that a client can consume within the lock timeout interval.
For more information, see the following `PrefetchCount` properties:
-## Prefetching and ReceiveBatch
-While the concepts of prefetching multiple messages together have similar semantics to processing messages in a batch (`ReceiveBatch`), there are some minor differences that must be kept in mind when using these approaches together.
+## Prefetching and ReceiveMessagesAsync
+While the concepts of prefetching multiple messages together have similar semantics to processing messages in a batch (`ReceiveMessagesAsync`), there are some minor differences that must be kept in mind when using these approaches together.
-Prefetch is a configuration (or mode) on the client (`QueueClient` and `SubscriptionClient`) and `ReceiveBatch` is an operation (that has request-response semantics).
+Prefetch is a configuration (or mode) on the `ServiceBusReceiver`, and `ReceiveMessagesAsync` is an operation (that has request-response semantics).
While using these approaches together, consider the following cases -
-* Prefetch should be greater than or equal to the number of messages you're expecting to receive from `ReceiveBatch`.
+* Prefetch should be greater than or equal to the number of messages you're expecting to receive from `ReceiveMessagesAsync`.
* Prefetch can be up to n/3 times the number of messages processed per second, where n is the default lock duration. There are some challenges with having a greedy approach, that is, keeping the prefetch count high, because it implies that the message is locked to a particular receiver. We recommend that you try out prefetch values that are between the thresholds mentioned above, and identify what fits.
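A sketch tying the two together with the current .NET client: the receiver's `PrefetchCount` is kept at or above the batch size requested from `ReceiveMessagesAsync`. Names and values are placeholders.

```csharp
using System.Collections.Generic;
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<connection-string>");

// Prefetch at least as many messages as the batch size requested below.
ServiceBusReceiver receiver = client.CreateReceiver(
    "<queue-name>",
    new ServiceBusReceiverOptions { PrefetchCount = 100 });

IReadOnlyList<ServiceBusReceivedMessage> batch =
    await receiver.ReceiveMessagesAsync(maxMessages: 50);

foreach (ServiceBusReceivedMessage message in batch)
{
    await receiver.CompleteMessageAsync(message);
}
```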
To maximize throughput, try the following steps:
* Use asynchronous operations to take advantage of client-side batching (see the sketch after this list).
* Leave batched store access enabled. This access increases the overall rate at which messages can be written into the topic.
* Set the prefetch count to 20 times the expected receive rate in seconds. This count reduces the number of Service Bus client protocol transmissions.
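As an illustration of the first point, here's a rough C# sketch that sends messages asynchronously as an explicit `ServiceBusMessageBatch`. The connection string and entity name are placeholders, and explicit batching is shown only as one way to reduce the number of client-to-service transmissions:

```csharp
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<service-bus-connection-string>");
await using ServiceBusSender sender = client.CreateSender("<queue-or-topic-name>");

// Pack several messages into one batch so they're transferred in a single operation.
using ServiceBusMessageBatch batch = await sender.CreateMessageBatchAsync();

for (var i = 0; i < 100; i++)
{
    if (!batch.TryAddMessage(new ServiceBusMessage($"message {i}")))
    {
        // The batch is full; a real application would send it and start a new one.
        break;
    }
}

// One asynchronous call sends the whole batch.
await sender.SendMessagesAsync(batch);
```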
-<!-- .NET Standard SDK, Microsoft.Azure.ServiceBus -->
-[QueueClient]: /dotnet/api/microsoft.azure.servicebus.queueclient
-[MessageSender]: /dotnet/api/microsoft.azure.servicebus.core.messagesender
-
-<!-- .NET Framework SDK, Microsoft.Azure.ServiceBus -->
-[MessagingFactory]: /dotnet/api/microsoft.servicebus.messaging.messagingfactory
-[BatchFlushInterval]: /dotnet/api/microsoft.servicebus.messaging.messagesender.batchflushinterval
-[ForcePersistence]: /dotnet/api/microsoft.servicebus.messaging.brokeredmessage.forcepersistence
-[EnablePartitioning]: /dotnet/api/microsoft.servicebus.messaging.queuedescription.enablepartitioning
-[TopicDescription.EnableFiltering]: /dotnet/api/microsoft.servicebus.messaging.topicdescription.enablefilteringmessagesbeforepublishing
-
-<!-- Local links -->
-[Partitioned messaging entities]: service-bus-partitioning.md
service-bus-messaging Service Bus Prefetch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-prefetch.md
Try the samples in the language of your choice to explore Azure Service Bus feat
Samples for the older .NET and Java client libraries: - [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/) - See the **Prefetch** sample. - [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus) - See the **Prefetch** sample. +
service-bus-messaging Service Bus Premium Messaging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-premium-messaging.md
Here are some considerations when sending large messages on Azure Service Bus -
- Batching isn't supported.
- Service Bus Explorer doesn't support sending or receiving large messages.

### Enabling large messages support for a new queue (or topic)
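As a rough sketch of enabling a larger message size on a new queue with the `Azure.Messaging.ServiceBus.Administration` client: a premium namespace is required, the connection string, queue name, and 100-MB figure are placeholders, and `MaxMessageSizeInKilobytes` is assumed to be the option that controls the limit.

```csharp
using Azure.Messaging.ServiceBus.Administration;

// Placeholder connection string; large message support requires a premium namespace.
var adminClient = new ServiceBusAdministrationClient("<service-bus-connection-string>");

var options = new CreateQueueOptions("<queue-name>")
{
    // Assumed option for the per-message size limit, in kilobytes (102400 KB = 100 MB).
    MaxMessageSizeInKilobytes = 102400
};

await adminClient.CreateQueueAsync(options);
```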
service-bus-messaging Service Bus Queues Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-queues-topics-subscriptions.md
Try the samples in the language of your choice to explore Azure Service Bus feat
Find samples for the older .NET and Java client libraries below: - [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/) - [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus/MessageBrowse)+
service-bus-messaging Service Bus Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-samples.md
# Service Bus messaging samples or examples
-The Service Bus messaging samples demonstrate key features in [Service Bus messaging](https://azure.microsoft.com/services/service-bus/). Currently, you can find the samples in the following places:
+The Service Bus messaging samples demonstrate key features in [Service Bus messaging](https://azure.microsoft.com/services/service-bus/). Currently, you can find the samples in the following places.
+ ## .NET samples
service-bus-messaging Service Bus Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-sas.md
An authorization rule is assigned a *Primary Key* and a *Secondary Key*. These k
When you create a Service Bus namespace, a policy rule named **RootManageSharedAccessKey** is automatically created for the namespace. This policy has Manage permissions for the entire namespace. It's recommended that you treat this rule like an administrative **root** account and don't use it in your application. You can create more policy rules in the **Configure** tab for the namespace in the portal, via PowerShell or Azure CLI.
-It is recommended that you periodically regenerate the keys used in the [SharedAccessAuthorizationRule](/dotnet/api/microsoft.servicebus.messaging.sharedaccessauthorizationrule) object. The primary and secondary key slots exist so that you can rotate keys gradually. If your application generally uses the primary key, you can copy the primary key into the secondary key slot, and only then regenerate the primary key. The new primary key value can then be configured into the client applications, which have continued access using the old primary key in the secondary slot. Once all clients are updated, you can regenerate the secondary key to finally retire the old primary key.
+It is recommended that you periodically regenerate the keys used in the [SharedAccessAuthorizationRule](/dotnet/api/azure.messaging.servicebus.administration.sharedaccessauthorizationrule) object. The primary and secondary key slots exist so that you can rotate keys gradually. If your application generally uses the primary key, you can copy the primary key into the secondary key slot, and only then regenerate the primary key. The new primary key value can then be configured into the client applications, which have continued access using the old primary key in the secondary slot. Once all clients are updated, you can regenerate the secondary key to finally retire the old primary key.
-If you know or suspect that a key is compromised and you have to revoke the keys, you can regenerate both the [PrimaryKey](/dotnet/api/microsoft.servicebus.messaging.sharedaccessauthorizationrule) and the [SecondaryKey](/dotnet/api/microsoft.servicebus.messaging.sharedaccessauthorizationrule) of a [SharedAccessAuthorizationRule](/dotnet/api/microsoft.servicebus.messaging.sharedaccessauthorizationrule), replacing them with new keys. This procedure invalidates all tokens signed with the old keys.
+If you know or suspect that a key is compromised and you have to revoke the keys, you can regenerate both the [PrimaryKey](/dotnet/api/azure.messaging.servicebus.administration.sharedaccessauthorizationrule.primarykey) and the [SecondaryKey](/dotnet/api/azure.messaging.servicebus.administration.sharedaccessauthorizationrule.secondarykey) of a [SharedAccessAuthorizationRule](/dotnet/api/azure.messaging.servicebus.administration.sharedaccessauthorizationrule), replacing them with new keys. This procedure invalidates all tokens signed with the old keys.
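Tokens are signed with whichever key slot the client currently holds, which is why rotating through the secondary slot keeps existing clients working. As a minimal sketch of how a client signs a SAS token with a given key (the resource URI, key name, and key value are placeholders):

```csharp
using System;
using System.Globalization;
using System.Net;
using System.Security.Cryptography;
using System.Text;

static string CreateSasToken(string resourceUri, string keyName, string key, TimeSpan timeToLive)
{
    // The signature covers the URL-encoded resource URI and an expiry in Unix seconds.
    var expiry = DateTimeOffset.UtcNow.Add(timeToLive).ToUnixTimeSeconds()
        .ToString(CultureInfo.InvariantCulture);
    var stringToSign = WebUtility.UrlEncode(resourceUri) + "\n" + expiry;

    using var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key));
    var signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));

    return $"SharedAccessSignature sr={WebUtility.UrlEncode(resourceUri)}" +
           $"&sig={WebUtility.UrlEncode(signature)}&se={expiry}&skn={keyName}";
}

// Example: a token for a queue, valid for one hour, signed with the current primary (or secondary) key.
Console.WriteLine(CreateSasToken(
    "https://<namespace>.servicebus.windows.net/<queue-name>",
    "RootManageSharedAccessKey",
    "<primary-or-secondary-key>",
    TimeSpan.FromHours(1)));
```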
## Best practices when using SAS When you use shared access signatures in your applications, you need to be aware of two potential risks:
service-bus-messaging Service Bus Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-transactions.md
# Overview of Service Bus transaction processing
-This article discusses the transaction capabilities of Microsoft Azure Service Bus. Much of the discussion is illustrated by the [AMQP Transactions with Service Bus sample](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/TransactionsAndSendVia/TransactionsAndSendVia/AMQPTransactionsSendVia). This article is limited to an overview of transaction processing and the *send via* feature in Service Bus, while the Atomic Transactions sample is broader and more complex in scope.
+This article discusses the transaction capabilities of Microsoft Azure Service Bus. Much of the discussion is illustrated by the [Transactions sample](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/servicebus/Azure.Messaging.ServiceBus/samples/Sample06_Transactions.md). This article is limited to an overview of transaction processing and the *send via* feature in Service Bus, while the Atomic Transactions sample is broader and more complex in scope.
> [!NOTE] > - The basic tier of Service Bus doesn't support transactions. The standard and premium tiers support transactions. For differences between these tiers, see [Service Bus pricing](https://azure.microsoft.com/pricing/details/service-bus/).
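As a minimal sketch of the pattern shown in the linked transactions sample, a `TransactionScope` with async flow enabled groups settling a received message and sending a new one so that both operations commit or roll back together. The connection string and queue name are placeholders, and both operations here target the same queue:

```csharp
using System.Transactions;
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<service-bus-connection-string>");
await using ServiceBusReceiver receiver = client.CreateReceiver("<queue-name>");
await using ServiceBusSender sender = client.CreateSender("<queue-name>");

// Assumes a message is available on the queue.
ServiceBusReceivedMessage received = await receiver.ReceiveMessageAsync();

using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    // Both operations commit together when the scope completes; if the scope is
    // disposed without Complete(), both are rolled back.
    await receiver.CompleteMessageAsync(received);
    await sender.SendMessageAsync(new ServiceBusMessage("follow-up message"));

    scope.Complete();
}
```

Transactions that span two entities (the *send via* scenario) additionally require the client to be configured for cross-entity transactions; in the .NET client that's the `ServiceBusClientOptions.EnableCrossEntityTransactions` option.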
For more information about Service Bus queues, see the following articles:
* [How to use Service Bus queues](service-bus-dotnet-get-started-with-queues.md) * [Chaining Service Bus entities with autoforwarding](service-bus-auto-forwarding.md)
-* [Autoforward sample](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.ServiceBus.Messaging/AutoForward) (`Microsoft.ServiceBus.Messaging` library)
-* [Atomic Transactions with Service Bus sample](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.ServiceBus.Messaging/AtomicTransactions) (`Microsoft.ServiceBus.Messaging` library)
* [Working with transactions sample](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/servicebus/Azure.Messaging.ServiceBus/samples/Sample06_Transactions.md) (`Azure.Messaging.ServiceBus` library) * [Azure Queue Storage and Service Bus queues compared](service-bus-azure-and-service-bus-queues-compared-contrasted.md)
service-bus-messaging Service Bus Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-troubleshooting-guide.md
The following steps may help you with troubleshooting connectivity/certificate/t
- Obtain a network trace if the previous steps don't help and analyze it using tools such as [Wireshark](https://www.wireshark.org/). Contact [Microsoft Support](https://support.microsoft.com/) if needed. - To find the right IP addresses to add to allowlist for your connections, see [What IP addresses do I need to add to allowlist](service-bus-faq.yml#what-ip-addresses-do-i-need-to-add-to-allowlist-). ## Issues that may occur with service upgrades/restarts
You receive the following error message:
`Microsoft.Azure.ServiceBus.ServiceBusException: Put token failed. status-code: 403, status-description: The maximum number of '1000' tokens per connection has been reached.`

### Cause

The number of authentication tokens for concurrent links in a single connection to a Service Bus namespace has exceeded the limit of 1,000.
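One way to stay under that limit (a sketch, not the article's prescribed resolution) is to spread senders and receivers across more than one `ServiceBusClient`, because each client instance maintains its own AMQP connection and therefore its own budget of links and tokens. The entity names below are placeholders:

```csharp
using Azure.Messaging.ServiceBus;

// Each ServiceBusClient maintains its own AMQP connection, so the links (and the
// authentication tokens behind them) created from different clients count against
// different connections.
await using var firstConnection = new ServiceBusClient("<service-bus-connection-string>");
await using var secondConnection = new ServiceBusClient("<service-bus-connection-string>");

// Spread the receivers for many entities across the two clients instead of
// creating every link from a single client.
ServiceBusReceiver ordersReceiver = firstConnection.CreateReceiver("<queue-1>");
ServiceBusReceiver invoicesReceiver = secondConnection.CreateReceiver("<queue-2>");
```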
service-bus-messaging Transport Layer Security Enforce Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/transport-layer-security-enforce-minimum-version.md
When a client sends a request to Service Bus namespace, the client establishes a
> [!NOTE]
> For backward compatibility, no TLS checks are done when connecting via the SBMP protocol for namespaces that don't have the `MinimumTlsVersion` setting specified or that have it set to 1.0.

Here are a few important points to consider:
- A network trace would show the successful establishment of a TCP connection and successful TLS negotiation, before a 401 is returned if the TLS version used is less than the minimum TLS version configured.
spring-apps How To Scale Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-scale-manual.md
After you finish, you'll know how to make quick manual changes to each applicati
As you modify the scaling attributes, keep the following notes in mind:
-* **vCPU**: The maximum number of CPUs per application instance is four. The total number of CPUs for an application is the value set here multiplied by the number of application instances.
+* **vCPU**: The total number of CPUs for an application is the value set here multiplied by the number of application instances.
-* **Memory**: The maximum amount of memory per application instance is 8 GB. The total amount of memory for an application is the value set here multiplied by the number of application instances.
+* **Memory**: The total amount of memory for an application is the value set here multiplied by the number of application instances.
-* **instance count**: In the Standard plan, you can scale out to a maximum of 20 instances. This value changes the number of separate running instances of the Spring application.
+* **instance count**: This value changes the number of separate running instances of the Spring application.
Be sure to select **Save** to apply your scaling settings.
After a few seconds, the scaling changes you make are reflected on the **Overview** page of the app. Select **App instance** in the navigation pane for details about the instance of the app.
-## Upgrade to the Standard plan
-
-If you're on the Basic plan and constrained by current limits, you can upgrade to the Standard plan. For more information, see [Quotas and service plans for Azure Spring Apps](./quotas.md) and [Migrate an Azure Spring Apps Basic or Standard plan instance to the Enterprise plan](how-to-migrate-standard-tier-to-enterprise-tier.md).
+> [!NOTE]
+> For more information about the maximum number of CPUs, the amount of memory, and the instance count, see [Quotas and service plans for Azure Spring Apps](./quotas.md).
## Next steps
storage Access Tiers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-overview.md
The cold tier is now generally available in all public and Azure Government regi
- [Object replication](object-replication-overview.md) is not yet compatible with the cold tier. - The default access tier setting of the account can't be set to cold tier.-- Setting the cold tier in a batch call is not yet supported (For example: using the [Blob Batch](/rest/api/storageservices/blob-batch) REST operation along with the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) subrequest). ### Required versions of REST, SDKs, and tools
stream-analytics Service Bus Queues Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/service-bus-queues-output.md
The following image is of the expected output message properties inspected in Ev
## System properties
-You can attach query columns as [system properties](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage#properties) to your outgoing service bus Queue or Topic messages.
+You can attach query columns as [system properties](/dotnet/api/azure.messaging.servicebus.servicebusmessage#properties) to your outgoing service bus Queue or Topic messages.
-These columns don't go into the payload instead the corresponding BrokeredMessage [system property](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage#properties) is populated with the query column values.
+These columns don't go into the payload; instead, the corresponding ServiceBusMessage [system property](/dotnet/api/azure.messaging.servicebus.servicebusmessage#properties) is populated with the query column values.
These system properties are supported: `MessageId, ContentType, Label, PartitionKey, ReplyTo, SessionId, CorrelationId, To, ForcePersistence, TimeToLive, ScheduledEnqueueTimeUtc`. String values of these columns are parsed as the corresponding system property value type, and any parsing failures are treated as data errors.
stream-analytics Service Bus Topics Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/service-bus-topics-output.md
The following image is of the expected output message properties inspected in Ev
## System properties
-You can attach query columns as [system properties](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage#properties) to your outgoing service bus Queue or Topic messages.
-These columns don't go into the payload instead the corresponding BrokeredMessage [system property](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage#properties) is populated with the query column values.
+You can attach query columns as [system properties](/dotnet/api/azure.messaging.servicebus.servicebusmessage#properties) to your outgoing service bus Queue or Topic messages.
+These columns don't go into the payload; instead, the corresponding ServiceBusMessage [system property](/dotnet/api/azure.messaging.servicebus.servicebusmessage#properties) is populated with the query column values.
These system properties are supported: `MessageId, ContentType, Label, PartitionKey, ReplyTo, SessionId, CorrelationId, To, ForcePersistence, TimeToLive, ScheduledEnqueueTimeUtc`. String values of these columns are parsed as the corresponding system property value type, and any parsing failures are treated as data errors.
synapse-analytics Get Started Analyze Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-sql-pool.md
Title: 'Tutorial: Get started analyze data with dedicated SQL pools'
-description: In this tutorial, you'll use the NYC Taxi sample data to explore SQL pool's analytic capabilities.
+ Title: "Tutorial: Get started analyze data with dedicated SQL pools"
+description: In this tutorial, use the NYC Taxi sample data to explore SQL pool's analytic capabilities.
Last updated : 10/16/2023 Previously updated : 11/18/2022 # Analyze data with dedicated SQL pools
-In this tutorial, you'll use the NYC Taxi data to explore a dedicated SQL pool's capabilities.
+In this tutorial, use the NYC Taxi data to explore a dedicated SQL pool's capabilities.
## Create a dedicated SQL pool 1. In Synapse Studio, on the left-side pane, select **Manage** > **SQL pools** under **Analytics pools**.
-1. Select **New**
-1. For **Dedicated SQL pool name** select **SQLPOOL1**
-1. For **Performance level** choose **DW100C**
+1. Select **New**.
+1. For **Dedicated SQL pool name**, select `SQLPOOL1`.
+1. For **Performance level**, choose **DW100C**.
1. Select **Review + create** > **Create**. Your dedicated SQL pool will be ready in a few minutes.
-Your dedicated SQL pool is associated with a SQL database that's also called **SQLPOOL1**.
+Your dedicated SQL pool is associated with a SQL database that's also called `SQLPOOL1`.
+ 1. Navigate to **Data** > **Workspace**.
-1. You should see a database named **SQLPOOL1**. If you do not see it, click **Refresh**.
+1. You should see a database named **SQLPOOL1**. If you do not see it, select **Refresh**.
A dedicated SQL pool consumes billable resources as long as it's active. You can pause the pool later to reduce costs.
A dedicated SQL pool consumes billable resources as long as it's active. You can
## Load the NYC Taxi Data into SQLPOOL1
-1. In Synapse Studio, navigate to the **Develop** hub, click the **+** button to add new resource, then create new SQL script.
-1. Select the pool 'SQLPOOL1' (pool created in [STEP 1](./get-started-create-workspace.md) of this tutorial) in **Connect to** drop down list above the script.
+1. In Synapse Studio, navigate to the **Develop** hub, select the **+** button to add a new resource, and then create a new SQL script.
+1. Select the pool `SQLPOOL1` (the pool created in [STEP 1](./get-started-create-workspace.md) of this tutorial) in the **Connect to** drop-down list above the script.
1. Enter the following code: ```sql IF NOT EXISTS (SELECT * FROM sys.objects O JOIN sys.schemas S ON O.schema_id = S.schema_id WHERE O.NAME = 'NYCTaxiTripSmall' AND O.TYPE = 'U' AND S.NAME = 'dbo') CREATE TABLE dbo.NYCTaxiTripSmall (
- [DateID] int,
- [MedallionID] int,
- [HackneyLicenseID] int,
- [PickupTimeID] int,
- [DropoffTimeID] int,
- [PickupGeographyID] int,
- [DropoffGeographyID] int,
- [PickupLatitude] float,
- [PickupLongitude] float,
- [PickupLatLong] nvarchar(4000),
- [DropoffLatitude] float,
- [DropoffLongitude] float,
- [DropoffLatLong] nvarchar(4000),
- [PassengerCount] int,
- [TripDurationSeconds] int,
- [TripDistanceMiles] float,
- [PaymentType] nvarchar(4000),
- [FareAmount] numeric(19,4),
- [SurchargeAmount] numeric(19,4),
- [TaxAmount] numeric(19,4),
- [TipAmount] numeric(19,4),
- [TollsAmount] numeric(19,4),
- [TotalAmount] numeric(19,4)
+ [VendorID] bigint,
+ [store_and_fwd_flag] nvarchar(1) NULL,
+ [RatecodeID] float NULL,
+ [PULocationID] bigint NULL,
+ [DOLocationID] bigint NULL,
+ [passenger_count] float NULL,
+ [trip_distance] float NULL,
+ [fare_amount] float NULL,
+ [extra] float NULL,
+ [mta_tax] float NULL,
+ [tip_amount] float NULL,
+ [tolls_amount] float NULL,
+ [ehail_fee] float NULL,
+ [improvement_surcharge] float NULL,
+ [total_amount] float NULL,
+ [payment_type] float NULL,
+ [trip_type] float NULL,
+ [congestion_surcharge] float NULL
) WITH (
A dedicated SQL pool consumes billable resources as long as it's active. You can
GO COPY INTO dbo.NYCTaxiTripSmall
- (DateID 1, MedallionID 2, HackneyLicenseID 3, PickupTimeID 4, DropoffTimeID 5,
- PickupGeographyID 6, DropoffGeographyID 7, PickupLatitude 8, PickupLongitude 9,
- PickupLatLong 10, DropoffLatitude 11, DropoffLongitude 12, DropoffLatLong 13,
- PassengerCount 14, TripDurationSeconds 15, TripDistanceMiles 16, PaymentType 17,
- FareAmount 18, SurchargeAmount 19, TaxAmount 20, TipAmount 21, TollsAmount 22,
- TotalAmount 23)
+ (VendorID 1, store_and_fwd_flag 4, RatecodeID 5, PULocationID 6 , DOLocationID 7,
+ passenger_count 8,trip_distance 9, fare_amount 10, extra 11, mta_tax 12, tip_amount 13,
+ tolls_amount 14, ehail_fee 15, improvement_surcharge 16, total_amount 17,
+ payment_type 18, trip_type 19, congestion_surcharge 20 )
FROM 'https://contosolake.dfs.core.windows.net/users/NYCTripSmall.parquet' WITH (
A dedicated SQL pool consumes billable resources as long as it's active. You can
,IDENTITY_INSERT = 'OFF' ) ```
-1. Click the **Run** button to execute the script.
-1. This script will finish in less than 60 seconds. It loads 2 million rows of NYC Taxi data into a table called `dbo.NYCTaxiTripSmall`.
+1. Select the **Run** button to execute the script.
+1. This script finishes in less than 60 seconds. It loads 2 million rows of NYC Taxi data into a table called `dbo.NYCTaxiTripSmall`.
## Explore the NYC Taxi data in the dedicated SQL pool 1. In Synapse Studio, go to the **Data** hub. 1. Go to **SQLPOOL1** > **Tables**.
-3. Right-click the **dbo.NYCTaxiTripSmall** table and select **New SQL Script** > **Select TOP 100 Rows**.
-4. Wait while a new SQL script is created and runs.
-5. Notice that at the top of the SQL script **Connect to** is automatically set to the SQL pool called **SQLPOOL1**.
-6. Replace the text of the SQL script with this code and run it.
+1. Right-click the **dbo.NYCTaxiTripSmall** table and select **New SQL Script** > **Select TOP 100 Rows**.
+1. Wait while a new SQL script is created and runs.
+1. At the top of the SQL script, **Connect to** is automatically set to the SQL pool called **SQLPOOL1**.
+1. Replace the text of the SQL script with this code and run it.
```sql
- SELECT PassengerCount,
- SUM(TripDistanceMiles) as SumTripDistance,
- AVG(TripDistanceMiles) as AvgTripDistance
+ SELECT passenger_count as PassengerCount,
+ SUM(trip_distance) as SumTripDistance_miles,
+ AVG(trip_distance) as AvgTripDistance_miles
INTO dbo.PassengerCountStats FROM dbo.NYCTaxiTripSmall
- WHERE TripDistanceMiles > 0 AND PassengerCount > 0
- GROUP BY PassengerCount;
+ WHERE trip_distance > 0 AND passenger_count > 0
+ GROUP BY passenger_count;
+ SELECT * FROM dbo.PassengerCountStats
- ORDER BY PassengerCount;
+ ORDER BY passenger_count;
```
- This query shows how the total trip distances and average trip distance relate to the number of passengers.
-1. In the SQL script result window, change the **View** to **Chart** to see a visualization of the results as a line chart.
+ This query creates a table `dbo.PassengerCountStats` with aggregate data from the `trip_distance` field, then queries the new table. The data shows how the total trip distances and average trip distance relate to the number of passengers.
+1. In the SQL script result window, change the **View** to **Chart** to see a visualization of the results as a line chart. Change **Category column** to `PassengerCount`.
-## Next steps
+## Next step
> [!div class="nextstepaction"] > [Analyze data in an Azure Storage account](get-started-analyze-storage.md)
synapse-analytics Get Started Visualize Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-visualize-power-bi.md
Title: 'Tutorial: Get started with Azure Synapse Analytics - visualize workspace data with Power BI'
-description: In this tutorial, you'll learn how to use Power BI to visualize data in Azure Synapse Analytics.
+ Title: "Tutorial: Get started with Azure Synapse Analytics - visualize workspace data with Power BI"
+description: In this tutorial, you learn how to use Power BI to visualize data in Azure Synapse Analytics.
Last updated : 10/16/2023 Previously updated : 03/25/2021 # Visualize data with Power BI
-In this tutorial, you'll learn how to create a Power BI workspace, link your Azure Synapse workspace, and create a Power BI data set that utilizes data in your Azure Synapse workspace.
+In this tutorial, you learn how to create a Power BI workspace, link your Azure Synapse workspace, and create a Power BI data set that utilizes data in your Azure Synapse workspace.
## Prerequisites To complete this tutorial, [install Power BI Desktop](https://aka.ms/pbidesktopstore).
You can link a Power BI workspace to your Azure Synapse workspace. This capabili
### Create a Power BI workspace 1. Sign in to [powerbi.microsoft.com](https://powerbi.microsoft.com/).
-1. Click on **Workspaces**, then select **Create a workspace**. Create a new Power BI workspace named **NYCTaxiWorkspace1** or similar, since this name must be unique.
+1. Select **Workspaces**, then select **Create a workspace**. Create a new Power BI workspace named **NYCTaxiWorkspace1** or similar, since this name must be unique.
### Link your Azure Synapse workspace to your new Power BI workspace 1. In Synapse Studio, go to **Manage** > **Linked Services**. 1. Select **New** > **Connect to Power BI**.
-1. Set **Name** to **NYCTaxiWorkspace1**.
-1. Set **Workspace name** to the Power BI workspace you created above, similar to **NYCTaxiWorkspace1**.
+1. Set **Name** to **NYCTaxiWorkspace1** or similar.
+1. Set **Workspace name** to the Power BI workspace you created earlier, similar to **NYCTaxiWorkspace1**.
1. Select **Create**. ### Create a Power BI dataset that uses data in your Azure Synapse workspace 1. In Synapse Studio, go to **Develop** > **Power BI**.
-1. Go to **NYCTaxiWorkspace1** > **Power BI datasets** and select **New Power BI dataset**. Click **Start**.
-1. Select the **SQLPOOL1** data source, click **Continue**.
-1. Click **Download** to download the .pbids file for your **NYCTaxiWorkspace1SQLPOOL1.pbids** file. Click **Continue**.
-1. Open the downloaded **.pbids** file. Power BI Desktop opens and automatically connects to **SQLDB1** in your Azure Synapse workspace.
+1. Go to **NYCTaxiWorkspace1** > **Power BI datasets** and select **New Power BI dataset**. Select **Start**.
+1. Select the **SQLPOOL1** data source, and then select **Continue**.
+1. Select **Download** to download the `NYCTaxiWorkspace1SQLPOOL1.pbids` file. Select **Continue**.
+1. Open the downloaded `.pbids` file. Power BI Desktop opens and automatically connects to **SQLDB1** in your Azure Synapse workspace.
1. If you see a dialog box appear called **SQL Server database**: 1. Select **Microsoft account**. 1. Select **Sign in** and sign in to your account.
You can link a Power BI workspace to your Azure Synapse workspace. This capabili
1. After the **Navigator** dialog box opens, check the **PassengerCountStats** table and select **Load**. 1. After the **Connection settings** dialog box appears, select **DirectQuery** > **OK**. 1. Select the **Report** button on the left side.
-1. Under **Visualizations**, click to the line chart icon to add a **Line chart** to your report.
- 1. Under **Fields**, drag the **PassengerCount** column to **Visualizations** > **Axis**.
- 1. Drag the **SumTripDistance** and **AvgTripDistance** columns to **Visualizations** > **Values**.
+1. Under **Visualizations**, select the line chart icon to add a **Line chart** to your report.
+ 1. Under **Fields**, drag the `PassengerCount` column to **Visualizations** > **Axis**.
+ 1. Drag the `SumTripDistance` and `AvgTripDistance` columns to **Visualizations** > **Values**.
1. On the **Home** tab, select **Publish**. 1. Select **Save** to save your changes.
-1. Choose the file name **PassengerAnalysis.pbix**, and then select **Save**.
-1. In the **Publish to Power BI** window, under **Select a destination**, choose your **NYCTaxiWorkspace1**, and then click **Select**.
+1. Choose the file name `PassengerAnalysis.pbix`, and then select **Save**.
+1. In the **Publish to Power BI** window, under **Select a destination**, choose your `NYCTaxiWorkspace1`, and then select **Select**.
1. Wait for publishing to finish. ### Configure authentication for your dataset
You can link a Power BI workspace to your Azure Synapse workspace. This capabili
1. On the left side, under **Workspaces**, select the **NYCTaxiWorkspace1** workspace. 1. Inside that workspace, locate a dataset called **Passenger Analysis** and a report called **Passenger Analysis**. 1. Hover over the **PassengerAnalysis** dataset, select the ellipsis (...) button, and then select **Settings**.
-1. In **Data source credentials**, click **Edit**, set the **Authentication method** to **OAuth2**, and then select **Sign in**.
+1. In **Data source credentials**, select **Edit**, set the **Authentication method** to **OAuth2**, and then select **Sign in**.
### Edit a report in Synapse Studio 1. Go back to Synapse Studio and select **Close and refresh**. 1. Go to the **Develop** hub.
-1. To the right of the **Power BI** layer, ellipsis (...) button, and click **refresh** to refresh the **Power BI reports** node.
+1. To the right of the **Power BI** layer, select the ellipsis (...) button, and then select **Refresh** to refresh the **Power BI reports** node.
1. Under **Power BI** you should see: * In **NYCTaxiWorkspace1** > **Power BI datasets**, a new dataset called **PassengerAnalysis**. * Under **NYCTaxiWorkspace1** > **Power BI reports**, a new report called **PassengerAnalysis**. 1. Select the **PassengerAnalysis** report. The report opens and you can edit it directly within Synapse Studio. --
-## Next steps
+## Next step
> [!div class="nextstepaction"] > [Monitor](get-started-monitor.md)
-
-
virtual-machines Image Builder Triggers How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-builder-triggers-how-to.md
Title: How to use Azure Image Builder triggers preview to set up an automatic image build
+ Title: How to use Azure Image Builder triggers to set up an automatic image build
description: Use triggers in Azure Image Builder to set up automatic image builds when criteria are met in a build pipeline
Previously updated : 06/05/2023 Last updated : 10/16/2023
You can use triggers in Azure Image Builder (AIB) to set up automatic image builds when certain criteria are met in your build pipeline. > [!IMPORTANT]
-> Azure Image Builder triggers is currently in Preview. Please be informed that there exists a restriction on the number of triggers allowable per region, specifically 100 per region per subscription.
+> The number of triggers is limited to 100 per region per subscription.
> [!NOTE] > Currently, we only support setting a trigger for a new source image, but we do expect to support different kinds of triggers in the future.
Register the auto image build triggers feature:
az feature register --namespace Microsoft.VirtualMachineImages --name Triggers ```
-To register the auto image build triggers feature using PowerShell, run the following command:
-```azurepowershell-interactive
-Register-AzProviderPreviewFeature -ProviderNamespace Microsoft.VirtualMachineImages -Name Triggers
-```
+ ### Set variables First, you need to set some variables that you'll repeatedly use in commands.
virtual-machines Oracle Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/oracle-create-upload-vhd.md
Preparing an Oracle Linux 7 virtual machine for Azure is similar to Oracle Linux
7. Install the python-pyasn1 package by running the following command: ```bash
- sudo yum install python-pyasn1
+ sudo yum install python3-pyasn1
``` 8. Run the following command to clear the current yum metadata and install any updates:
virtual-machines Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/quick-create-portal.md
Previously updated : 09/15/2023 Last updated : 10/16/2023
If the VM is still needed, Azure provides an Auto-shutdown feature for virtual m
> [!NOTE] > Remember to configure the time zone correctly to match your requirements, as (UTC) Coordinated Universal Time is the default setting in the Time zone dropdown.
+For more information, see [Auto-shutdown](/azure/virtual-machines/auto-shutdown-vm).
+ ## Next steps In this quickstart, you deployed a virtual machine, created a Network Security Group and rule, and installed a basic web server.
virtual-machines Nct4 V3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nct4-v3-series.md
Nvidia NVLink Interconnect: Not Supported<br>
| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | GPU | GPU memory: GiB | Max data disks | Max NICs / Expected network bandwidth (Mbps) | | | | | | | | | | | Standard_NC4as_T4_v3 |4 |28 |180 | 1 | 16 | 8 | 2 / 8000 |
-| Standard_NC8as_T4_v3 |8 |56 |360 | 1 | 16 | 16 | 4 / 8000 |
-| Standard_NC16as_T4_v3 |16 |110 |360 | 1 | 16 | 32 | 8 / 8000 |
+| Standard_NC8as_T4_v3 |8 |56 |352 | 1 | 16 | 16 | 4 / 8000 |
+| Standard_NC16as_T4_v3 |16 |110 |352 | 1 | 16 | 32 | 8 / 8000 |
| Standard_NC64as_T4_v3 |64 |440 |2880 | 4 | 64 | 32 | 8 / 32000 |
virtual-machines Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/quick-create-portal.md
Previously updated : 09/15/2023 Last updated : 10/16/2023
If the VM is still needed, Azure provides an Auto-shutdown feature for virtual m
> [!NOTE] > Remember to configure the time zone correctly to match your requirements, as (UTC) Coordinated Universal Time is the default setting in the Time zone dropdown.
+For more information, see [Auto-shutdown](/azure/virtual-machines/auto-shutdown-vm).
+ ## Next steps In this quickstart, you deployed a simple virtual machine, opened a network port for web traffic, and installed a basic web server. To learn more about Azure virtual machines, continue to the tutorial for Windows VMs.
vpn-gateway Vpn Gateway About Vpn Gateway Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md
The values in this article apply VPN gateways (virtual network gateways that use
Currently, Azure supports two gateway VPN types: route-based VPN gateways and policy-based VPN gateways. They're built on different internal platforms, which result in different specifications.
-As of Oct 1, 2023, you can't create a policy-based VPN gateway. All new VPN gateways will automatically be created as route-based. If you already have a policy-based gateway, you don't need to upgrade your gateway to route-based.
+As of Oct 1, 2023, you can't create a policy-based VPN gateway through the Azure portal; all new VPN gateways are automatically created as route-based. You can still create policy-based gateways by using PowerShell or the Azure CLI. If you already have a policy-based gateway, you don't need to upgrade it to route-based.
Previously, the older gateway SKUs didn't support IKEv1 for route-based gateways. Now, most of the current gateway SKUs support both IKEv1 and IKEv2.
web-application-firewall Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/quick-create-bicep.md
Previously updated : 06/22/2022 Last updated : 10/16/2023
web-application-firewall Cdn Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/cdn/cdn-overview.md
Previously updated : 05/26/2022 Last updated : 10/16/2023
You can configure a WAF policy and associate that policy to one or more CDN endp
- custom rules that you can create. -- managed rule sets that are a collection of Azure-managed pre-configured rules.
+- managed rule sets that are a collection of Azure-managed preconfigured rules.
When both are present, custom rules are processed before the rules in a managed rule set. A rule is made of a match condition, a priority, and an action. The supported action types are *ALLOW*, *BLOCK*, *LOG*, and *REDIRECT*. You can create a fully customized policy that meets your specific application protection requirements by combining managed and custom rules.
You can choose one of the following actions when a request matches a rule's cond
A WAF policy can consist of two types of security rules: - *custom rules*: rules that you can create yourself. -- *managed rule sets*: Azure managed pre-configured set of rules that you can enable.
+- *managed rule sets*: Azure managed preconfigured set of rules that you can enable.
### Custom rules
A rate control rule limits abnormally high traffic from any client IP address.
### Azure-managed rule sets
-Azure-managed rule sets provide an easy way to deploy protection against a common set of security threats. Since these rulesets are managed by Azure, the rules are updated as needed to protect against new attack signatures. The Azure managed Default Rule Set includes rules against the following threat categories:
+Azure-managed rule sets provide an easy way to deploy protection against a common set of security threats. Since Azure manages these rulesets, the rules are updated as needed to protect against new attack signatures. The Azure managed Default Rule Set includes rules against the following threat categories:
- Cross-site scripting - Java attacks