Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory | Plan Cloud Hr Provision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-cloud-hr-provision.md | The following example describes the end-to-end user provisioning solution archit The following key steps are indicated in the diagram:   1. **HR team** performs the transactions in the cloud HR app tenant.-2. **Azure AD provisioning service** runs the scheduled cycles from the cloud HR app tenant and identifies changes that need to be processed for sync with Active Directory. +2. **Azure AD provisioning service** runs the scheduled cycles from the cloud HR app tenant and identifies changes to process for sync with Active Directory. 3. **Azure AD provisioning service** invokes the Azure AD Connect provisioning agent with a request payload that contains Active Directory account create, update, enable, and disable operations. 4. **Azure AD Connect provisioning agent** uses a service account to manage Active Directory account data. 5. **Azure AD Connect** runs delta [sync](../hybrid/how-to-connect-sync-whatis.md) to pull updates in Active Directory. For high availability, you can deploy more than one Azure AD Connect provisionin ## Design HR provisioning app deployment topology -Depending on the number of Active Directory domains involved in the inbound user provisioning configuration, you may consider one of the following deployment topologies. Each topology diagram uses an example deployment scenario to highlight configuration aspects. Use the example that closely resembles your deployment requirement to determine the configuration that will meet your needs. +Depending on the number of Active Directory domains involved in the inbound user provisioning configuration, you may consider one of the following deployment topologies. Each topology diagram uses an example deployment scenario to highlight configuration aspects. Use the example that closely resembles your deployment requirement to determine the configuration that meets your needs. -### Deployment topology 1: Single app to provision all users from Cloud HR to single on-premises Active Directory domain +### Deployment topology one: Single app to provision all users from Cloud HR to single on-premises Active Directory domain -This is the most common deployment topology. Use this topology, if you need to provision all users from Cloud HR to a single AD domain and same provisioning rules apply to all users. +Deployment topology one is the most common deployment topology. Use this topology, if you need to provision all users from Cloud HR to a single AD domain and same provisioning rules apply to all users. :::image type="content" source="media/plan-cloud-hr-provision/topology-1-single-app-with-single-ad-domain.png" alt-text="Screenshot of single app to provision users from Cloud HR to single AD domain" lightbox="media/plan-cloud-hr-provision/topology-1-single-app-with-single-ad-domain.png"::: This is the most common deployment topology. Use this topology, if you need to p * When configuring the provisioning app, select the AD domain from the dropdown of registered domains. * If you're using scoping filters, configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations. 
-### Deployment topology 2: Separate apps to provision distinct user sets from Cloud HR to single on-premises Active Directory domain +### Deployment topology two: Separate apps to provision distinct user sets from Cloud HR to single on-premises Active Directory domain This topology supports business requirements where attribute mapping and provisioning logic differ based on user type (employee/contractor), user location or user's business unit. You can also use this topology to delegate the administration and maintenance of inbound user provisioning based on division or country. This topology supports business requirements where attribute mapping and provisi **Salient configuration aspects** * Setup two provisioning agent nodes for high availability and failover. * Create an HR2AD provisioning app for each distinct user set that you want to provision. -* Use [scoping filters](define-conditional-rules-for-provisioning-user-accounts.md) in the provisioning app to define users to be processed by each app. -* To handle the scenario where managers references need to be resolved across distinct user sets (e.g. contractors reporting to managers who are employees), you can create a separate HR2AD provisioning app for updating only the *manager* attribute. Set the scope of this app to all users. +* Use [scoping filters](define-conditional-rules-for-provisioning-user-accounts.md) in the provisioning app to define users to process each app. +* In the scenario where manager references need to be resolved across distinct user sets, create a separate HR2AD provisioning app. For example, contractors reporting to managers who are employees. Use the separate app to update only the *manager* attribute. Set the scope of this app to all users. * Configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations. > [!NOTE] > If you do not have a test AD domain and use a TEST OU container in AD, then you may use this topology to create two separate apps *HR2AD (Prod)* and *HR2AD (Test)*. Use the *HR2AD (Test)* app to test your attribute mapping changes before promoting it to the *HR2AD (Prod)* app. -### Deployment topology 3: Separate apps to provision distinct user sets from Cloud HR to multiple on-premises Active Directory domains (no cross-domain visibility) +### Deployment topology three: Separate apps to provision distinct user sets from Cloud HR to multiple on-premises Active Directory domains (no cross-domain visibility) -Use this topology to manage multiple independent child AD domains belonging to the same forest, if managers always exist in the same domain as the user and your unique ID generation rules for attributes like *userPrincipalName*, *samAccountName* and *mail* doesn't require a forest-wide lookup. It also offers the flexibility of delegating the administration of each provisioning job by domain boundary. +Use topology three to manage multiple independent child AD domains belonging to the same forest. Make sure that managers always exist in the same domain as the user. Also make sure that your unique ID generation rules for attributes like *userPrincipalName*, *samAccountName*, and *mail* don't require a forest-wide lookup. Topology three offers the flexibility of delegating the administration of each provisioning job by domain boundary. -For example: In the diagram below, the provisioning apps are set up for each geographic region: North America (NA), Europe, Middle East and Africa (EMEA) and Asia Pacific (APAC). 
Depending on the location, users are provisioned to the respective AD domain. Delegated administration of the provisioning app is possible so that *EMEA administrators* can independently manage the provisioning configuration of users belonging to the EMEA region. +For example: In the diagram, the provisioning apps are set up for each geographic region: North America (NA), Europe, Middle East and Africa (EMEA) and Asia Pacific (APAC). Depending on the location, users are provisioned to the respective AD domain. Delegated administration of the provisioning app is possible so that *EMEA administrators* can independently manage the provisioning configuration of users belonging to the EMEA region. :::image type="content" source="media/plan-cloud-hr-provision/topology-3-separate-apps-with-multiple-ad-domains-no-cross-domain.png" alt-text="Screenshot of separate apps to provision users from Cloud HR to multiple AD domains" lightbox="media/plan-cloud-hr-provision/topology-3-separate-apps-with-multiple-ad-domains-no-cross-domain.png"::: For example: In the diagram below, the provisioning apps are set up for each geo * Configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations. -### Deployment topology 4: Separate apps to provision distinct user sets from Cloud HR to multiple on-premises Active Directory domains (with cross-domain visibility) +### Deployment topology four: Separate apps to provision distinct user sets from Cloud HR to multiple on-premises Active Directory domains (with cross-domain visibility) -Use this topology to manage multiple independent child AD domains belonging to the same forest, if a user's manager may exist in the different domain and your unique ID generation rules for attributes like *userPrincipalName*, *samAccountName* and *mail* requires a forest-wide lookup. +Use topology four to manage multiple independent child AD domains belonging to the same forest. A user's manager may exist in a different domain. Also, your unique ID generation rules for attributes like *userPrincipalName*, *samAccountName* and *mail* require a forest-wide lookup. -For example: In the diagram below, the provisioning apps are set up for each geographic region: North America (NA), Europe, Middle East and Africa (EMEA) and Asia Pacific (APAC). Depending on the location, users are provisioned to the respective AD domain. Cross-domain manager references and forest-wide lookup are handled by enabling referral chasing on the provisioning agent. +For example: In the diagram, the provisioning apps are set up for each geographic region: North America (NA), Europe, Middle East and Africa (EMEA) and Asia Pacific (APAC). Depending on the location, users are provisioned to the respective AD domain. Cross-domain manager references and forest-wide lookup are handled by enabling referral chasing on the provisioning agent. :::image type="content" source="media/plan-cloud-hr-provision/topology-4-separate-apps-with-multiple-ad-domains-cross-domain.png" alt-text="Screenshot of separate apps to provision users from Cloud HR to multiple AD domains with cross domain support" lightbox="media/plan-cloud-hr-provision/topology-4-separate-apps-with-multiple-ad-domains-cross-domain.png"::: For example: In the diagram below, the provisioning apps are set up for each geo Use this topology if you want to use a single provisioning app to manage users belonging to all your parent and child AD domains. 
This topology is recommended if provisioning rules are consistent across all domains and there's no requirement for delegated administration of provisioning jobs. This topology supports resolving cross-domain manager references and can perform forest-wide uniqueness check. -For example: In the diagram below, a single provisioning app manages users present in three different child domains grouped by region: North America (NA), Europe, Middle East and Africa (EMEA) and Asia Pacific (APAC). The attribute mapping for *parentDistinguishedName* is used to dynamically create a user in the appropriate child domain. Cross-domain manager references and forest-wide lookup are handled by enabling referral chasing on the provisioning agent. +For example: In the diagram, a single provisioning app manages users present in three different child domains grouped by region: North America (NA), Europe, Middle East and Africa (EMEA) and Asia Pacific (APAC). The attribute mapping for *parentDistinguishedName* is used to dynamically create a user in the appropriate child domain. Cross-domain manager references and forest-wide lookup are handled by enabling referral chasing on the provisioning agent. :::image type="content" source="media/plan-cloud-hr-provision/topology-5-single-app-with-multiple-ad-domains-cross-domain.png" alt-text="Screenshot of single app to provision users from Cloud HR to multiple AD domains with cross domain support" lightbox="media/plan-cloud-hr-provision/topology-5-single-app-with-multiple-ad-domains-cross-domain.png"::: Use this topology if your IT infrastructure has disconnected/disjoint AD forests ### Deployment topology 7: Separate apps to provision distinct users from multiple Cloud HR to disconnected on-premises Active Directory forests -In large organizations, it isn't uncommon to have multiple HR systems. During business M&A (mergers and acquisitions) scenarios, you may come across a need to connect your on-premises Active Directory to multiple HR sources. We recommend the topology below if you have multiple HR sources and would like to channel the identity data from these HR sources to either the same or different on-premises Active Directory domains. +In large organizations, it isn't uncommon to have multiple HR systems. During business M&A (mergers and acquisitions) scenarios, you may come across a need to connect your on-premises Active Directory to multiple HR sources. We recommend the topology if you have multiple HR sources and would like to channel the identity data from these HR sources to either the same or different on-premises Active Directory domains. :::image type="content" source="media/plan-cloud-hr-provision/topology-7-separate-apps-from-multiple-hr-to-disconnected-ad-forests.png" alt-text="Screenshot of separate apps to provision users from multiple Cloud HR to disconnected AD forests" lightbox="media/plan-cloud-hr-provision/topology-7-separate-apps-from-multiple-hr-to-disconnected-ad-forests.png"::: |
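For the provisioning topologies above, several of the configuration notes point to the skip out of scope deletions flag. A minimal PowerShell sketch of one way to set it follows, assuming the Microsoft Graph synchronization secrets endpoint and the `SkipOutOfScopeDeletions` key described in the linked article; verify both against that article before relying on it. The service principal ID is a placeholder.

```powershell
# Hedged sketch: set the SkipOutOfScopeDeletions flag on an HR2AD provisioning app.
# Assumptions to verify against the linked skip-out-of-scope-deletions article:
# the beta synchronization/secrets endpoint and the SkipOutOfScopeDeletions key name.
Connect-MgGraph -Scopes "Application.ReadWrite.All", "Synchronization.ReadWrite.All"

$servicePrincipalId = "<object ID of the provisioning app's service principal>"   # placeholder
$body = @{ value = @(@{ key = "SkipOutOfScopeDeletions"; value = "True" }) } | ConvertTo-Json -Depth 5

Invoke-MgGraphRequest -Method PUT `
    -Uri "https://graph.microsoft.com/beta/servicePrincipals/$servicePrincipalId/synchronization/secrets" `
    -Body $body -ContentType "application/json"
```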
active-directory | App Proxy Protect Ndes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/app-proxy-protect-ndes.md | + + Title: Integrate with Azure Active Directory Application Proxy on an NDES server +description: Guidance on deploying an Azure Active Directory Application Proxy to protect your NDES server. +++++++ Last updated : 04/19/2023++++# Integrate with Azure Active Directory Application Proxy on a Network Device Enrollment Service (NDES) server ++Azure Active Directory (AD) Application Proxy lets you publish applications inside your network. These applications are ones such as SharePoint sites, Microsoft Outlook Web App, and other web applications. It also provides secure access to users outside your network via Azure. ++If you're new to Azure AD Application Proxy and want to learn more, see [Remote access to on-premises applications through Azure AD Application Proxy](application-proxy.md). ++Azure AD Application Proxy is built on Azure. It gives you a massive amount of network bandwidth and server infrastructure for better protection against distributed denial-of-service (DDOS) attacks and superb availability. Furthermore, there's no need to open external firewall ports to your on-premises network and no DMZ server is required. All traffic is originated inbound. For a complete list of outbound ports, see [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory](./application-proxy-add-on-premises-application.md#prepare-your-on-premises-environment). ++> Azure AD Application Proxy is a feature that is available only if you are using the Premium or Basic editions of Azure Active Directory. For more information, see [Azure Active Directory pricing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). +> If you have Enterprise Mobility Suite (EMS) licenses, you are eligible to use this solution. +> The Azure AD Application Proxy connector only installs on Windows Server 2012 R2 or later. This is also a requirement of the NDES server. ++## Install and register the connector on the NDES server ++1. Sign in to the [Azure portal](https://portal.azure.com/) as an application administrator of the directory that uses Application Proxy. For example, if the tenant domain is contoso.com, the admin should be admin@contoso.com or any other admin alias on that domain. +1. Select your username in the upper-right corner. Verify you're signed in to a directory that uses Application Proxy. If you need to change directories, select **Switch directory** and choose a directory that uses Application Proxy. +1. In left navigation panel, select **Azure Active Directory**. +1. Under **Manage**, select **Application proxy**. +1. Select **Download connector service**. ++  ++1. Read the Terms of Service. When you're ready, select **Accept terms & Download**. +1. Copy the Azure AD Application Proxy connector setup file to your NDES server. + > You can install the connector on any server within your corporate network with access to NDES. You don't have to install it on the NDES server itself. +1. Run the setup file, such as *AADApplicationProxyConnectorInstaller.exe*. Accept the software license terms. +1. During the install, you're prompted to register the connector with the Application Proxy in your Azure AD directory. + * Provide the credentials for a global or application administrator in your Azure AD directory. 
The Azure AD global or application administrator credentials may be different from your Azure credentials in the portal. ++ > [!NOTE] + > The global or application administrator account used to register the connector must belong to the same directory where you enable the Application Proxy service. + > + > For example, if the Azure AD domain is *contoso.com*, the global/application administrator should be `admin@contoso.com` or another valid alias on that domain. ++ * If Internet Explorer Enhanced Security Configuration is turned on for the server where you install the connector, the registration screen might be blocked. To allow access, follow the instructions in the error message, or turn off Internet Explorer Enhanced Security during the install process. + * If connector registration fails, see [Troubleshoot Application Proxy](application-proxy-troubleshoot.md). +1. At the end of the setup, a note is shown for environments with an outbound proxy. To configure the Azure AD Application Proxy connector to work through the outbound proxy, run the provided script, such as `C:\Program Files\Microsoft AAD App Proxy connector\ConfigureOutBoundProxy.ps1`. +1. On the Application proxy page in the Azure portal, the new connector is listed with a status of *Active*, as shown in the following example: ++  ++ > [!NOTE] + > To provide high availability for applications authenticating through the Azure AD Application Proxy, you can install connectors on multiple VMs. Repeat the same steps listed in the previous section to install the connector on other servers joined to the Azure AD DS managed domain. ++1. After successful installation, go back to the Azure portal. ++1. Select **Enterprise applications**. ++  ++1. Select **+New Application**, and then select **On-premises application**. ++1. On the **Add your own on-premises application**, configure the following fields: ++ * **Name**: Enter a name for the application. + * **Internal Url**: Enter the internal URL/FQDN of your NDES server on which you installed the connector. + * **Pre Authentication**: Select **Passthrough**. It's not possible to use any form of pre authentication. The protocol used for Certificate Requests (SCEP) doesn't provide such an option. + * Copy the provided **External URL** to your clipboard. ++1. Select **+Add** to save your application. ++1. Test whether you can access your NDES server via the Azure AD Application proxy by pasting the link you copied in step 15 into a browser. You should see a default IIS welcome page. ++1. As a final test, add the *mscep.dll* path to the existing URL you pasted in the previous step: ++ `https://scep-test93635307549127448334.msappproxy.net/certsrv/mscep/mscep.dll` ++1. You should see an **HTTP Error 403 - Forbidden** response. ++1. Change the NDES URL provided (via Microsoft Intune) to devices. This change could either be in Microsoft Configuration Manager or the Microsoft Intune admin center. ++ * For Configuration Manager, go to the certificate registration point and adjust the URL. This URL is what devices call out to and present their challenge. + * For Intune standalone, either edit or create a new SCEP policy and add the new URL. ++## Next steps ++With the Azure AD Application Proxy integrated with NDES, publish applications for users to access. For more information, see [publish applications using Azure AD Application Proxy](./application-proxy-add-on-premises-application.md). |
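The verification steps in the entry above (an IIS welcome page at the External URL, then an expected 403 on the *mscep.dll* path) can also be checked from a console. The following is a small PowerShell sketch, not part of the original article; the external URL is a placeholder for the value copied from the app's **External URL** field.

```powershell
# Minimal sketch: confirm the published NDES endpoint answers through Application Proxy.
# Replace the placeholder with the External URL copied from your on-premises application.
$externalUrl = "https://<your-app>.msappproxy.net"

# The root should return the default IIS welcome page (HTTP 200).
Invoke-WebRequest -Uri "$externalUrl/" -UseBasicParsing | Select-Object StatusCode

# The mscep.dll path is expected to return HTTP 403 (Forbidden), which surfaces as an error.
try {
    Invoke-WebRequest -Uri "$externalUrl/certsrv/mscep/mscep.dll" -UseBasicParsing
}
catch {
    # In Windows PowerShell the 403 arrives as a web exception; read the code from the response.
    $_.Exception.Response.StatusCode.value__   # expect 403
}
```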
active-directory | Concept Sspr Howitworks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-howitworks.md | To get started with SSPR, complete the following tutorial: > [!div class="nextstepaction"] > [Tutorial: Enable self-service password reset (SSPR)](tutorial-enable-sspr.md)--The following articles provide additional information regarding password reset through Azure AD: --[Authentication]: ./media/concept-sspr-howitworks/manage-authentication-methods-for-password-reset.png "Azure AD authentication methods available and quantity required" -[Registration]: ./media/concept-sspr-howitworks/configure-registration-options.png "Configure SSPR registration options in the Azure portal" -[Writeback]: ./media/concept-sspr-howitworks/on-premises-integration.png "On-premises integration for SSPR in the Azure portal" |
active-directory | How To Mfa Authenticator Lite | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-authenticator-lite.md | ->[!NOTE] ->Rollout has not yet completed across Outlook applications. If this feature is enabled in your tenant, your users may not yet be prompted for the experience. To minimize user disruption, we recommend enabling this feature when the rollout completes. Microsoft Authenticator Lite is another surface for Azure Active Directory (Azure AD) users to complete multifactor authentication by using push notifications or time-based one-time passcodes (TOTP) on their Android or iOS device. With Authenticator Lite, users can satisfy a multifactor authentication requirement from the convenience of a familiar app. Authenticator Lite is currently enabled in [Outlook mobile](https://www.microsoft.com/microsoft-365/outlook-mobile-for-android-and-ios). Users receive a notification in Outlook mobile to approve or deny sign-in, or th | Operating system | Outlook version | |:-:|::|- |Android | 4.2309.1 | - |iOS | 4.2309.0 | + |Android | 4.2310.1 | + |iOS | 4.2312.1 | ## Enable Authenticator Lite By default, Authenticator Lite is [Microsoft managed](concept-authentication-def To enable Authenticator Lite in the Azure portal, complete the following steps: - 1. In the Azure portal, click Security > Authentication methods > Microsoft Authenticator. + 1. In the Azure portal, click Azure Active Directory > Security > Authentication methods > Microsoft Authenticator. + In the Entra admin center, on the sidebar select Azure Active Directory > Protect & Secure > Authentication methods > Microsoft Authenticator. 2. On the Enable and Target tab, click Yes and All users to enable the policy for everyone or add selected users and groups. Set the Authentication mode for these users/groups to Any or Push. |
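As a companion to the portal steps in the entry above, the same policy can in principle be changed through Microsoft Graph. The sketch below is an assumption-heavy illustration: it presumes Authenticator Lite is governed by the `companionAppAllowedState` feature setting on the MicrosoftAuthenticator authentication method configuration and uses the beta endpoint, so confirm both against the current Graph reference before use.

```powershell
# Hedged sketch: enable Authenticator Lite for all users through Microsoft Graph.
# Assumption to verify: Authenticator Lite is controlled by the companionAppAllowedState
# feature setting on the MicrosoftAuthenticator authentication method configuration.
Connect-MgGraph -Scopes "Policy.ReadWrite.AuthenticationMethod"

$body = @{
    "@odata.type"   = "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration"
    featureSettings = @{
        companionAppAllowedState = @{
            state         = "enabled"
            includeTarget = @{ targetType = "group"; id = "all_users" }
        }
    }
} | ConvertTo-Json -Depth 10

Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/beta/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator" `
    -Body $body -ContentType "application/json"
```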
active-directory | Msal Net Token Cache Serialization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-token-cache-serialization.md | If you're using the [MSAL library](/dotnet/api/microsoft.identity.client) direct | Extension method | Description | | - | | | [AddInMemoryTokenCaches](/dotnet/api/microsoft.identity.web.microsoftidentityappcallswebapiauthenticationbuilder.addinmemorytokencaches) | Creates a temporary cache in memory for token storage and retrieval. In-memory token caches are faster than other cache types, but their tokens aren't persisted between application restarts, and you can't control the cache size. In-memory caches are good for applications that don't require tokens to persist between app restarts. Use an in-memory token cache in apps that participate in machine-to-machine auth scenarios like services, daemons, and others that use [AcquireTokenForClient](/dotnet/api/microsoft.identity.client.acquiretokenforclientparameterbuilder) (the client credentials grant). In-memory token caches are also good for sample applications and during local app development. Microsoft.Identity.Web versions 1.19.0+ share an in-memory token cache across all application instances.-| [AddSessionTokenCaches](/dotnet/api/microsoft.identity.web.microsoftidentityappcallswebapiauthenticationbuilder.addsessiontokencaches) | The token cache is bound to the user session. This option isn't ideal if the ID token contains many claims, because the cookie becomes too large. +| [AddSessionTokenCaches](/dotnet/api/microsoft.identity.web.microsoftidentityappcallswebapiauthenticationbuilderextension.addsessiontokencaches) | The token cache is bound to the user session. This option isn't ideal if the ID token contains many claims, because the cookie becomes too large. | `AddDistributedTokenCaches` | The token cache is an adapter against the ASP.NET Core `IDistributedCache` implementation. It enables you to choose between a distributed memory cache, a Redis cache, a distributed NCache, or a SQL Server cache. For details about the `IDistributedCache` implementations, see [Distributed memory cache](/aspnet/core/performance/caching/distributed). |
active-directory | Security Best Practices For App Registration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/security-best-practices-for-app-registration.md | Certificates and secrets, also known as credentials, are a vital part of an appl Consider the following guidance related to certificates and secrets: - Always use [certificate credentials](./active-directory-certificate-credentials.md) whenever possible and don't use password credentials, also known as *secrets*. While it's convenient to use password secrets as a credential, when possible use x509 certificates as the only credential type for getting tokens for an application.+ - Configure [application authentication method policies](/graph/api/resources/applicationauthenticationmethodpolicy) to govern the use of secrets by limiting their lifetimes or blocking their use altogether. - Use Key Vault with [managed identities](../managed-identities-azure-resources/overview.md) to manage credentials for an application. - If an application is used only as a Public Client App (allows users to sign in using a public endpoint), make sure that there are no credentials specified on the application object. - Review the credentials used in applications for freshness of use and their expiration. An unused credential on an application can result in a security breach. Rollover credentials frequently and don't share credentials across applications. Don't have many credentials on one application. |
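To make the certificate-over-secret guidance in the entry above concrete, here is a hedged Microsoft Graph PowerShell sketch for attaching an x509 certificate credential to an app registration. The certificate path and application object ID are placeholders, and the `-KeyCredentials` update replaces the existing collection.

```powershell
# Hedged sketch: attach a certificate credential to an app registration.
# Placeholders: certificate path and the application OBJECT ID (not the client ID).
# Caution: -KeyCredentials overwrites the existing key credential collection,
# so include any certificates you want to keep.
Connect-MgGraph -Scopes "Application.ReadWrite.All"

$cert = [System.Security.Cryptography.X509Certificates.X509Certificate2]::new("C:\certs\app-cert.cer")

Update-MgApplication -ApplicationId "<application object ID>" -KeyCredentials @(
    @{
        Type  = "AsymmetricX509Cert"
        Usage = "Verify"
        Key   = $cert.RawData
    }
)
```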
active-directory | Configure Logic App Lifecycle Workflows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/configure-logic-app-lifecycle-workflows.md | +## Determine type of token security of your custom task extension ++Before configuring your Azure Logic App custom extension for use with Lifecycle Workflows, you must first determine what type of token security it has. The two token security types are: ++- Normal +- Proof of Possession (POP) +++To determine the security token type of your custom task extension, you'd check the **Custom extensions (Preview)** page: ++++> [!NOTE] +> New custom task extensions will only have the Proof of Possession (POP) token security type. Only task extensions created before the inclusion of the Proof of Possession token security type will have a type of Normal. + ## Configure existing Logic Apps for LCW use Making an Azure Logic app compatible to run with the **Custom Task Extension** requires the following steps: - Configure the logic app trigger-- Configure the callback action (only applicable to the callback scenario)-- Enable system assigned managed identity.-- Configure AuthZ policies.+- Configure the callback action (Only applicable to the callback scenario.) +- Enable system assigned managed identity (Always required for Normal security token type extensions. This is also the default for callback scenarios with custom task extensions. For more information on this, and other, custom task extension deployment scenarios, see: [Custom task extension deployment scenarios](lifecycle-workflow-extensibility.md#custom-task-extension-deployment-scenarios).) +- Configure AuthZ policies -To configure those you'll follow these steps: +To configure those, follow these steps: 1. Open the Azure Logic App you want to use with Lifecycle Workflow. Logic Apps may greet you with an introduction screen, which you can close with the X in the upper right corner. To configure those you'll follow these steps: 1. Select Save. -1. For Logic Apps authorization policy, we'll need the managed identities **Application ID**. Since the Azure portal only shows the Object ID, we need to look up the Application ID. You can search for the managed identity by Object ID under **Enterprise Applications in the Azure portal** to find the required Application ID. +## Configure authorization policy for custom task extension with POP security token type +If the security token type is **Proof of Possession (POP)** for your custom task extension, you'd set the authorization policy by following these steps: ++1. For Logic Apps authorization policy, we need the managed identity's **Application ID**. Since the Azure portal only shows the Object ID, we need to look up the Application ID. You can search for the managed identity by Object ID under **Enterprise Applications in the Azure AD Portal** to find the required Application ID. 1. Go back to the logic app you created, and select **Authorization**. -1. Create two authorization policies based on the tables below: +1. Create two authorization policies based on these tables: - Policy name: AzureADLifecycleWorkflowsAuthPolicy + Policy name: POP-Policy + + Policy type: (Preview) AADPOP + + |Claim |Value | + ||| + |Issuer | https://sts.windows.net/(Tenant ID)/ | + |appid | ce79fdc4-cd1d-4ea5-8139-e74d7dbe0bb7 | + |m | POST | + |u | management.azure.com | + |p | /subscriptions/(subscriptionId)/resourceGroups/(resourceGroupName)/providers/Microsoft.Logic/workflows/(LogicApp name) | +++1. 
Save the Authorization policy. +++> [!CAUTION] +> Please pay attention to the details as minor differences can lead to problems later. +- For Issuer, ensure you include the slash after your Tenant ID +- For appid, ensure the custom claim is "appid" in all lowercase. The appid value represents Lifecycle Workflows and is always the same. ++## Configure authorization policy for custom task extension with normal security token type ++If the security token type is **Normal** for your custom task extension, you'd set the authorization policy by following these steps: ++1. For Logic Apps authorization policy, we need the managed identity's **Application ID**. Since the Azure portal only shows the Object ID, we need to look up the Application ID. You can search for the managed identity by Object ID under **Enterprise Applications in the Azure AD Portal** to find the required Application ID. ++1. Go back to the logic app you created, and select **Authorization**. ++1. Create two authorization policies based on these tables: ++ Policy name: AzureADLifecycleWorkflowsAuthPolicy ++ Policy type: AAD |Claim |Value | ||| To configure those you'll follow these steps: |Audience | Application ID of your Logic Apps Managed Identity | |appid | ce79fdc4-cd1d-4ea5-8139-e74d7dbe0bb7 | - Policy name: AzureADLifecycleWorkflowsAuthPolicyV2App + Policy name: AzureADLifecycleWorkflowsAuthPolicyV2App ++ Policy type: AAD |Claim |Value | ||| To configure those you'll follow these steps: |azp | ce79fdc4-cd1d-4ea5-8139-e74d7dbe0bb7 | 1. Save the Authorization policy.-> [!NOTE] -> Due to a current bug in the Logic Apps UI you may have to save the authorization policy after each claim before adding another. > [!CAUTION] > Please pay attention to the details as minor differences can lead to problems later. |
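Both authorization-policy sections in the entry above rely on resolving the managed identity's Application ID from its Object ID. As an alternative to searching Enterprise Applications in the portal, a short Az PowerShell sketch (the object ID is a placeholder):

```powershell
# Small sketch: resolve the managed identity's Application ID from its Object ID.
# Requires the Az.Resources module; the object ID is a placeholder.
$managedIdentityObjectId = "<object ID of the logic app's managed identity>"

$sp = Get-AzADServicePrincipal -ObjectId $managedIdentityObjectId
$sp.AppId   # use this value where the policy tables call for the managed identity's Application ID
```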
active-directory | Entitlement Management Logic Apps Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logic-apps-integration.md | These triggers to Logic Apps are controlled in a tab within access package polic 1. The **Extension Configuration** tab allows you to decide if your extension has "launch and continue" or "launch and wait" behavior. With "Launch and continue" the linked policy action on the access package, such as a request, triggers the Logic App attached to the custom extension. After the Logic App is triggered, the entitlement management process associated with the access package will continue. For "Launch and wait", we'll pause the associated access package action until after the Logic App linked to the extension completes its task, and a resume action is sent by the admin to continue the process. If no response is sent back in the wait time period defined, this process would be considered a failure. This process is further described below in its own section [Configuring custom extensions that pause entitlement management processes](entitlement-management-logic-apps-integration.md#configuring-custom-extensions-that-pause-entitlement-management-processes). -1. In the **Details** tab, choose whether you'd like to use an existing Logic App. Selecting Yes in the field "Create new logic app" (default) creates a new blank Logic App that is already linked to this custom extension. Regardless, you need to provide: +1. In the **Details** tab, choose whether you'd like to use an existing consumption plan Logic App. Selecting Yes in the field "Create new logic app" (default) creates a new blank consumption plan Logic App that is already linked to this custom extension. Regardless, you need to provide: 1. An Azure subscription. A new update to the custom extensions feature is the ability to pause the access This pause process allows admins to have control of workflows they'd like to run before continuing with access lifecycle tasks in entitlement management. The only exception to this is if a timeout occurs. Launch and wait processes require a timeout of up to 14 days noted in minutes, hours, or days. If a resume response isn't sent back to entitlement management by the time the "timeout" period elapses, the entitlement management request workflow process pauses. -The admin is responsible for configuring an automated process that is able to send the API **resume request** payload back to entitlement management, once the Logic App workflow has completed. To send back the resume request payload, follow the instructions here in the graph API documents. See information here on the [resume request](/graph/api/accesspackageassignmentrequest-resume) +The admin is responsible for configuring an automated process that is able to send the API **resume request** payload back to entitlement management, once the Logic App workflow has completed. To send back the resume request payload, follow the instructions here in the graph API documents. See information here on the [resume request](/graph/api/accesspackageassignmentrequest-resume). Specifically, when an access package policy has been enabled to call out a custom extension and the request processing is waiting for the callback from the customer, the customer can initiate a resume action. 
It's performed on an [accessPackageAssignmentRequest](/graph/api/resources/accesspackageassignmentrequest) object whose **requestStatus** is in a **WaitingForCallback** state. The resume request can be sent back for the following stages: microsoft.graph.accessPackageCustomExtensionStage.assignmentRequestCreated microsoft.graph.accessPackageCustomExtensionStage.assignmentRequestApproved microsoft.graph.accessPackageCustomExtensionStage.assignmentRequestGranted -Microsoft.graph.accessPackageCustomExtensionStage.assignmentRequestRemoved +microsoft.graph.accessPackageCustomExtensionStage.assignmentRequestRemoved `` The following flow diagram shows the entitlement management callout to Logic Apps workflow: +The flow diagram shows: ++1. The user creates a custom endpoint able to receive the call from the Identity Service +1. The identity service makes a test call to confirm the endpoint can be called by the Identity Service +1. The User calls Graph API to request to add a user to an access package +1. The Identity Service is added to the queue triggering the backend workflow +1. Entitlement Management Service request processing calls the logic app with the request payload +1. Workflow expects the accepted code +1. The Entitlement Management Service waits for the blocking custom action to resume +1. The customer system calls the request resume API to the identity service to resume processing the request +1. The identity service adds the resume request message to the Entitlement Management Service queue resuming the backend workflow +1. The Entitlement Management Service is resumed from the blocked state + An example of a resume request payload is: ``` http |
active-directory | Assign User Or Group Access Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md | This article shows you how to assign users and groups to an enterprise applicati When you assign a group to an application, only users in the group will have access. The assignment doesn't cascade to nested groups. -Group-based assignment requires Azure Active Directory Premium P1 or P2 edition. Group-based assignment is supported for Security groups only. Nested group memberships and Microsoft 365 groups aren't currently supported. For more licensing requirements for the features discussed in this article, see the [Azure Active Directory pricing page](https://azure.microsoft.com/pricing/details/active-directory). +Group-based assignment requires Azure Active Directory Premium P1 or P2 edition. Group-based assignment is supported for Security groups and Microsoft 365 groups whose `SecurityEnabled` setting is set to `True` only. Nested group memberships aren't currently supported. For more licensing requirements for the features discussed in this article, see the [Azure Active Directory pricing page](https://azure.microsoft.com/pricing/details/active-directory). For greater control, certain types of enterprise applications can be configured to require user assignment. For more information on requiring user assignment for an app, see [Manage access to an application](what-is-access-management.md#requiring-user-assignment-for-an-app). |
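A quick way to confirm that a Microsoft 365 group qualifies under the `SecurityEnabled` condition described in the entry above is to inspect it with Microsoft Graph PowerShell. A minimal sketch, with the group object ID as a placeholder:

```powershell
# Minimal sketch: check whether a group qualifies for group-based assignment
# (SecurityEnabled must be True). The group object ID is a placeholder.
Connect-MgGraph -Scopes "Group.Read.All"

Get-MgGroup -GroupId "<group object ID>" -Property "displayName,securityEnabled,groupTypes" |
    Select-Object DisplayName, SecurityEnabled, GroupTypes
```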
active-directory | Configure User Consent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-user-consent.md | To reduce the risk of malicious applications attempting to trick users into gran To configure user consent, you need: - A user account. If you don't already have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- A Global Administrator or Privileged Administrator role.+- A Global Administrator role. ## Configure user consent settings |
active-directory | Pim Resource Roles Activate Your Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-activate-your-roles.md | Status code: 201 "type": "Microsoft.Authorization/RoleAssignmentScheduleRequests" } ````-## Activate a role with PowerShell --There is also an option to activate Privileged Identity Management using PowerShell. You may find more details as documented in the article [PowerShell for Azure AD roles PIM](powershell-for-azure-ad-roles.md). --The following is a sample script for how to activate Azure resource roles using PowerShell. --```powershell -$managementgroupID = "<management group ID" # Tenant Root Group -$guid = (New-Guid) -$startTime = Get-Date -Format o -$userObjectID = "<user object ID" -$RoleDefinitionID = "b24988ac-6180-42a0-ab88-20f7382dd24c" # Contributor -$scope = "/providers/Microsoft.Management/managementGroups/$managementgroupID" -New-AzRoleAssignmentScheduleRequest -Name $guid -Scope $scope -ExpirationDuration PT8H -ExpirationType AfterDuration -PrincipalId $userObjectID -RequestType SelfActivate -RoleDefinitionId /providersproviders/Microsoft.Management/managementGroups/$managementgroupID/providers/Microsoft.Authorization/roleDefinitions/$roledefinitionId -ScheduleInfoStartDateTime $startTime -Justification work -``` ## View the status of your requests |
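The PowerShell sample removed in the entry above contained a malformed `-RoleDefinitionId` path (a duplicated `providers` segment) and unterminated placeholders. For readers who still script self-activation with the Az module, a corrected sketch based on that sample follows; the management group ID and user object ID are placeholders.

```powershell
# Corrected sketch based on the sample removed above (Az.Resources module).
# Placeholders: management group ID and user object ID. Contributor role shown, as in the sample.
$managementGroupId = "<management group ID>"   # for example, the Tenant Root Group
$userObjectId      = "<user object ID>"
$roleDefinitionId  = "b24988ac-6180-42a0-ab88-20f7382dd24c"   # Contributor
$scope             = "/providers/Microsoft.Management/managementGroups/$managementGroupId"
$startTime         = Get-Date -Format o

New-AzRoleAssignmentScheduleRequest -Name (New-Guid) `
    -Scope $scope `
    -ExpirationDuration PT8H `
    -ExpirationType AfterDuration `
    -PrincipalId $userObjectId `
    -RequestType SelfActivate `
    -RoleDefinitionId "$scope/providers/Microsoft.Authorization/roleDefinitions/$roleDefinitionId" `
    -ScheduleInfoStartDateTime $startTime `
    -Justification "work"
```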
active-directory | Powershell For Azure Ad Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/powershell-for-azure-ad-roles.md | - Title: PowerShell for Azure AD roles in PIM -description: Manage Azure AD roles using PowerShell cmdlets in Azure AD Privileged Identity Management (PIM). -------- Previously updated : 10/07/2021-------# PowerShell for Azure AD roles in Privileged Identity Management --This article tells you how to use PowerShell cmdlets to manage Azure AD roles using Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra. It also tells you how to get set up with the Azure AD PowerShell module. --## Installation and Setup --1. Install the Azure AD Preview module -- ```powershell - Install-module AzureADPreview - ``` --1. Ensure that you have the required role permissions before proceeding. If you are trying to perform management tasks like giving a role assignment or updating role setting, ensure that you have either the Global administrator or Privileged role administrator role. If you are just trying to activate your own assignment, no permissions beyond the default user permissions are required. --1. Connect to Azure AD. -- ```powershell - $AzureAdCred = Get-Credential - Connect-AzureAD -Credential $AzureAdCred - ``` --1. Find the Tenant ID for your Azure AD organization by going to **Azure Active Directory** > **Properties** > **Directory ID**. In the cmdlets section, use this ID whenever you need to supply the resourceId. --  --> [!Note] -> The following sections are simple examples that can help get you up and running. You can find more detailed documentation regarding the following cmdlets at [/powershell/module/azuread/?view=azureadps-2.0-preview&preserve-view=true#privileged_role_management](/powershell/module/azuread/?view=azureadps-2.0-preview&preserve-view=true#privileged_role_management). However, you must replace "azureResources" in the providerID parameter with "aadRoles". You will also need to remember to use the Tenant ID for your Azure AD organization as the resourceId parameter. --## Retrieving role definitions --Use the following cmdlet to get all built-in and custom Azure AD roles in your Azure AD organization. This important step gives you the mapping between the role name and the roleDefinitionId. The roleDefinitionId is used throughout these cmdlets in order to reference a specific role. --The roleDefinitionId is specific to your Azure AD organization and is different from the roleDefinitionId returned by the role management API. --```powershell -Get-AzureADMSPrivilegedRoleDefinition -ProviderId aadRoles -ResourceId 926d99e7-117c-4a6a-8031-0cc481e9da26 -``` --Result: -- --## Retrieving role assignments --Use the following cmdlet to retrieve all role assignments in your Azure AD organization. --```powershell -Get-AzureADMSPrivilegedRoleAssignment -ProviderId "aadRoles" -ResourceId "926d99e7-117c-4a6a-8031-0cc481e9da26" -``` --Use the following cmdlet to retrieve all role assignments for a particular user. This list is also known as "My Roles" in the Azure portal. The only difference here is that you have added a filter for the subject ID. The subject ID in this context is the user ID or the group ID. 
--```powershell -Get-AzureADMSPrivilegedRoleAssignment -ProviderId "aadRoles" -ResourceId "926d99e7-117c-4a6a-8031-0cc481e9da26" -Filter "subjectId eq 'f7d1887c-7777-4ba3-ba3d-974488524a9d'" -``` --Use the following cmdlet to retrieve all role assignments for a particular role. The roleDefinitionId here is the ID that is returned by the previous cmdlet. --```powershell -Get-AzureADMSPrivilegedRoleAssignment -ProviderId "aadRoles" -ResourceId "926d99e7-117c-4a6a-8031-0cc481e9da26" -Filter "roleDefinitionId eq '0bb54a22-a3df-4592-9dc7-9e1418f0f61c'" -``` --The cmdlets result in a list of role assignment objects shown below. The subject ID is the user ID of the user to whom the role is assigned. The assignment state could either be active or eligible. If the user is active and there is an ID in the LinkedEligibleRoleAssignmentId field, that means the role is currently activated. --Result: -- --## Assign a role --Use the following cmdlet to create an eligible assignment. --```powershell -Open-AzureADMSPrivilegedRoleAssignmentRequest -ProviderId 'aadRoles' -ResourceId '926d99e7-117c-4a6a-8031-0cc481e9da26' -RoleDefinitionId 'ff690580-d1c6-42b1-8272-c029ded94dec' -SubjectId 'f7d1887c-7777-4ba3-ba3d-974488524a9d' -Type 'adminAdd' -AssignmentState 'Eligible' -schedule $schedule -reason "dsasdsas" -``` --The schedule, which defines the start and end time of the assignment, is an object that can be created like the following example: --```powershell -$schedule = New-Object Microsoft.Open.MSGraph.Model.AzureADMSPrivilegedSchedule -$schedule.Type = "Once" -$schedule.StartDateTime = (Get-Date).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ss.fffZ") -$schedule.endDateTime = "2020-07-25T20:49:11.770Z" -``` -> [!Note] -> If the value of endDateTime is set to null, it indicates a permanent assignment. --## Activate a role assignment --Use the following cmdlet to activate an eligible assignment in a context of a regular user: --```powershell -Open-AzureADMSPrivilegedRoleAssignmentRequest -ProviderId 'aadRoles' -ResourceId '926d99e7-117c-4a6a-8031-0cc481e9da26' -RoleDefinitionId 'f55a9a68-f424-41b7-8bee-cee6a442d418' -SubjectId 'f7d1887c-7777-4ba3-ba3d-974488524a9d' -Type 'UserAdd' -AssignmentState 'Active' -Schedule $schedule -Reason "Business Justification for the role assignment" -``` --If you need to activate an eligible assignment as administrator, for the `Type` parameter, specify `adminAdd`: --```powershell -Open-AzureADMSPrivilegedRoleAssignmentRequest -ProviderId 'aadRoles' -ResourceId '926d99e7-117c-4a6a-8031-0cc481e9da26' -RoleDefinitionId 'f55a9a68-f424-41b7-8bee-cee6a442d418' -SubjectId 'f7d1887c-7777-4ba3-ba3d-974488524a9d' -Type 'adminAdd' -AssignmentState 'Active' -Schedule $schedule -Reason "Business Justification for the role assignment" -``` --This cmdlet is almost identical to the cmdlet for creating a role assignment. The key difference between the cmdlets is that for the ΓÇôType parameter, activation is "userAdd" instead of "adminAdd". The other difference is that the ΓÇôAssignmentState parameter is "Active" instead of "Eligible." --> [!Note] -> There are two limiting scenarios for role activation through PowerShell. -> 1. If you require ticket system / ticket number in your role setting, there is no way to supply those as a parameter. Thus, it would not be possible to activate the role beyond the Azure portal. This feature is being rolled out to PowerShell over the next few months. -> 1. 
If you require multi-factor authentication for role activation, there is currently no way for PowerShell to challenge the user when they activate their role. Instead, users will need to trigger the MFA challenge when they connect to Azure AD by following [this blog post](http://www.anujchaudhary.com/2020/02/connect-to-azure-ad-powershell-with-mfa.html) from one of our engineers. If you are developing an app for PIM, one possible implementation is to challenge users and reconnect them to the module after they receive a "MfaRule" error. --## Retrieving and updating role settings --Use the following cmdlet to get all role settings in your Azure AD organization. --```powershell -Get-AzureADMSPrivilegedRoleSetting -ProviderId 'aadRoles' -Filter "ResourceId eq '926d99e7-117c-4a6a-8031-0cc481e9da26'" -``` --There are four main objects in the setting. Only three of these objects are currently used by PIM. The UserMemberSettings are activation settings, AdminEligibleSettings are assignment settings for eligible assignments, and the AdminmemberSettings are assignment settings for active assignments. --[](media/powershell-for-azure-ad-roles/get-update-role-settings-result.png#lightbox) --To update the role setting, you must get the existing setting object for a particular role and make changes to it: --```powershell -Get-AzureADMSPrivilegedRoleSetting -ProviderId 'aadRoles' -Filter "ResourceId eq 'tenant id' and RoleDefinitionId eq 'role id'" -$settinga = New-Object Microsoft.Open.MSGraph.Model.AzureADMSPrivilegedRuleSetting -$settinga.RuleIdentifier = "JustificationRule" -$settinga.Setting = '{"required":false}' -``` --You can then go ahead and apply the setting to one of the objects for a particular role as shown below. The ID here is the role setting ID that can be retrieved from the result of the list role settings cmdlet. --```powershell -Set-AzureADMSPrivilegedRoleSetting -ProviderId 'aadRoles' -Id 'ff518d09-47f5-45a9-bb32-71916d9aeadf' -ResourceId '3f5887ed-dd6e-4821-8bde-c813ec508cf9' -RoleDefinitionId '2387ced3-4e95-4c36-a915-73d803f93702' -UserMemberSettings $settinga -``` --## Next steps --- [Role definitions in Azure AD](../roles/permissions-reference.md) |
active-directory | Leapsome Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/leapsome-tutorial.md | Follow these steps to enable Azure AD SSO in the Azure portal. | Name | Source Attribute | Namespace | | | | | - | firstname | user.givenname | http://schemas.xmlsoap.org/ws/2005/05/identity/claims | - | lastname | user.surname | http://schemas.xmlsoap.org/ws/2005/05/identity/claims | - | title | user.jobtitle | http://schemas.xmlsoap.org/ws/2005/05/identity/claims | - | picture | URL to the employee's picture | http://schemas.xmlsoap.org/ws/2005/05/identity/claims | + | firstname | user.givenname | https://schemas.xmlsoap.org/ws/2005/05/identity/claims | + | lastname | user.surname | https://schemas.xmlsoap.org/ws/2005/05/identity/claims | + | title | user.jobtitle | https://schemas.xmlsoap.org/ws/2005/05/identity/claims | + | picture | URL to the employee's picture | https://schemas.xmlsoap.org/ws/2005/05/identity/claims | | | | > [!Note] |
active-directory | Textmagic Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/textmagic-tutorial.md | Follow these steps to enable Azure AD SSO in the Azure portal. | Name | Source Attribute| Namespace | | | | |- | company | user.companyname | http://schemas.xmlsoap.org/ws/2005/05/identity/claims | - | firstName | user.givenname | http://schemas.xmlsoap.org/ws/2005/05/identity/claims | - | lastName | user.surname | http://schemas.xmlsoap.org/ws/2005/05/identity/claims | - | phone | user.telephonenumber | http://schemas.xmlsoap.org/ws/2005/05/identity/claims | + | company | user.companyname | https://schemas.xmlsoap.org/ws/2005/05/identity/claims | + | firstName | user.givenname | https://schemas.xmlsoap.org/ws/2005/05/identity/claims | + | lastName | user.surname | https://schemas.xmlsoap.org/ws/2005/05/identity/claims | + | phone | user.telephonenumber | https://schemas.xmlsoap.org/ws/2005/05/identity/claims | 1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. |
active-directory | Hipaa Access Controls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/hipaa-access-controls.md | The following table has HIPAA guidance on the automatic logoff safeguard. Find M | Recommendation | Action | | - | - | | Create group policy | Support for devices not migrated to Azure AD and managed by Intune, [Group Policy (GPO)](../../active-directory-domain-services/manage-group-policy.md) can enforce sign out, or lock screen time for devices on AD, or in hybrid environments. |-| Assess device management requirements | [Microsoft IntTune](/mem/intune/fundamentals/what-is-intune) provides mobile device management (MDM) and mobile application management (MAM). It provides control over company and personal devices. You can manage device usage and enforce policies to control mobile applications. | +| Assess device management requirements | [Microsoft Intune](/mem/intune/fundamentals/what-is-intune) provides mobile device management (MDM) and mobile application management (MAM). It provides control over company and personal devices. You can manage device usage and enforce policies to control mobile applications. | | Device Conditional Access policy | Implement device lock by using a conditional access policy to restrict access to [compliant](../conditional-access/concept-conditional-access-grant.md) or hybrid Azure AD joined devices. Configure [policy settings](../conditional-access/concept-conditional-access-grant.md#require-hybrid-azure-ad-joined-device).</br>For unmanaged devices, configure the [Sign-In Frequency](../conditional-access/howto-conditional-access-session-lifetime.md) setting to force users to reauthenticate. | | Configure session time out for Microsoft 365 | Review the [session timeouts](/microsoft-365/admin/manage/idle-session-timeout-web-apps) for Microsoft 365 applications and services, to amend any prolonged timeouts. | | Configure session time out for Azure portal | Review the [session timeouts for Azure portal session](../../azure-portal/set-preferences.md), by implementing a timeout due to inactivity it helps to protect resources from unauthorized access. | |
active-directory | Using Authenticator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/using-authenticator.md | Using the Authenticator for the first time presents a set of screens that you ha When the Microsoft Authenticator app is installed and ready, you use the public end to end demo webapp to issue your first verifiable credential onto the Authenticator. -1. Open [end to end demo](http://woodgroveemployee.azurewebsites.net/) in your browser +1. Open [end to end demo](https://woodgroveemployee.azurewebsites.net/) in your browser 1. Enter your First Name and Last Name and press **Next** 1. Select **Verify with True Identity** 1. Click **Take a selfie** and **Upload government issued ID**. The demo uses simulated data and you don't need to provide a real selfie or an ID. |
advisor | Advisor Reference Cost Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-cost-recommendations.md | Learn more about [Subscription - MySQLReservedCapacity (Consider Database for My ### Consider Database for PostgreSQL reserved instance to save over your pay-as-you-go costs -We analyzed your Database for PostgreSQL usage pattern over last 30 days and recommend reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase PostgresSQL Database hourly usage and save over your on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further. +We analyzed your Database for PostgreSQL usage pattern over last 30 days and recommend reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase PostgreSQL Database hourly usage and save over your on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further. Learn more about [Subscription - PostgreSQLReservedCapacity (Consider Database for PostgreSQL reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations). |
advisor | Advisor Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-release-notes.md | Title: Release notes for Azure Advisor description: A description of what's new and changed in Azure Advisor Previously updated : 01/03/2022 Last updated : 04/18/2023 # What's new in Azure Advisor? Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with the service. +## April 2023 ++### VM/VMSS right-sizing recommendations with custom lookback period ++Customers can now improve the relevance of recommendations to make them more actionable, resulting in additional cost savings. +The right-sizing recommendations help optimize costs by identifying idle or underutilized virtual machines based on their CPU, memory, and network activity over the default lookback period of seven days. +Now, with this latest update, customers can adjust the default lookback period to get recommendations based on 14, 21, 30, 60, or even 90 days of use. The configuration can be applied at the subscription level. This is especially useful when the workloads have biweekly or monthly peaks (such as with payroll applications). ++To learn more, visit [Optimize virtual machine (VM) or virtual machine scale set (VMSS) spend by resizing or shutting down underutilized instances](advisor-cost-recommendations.md#optimize-virtual-machine-vm-or-virtual-machine-scale-set-vmss-spend-by-resizing-or-shutting-down-underutilized-instances). ## May 2022 |
aks | Auto Upgrade Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-cluster.md | Part of the AKS cluster lifecycle involves performing periodic upgrades to the l Cluster auto-upgrade provides a set once and forget mechanism that yields tangible time and operational cost benefits. By enabling auto-upgrade, you can ensure your clusters are up to date and don't miss the latest AKS features or patches from AKS and upstream Kubernetes. -AKS follows a strict versioning window with regard to supportability. With properly selected auto-upgrade channels, you can avoid clusters falling into an unsupported version. For more on the AKS support window, see [Alias minor versions][supported-kubernetes-versions]. +AKS follows a strict supportability versioning window. With properly selected auto-upgrade channels, you can avoid clusters falling into an unsupported version. For more on the AKS support window, see [Alias minor versions][supported-kubernetes-versions]. ++## Customer versus AKS-initiated auto-upgrades ++Customers can configure cluster auto-upgrade as described in the following guidance. These upgrades occur on the cadence the customer specifies and are the recommended way to keep clusters on supported Kubernetes versions. ++AKS also initiates auto-upgrades for unsupported clusters. When a cluster in an n-3 version (where n is the latest supported AKS GA minor version) is about to drop to n-4, AKS automatically upgrades the cluster to n-2 to remain in an AKS support [policy][supported-kubernetes-versions]. Automatically upgrading a platform supported cluster to a supported version is enabled by default. ++For example, Kubernetes v1.25 will upgrade to v1.26 during the v1.29 GA release. To minimize disruptions, set up [maintenance windows][planned-maintenance]. ## Cluster auto-upgrade limitations -If you're using cluster auto-upgrade, you can no longer upgrade the control plane first and then upgrade the individual node pools. Cluster auto-upgrade will always upgrade the control plane and the node pools together. There is no ability of upgrading the control plane only, and trying to run the command `az aks upgrade --control-plane-only` will raise the error: `NotAllAgentPoolOrchestratorVersionSpecifiedAndUnchanged: Using managed cluster api, all Agent pools' OrchestratorVersion must be all specified or all unspecified. If all specified, they must be stay unchanged or the same with control plane.` +If you're using cluster auto-upgrade, you can no longer upgrade the control plane first, and then upgrade the individual node pools. Cluster auto-upgrade always upgrades the control plane and the node pools together. There's no way to upgrade the control plane only, and trying to run the command `az aks upgrade --control-plane-only` raises the following error: `NotAllAgentPoolOrchestratorVersionSpecifiedAndUnchanged: Using managed cluster api, all Agent pools' OrchestratorVersion must be all specified or all unspecified. If all specified, they must be stay unchanged or the same with control plane.` -If using the `node-image` cluster auto-upgrade channel or the `NodeImage` node image auto-upgrade channel, Linux [unattended upgrades][unattended-upgrades] will be disabled by default. +If using the `node-image` cluster auto-upgrade channel or the `NodeImage` node image auto-upgrade channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.
## Using cluster auto-upgrade The following upgrade channels are available: | `patch`| automatically upgrade the cluster to the latest supported patch version when it becomes available while keeping the minor version the same.| For example, if a cluster is running version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, your cluster is upgraded to *1.17.9*| | `stable`| automatically upgrade the cluster to the latest supported patch release on minor version *N-1*, where *N* is the latest supported minor version.| For example, if a cluster is running version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, your cluster is upgraded to *1.18.6*. | `rapid`| automatically upgrade the cluster to the latest supported patch release on the latest supported minor version.| In cases where the cluster is at a version of Kubernetes that is at an *N-2* minor version where *N* is the latest supported minor version, the cluster first upgrades to the latest supported patch version on *N-1* minor version. For example, if a cluster is running version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, your cluster first is upgraded to *1.18.6*, then is upgraded to *1.19.1*. -| `node-image`| automatically upgrade the node image to the latest version available.| Microsoft provides patches and new images for image nodes frequently (usually weekly), but your running nodes won't get the new images unless you do a node image upgrade. Turning on the node-image channel will automatically update your node images whenever a new version is available. If you use this channel, Linux [unattended upgrades] will be disabled by default.| +| `node-image`| automatically upgrade the node image to the latest version available.| Microsoft provides patches and new images for image nodes frequently (usually weekly), but your running nodes don't get the new images unless you do a node image upgrade. Turning on the node-image channel automatically updates your node images whenever a new version is available. If you use this channel, Linux [unattended upgrades] are disabled by default.| > [!NOTE] > Cluster auto-upgrade only updates to GA versions of Kubernetes and will not update to preview versions. The Azure portal also highlights all the deprecated APIs between your current ve ## Using auto-upgrade with Planned Maintenance -If you're using Planned Maintenance and cluster auto-upgrade, your upgrade will start during your specified maintenance window. +If you're using Planned Maintenance and cluster auto-upgrade, your upgrade starts during your specified maintenance window. > [!NOTE] > To ensure proper functionality, use a maintenance window of four hours or more. For more information on Planned Maintenance, see [Use Planned Maintenance to sch ## Best practices for cluster auto-upgrade -The following best practices will help maximize your success when using auto-upgrade: +Use the following best practices to help maximize your success when using auto-upgrade: - In order to keep your cluster always in a supported version (i.e., within the N-2 rule), choose either `stable` or `rapid` channels. - If you're interested in getting the latest patches as soon as possible, use the `patch` channel. The `node-image` channel is a good fit if you want your agent pools to always be running the most recent node images. The following best practices will help maximize your success when using auto-upg - Follow [PDB best practices][pdb-best-practices].
<!-- INTERNAL LINKS -->-[supported-kubernetes-versions]: supported-kubernetes-versions.md -[upgrade-aks-cluster]: upgrade-cluster.md -[planned-maintenance]: planned-maintenance.md +[supported-kubernetes-versions]: ./supported-kubernetes-versions.md +[upgrade-aks-cluster]: ./upgrade-cluster.md +[planned-maintenance]: ./planned-maintenance.md [operator-best-practices-scheduler]: operator-best-practices-scheduler.md#plan-for-availability-using-pod-disruption-budgets [node-image-auto-upgrade]: auto-upgrade-node-image.md |
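As a brief illustration of the channels described above, the following sketch sets the `stable` auto-upgrade channel on an existing cluster and then reads the configured channel back; the cluster and resource group names are placeholders.

```azurecli-interactive
# Set the stable auto-upgrade channel on an existing AKS cluster
az aks update --resource-group myResourceGroup --name myAKSCluster --auto-upgrade-channel stable

# Confirm which channel is now configured
az aks show --resource-group myResourceGroup --name myAKSCluster --query "autoUpgradeProfile.upgradeChannel" -o tsv
```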
aks | Csi Secrets Store Driver | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-driver.md | In addition to an AKS cluster, you'll need an Azure key vault resource that stor The Secrets Store CSI Driver allows for the following methods to access an Azure key vault: * An [Azure Active Directory pod identity][aad-pod-identity] (preview)-* An [Azure Active Directory workload identity][aad-workload-identity] (preview) +* An [Azure Active Directory workload identity][aad-workload-identity] * A user-assigned or system-assigned managed identity Follow the instructions in [Provide an identity to access the Azure Key Vault Provider for Secrets Store CSI Driver][identity-access-methods] for your chosen method. |
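For context, the Secrets Store CSI Driver add-on itself is enabled with a single CLI call before any of the identity access methods above are configured. A minimal sketch with placeholder names:

```azurecli-interactive
# Enable the Azure Key Vault provider for Secrets Store CSI Driver on an existing cluster
az aks enable-addons --addons azure-keyvault-secrets-provider --name myAKSCluster --resource-group myResourceGroup

# Verify the add-on is enabled
az aks show --name myAKSCluster --resource-group myResourceGroup --query "addonProfiles.azureKeyvaultSecretsProvider.enabled"
```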
aks | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md | Most clusters are deleted upon user request; in some cases, especially where cus No, you're unable to restore your cluster after deleting it. When you delete your cluster, the associated resource group and all its resources will also be deleted. If you want to keep any of your resources, move them to another resource group before deleting your cluster. If you have the **Owner** or **User Access Administrator** built-in role, you can lock Azure resources to protect them from accidental deletions and modifications. For more information, see [Lock your resources to protect your infrastructure][lock-azure-resources]. +## What is platform support, and what does it include? ++Platform support is a reduced support plan for unsupported "N-3" version clusters. Platform support only includes Azure infrastructure support. Platform support does not include anything related to Kubernetes functionality and components, cluster or node pool creation, hotfixes, bug fixes, security patches, retired components, etc. See [platform support policy][supported-kubernetes-versions] for additional restrictions. ++AKS relies on the releases and patches from [Kubernetes](https://kubernetes.io/releases/), which is an open-source project that only supports a sliding window of 3 minor versions. AKS can only guarantee [full support](./supported-kubernetes-versions.md#kubernetes-version-support-policy) while those versions are being serviced upstream. Since no more patches are produced upstream, AKS can either leave those versions unpatched or fork. Due to this limitation, platform support doesn't cover anything that relies on upstream Kubernetes. ++## Will AKS automatically upgrade my unsupported clusters? ++AKS will initiate auto-upgrades for unsupported clusters. When a cluster in an n-3 version (where n is the latest supported AKS GA minor version) is about to drop to n-4, AKS will automatically upgrade the cluster to n-2 to remain in an AKS support [policy][supported-kubernetes-versions]. Automatically upgrading a platform supported cluster to a supported version is enabled by default. ++For example, Kubernetes v1.25 will be upgraded to v1.26 during the v1.29 GA release. To minimize disruptions, set up [maintenance windows][planned-maintenance]. See [auto-upgrade][auto-upgrade-cluster] for details on automatic upgrade channels. + ## If I have pod / deployments in state 'NodeLost' or 'Unknown' can I still upgrade my cluster? You can, but we don't recommend it. Upgrades should be performed when the state of the cluster is known and healthy. The extension **does not** require any additional outbound access to any URLs, I <!-- LINKS - internal --> [aks-upgrade]: ./upgrade-cluster.md+[auto-upgrade-cluster]: ./auto-upgrade-cluster.md +[planned-maintenance]: ./planned-maintenance.md [aks-cluster-autoscale]: ./cluster-autoscaler.md [aks-advanced-networking]: ./configure-azure-cni.md [aks-rbac-aad]: ./azure-ad-integration-cli.md The extension **does not** require any additional outbound access to any URLs, I [multi-node-pools]: ./use-multiple-node-pools.md [availability-zones]: ./availability-zones.md [private-clusters]: ./private-clusters.md+[supported-kubernetes-versions]: ./supported-kubernetes-versions.md [bcdr-bestpractices]: ./operator-best-practices-multi-region.md#plan-for-multiregion-deployment [availability-zones]: ./availability-zones.md [az-regions]: ../availability-zones/az-region.md |
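The FAQ's advice about protecting clusters from accidental deletion can be applied with a resource lock. A minimal sketch, assuming the cluster's resource group is named myResourceGroup:

```azurecli-interactive
# Prevent accidental deletion of the resource group that contains the cluster
az lock create --name protect-aks --resource-group myResourceGroup --lock-type CanNotDelete

# List locks to confirm
az lock list --resource-group myResourceGroup --output table
```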
aks | Ingress Tls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-tls.md | When you upgrade your ingress controller, you must pass a parameter to the Helm helm upgrade ingress-nginx ingress-nginx/ingress-nginx \ --namespace $NAMESPACE \ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNS_LABEL \- --set controller.service.loadBalancerIP=$STATIC_IP + --set controller.service.loadBalancerIP=$STATIC_IP \ + --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz ``` ### [Azure PowerShell](#tab/azure-powershell) When you upgrade your ingress controller, you must pass a parameter to the Helm helm upgrade ingress-nginx ingress-nginx/ingress-nginx ` --namespace $Namespace ` --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DnsLabel `- --set controller.service.loadBalancerIP=$StaticIP + --set controller.service.loadBalancerIP=$StaticIP ` + --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz ``` NAMESPACE="ingress-basic" helm upgrade ingress-nginx ingress-nginx/ingress-nginx \ --namespace $NAMESPACE \- --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNSLABEL + --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNSLABEL \ + --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz ``` ### [Azure PowerShell](#tab/azure-powershell) $Namespace = "ingress-basic" helm upgrade ingress-nginx ingress-nginx/ingress-nginx ` --namespace $Namespace `- --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DnsLabel + --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DnsLabel ` + --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz ``` |
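After a Helm upgrade like the ones above, it's worth confirming that the ingress controller service kept its IP and that the health probe annotation took effect. A small verification sketch; the release and namespace names match the examples, so the default service name is assumed to be `ingress-nginx-controller`.

```bash
# EXTERNAL-IP should still show the expected address after the upgrade
kubectl get service ingress-nginx-controller --namespace ingress-basic -o wide

# Inspect the service annotations, including the health probe request path
kubectl describe service ingress-nginx-controller --namespace ingress-basic
```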
aks | Tutorial Kubernetes Workload Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/tutorial-kubernetes-workload-identity.md | Title: Tutorial - Use a workload identity with an application on Azure Kubernete description: In this Azure Kubernetes Service (AKS) tutorial, you deploy an Azure Kubernetes Service cluster and configure an application to use a workload identity. Previously updated : 01/11/2023 Last updated : 04/19/2023 # Tutorial: Use a workload identity with an application on Azure Kubernetes Service (AKS) This tutorial assumes a basic understanding of Kubernetes concepts. For more inf [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] +- This article requires version 2.47.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. -- This article requires version 2.40.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.--- You have installed the latest version of the `aks-preview` extension, version 0.5.102 or later.--- The identity you are using to create your cluster has the appropriate minimum permissions. For more information on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)][aks-identity-concepts].+- The identity you're using to create your cluster has the appropriate minimum permissions. For more information on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)][aks-identity-concepts]. - If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the [az account][az-account] command. ## Create a resource group -An [Azure resource group][azure-resource-group] is a logical group in which Azure resources are deployed and managed. When you create a resource group, you are prompted to specify a location. This location is: +An [Azure resource group][azure-resource-group] is a logical group in which Azure resources are deployed and managed. When you create a resource group, you're prompted to specify a location. This location is: * The storage location of your resource group metadata.-* Where your resources will run in Azure if you don't specify another region during resource creation. +* Where your resources run in Azure if you don't specify another region during resource creation. The following example creates a resource group named *myResourceGroup* in the *eastus* location. The following output example resembles successful creation of the resource group } ``` -## Install the aks-preview Azure CLI extension ---To install the aks-preview extension, run the following command: --```azurecli-interactive -az extension add --name aks-preview -``` --Run the following command to update to the latest version of the extension released: --```azurecli-interactive -az extension update --name aks-preview -``` --## Register the 'EnableWorkloadIdentityPreview' feature flag --Register the `EnableWorkloadIdentityPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example: --```azurecli-interactive -az feature register --namespace "Microsoft.ContainerService" --name "EnableWorkloadIdentityPreview" -``` --It takes a few minutes for the status to show *Registered*. 
Verify the registration status by using the [az feature list][az-feature-list] command: +## Export environmental variables -```azurecli-interactive -az feature show --namespace "Microsoft.ContainerService" --name "EnableWorkloadIdentityPreview" -``` +To help simplify steps to configure the identities required, the steps below define +environmental variables for reference on the cluster. -When the status shows *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command: +Run the following commands to create these variables. Replace the default values for `RESOURCE_GROUP`, `LOCATION`, `SERVICE_ACCOUNT_NAME`, `SUBSCRIPTION`, `USER_ASSIGNED_IDENTITY_NAME`, and `FEDERATED_IDENTITY_CREDENTIAL_NAME`. -```azurecli-interactive -az provider register --namespace Microsoft.ContainerService +```bash +export RESOURCE_GROUP="myResourceGroup" +export LOCATION="westcentralus" +export SERVICE_ACCOUNT_NAMESPACE="default" +export SERVICE_ACCOUNT_NAME="workload-identity-sa" +export SUBSCRIPTION="$(az account show --query id --output tsv)" +export USER_ASSIGNED_IDENTITY_NAME="myIdentity" +export FEDERATED_IDENTITY_CREDENTIAL_NAME="myFedIdentity" +export KEYVAULT_NAME="azwi-kv-tutorial" +export KEYVAULT_SECRET_NAME="my-secret" ``` ## Create AKS cluster az provider register --namespace Microsoft.ContainerService Create an AKS cluster using the [az aks create][az-aks-create] command with the `--enable-oidc-issuer` parameter to use the OIDC Issuer. The following example creates a cluster named *myAKSCluster* with one node in the *myResourceGroup*: ```azurecli-interactive-az aks create -g myResourceGroup -n myAKSCluster --node-count 1 --enable-oidc-issuer --enable-workload-identity --generate-ssh-keys +az aks create -g "${RESOURCE_GROUP}" -n myAKSCluster --node-count 1 --enable-oidc-issuer --enable-workload-identity ``` After a few minutes, the command completes and returns JSON-formatted information about the cluster. After a few minutes, the command completes and returns JSON-formatted informatio > [!NOTE] > When you create an AKS cluster, a second resource group is automatically created to store the AKS resources. For more information, see [Why are two resource groups created with AKS?][aks-two-resource-groups]. -To get the OIDC Issuer URL and save it to an environmental variable, run the following command. Replace the default value for the arguments `-n`, which is the name of the cluster and `-g`, the resource group name: +To get the OIDC Issuer URL and save it to an environmental variable, run the following command. Replace the default value for the arguments `-n`, which is the name of the cluster: ```azurecli-interactive-export AKS_OIDC_ISSUER="$(az aks show -n myAKSCluster -g myResourceGroup --query "oidcIssuerProfile.issuerUrl" -otsv)" -``` --## Export environmental variables --To help simplify steps to configure creating Azure Key Vault and other identities required, the steps below define -environmental variables for reference on the cluster. --Run the following commands to create these variables. Replace the default values for `RESOURCE_GROUP`, `LOCATION`, `KEYVAULT_SECRET_NAME`, `SERVICE_ACCOUNT_NAME`, `SUBSCRIPTION`, `UAID`, and `FICID`. 
--```bash -# environment variables for the Azure Key Vault resource -export KEYVAULT_NAME="azwi-kv-tutorial" -export KEYVAULT_SECRET_NAME="my-secret" -export RESOURCE_GROUP="resourceGroupName" -export LOCATION="westcentralus" --# environment variables for the Kubernetes Service account & federated identity credential -export SERVICE_ACCOUNT_NAMESPACE="default" -export SERVICE_ACCOUNT_NAME="workload-identity-sa" --# environment variables for the Federated Identity -export SUBSCRIPTION="{your subscription ID}" -# user assigned identity name -export UAID="fic-test-ua" -# federated identity name -export FICID="fic-test-fic-name" +export AKS_OIDC_ISSUER="$(az aks show -n myAKSCluster -g "${RESOURCE_GROUP}" --query "oidcIssuerProfile.issuerUrl" -otsv)" ``` ## Create an Azure Key Vault and secret az keyvault secret set --vault-name "${KEYVAULT_NAME}" --name "${KEYVAULT_SECRET To add the Key Vault URL to the environment variable `KEYVAULT_URL`, you can run the Azure CLI [az keyvault show][az-keyvault-show] command. ```bash-export KEYVAULT_URL="$(az keyvault show -g ${RESOURCE_GROUP} -n ${KEYVAULT_NAME} --query properties.vaultUri -o tsv)" +export KEYVAULT_URL="$(az keyvault show -g "${RESOURCE_GROUP}" -n ${KEYVAULT_NAME} --query properties.vaultUri -o tsv)" ``` ## Create a managed identity and grant permissions to access the secret az account set --subscription "${SUBSCRIPTION}" ``` ```azurecli-interactive-az identity create --name "${UAID}" --resource-group "${RESOURCE_GROUP}" --location "${LOCATION}" --subscription "${SUBSCRIPTION}" +az identity create --name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RESOURCE_GROUP}" --location "${LOCATION}" --subscription "${SUBSCRIPTION}" ``` Next, you need to set an access policy for the managed identity to access the Key Vault secret by running the following commands: ```azurecli-interactive-export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RESOURCE_GROUP}" --name "${UAID}" --query 'clientId' -otsv)" +export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RESOURCE_GROUP}" --name "${USER_ASSIGNED_IDENTITY_NAME}" --query 'clientId' -otsv)" ``` ```azurecli-interactive Serviceaccount/workload-identity-sa created Use the [az identity federated-credential create][az-identity-federated-credential-create] command to create the federated identity credential between the managed identity, the service account issuer, and the subject. ```azurecli-interactive-az identity federated-credential create --name ${FICID} --identity-name ${UAID} --resource-group ${RESOURCE_GROUP} --issuer ${AKS_OIDC_ISSUER} --subject system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME} +az identity federated-credential create --name ${FEDERATED_IDENTITY_CREDENTIAL_NAME} --identity-name ${USER_ASSIGNED_IDENTITY_NAME} --resource-group ${RESOURCE_GROUP} --issuer ${AKS_OIDC_ISSUER} --subject system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME} ``` > [!NOTE] The following output resembles successful creation of the pod: pod/quick-start created ``` -To check whether all properties are injected properly by the webhook, use +To check whether all properties are injected properly with the webhook, use the [kubectl describe][kubelet-describe] command: ```bash az group delete --name "${RESOURCE_GROUP}" ## Next steps In this tutorial, you deployed a Kubernetes cluster and then deployed a simple container application to-test working with an Azure AD workload identity (preview). 
+test working with an Azure AD workload identity. This tutorial is for introductory purposes. For guidance on creating full solutions with AKS for production, see [AKS solution guidance][aks-solution-guidance]. |
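If the quick-start pod can't authenticate, a common first check is whether the federated identity credential's subject matches the service account namespace and name. A minimal verification sketch reusing the environment variables defined in the tutorial:

```azurecli-interactive
# Inspect the federated identity credentials attached to the managed identity
az identity federated-credential list --identity-name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RESOURCE_GROUP}" --query "[].{name:name, issuer:issuer, subject:subject}" -o table
```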
aks | Manage Azure Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-azure-rbac.md | -This article covers how to use Azure RBAC for Kubernetes Authorization, which allows for the unified management and access control across Azure resources, AKS, and Kubernetes resources. For more information, see [Azure RBAC for Kubernetes Authorization][azure-rbac-kubernetes-rbac]. +This article covers how to use Azure RBAC for Kubernetes Authorization, which allows for the unified management and access control across Azure resources, AKS, and Kubernetes resources. For more information, see [Azure RBAC for Kubernetes Authorization][kubernetes-rbac]. ## Before you begin az group delete -n myResourceGroup To learn more about AKS authentication, authorization, Kubernetes RBAC, and Azure RBAC, see: -* [Access and identity options for AKS](/concepts-identity.md) +* [Access and identity options for AKS](./concepts-identity.md) * [What is Azure RBAC?](../role-based-access-control/overview.md) * [Microsoft.ContainerService operations](../role-based-access-control/resource-provider-operations.md#microsoftcontainerservice) To learn more about AKS authentication, authorization, Kubernetes RBAC, and Azur [install-azure-cli]: /cli/azure/install-azure-cli [az-role-definition-create]: /cli/azure/role/definition#az_role_definition_create [az-aks-get-credentials]: /cli/azure/aks#az_aks_get-credentials-[kubernetes-rbac]: /concepts-identity#azure-rbac-for-kubernetes-authorization -[azure-rbac-kubernetes-rbac]: /concepts-identity#azure-rbac-for-kubernetes-authorization +[kubernetes-rbac]: ./concepts-identity.md#azure-rbac-for-kubernetes-authorization |
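As a brief illustration of Azure RBAC for Kubernetes authorization, the sketch below enables it on an existing Azure AD-enabled cluster and grants a built-in role at cluster scope. The names and the assignee object ID are placeholders.

```azurecli-interactive
# Enable Azure RBAC for Kubernetes authorization on an existing AKS cluster
az aks update --resource-group myResourceGroup --name myAKSCluster --enable-azure-rbac

# Grant a user read access to Kubernetes resources across the whole cluster
AKS_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster --query id -o tsv)
az role assignment create --role "Azure Kubernetes Service RBAC Reader" --assignee <user-object-id> --scope $AKS_ID
```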
aks | Managed Aad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/managed-aad.md | Title: Use Azure AD in Azure Kubernetes Service description: Learn how to use Azure AD in Azure Kubernetes Service (AKS) Previously updated : 03/02/2023 Last updated : 04/17/2023 In order to access the cluster, follow the steps in [access an Azure AD enabled There are some non-interactive scenarios, such as continuous integration pipelines, that aren't currently available with `kubectl`. You can use [`kubelogin`](https://github.com/Azure/kubelogin) to connect to the cluster with a non-interactive service principal credential. +Starting with Kubernetes version 1.24, the default format of the clusterUser credential for Azure AD clusters is `exec`, which requires [kubelogin](https://github.com/Azure/kubelogin) binary in the execution PATH. If you use the Azure CLI, it prompts you to download kubelogin. For non-Azure AD clusters, or Azure AD clusters where the version of Kubernetes is older than 1.24, there is no change in behavior. The version of kubeconfig installed continues to work. ++An optional query parameter named `format` is available when retrieving the clusterUser credential to overwrite the default behavior change. You can set the value to `azure` to use the original kubeconfig format. ++Example: ++```azurecli-interactive +az aks get-credentials --format azure +``` ++For Azure AD integrated clusters using a version of Kubernetes newer than 1.24, it uses the kubelogin format automatically and no conversion is needed. For Azure AD integrated clusters running a version older than 1.24, you need to run the following commands to convert the kubeconfig format manually ++```azurecli-interactive +export KUBECONFIG=/path/to/kubeconfig +kubelogin convert-kubeconfig +``` + ## Disable local accounts When you deploy an AKS cluster, local accounts are enabled by default. Even when enabling RBAC or Azure AD integration, `--admin` access still exists as a non-auditable backdoor option. You can disable local accounts using the parameter `disable-local-accounts`. The `properties.disableLocalAccounts` field has been added to the managed cluster API to indicate whether the feature is enabled or not on the cluster. |
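For the non-interactive scenario mentioned above, kubelogin can convert the kubeconfig to use a service principal credential instead of an interactive sign-in. A minimal sketch; the client ID and secret are placeholders for an existing service principal that has access to the cluster.

```bash
# Convert the kubeconfig to service principal login for CI pipelines
kubelogin convert-kubeconfig -l spn

# Provide the service principal credentials through environment variables
export AAD_SERVICE_PRINCIPAL_CLIENT_ID=<client-id>
export AAD_SERVICE_PRINCIPAL_CLIENT_SECRET=<client-secret>

kubectl get nodes
```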
aks | Open Service Mesh Deploy Addon Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-bicep.md | touch osm.aks.bicep && touch osm.aks.parameters.json Open the *osm.aks.bicep* file and copy the following example content to it. Then save the file. -```azurecli-interactive +```bicep // https://learn.microsoft.com/azure/aks/troubleshooting#what-naming-restrictions-are-enforced-for-aks-resources-and-parameters @minLength(3) @maxLength(63) Open the *osm.aks.parameters.json* file and copy the following example content t > [!NOTE] > The *osm.aks.parameters.json* file is an example template parameters file needed for the Bicep deployment. Update the parameters specifically for your deployment environment. The specific parameter values in this example need the following parameters to be updated: `clusterName`, `clusterDNSPrefix`, `k8Version`, and `sshPubKey`. To find a list of supported Kubernetes versions in your region, use the `az aks get-versions --location <region>` command. -```azurecli-interactive +```json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#", "contentVersion": "1.0.0.0", kubectl get meshconfig osm-mesh-config -n kube-system -o yaml Here's an example output of MeshConfig: -``` +```yaml apiVersion: config.openservicemesh.io/v1alpha1 kind: MeshConfig metadata: Notice that `enablePermissiveTrafficPolicyMode` is configured to `true`. In OSM, When you no longer need the Azure resources, use the Azure CLI to delete the deployment's test resource group: -``` +```azurecli-interactive az group delete --name osm-bicep-test ``` |
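Once the Bicep template and parameters file are updated, deploying them is a single CLI call. A minimal sketch, assuming the `osm-bicep-test` resource group used in the cleanup step; the location is a placeholder.

```azurecli-interactive
# Create the resource group and deploy the Bicep template with its parameters file
az group create --name osm-bicep-test --location eastus2
az deployment group create --resource-group osm-bicep-test --template-file osm.aks.bicep --parameters @osm.aks.parameters.json
```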
aks | Servicemesh About | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/servicemesh-about.md | Title: About service meshes description: Obtain an overview of service meshes, supported scenarios, selection criteria, and next steps to explore. Previously updated : 04/06/2023 Last updated : 04/18/2023 Before you select a service mesh, make sure you understand your requirements and ## Next steps -Open Service Mesh (OSM) is a supported service mesh that runs Azure Kubernetes Service (AKS): +Azure Kubernetes Service (AKS) offers officially supported add-ons for Istio and Open Service Mesh: > [!div class="nextstepaction"]+> [Learn more about Istio][istio-about] > [Learn more about OSM][osm-about] There are also service meshes provided by open-source projects and third parties that are commonly used with AKS. These service meshes aren't covered by the [AKS support policy][aks-support-policy]. -- [Istio][istio] - [Linkerd][linkerd] - [Consul Connect][consul] For more details on service mesh standardization efforts, see: - [Service Mesh Performance (SMP)][smp] <!-- LINKS - external -->-[istio]: https://istio.io/latest/docs/setup/install/ [linkerd]: https://linkerd.io/getting-started/ [consul]: https://learn.hashicorp.com/tutorials/consul/service-mesh-deploy [service-mesh-landscape]: https://layer5.io/service-mesh-landscape For more details on service mesh standardization efforts, see: <!-- LINKS - internal --> [osm-about]: ./open-service-mesh-about.md+[istio-about]: ./istio-about.md [aks-support-policy]: support-policies.md |
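A hedged sketch of enabling the Istio-based service mesh add-on referenced above; the command may require a recent Azure CLI or the aks-preview extension, and the cluster and resource group names are placeholders.

```azurecli-interactive
# Enable the Istio-based service mesh add-on on an existing AKS cluster
az aks mesh enable --resource-group myResourceGroup --name myAKSCluster

# Confirm the service mesh profile
az aks show --resource-group myResourceGroup --name myAKSCluster --query "serviceMeshProfile.mode" -o tsv
```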
aks | Static Ip | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/static-ip.md | This article shows you how to create a static public IP address and assign it to loadBalancerIP: 40.121.183.52 type: LoadBalancer ports:- - port: 80 + - port: 80 selector: app: azure-load-balancer ``` |
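The `loadBalancerIP` value in the manifest has to reference a static public IP that already exists, typically in the cluster's node resource group. A minimal sketch with placeholder names:

```azurecli-interactive
# Look up the node resource group and create a Standard-SKU static public IP in it
NODE_RG=$(az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv)
az network public-ip create --resource-group $NODE_RG --name myAKSPublicIP --sku Standard --allocation-method static --query publicIp.ipAddress -o tsv
```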
aks | Supported Kubernetes Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md | For the past release history, see [Kubernetes history](https://en.wikipedia.org/ > [!NOTE] > Alias minor version requires Azure CLI version 2.37 or above as well as API version 20220201 or above. Use `az upgrade` to install the latest version of the CLI. -With AKS, you can create a cluster without specifying the exact patch version. When you create a cluster without designating a patch, the cluster will run the minor version's latest GA patch. For example, if you create a cluster with **`1.21`**, your cluster will run **`1.21.7`**, which is the latest GA patch version of *1.21*. +AKS allows you to create a cluster without specifying the exact patch version. When you create a cluster without designating a patch, the cluster runs the minor version's latest GA patch. For example, if you create a cluster with **`1.21`**, your cluster will run **`1.21.7`**, which is the latest GA patch version of *1.21*. -When you upgrade by alias minor version, only a higher minor version is supported. For example, upgrading from `1.14.x` to `1.14` won't trigger an upgrade to the latest GA `1.14` patch, but upgrading to `1.15` will trigger an upgrade to the latest GA `1.15` patch. If you wish to upgrade your patch version in the same minor version, please use [auto-upgrade](./auto-upgrade-cluster.md#using-cluster-auto-upgrade). +When you upgrade by alias minor version, only a higher minor version is supported. For example, upgrading from `1.14.x` to `1.14` doesn't trigger an upgrade to the latest GA `1.14` patch, but upgrading to `1.15` triggers an upgrade to the latest GA `1.15` patch. -To see what patch you're on, run the `az aks show --resource-group myResourceGroup --name myAKSCluster` command. The `currentKubernetesVersion` property shows the whole Kubernetes version. +To see what patch you're on, run the `az aks show --resource-group myResourceGroup --name myAKSCluster` command. The property `currentKubernetesVersion` shows the whole Kubernetes version. ``` { AKS defines a generally available (GA) version as a version available in all reg AKS may also support preview versions, which are explicitly labeled and subject to [preview terms and conditions][preview-terms]. +AKS provides platform support only for one GA minor version of Kubernetes after the regular supported versions. The platform support window of Kubernetes versions on AKS is known as "N-3". For more information, see [platform support policy](#platform-support-policy). + > [!NOTE] > AKS uses safe deployment practices which involve gradual region deployment. This means it may take up to 10 business days for a new release or a new version to be available in all regions. New minor version | Supported Version List -- | - 1.17.a | 1.17.a, 1.17.b, 1.16.c, 1.16.d, 1.15.e, 1.15.f -Where ".letter" is representative of patch versions. --When a new minor version is introduced, the oldest minor version and patch releases supported are deprecated and removed. For example, if the current supported version list is: +When a new minor version is introduced, the oldest supported minor version and patch releases are deprecated and removed. For example, the current supported version list is: ``` 1.17.a New Supported Version List 1.17.*9*, 1.17.*8*, 1.16.*11*, 1.16.*10* ``` +## Platform support policy ++Platform support policy is a reduced support plan for certain unsupported kubernetes versions. 
During platform support, customers will only receive support from Microsoft for AKS/Azure platform related issues. Any issues related to Kubernetes functionality and components will not be supported. ++Platform support policy applies to clusters in an n-3 version (where n is the latest supported AKS GA minor version), before the cluster drops to n-4. For example, kubernetes v1.25 will be considered platform support when v1.28 is the latest GA version. However, during the v1.29 GA release, v1.25 will then be auto-upgraded to v1.26. ++AKS relies on the releases and patches from [kubernetes](https://kubernetes.io/releases/), which is an Open Source project that only supports a sliding window of 3 minor versions. AKS can only guarantee [full support](#kubernetes-version-support-policy) while those versions are being serviced upstream. Since there's no more patches being produced upstream, AKS can either leave those versions unpatched or fork. Due to this limitation, platform support will not support anything from relying on kubernetes upstream. ++This table outlines support guidelines for Community Support compared to Platform support. ++| Support category | Community Support (N-2) | Platform Support (N-3) | +|||| +| Upgrades from N-3 to a supported version| Supported | Supported| +| Platform (Azure) availability | Supported | Supported| +| Node pool scaling| Supported | Supported| +| VM availability| Supported | Supported| +| Storage, Networking related issues| Supported | Supported with the exception of bug fixes and retired components | +| Start/stop | Supported | Supported| +| Rotate certificates | Supported | Supported| +| Infrastructure SLA| Supported | Supported| +| Control Plane SLA| Supported | Supported| +| Platform (AKS) SLA| Supported | Not supported| +| Kubernetes components (including Add-ons) | Supported | Not supported| +| Component updates | Supported | Not supported| +| Component hotfixes | Supported | Not supported| +| Applying bug fixes | Supported | Not supported| +| Applying security patches | Supported | Not supported| +| Kubernetes API support | Supported | Not supported| +| Cluster or node pool creation| Supported | Not supported| +| Node pool snapshot| Supported | Not supported| +| Node image upgrade| Supported | Not supported| ++ > [!NOTE] + > The above table is subject to change and outlines common support scenarios. Any scenarios related to Kubernetes functionality and components will not be supported for N-3. For further support, see [Support and troubleshooting for AKS](./aks-support-help.md). + ### Supported `kubectl` versions You can use one minor version older or newer of `kubectl` relative to your *kube-apiserver* version, consistent with the [Kubernetes support policy for kubectl](https://kubernetes.io/docs/setup/release/version-skew-policy/#kubectl). |
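To check which minor and patch versions are currently offered in a region, and which version a given cluster runs, the CLI can report both. A minimal sketch with placeholder names:

```azurecli-interactive
# List the Kubernetes versions and upgrade paths AKS currently offers in a region
az aks get-versions --location eastus --output table

# Show the full version a specific cluster is running
az aks show --resource-group myResourceGroup --name myAKSCluster --query currentKubernetesVersion -o tsv
```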
aks | Workload Identity Deploy Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md | Title: Deploy and configure an Azure Kubernetes Service (AKS) cluster with workload identity description: In this Azure Kubernetes Service (AKS) article, you deploy an Azure Kubernetes Service cluster and configure it with an Azure AD workload identity. Previously updated : 04/18/2023-+ Last updated : 04/19/2023 # Deploy and configure workload identity on an Azure Kubernetes Service (AKS) cluster Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui This article assumes you have a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts]. If you aren't familiar with Azure AD workload identity, see the following [Overview][workload-identity-overview] article. -- This article requires version 2.40.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.+- This article requires version 2.47.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. - The identity you're using to create your cluster has the appropriate minimum permissions. For more information about access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)][aks-identity-concepts]. - If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the [az account][az-account] command. +## Export environmental variables ++To help simplify steps to configure the identities required, the steps below define +environmental variables for reference on the cluster. ++Run the following commands to create these variables. Replace the default values for `RESOURCE_GROUP`, `LOCATION`, `SERVICE_ACCOUNT_NAME`, `SUBSCRIPTION`, `USER_ASSIGNED_IDENTITY_NAME`, and `FEDERATED_IDENTITY_CREDENTIAL_NAME`. ++```bash +export RESOURCE_GROUP="myResourceGroup" +export LOCATION="westcentralus" +export SERVICE_ACCOUNT_NAMESPACE="default" +export SERVICE_ACCOUNT_NAME="workload-identity-sa" +export SUBSCRIPTION="$(az account show --query id --output tsv)" +export USER_ASSIGNED_IDENTITY_NAME="myIdentity" +export FEDERATED_IDENTITY_CREDENTIAL="myFedIdentity" +``` + ## Create AKS cluster Create an AKS cluster using the [az aks create][az-aks-create] command with the `--enable-oidc-issuer` parameter to use the OIDC Issuer. The following example creates a cluster named *myAKSCluster* with one node in the *myResourceGroup*: ```azurecli-interactive-az group create --name myResourceGroup --location eastus --az aks create -g myResourceGroup -n myAKSCluster --enable-oidc-issuer --enable-workload-identity +az aks create -g "${RESOURCE_GROUP}" -n myAKSCluster --enable-oidc-issuer --enable-workload-identity ``` After a few minutes, the command completes and returns JSON-formatted information about the cluster. After a few minutes, the command completes and returns JSON-formatted informatio > [!NOTE] > When you create an AKS cluster, a second resource group is automatically created to store the AKS resources. For more information, see [Why are two resource groups created with AKS?][aks-two-resource-groups]. -To get the OIDC Issuer URL and save it to an environmental variable, run the following command. Replace the default values for the cluster name and the resource group name. 
+To get the OIDC Issuer URL and save it to an environmental variable, run the following command. Replace the default value for the arguments `-n`, which is the name of the cluster: ```bash-export AKS_OIDC_ISSUER="$(az aks show -n myAKSCluster -g myResourceGroup --query "oidcIssuerProfile.issuerUrl" -otsv)" +export AKS_OIDC_ISSUER="$(az aks show -n myAKSCluster -g "${RESOURCE_GROUP}" --query "oidcIssuerProfile.issuerUrl" -otsv)" ``` ## Create a managed identity export AKS_OIDC_ISSUER="$(az aks show -n myAKSCluster -g myResourceGroup --query Use the Azure CLI [az account set][az-account-set] command to set a specific subscription to be the current active subscription. Then use the [az identity create][az-identity-create] command to create a managed identity. ```azurecli-export SUBSCRIPTION_ID="$(az account show --query id --output tsv)" -export USER_ASSIGNED_IDENTITY_NAME="myIdentity" -export RG_NAME="myResourceGroup" -export LOCATION="eastus" +az identity create --name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RESOURCE_GROUP}" --location "${LOCATION}" --subscription "${SUBSCRIPTION}" +``` -az identity create --name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RG_NAME}" --location "${LOCATION}" --subscription "${SUBSCRIPTION_ID}" +Next, let's create a variable for the managed identity ID. ++```bash +export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RESOURCE_GROUP}" --name "${USER_ASSIGNED_IDENTITY_NAME}" --query 'clientId' -otsv)" ``` ## Create Kubernetes service account az identity create --name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${R Create a Kubernetes service account and annotate it with the client ID of the managed identity created in the previous step. Use the [az aks get-credentials][az-aks-get-credentials] command and replace the values for the cluster name and the resource group name. ```azurecli-az aks get-credentials -n myAKSCluster -g myResourceGroup +az aks get-credentials -n myAKSCluster -g "${RESOURCE_GROUP}" ``` -Copy and paste the following multi-line input in the Azure CLI, and update the values for `SERVICE_ACCOUNT_NAME` and `SERVICE_ACCOUNT_NAMESPACE` with the Kubernetes service account name and its namespace. +Copy and paste the following multi-line input in the Azure CLI. ```bash-export SERVICE_ACCOUNT_NAME="workload-identity-sa" -export SERVICE_ACCOUNT_NAMESPACE="my-namespace" -export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RESOURCE_GROUP}" --name "${UAID}" --query 'clientId' -otsv)" - cat <<EOF | kubectl apply -f - apiVersion: v1 kind: ServiceAccount Serviceaccount/workload-identity-sa created Use the [az identity federated-credential create][az-identity-federated-credential-create] command to create the federated identity credential between the managed identity, the service account issuer, and the subject. 
```azurecli-az identity federated-credential create --name myfederatedIdentity --identity-name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RG_NAME}" --issuer "${AKS_OIDC_ISSUER}" --subject system:serviceaccount:"${SERVICE_ACCOUNT_NAMESPACE}":"${SERVICE_ACCOUNT_NAME}" --audience api://AzureADTokenExchange +az identity federated-credential create --name ${FEDERATED_IDENTITY_CREDENTIAL_NAME} --identity-name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RESOURCE_GROUP}" --issuer "${AKS_OIDC_ISSUER}" --subject system:serviceaccount:"${SERVICE_ACCOUNT_NAMESPACE}":"${SERVICE_ACCOUNT_NAME}" --audience api://AzureADTokenExchange ``` > [!NOTE] You can retrieve this information using the Azure CLI command: [az keyvault list 1. Set an access policy for the managed identity to access secrets in your Key Vault by running the following commands: ```azurecli- export RG_NAME="myResourceGroup" + export RESOURCE_GROUP="myResourceGroup" export USER_ASSIGNED_IDENTITY_NAME="myIdentity" export KEYVAULT_NAME="myKeyVault"- export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RG_NAME}" --name "${USER_ASSIGNED_IDENTITY_NAME}" --query 'clientId' -otsv)" + export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RESOURCE_GROUP}" --name "${USER_ASSIGNED_IDENTITY_NAME}" --query 'clientId' -otsv)" az keyvault set-policy --name "${KEYVAULT_NAME}" --secret-permissions get --spn "${USER_ASSIGNED_CLIENT_ID}" ``` |
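After setting the access policy above, a quick way to confirm the grant is to list the vault's access policies and check that the managed identity appears with `get` permission on secrets. A minimal verification sketch reusing the variables from the article:

```azurecli-interactive
# Confirm the Key Vault access policies include the managed identity with 'get' on secrets
az keyvault show --name "${KEYVAULT_NAME}" --query "properties.accessPolicies[].{objectId:objectId, secretPermissions:permissions.secrets}"
```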
aks | Workload Identity Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-overview.md | Title: Use an Azure AD workload identities (preview) on Azure Kubernetes Service (AKS) description: Learn about Azure Active Directory workload identity (preview) for Azure Kubernetes Service (AKS) and how to migrate your application to authenticate using this identity. Previously updated : 04/18/2023 Last updated : 04/19/2023 Workloads deployed on an Azure Kubernetes Services (AKS) cluster require Azure A [Azure AD workload identity][azure-ad-workload-identity] uses [Service Account Token Volume Projection][service-account-token-volume-projection] enabling pods to use a Kubernetes identity (that is, a service account). A Kubernetes token is issued and [OIDC federation][oidc-federation] enables Kubernetes applications to access Azure resources securely with Azure AD based on annotated service accounts. -Azure AD workload identity works especially well with the Azure Identity client library using the [Azure SDK][azure-sdk-download] and the [Microsoft Authentication Library][microsoft-authentication-library] (MSAL) if you're using [application registration][azure-ad-application-registration]. Your workload can use any of these libraries to seamlessly authenticate and access Azure cloud resources. +Azure AD workload identity works especially well with the [Azure Identity client libraries](#azure-identity-client-libraries) and the [Microsoft Authentication Library][microsoft-authentication-library] (MSAL) collection if you're using [application registration][azure-ad-application-registration]. Your workload can use any of these libraries to seamlessly authenticate and access Azure cloud resources. This article helps you understand this new authentication feature, and reviews the options available to plan your project strategy and potential migration from Azure AD pod-managed identity. This article helps you understand this new authentication feature, and reviews t - The Azure CLI version 2.47.0 or later. Run `az --version` to find the version, and run `az upgrade` to upgrade the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. -## Azure Identity SDK +## Azure Identity client libraries -The following client libraries are the **minimum** version required +In the Azure Identity client libraries, choose one of the following approaches: ++- Use `DefaultAzureCredential`, which will attempt to use the `WorkloadIdentityCredential`. +- Create a `ChainedTokenCredential` instance that includes `WorkloadIdentityCredential`. +- Use `WorkloadIdentityCredential` directly. ++The following table provides the **minimum** package version required for each language's client library. 
-| Language | Library | Minimum Version | Example | -|--|--|-|-| -| Go | [azure-sdk-for-go](https://github.com/Azure/azure-sdk-for-go) | [sdk/azidentity/v1.3.0-beta.1](https://github.com/Azure/azure-sdk-for-go/releases/tag/sdk/azidentity/v1.3.0-beta.1)| [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/go) | -| C# | [azure-sdk-for-net](https://github.com/Azure/azure-sdk-for-net) | [Azure.Identity_1.5.0](https://github.com/Azure/azure-sdk-for-net/releases/tag/Azure.Identity_1.5.0)| [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/dotnet) | -| JavaScript/TypeScript | [azure-sdk-for-js](https://github.com/Azure/azure-sdk-for-js) | [@azure/identity_2.0.0](https://github.com/Azure/azure-sdk-for-js/releases/tag/@azure/identity_2.0.0) | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/node) | -| Python | [azure-sdk-for-python](https://github.com/Azure/azure-sdk-for-python) | [azure-identity_1.7.0](https://github.com/Azure/azure-sdk-for-python/releases/tag/azure-identity_1.7.0) | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/python) | -| Java | [azure-sdk-for-java]() | [azure-identity_1.4.0](https://github.com/Azure/azure-sdk-for-java/releases/tag/azure-identity_1.4.0) | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/java) | +| Language | Library | Minimum Version | Example | +||-|--|| +| .NET | [Azure.Identity](https://learn.microsoft.com/dotnet/api/overview/azure/identity-readme) | 1.9.0-beta.2 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/dotnet) | +| Go | [azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity) | 1.3.0-beta.1 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/go) | +| Java | [azure-identity](https://learn.microsoft.com/java/api/overview/azure/identity-readme) | 1.9.0-beta.1 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/java) | +| JavaScript | [@azure/identity](https://learn.microsoft.com/javascript/api/overview/azure/identity-readme) | 3.2.0-beta.1 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/node) | +| Python | [azure-identity](https://learn.microsoft.com/python/api/overview/azure/identity-readme) | 1.13.0b2 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/python) | ## Microsoft Authentication Library (MSAL) The following client libraries are the **minimum** version required | Language | Library | Image | Example | Has Windows | |--|--|-|-|-|+| .NET | [microsoft-authentication-library-for-dotnet](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | ghcr.io/azure/azure-workload-identity/msal-net | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-net/akvdotnet) | Yes | | Go | [microsoft-authentication-library-for-go](https://github.com/AzureAD/microsoft-authentication-library-for-go) | ghcr.io/azure/azure-workload-identity/msal-go | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-go) | Yes |-| C# | [microsoft-authentication-library-for-dotnet](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | ghcr.io/azure/azure-workload-identity/msal-net | 
[Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-net/akvdotnet) | Yes | -| JavaScript/TypeScript | [microsoft-authentication-library-for-js](https://github.com/AzureAD/microsoft-authentication-library-for-js) | ghcr.io/azure/azure-workload-identity/msal-node | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-node) | No | -| Python | [microsoft-authentication-library-for-python](https://github.com/AzureAD/microsoft-authentication-library-for-python) | ghcr.io/azure/azure-workload-identity/msal-python | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-python) | No | | Java | [microsoft-authentication-library-for-java](https://github.com/AzureAD/microsoft-authentication-library-for-java) | ghcr.io/azure/azure-workload-identity/msal-java | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-java) | No |+| JavaScript | [microsoft-authentication-library-for-js](https://github.com/AzureAD/microsoft-authentication-library-for-js) | ghcr.io/azure/azure-workload-identity/msal-node | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-node) | No | +| Python | [microsoft-authentication-library-for-python](https://github.com/AzureAD/microsoft-authentication-library-for-python) | ghcr.io/azure/azure-workload-identity/msal-python | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-python) | No | ## Limitations The following table summarizes our migration or deployment recommendations for w * See the tutorial [Use a workload identity with an application on Azure Kubernetes Service (AKS)][tutorial-use-workload-identity], which helps you deploy an Azure Kubernetes Service cluster and configure a sample application to use a workload identity. <!-- EXTERNAL LINKS -->-[azure-sdk-download]: https://azure.microsoft.com/downloads/ [custom-resource-definition]: https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/ [service-account-token-volume-projection]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#serviceaccount-token-volume-projection [oidc-federation]: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens |
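When a pod runs with an annotated service account and the `azure.workload.identity/use` label, the mutating webhook injects the environment variables that these client libraries read. A quick sanity check is to look for them inside a running pod; the pod name and namespace below are placeholders.

```bash
# Inspect the injected workload identity environment variables inside a running pod
kubectl exec --namespace default <pod-name> -- env | grep '^AZURE_'
# Expect AZURE_CLIENT_ID, AZURE_TENANT_ID, AZURE_FEDERATED_TOKEN_FILE, and AZURE_AUTHORITY_HOST
```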
api-management | Api Management Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-features.md | Each API Management [pricing tier](https://aka.ms/apimpricing) offers a distinct | Direct management API | No | Yes | Yes | Yes | Yes | | Azure Monitor logs and metrics | No | Yes | Yes | Yes | Yes | | Static IP | No | Yes | Yes | Yes | Yes |-| [WebSocket APIs](websocket-api.md) | No | Yes | Yes | Yes | Yes | -| [GraphQL APIs](graphql-api.md)<sup>5</sup> | Yes | Yes | Yes | Yes | Yes | -| [Synthetic GraphQL APIs (preview)](graphql-schema-resolve-api.md) | No | Yes | Yes | Yes | Yes | +| [Pass-through WebSocket APIs](websocket-api.md) | No | Yes | Yes | Yes | Yes | +| [Pass-through GraphQL APIs](graphql-apis-overview.md) | Yes | Yes | Yes | Yes | Yes | +| [Synthetic GraphQL APIs](graphql-apis-overview.md) | Yes | Yes | Yes | Yes | Yes | <sup>1</sup> Enables the use of Azure AD (and Azure AD B2C) as an identity provider for user sign in on the developer portal.<br/> <sup>2</sup> Including related functionality such as users, groups, issues, applications, and email templates and notifications.<br/> <sup>3</sup> See [Gateway overview](api-management-gateways-overview.md#feature-comparison-managed-versus-self-hosted-gateways) for a feature comparison of managed versus self-hosted gateways. In the Developer tier self-hosted gateways are limited to a single gateway node. <br/> <sup>4</sup> See [Gateway overview](api-management-gateways-overview.md#policies) for differences in policy support in the dedicated, consumption, and self-hosted gateways. <br/>-<sup>5</sup> GraphQL subscriptions aren't supported in the Consumption tier. |
api-management | Api Management Gateways Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md | The following table compares features available in the managed gateway versus th | [Function App](import-function-app-as-api.md) | ✔️ | ✔️ | ✔️ | | [Container App](import-container-app-with-oas.md) | ✔️ | ✔️ | ✔️ | | [Service Fabric](../service-fabric/service-fabric-api-management-overview.md) | Developer, Premium | ❌ | ❌ |-| [Passthrough GraphQL](graphql-api.md) | ✔️ | ✔️<sup>1</sup> | ❌ | -| [Synthetic GraphQL](graphql-schema-resolve-api.md) | ✔️ | ❌ | ❌ | -| [Passthrough WebSocket](websocket-api.md) | ✔️ | ❌ | ✔️ | --<sup>1</sup> GraphQL subscriptions aren't supported in the Consumption tier. +| [Pass-through GraphQL](graphql-apis-overview.md) | ✔️ | ✔️ | ❌ | +| [Synthetic GraphQL](graphql-apis-overview.md)| ✔️ | ✔️ | ❌ | +| [Pass-through WebSocket](websocket-api.md) | ✔️ | ❌ | ✔️ | ### Policies Managed and self-hosted gateways support all available [policies](api-management | Policy | Managed (Dedicated) | Managed (Consumption) | Self-hosted<sup>1</sup> | | | -- | -- | - | | [Dapr integration](api-management-policies.md#dapr-integration-policies) | ❌ | ❌ | ✔️ |+| [GraphQL resolvers](api-management-policies.md#graphql-resolver-policies) and [GraphQL validation](api-management-policies.md#validation-policies)| ✔️ | ✔️ | ❌ | | [Get authorization context](get-authorization-context-policy.md) | ✔️ | ✔️ | ❌ | | [Quota and rate limit](api-management-policies.md#access-restriction-policies) | ✔️ | ✔️<sup>2</sup> | ✔️<sup>3</sup>-| [Set GraphQL resolver](set-graphql-resolver-policy.md) | ✔️ | ❌ | ❌ | <sup>1</sup> Configured policies that aren't supported by the self-hosted gateway are skipped during policy execution.<br/> <sup>2</sup> The rate limit by key and quota by key policies aren't available in the Consumption tier.<br/> |
api-management | Api Management Howto Aad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-aad.md | description: Learn how to enable user sign-in to the API Management developer po Previously updated : 03/17/2023 Last updated : 04/18/2023 Now that you've enabled access for users in an Azure AD tenant, you can: * Add Azure AD groups into API Management. * Control product visibility using Azure AD groups. -Follow these steps to grant: -* `User.Read` **delegated** permission for Microsoft Graph API. -* `Directory.ReadAll` **application** permission for Microsoft Graph API. --1. Update the first 3 lines of the following Azure CLI script to match your environment and run it. -- ```azurecli - $subId = "Your Azure subscription ID" # Example: "1fb8fadf-03a3-4253-8993-65391f432d3a" - $tenantId = "Your Azure AD Tenant or Organization ID" # Example: 0e054eb4-e5d0-43b8-ba1e-d7b5156f6da8" - $appObjectID = "Application Object ID that has been registered in AAD" # Example: "2215b54a-df84-453f-b4db-ae079c0d2619" - #Login and Set the Subscription - az login - az account set --subscription $subId - #Assign the following permission: Microsoft Graph Delegated Permission: User.Read, Microsoft Graph Application Permission: Directory.ReadAll - az rest --method PATCH --uri "https://graph.microsoft.com/v1.0/$($tenantId)/applications/$($appObjectID)" --body "{'requiredResourceAccess':[{'resourceAccess': [{'id': 'e1fe6dd8-ba31-4d61-89e7-88639da4683d','type': 'Scope'},{'id': '7ab1d382-f21e-4acd-a863-ba3e13f7da61','type': 'Role'}],'resourceAppId': '00000003-0000-0000-c000-000000000000'}]}" - ``` --1. Sign out and sign back in to the Azure portal. 1. Navigate to the App Registration page for the application you registered in [the previous section](#enable-user-sign-in-using-azure-adportal). -1. Select **API Permissions**. You should see the permissions granted by the Azure CLI script in step 1. +1. Select **API Permissions**. +1. Add the following minimum **application** permissions for Microsoft Graph API: + * `User.Read.All` application permission – so API Management can read the user's group membership to perform group synchronization at the time the user logs in. + * `Group.Read.All` application permission – so API Management can read the Azure AD groups when an administrator tries to add the group to API Management using the **Groups** blade in the portal. 1. Select **Grant admin consent for {tenantname}** so that you grant access for all users in this directory. Now you can add external Azure AD groups from the **Groups** tab of your API Management instance. |
api-management | Api Management Howto Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-policies.md | When configuring a policy, you must first select the scope at which the policy a For more information, see [Set or edit policies](set-edit-policies.md#use-base-element-to-set-policy-evaluation-order). +### GraphQL resolver policies ++In API Management, a [GraphQL resolver](configure-graphql-resolver.md) is configured using policies scoped to a specific operation type and field in a [GraphQL schema](graphql-apis-overview.md#resolvers). ++* Currently, API Management supports GraphQL resolvers that specify HTTP data sources. Configure a single [`http-data-source`](http-data-source-policy.md) policy with elements to specify a request to (and optionally response from) an HTTP data source. +* You can't include a resolver policy in policy definitions at other scopes such as API, product, or all APIs. It also doesn't inherit policies configured at other scopes. +* The gateway evaluates a resolver-scoped policy *after* any configured `inbound` and `backend` policies in the policy execution pipeline. ++For more information, see [Configure a GraphQL resolver](configure-graphql-resolver.md). + ## Examples ### Apply policies specified at different scopes |
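To make the resolver-scoped policy described above concrete, here's a minimal sketch of an `http-data-source` resolver policy. The backend endpoint URL is a hypothetical placeholder rather than part of the original article; see the `http-data-source` policy reference for the full set of child elements.

```xml
<!-- Minimal resolver-scoped policy sketch; the endpoint is a hypothetical placeholder -->
<http-data-source>
    <http-request>
        <set-method>GET</set-method>
        <set-url>https://example.contoso.com/api/orders</set-url>
    </http-request>
</http-data-source>
```

Because each resolver resolves a single field, a policy like this one is configured separately for every field that needs data from an HTTP source.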
api-management | Api Management Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md | More information about policies: - [Send message to Pub/Sub topic](publish-to-dapr-policy.md): Uses Dapr runtime to publish a message to a Publish/Subscribe topic. To learn more about Publish/Subscribe messaging in Dapr, see the description in this [README](https://github.com/dapr/docs/blob/master/README.md) file. - [Trigger output binding](invoke-dapr-binding-policy.md): Uses Dapr runtime to invoke an external system via output binding. To learn more about bindings in Dapr, see the description in this [README](https://github.com/dapr/docs/blob/master/README.md) file. -## GraphQL API policies -- [Validate GraphQL request](validate-graphql-request-policy.md) - Validates and authorizes a request to a GraphQL API. -- [Set GraphQL resolver](set-graphql-resolver-policy.md) - Retrieves or sets data for a GraphQL field in an object type specified in a GraphQL schema.+## GraphQL resolver policies +- [HTTP data source for resolver](http-data-source-policy.md) - Configures the HTTP request and optionally the HTTP response to resolve data for an object type and field in a GraphQL schema. +- [Publish event to GraphQL subscription](publish-event-policy.md) - Publishes an event to one or more subscriptions specified in a GraphQL API schema. Used in the `http-response` element of the `http-data-source` policy ## Transformation policies - [Convert JSON to XML](json-to-xml-policy.md) - Converts request or response body from JSON to XML. More information about policies: ## Validation policies - [Validate content](validate-content-policy.md) - Validates the size or content of a request or response body against one or more API schemas. The supported schema formats are JSON and XML.+- [Validate GraphQL request](validate-graphql-request-policy.md) - Validates and authorizes a request to a GraphQL API. - [Validate parameters](validate-parameters-policy.md) - Validates the request header, query, or path parameters against the API schema. - [Validate headers](validate-headers-policy.md) - Validates the response headers against the API schema. - [Validate status code](validate-status-code-policy.md) - Validates the HTTP status codes in |
api-management | Api Management Policy Expressions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policy-expressions.md | The `context` variable is implicitly available in every policy [expression](api- |Context Variable|Allowed methods, properties, and parameter values| |-|-|-|`context`|[`Api`](#ref-context-api): [`IApi`](#ref-iapi)<br /><br /> [`Deployment`](#ref-context-deployment)<br /><br /> Elapsed: `TimeSpan` - time interval between the value of `Timestamp` and current time<br /><br /> [`LastError`](#ref-context-lasterror)<br /><br /> [`Operation`](#ref-context-operation)<br /><br /> [`Product`](#ref-context-product)<br /><br /> [`Request`](#ref-context-request)<br /><br /> `RequestId`: `Guid` - unique request identifier<br /><br /> [`Response`](#ref-context-response)<br /><br /> [`Subscription`](#ref-context-subscription)<br /><br /> `Timestamp`: `DateTime` - point in time when request was received<br /><br /> `Tracing`: `bool` - indicates if tracing is on or off <br /><br /> [User](#ref-context-user)<br /><br /> [`Variables`](#ref-context-variables): `IReadOnlyDictionary<string, object>`<br /><br /> `void Trace(message: string)`| +|`context`|[`Api`](#ref-context-api): [`IApi`](#ref-iapi)<br /><br /> [`Deployment`](#ref-context-deployment)<br /><br /> Elapsed: `TimeSpan` - time interval between the value of `Timestamp` and current time<br /><br /> [`GraphQL`](#ref-context-graphql)<br /><br />[`LastError`](#ref-context-lasterror)<br /><br /> [`Operation`](#ref-context-operation)<br /><br /> [`Request`](#ref-context-request)<br /><br /> `RequestId`: `Guid` - unique request identifier<br /><br /> [`Response`](#ref-context-response)<br /><br /> [`Subscription`](#ref-context-subscription)<br /><br /> `Timestamp`: `DateTime` - point in time when request was received<br /><br /> `Tracing`: `bool` - indicates if tracing is on or off <br /><br /> [User](#ref-context-user)<br /><br /> [`Variables`](#ref-context-variables): `IReadOnlyDictionary<string, object>`<br /><br /> `void Trace(message: string)`| |<a id="ref-context-api"></a>`context.Api`|`Id`: `string`<br /><br /> `IsCurrentRevision`: `bool`<br /><br /> `Name`: `string`<br /><br /> `Path`: `string`<br /><br /> `Revision`: `string`<br /><br /> `ServiceUrl`: [`IUrl`](#ref-iurl)<br /><br /> `Version`: `string` <br /><br /> `Workspace`: [`IWorkspace`](#ref-iworkspace) | |<a id="ref-context-deployment"></a>`context.Deployment`|[`Gateway`](#ref-context-gateway)<br /><br /> `GatewayId`: `string` (returns 'managed' for managed gateways)<br /><br /> `Region`: `string`<br /><br /> `ServiceId`: `string`<br /><br /> `ServiceName`: `string`<br /><br /> `Certificates`: `IReadOnlyDictionary<string, X509Certificate2>`| |<a id="ref-context-gateway"></a>`context.Deployment.Gateway`|`Id`: `string` (returns 'managed' for managed gateways)<br /><br /> `InstanceId`: `string` (returns 'managed' for managed gateways)<br /><br /> `IsManaged`: `bool`|+|<a id="ref-context-graphql"></a>`context.GraphQL`|`GraphQLArguments`: `IGraphQLDataObject`<br /><br /> `Parent`: `IGraphQLDataObject`<br/><br/>[Examples](configure-graphql-resolver.md#graphql-context)| |<a id="ref-context-lasterror"></a>`context.LastError`|`Source`: `string`<br /><br /> `Reason`: `string`<br /><br /> `Message`: `string`<br /><br /> `Scope`: `string`<br /><br /> `Section`: `string`<br /><br /> `Path`: `string`<br /><br /> `PolicyId`: `string`<br /><br /> For more information about `context.LastError`, see [Error 
handling](api-management-error-handling-policies.md).| |<a id="ref-context-operation"></a>`context.Operation`|`Id`: `string`<br /><br /> `Method`: `string`<br /><br /> `Name`: `string`<br /><br /> `UrlTemplate`: `string`| |<a id="ref-context-product"></a>`context.Product`|`Apis`: `IEnumerable<`[`IApi`](#ref-iapi)`>`<br /><br /> `ApprovalRequired`: `bool`<br /><br /> `Groups`: `IEnumerable<`[`IGroup`](#ref-igroup)`>`<br /><br /> `Id`: `string`<br /><br /> `Name`: `string`<br /><br /> `State`: `enum ProductState {NotPublished, Published}`<br /><br /> `SubscriptionLimit`: `int?`<br /><br /> `SubscriptionRequired`: `bool`<br /><br /> `Workspace`: [`IWorkspace`](#ref-iworkspace)| The `context` variable is implicitly available in every policy [expression](api- |<a id="ref-context-subscription"></a>`context.Subscription`|`CreatedDate`: `DateTime`<br /><br /> `EndDate`: `DateTime?`<br /><br /> `Id`: `string`<br /><br /> `Key`: `string`<br /><br /> `Name`: `string`<br /><br /> `PrimaryKey`: `string`<br /><br /> `SecondaryKey`: `string`<br /><br /> `StartDate`: `DateTime?`| |<a id="ref-context-user"></a>`context.User`|`Email`: `string`<br /><br /> `FirstName`: `string`<br /><br /> `Groups`: `IEnumerable<`[`IGroup`](#ref-igroup)`>`<br /><br /> `Id`: `string`<br /><br /> `Identities`: `IEnumerable<`[`IUserIdentity`](#ref-iuseridentity)`>`<br /><br /> `LastName`: `string`<br /><br /> `Note`: `string`<br /><br /> `RegistrationDate`: `DateTime`| |<a id="ref-iapi"></a>`IApi`|`Id`: `string`<br /><br /> `Name`: `string`<br /><br /> `Path`: `string`<br /><br /> `Protocols`: `IEnumerable<string>`<br /><br /> `ServiceUrl`: [`IUrl`](#ref-iurl)<br /><br /> `SubscriptionKeyParameterNames`: [`ISubscriptionKeyParameterNames`](#ref-isubscriptionkeyparameternames)|+|<a id="ref-igraphqldataobject"></a>`IGraphQLDataObject`|TBD<br /><br />| |<a id="ref-igroup"></a>`IGroup`|`Id`: `string`<br /><br /> `Name`: `string`| |<a id="ref-imessagebody"></a>`IMessageBody`|`As<T>(bool preserveContent = false): Where T: string, byte[], JObject, JToken, JArray, XNode, XElement, XDocument` <br /><br /> - The `context.Request.Body.As<T>` and `context.Response.Body.As<T>` methods read a request or response message body in specified type `T`. <br/><br/> - Or - <br/><br/>`AsFormUrlEncodedContent(bool preserveContent = false)` <br/></br>- The `context.Request.Body.AsFormUrlEncodedContent()` and `context.Response.Body.AsFormUrlEncodedContent()` methods read URL-encoded form data in a request or response message body and return an `IDictionary<string, IList<string>` object. The decoded object supports `IDictionary` operations and the following expressions: `ToQueryString()`, `JsonConvert.SerializeObject()`, `ToFormUrlEncodedContent().` <br/><br/> By default, the `As<T>` and `AsFormUrlEncodedContent()` methods:<br /><ul><li>Use the original message body stream.</li><li>Render it unavailable after it returns.</li></ul> <br />To avoid that and have the method operate on a copy of the body stream, set the `preserveContent` parameter to `true`, as shown in examples for the [set-body](set-body-policy.md#examples) policy.| |<a id="ref-iprivateendpointconnection"></a>`IPrivateEndpointConnection`|`Name`: `string`<br /><br /> `GroupId`: `string`<br /><br /> `MemberName`: `string`<br /><br />For more information, see the [REST API](/rest/api/apimanagement/current-ga/private-endpoint-connection/list-private-link-resources).| |
api-management | Stv1 Platform Retirement August 2024 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/stv1-platform-retirement-august-2024.md | documentationcenter: '' Previously updated : 08/26/2022 Last updated : 01/10/2023 After 31 August 2024, any instance hosted on the `stv1` platform won't be suppor **Migrate all your existing instances hosted on the `stv1` compute platform to the `stv2` compute platform by 31 August 2024.** -If you have existing instances hosted on the `stv1` platform, you can follow our [migration guide](../compute-infrastructure.md#how-do-i-migrate-to-the-stv2-platform) which provides all the details to ensure a successful migration. +If you have existing instances hosted on the `stv1` platform, you can follow our [migration guide](../migrate-stv1-to-stv2.md) which provides all the details to ensure a successful migration. ## Help and support |
api-management | Compute Infrastructure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/compute-infrastructure.md | Title: Azure API Management compute platform -description: Learn about the compute platform used to host your API Management service instance +description: Learn about the compute platform used to host your API Management service instance. Instances in the dedicated service tiers of API Management are hosted on the stv1 or stv2 compute platform. Previously updated : 03/16/2022 Last updated : 04/17/2023 As a cloud platform-as-a-service (PaaS), Azure API Management abstracts many det To enhance service capabilities, we're upgrading the API Management compute platform version - the Azure compute resources that host the service - for instances in several [service tiers](api-management-features.md). This article gives you context about the upgrade and the major versions of API Management's compute platform: `stv1` and `stv2`. -We've minimized impacts of this upgrade on your operation of your API Management instance. Upgrades are managed by the platform, and new instances created in service tiers other than the Consumption tier are mostly hosted on the `stv2` platform. However, for existing instances hosted on the `stv1` platform, you have options to trigger migration to the `stv2` platform. +Most new instances created in service tiers other than the Consumption tier are hosted on the `stv2` platform. However, for existing instances hosted on the `stv1` platform, you have options to migrate to the `stv2` platform. ## What are the compute platforms for API Management? The following table summarizes the compute platforms currently used for instance | Version | Description | Architecture | Tiers | | -| -| -- | - |-| `stv2` | Single-tenant v2 | Azure-allocated compute infrastructure that supports availability zones, private endpoints | Developer, Basic, Standard, Premium<sup>1</sup> | +| `stv2` | Single-tenant v2 | Azure-allocated compute infrastructure that supports added resiliency and security features. See [What are the benefits of the `stv2` platform?](#what-are-the-benefits-of-the-stv2-platform) in this article. | Developer, Basic, Standard, Premium<sup>1</sup> | | `stv1` | Single-tenant v1 | Azure-allocated compute infrastructure | Developer, Basic, Standard, Premium | | `mtv1` | Multi-tenant v1 | Shared infrastructure that supports native autoscaling and scaling down to zero in times of no traffic | Consumption | -<sup>1</sup> Newly created instances in these tiers, created using the Azure portal or specifying API version 2021-01-01-preview or later. Includes some existing instances in Developer and Premium tiers configured with virtual networks or availability zones. +<sup>1</sup> Newly created instances in these tiers and some existing instances in Developer and Premium tiers configured with virtual networks or availability zones. > [!NOTE] > Currently, the `stv2` platform isn't available in the US Government cloud or in the following Azure regions: China East, China East 2, China North, China North 2. ## How do I know which platform hosts my API Management instance? -Starting with API version `2021-04-01-preview`, the API Management instance exposes a read-only `platformVersion` property that shows this platform information. +Starting with API version `2021-04-01-preview`, the API Management instance exposes a read-only `platformVersion` property with this platform information. 
-You can find this information using the portal or the API Management [REST API](/rest/api/apimanagement/current-ga/api-management-service/get). +You can find the platform version of your instance using the portal, the API Management [REST API](/rest/api/apimanagement/current-ga/api-management-service/get), or other Azure tools. -To find the `platformVersion` property in the portal: +To find the platform version in the portal: -1. Go to your API Management instance. -1. On the **Overview** page, select **JSON view**. -1. In **API version**, select a current version such as `2021-08-01` or later. -1. In the JSON view, scroll down to find the `platformVersion` property. +1. Sign in to the [portal](https://portal.azure.com) and go to your API Management instance. +1. On the **Overview** page, under **Essentials**, the **Platform Version** is displayed. - :::image type="content" source="media/compute-infrastructure/platformversion-property.png" alt-text="platformVersion property in JSON view"::: + :::image type="content" source="media/compute-infrastructure/platformversion-property.png" alt-text="Screenshot of the API Management platform version in the portal."::: -## How do I migrate to the `stv2` platform? --The following table summarizes migration options for instances in the different API Management service tiers that are currently hosted on the `stv1` platform. See the linked documentation for detailed steps. --> [!NOTE] -> Check the [`platformVersion` property](#how-do-i-know-which-platform-hosts-my-api-management-instance) before starting migration, and after your configuration change. --|Tier |Migration options | -||| -|Premium | 1. Enable [zone redundancy](../reliability/migrate-api-mgt.md)<br/> -or-<br/> 2. Create new [external](api-management-using-with-vnet.md) or [internal](api-management-using-with-internal-vnet.md) VNet connection<sup>1</sup><br/> -or-<br/> 3. Update existing [VNet configuration](#update-vnet-configuration) | -|Developer | 1. Create new [external](api-management-using-with-vnet.md) or [internal](api-management-using-with-internal-vnet.md) VNet connection<sup>1</sup><br/>-or-<br/> 2. Update existing [VNet configuration](#update-vnet-configuration) | -| Standard | 1. [Change your service tier](upgrade-and-scale.md#change-your-api-management-service-tier) (downgrade to Developer or upgrade to Premium). Follow migration options in new tier.<br/>-or-<br/>2. Deploy new instance in existing tier and migrate configurations<sup>2</sup> | -| Basic | 1. [Change your service tier](upgrade-and-scale.md#change-your-api-management-service-tier) (downgrade to Developer or upgrade to Premium). Follow migration options in new tier<br/>-or-<br/>2. Deploy new instance in existing tier and migrate configurations<sup>2</sup> | -| Consumption | Not applicable | --<sup>1</sup> Use Azure portal or specify API version 2021-01-01-preview or later. - -<sup>2</sup> Migrate configurations with the following mechanisms: [Backup and restore](api-management-howto-disaster-recovery-backup-restore.md), [Migration script for the developer portal](automate-portal-deployments.md), [APIOps with Azure API Management](/azure/architecture/example-scenario/devops/automated-api-deployments-apiops). --## Update VNet configuration +## What are the benefits of the `stv2` platform? -If you have an existing Developer or Premium tier instance that's connected to a virtual network and hosted on the `stv1` platform, trigger migration to the `stv2` platform by updating the VNet configuration. 
+The `stv2` platform infrastructure supports several resiliency and security features of API Management that aren't available on the `stv1` platform, including: -### Prerequisites +* [Availability zones](zone-redundancy.md) +* [Private endpoints](private-endpoint.md) +* [Protection with Azure DDoS](protect-with-ddos-protection.md) -* A new or existing virtual network and subnet in the same region and subscription as your API Management instance. The subnet must be different from the one currently used for the instance hosted on the `stv1` platform, and a network security group must be attached. -* A new or existing Standard SKU [public IPv4 address](../virtual-network/ip-services/public-ip-addresses.md#sku) resource in the same region and subscription as your API Management instance. --To update the existing external or internal VNet configuration using the portal: --1. Navigate to your API Management instance. -1. In the left menu, select **Network** > **Virtual network**. -1. Select the network connection in the location you want to update. -1. Select the virtual network, subnet, and IP address resources you want to configure, and select **Apply**. -1. Continue configuring VNet settings for the remaining locations of your API Management instance. -1. In the top navigation bar, select **Save**, then select **Apply network configuration**. --The virtual network configuration is updated, and the instance is migrated to the `stv2` platform. Confirm migration by checking the [`platformVersion` property](#how-do-i-know-which-platform-hosts-my-api-management-instance). +## How do I migrate to the `stv2` platform? -> [!NOTE] -> * Updating the VNet configuration takes from 15 to 45 minutes to complete. -> * The VIP address(es) of your API Management instance will change. +> [!IMPORTANT] +> Support for API Management instances hosted on the `stv1` platform will be [retired by 31 August 2024](breaking-changes/stv1-platform-retirement-august-2024.md). To ensure proper operation of your API Management instance, you should migrate any instance hosted on the `stv1` platform to `stv2` before that date. +Migration steps depend on features enabled in your API Management instance. If the instance isn't injected in a VNet, you can use a migration API. For instances that are VNet-injected, follow manual steps. For details, see the [migration guide](migrate-stv1-to-stv2.md). ## Next steps -* Learn more about using a [virtual network](virtual-network-concepts.md) with API Management. -* Learn more about enabling [availability zones](../reliability/migrate-api-mgt.md). -+* [Migrate an API Management instance to the stv2 platform](migrate-stv1-to-stv2.md). +* Learn more about [upcoming breaking changes](breaking-changes/overview.md) in API Management. |
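As a command-line alternative to the portal steps above, the following Azure CLI sketch reads the `platformVersion` property through the REST API. It assumes the Azure CLI is installed and signed in to the right subscription; the instance and resource group names are placeholders.

```azurecli
# Placeholders - replace with your API Management instance and resource group names
APIM_NAME={name of your API Management instance}
RG_NAME={name of your resource group}

# Get the resource ID of the API Management instance
APIM_RESOURCE_ID=$(az apim show --name $APIM_NAME --resource-group $RG_NAME --query id --output tsv)

# Read the platformVersion property (expected values: stv1, stv2, or mtv1)
az rest --method get --uri "$APIM_RESOURCE_ID?api-version=2022-08-01" --query properties.platformVersion --output tsv
```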
api-management | Configure Custom Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/configure-custom-domain.md | There are several API Management endpoints to which you can assign a custom doma | **SCM** | Default is: `<apim-service-name>.scm.azure-api.net` | ### Considerations+ * You can update any of the endpoints supported in your service tier. Typically, customers update **Gateway** (this URL is used to call the APIs exposed through API Management) and **Developer portal** (the developer portal URL). * The default **Gateway** endpoint also is available after you configure a custom Gateway domain name. For other API Management endpoints (such as **Developer portal**) that you configure with a custom domain name, the default endpoint is no longer available. * Only API Management instance owners can use **Management** and **SCM** endpoints internally. These endpoints are less frequently assigned a custom domain name. * The **Premium** and **Developer** tiers support setting multiple hostnames for the **Gateway** endpoint.-* Wildcard domain names, like `*.contoso.com`, are supported in all tiers except the Consumption tier. +* Wildcard domain names, like `*.contoso.com`, are supported in all tiers except the Consumption tier. A specific subdomain certificate (for example, api.contoso.com) would take precedence over a wildcard certificate (*.contoso.com) for requests to api.contoso.com. ## Domain certificate options API Management offers a free, managed TLS certificate for your domain, if you do * Currently available only in the Azure cloud * Does not support root domain names (for example, `contoso.com`). Requires a fully qualified name such as `api.contoso.com`. * Can only be configured when updating an existing API Management instance, not when creating an instance- + ## Set a custom domain name - portal Choose the steps according to the [domain certificate](#domain-certificate-options) you want to use. # [Custom](#tab/custom)+ 1. Navigate to your API Management instance in the [Azure portal](https://portal.azure.com/). 1. In the left navigation, select **Custom domains**. 1. Select **+Add**, or select an existing [endpoint](#endpoints-for-custom-domains) that you want to update. Choose the steps according to the [domain certificate](#domain-certificate-optio :::image type="content" source="media/configure-custom-domain/gateway-domain-free-certifcate.png" alt-text="Configure gateway domain with free certificate"::: 1. Select **Add**, or select **Update** for an existing endpoint. 1. Select **Save**.-- + > [!NOTE] > The process of assigning the certificate may take 15 minutes or more depending on size of deployment. Developer tier has downtime, while Basic and higher tiers do not. You can also get a domain ownership identifier by calling the [Get Domain Owners ## Next steps [Upgrade and scale your service](upgrade-and-scale.md)+ |
api-management | Configure Graphql Resolver | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/configure-graphql-resolver.md | + + Title: Configure GraphQL resolver in Azure API Management +description: Configure a GraphQL resolver in Azure API Management for a field in an object type specified in a GraphQL schema. +++++ Last updated : 02/22/2023++++# Configure a GraphQL resolver ++Configure a resolver to retrieve or set data for a GraphQL field in an object type specified in a GraphQL schema. The schema must be imported to API Management. Currently, API Management supports resolvers that use HTTP-based data sources (REST or SOAP APIs). ++* A resolver is a resource containing a policy definition that's invoked only when a matching object type and field is executed. +* Each resolver resolves data for a single field. To resolve data for multiple fields, configure a separate resolver for each. +* Resolver-scoped policies are evaluated *after* any `inbound` and `backend` policies in the policy execution pipeline. They don't inherit policies from other scopes. For more information, see [Policies in API Management](api-management-howto-policies.md). +++> [!IMPORTANT] +> * If you use the preview `set-graphql-resolver` policy in policy definitions, you should migrate to the managed resolvers described in this article. +> * After you configure a managed resolver for a GraphQL field, the gateway will skip the `set-graphql-resolver` policy in any policy definitions. You can't combine use of managed resolvers and the `set-graphql-resolver` policy in your API Management instance. ++## Prerequisites ++- An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md). +- Import a [pass-through](graphql-api.md) or [synthetic](graphql-schema-resolve-api.md) GraphQL API. ++## Create a resolver ++1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance. ++1. In the left menu, select **APIs** and then the name of your GraphQL API. +1. On the **Design** tab, review the schema for a field in an object type where you want to configure a resolver. + 1. Select a field, and then in the left margin, hover the pointer. + 1. Select **+ Add Resolver**. ++ :::image type="content" source="media/configure-graphql-resolver/add-resolver.png" alt-text="Screenshot of adding a resolver from a field in GraphQL schema in the portal."::: +1. On the **Create Resolver** page, update the **Name** property if you want to, optionally enter a **Description**, and confirm or update the **Type** and **Field** selections. +1. In the **Resolver policy** editor, update the [`http-data-source`](http-data-source-policy.md) policy with child elements for your scenario. + 1. Update the required `http-request` element with policies to transform the GraphQL operation to an HTTP request. + 1. Optionally add an `http-response` element, and add child policies to transform the HTTP response of the resolver. If the `http-response` element isn't specified, the response is returned as a raw string. + 1. Select **Create**. + + :::image type="content" source="media/configure-graphql-resolver/configure-resolver-policy.png" alt-text="Screenshot of resolver policy editor in the portal." lightbox="media/configure-graphql-resolver/configure-resolver-policy.png"::: ++1. The resolver is attached to the field. Go to the **Resolvers** tab to list and manage the resolvers configured for the API. 
++ :::image type="content" source="media/configure-graphql-resolver/list-resolvers.png" alt-text="Screenshot of the resolvers list for GraphQL API in the portal." lightbox="media/configure-graphql-resolver/list-resolvers.png"::: ++ > [!TIP] + > The **Linked** column indicates whether or not the resolver is configured for a field that's currently in the GraphQL schema. If a resolver isn't linked, it can't be invoked. ++++## GraphQL context ++* The context for the HTTP request and HTTP response (if specified) differs from the context for the original gateway API request: + * `context.GraphQL` properties are set to the arguments (`Arguments`) and parent object (`Parent`) for the current resolver execution. + * The HTTP request context contains arguments that are passed in the GraphQL query as its body. + * The HTTP response context is the response from the independent HTTP call made by the resolver, not the context for the complete response for the gateway request. +The `context` variable that is passed through the request and response pipeline is augmented with the GraphQL context when used with a GraphQL resolver. ++### context.GraphQL.Parent ++The `context.GraphQL.Parent` property is set to the parent object for the current resolver execution. Consider the following partial schema: ++``` graphql +type Comment { + id: ID! + owner: String! + content: String! +} ++type Blog { + id: ID! + title: String! + content: String! + comments: [Comment]! + comment(id: ID!): Comment +} ++type Query { + getBlogs: [Blog]! + getBlog(id: ID!): Blog +} +``` ++Also, consider a GraphQL query for all the information for a specific blog: ++``` graphql +query { + getBlog(id: 1) { + title + content + comments { + id + owner + content + } + } +} +``` ++If you set a resolver for the `comments` field in the `Blog` type, you'll want to understand which blog ID to use. You can get the ID of the blog using `context.GraphQL.Parent["id"]` as shown in the following resolver: ++``` xml +<http-data-source> + <http-request> + <set-method>GET</set-method> + <set-url>@($"https://data.contoso.com/api/blog/{context.GraphQL.Parent["id"]}")</set-url> + </http-request> +</http-data-source> +``` ++### context.GraphQL.Arguments ++The arguments for a parameterized GraphQL query are added to `context.GraphQL.Arguments`. For example, consider the following two queries: ++``` graphql +query($id: Int) { + getComment(id: $id) { + content + } +} ++query { + getComment(id: 2) { + content + } +} +``` ++These queries are two ways of calling the `getComment` resolver. GraphQL sends the following JSON payload: ++``` json +{ + "query": "query($id: Int) { getComment(id: $id) { content } }", + "variables": { "id": 2 } +} ++{ + "query": "query { getComment(id: 2) { content } }" +} +``` ++You can define the resolver as follows: ++``` xml +<http-data-source> + <http-request> + <set-method>GET</set-method> + <set-url>@($"https://data.contoso.com/api/comment/{context.GraphQL.Arguments["id"]}")</set-url> + </http-request> +</http-data-source> +``` ++## Next steps ++For more resolver examples, see: +++* [GraphQL resolver policies](api-management-policies.md#graphql-resolver-policies) ++* [Sample APIs for Azure API Management](https://github.com/Azure-Samples/api-management-sample-apis) |
api-management | Graphql Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-api.md | Title: Import a GraphQL API to Azure API Management | Microsoft Docs + Title: Add a GraphQL API to Azure API Management | Microsoft Docs description: Learn how to add an existing GraphQL service as an API in Azure API Management using the Azure portal, Azure CLI, or Azure PowerShell. Manage the API and enable queries to pass through to the GraphQL endpoint. Previously updated : 10/27/2022 Last updated : 04/10/2023 -> * Learn more about the benefits of using GraphQL APIs. -> * Add a GraphQL API to your API Management instance. +> * Add a pass-through GraphQL API to your API Management instance. > * Test your GraphQL API.-> * Learn the limitations of your GraphQL API in API Management. If you want to import a GraphQL schema and set up field resolvers using REST or SOAP API endpoints, see [Import a GraphQL schema and set up field resolvers](graphql-schema-resolve-api.md). If you want to import a GraphQL schema and set up field resolvers using REST or 1. In the dialog box, select **Full** and complete the required form fields. - :::image type="content" source="media/graphql-api/create-from-graphql-schema.png" alt-text="Screenshot of fields for creating a GraphQL API."::: + :::image type="content" source="media/graphql-api/create-from-graphql-endpoint.png" alt-text="Screenshot of fields for creating a GraphQL API."::: | Field | Description | |-|-| | **Display name** | The name by which your GraphQL API will be displayed. | | **Name** | Raw name of the GraphQL API. Automatically populates as you type the display name. |- | **GraphQL API endpoint** | The base URL with your GraphQL API endpoint name. <br /> For example: *`https://example.com/your-GraphQL-name`*. You can also use a common "Star Wars" GraphQL endpoint such as `https://swapi-graphql.azure-api.net/graphql` as a demo. | + | **GraphQL type** | Select **Pass-through GraphQL** to import from an existing GraphQL API endpoint. | + | **GraphQL API endpoint** | The base URL with your GraphQL API endpoint name. <br /> For example: *`https://example.com/your-GraphQL-name`*. You can also use a common "swapi" GraphQL endpoint such as `https://swapi-graphql.azure-api.net/graphql` as a demo. | | **Upload schema** | Optionally select to browse and upload your schema file to replace the schema retrieved from the GraphQL endpoint (if available). | | **Description** | Add a description of your API. |- | **URL scheme** | Select **HTTP**, **HTTPS**, or **Both**. Default selection: *Both*. | + | **URL scheme** | Make a selection based on your GraphQL endpoint. Select one of the options that includes a WebSocket scheme (**WS** or **WSS**) if your GraphQL API includes the subscription type. Default selection: *HTTP(S)*. | | **API URL suffix**| Add a URL suffix to identify this specific API in this API Management instance. It has to be unique in this API Management instance. | | **Base URL** | Uneditable field displaying your API base URL | | **Tags** | Associate your GraphQL API with new or existing tags. | | **Products** | Associate your GraphQL API with a product to publish it. |- | **Gateways** | Associate your GraphQL API with existing gateways. Default gateway selection: *Managed*. | | **Version this API?** | Select to apply a versioning scheme to your GraphQL API. | 1. Select **Create**.-1. After the API is created, browse the schema on the **Design** tab, in the **Frontend** section. +1. 
After the API is created, browse or modify the schema on the **Design** tab. :::image type="content" source="media/graphql-api/explore-schema.png" alt-text="Screenshot of exploring the GraphQL schema in the portal."::: #### [Azure CLI](#tab/cli) After importing the API, if needed, you can update the settings by using the [Se [!INCLUDE [api-management-graphql-test.md](../../includes/api-management-graphql-test.md)] +### Test a subscription +If your GraphQL API supports a subscription, you can test it in the test console. ++1. Ensure that your API allows a WebSocket URL scheme (**WS** or **WSS**) that's appropriate for your API. You can enable this setting on the **Settings** tab. +1. Set up a subscription query in the query editor, and then select **Connect** to establish a WebSocket connection to the backend service. ++ :::image type="content" source="media/graphql-api/test-graphql-subscription.png" alt-text="Screenshot of a subscription query in the query editor."::: +1. Review connection details in the **Subscription** pane. ++ :::image type="content" source="media/graphql-api/graphql-websocket-connection.png" alt-text="Screenshot of Websocket connection in the portal."::: + +1. Subscribed events appear in the **Subscription** pane. The WebSocket connection is maintained until you disconnect it or you connect to a new WebSocket subscription. ++ :::image type="content" source="media/graphql-api/graphql-subscription-event.png" alt-text="Screenshot of GraphQL subscription events in the portal."::: ++## Secure your GraphQL API ++Secure your GraphQL API by applying both existing [access control policies](api-management-policies.md#access-restriction-policies) and a [GraphQL validation policy](validate-graphql-request-policy.md) to protect against GraphQL-specific attacks. + [!INCLUDE [api-management-define-api-topics.md](../../includes/api-management-define-api-topics.md)] ## Next steps |
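As an illustration of the subscription test described above, a query document like the following sketch could be pasted into the test console before selecting **Connect**. The `onCommentAdded` subscription field is hypothetical; use a subscription field that your backend GraphQL schema actually defines.

```graphql
# Hypothetical subscription field - replace with one defined in your schema
subscription {
  onCommentAdded(blogId: 1) {
    id
    owner
    content
  }
}
```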
api-management | Graphql Apis Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-apis-overview.md | + + Title: Support for GraphQL APIs - Azure API Management +description: Learn about GraphQL and how Azure API Management helps you manage GraphQL APIs. +++++ Last updated : 02/26/2023++++# Overview of GraphQL APIs in Azure API Management ++You can use API Management to manage GraphQL APIs - APIs based on the GraphQL query language. GraphQL provides a complete and understandable description of the data in an API, giving clients the power to efficiently retrieve exactly the data they need. [Learn more about GraphQL](https://graphql.org/learn/) ++API Management helps you import, manage, protect, test, publish, and monitor GraphQL APIs. You can choose one of two API models: +++|Pass-through GraphQL |Synthetic GraphQL | +||| +| ▪️ Pass-through API to existing GraphQL service endpoint<br><br/>▪️ Support for GraphQL queries, mutations, and subscriptions | ▪️ API based on a custom GraphQL schema<br></br>▪️ Support for GraphQL queries, mutations, and subscriptions<br/><br/>▪️ Configure custom resolvers, for example, to HTTP data sources<br/><br/>▪️ Develop GraphQL schemas and GraphQL-based clients while consuming data from legacy APIs | ++## Availability ++* GraphQL APIs are supported in all API Management service tiers +* Pass-through and synthetic GraphQL APIs currently aren't supported in a self-hosted gateway +* GraphQL subscription support in synthetic GraphQL APIs is currently in preview ++## What is GraphQL? ++GraphQL is an open-source, industry-standard query language for APIs. Unlike REST-style APIs designed around actions over resources, GraphQL APIs support a broader set of use cases and focus on data types, schemas, and queries. ++The GraphQL specification explicitly solves common issues experienced by client web apps that rely on REST APIs: ++* It can take a large number of requests to fulfill the data needs for a single page +* REST APIs often return more data than needed the page being rendered +* The client app needs to poll to get new information ++Using a GraphQL API, the client app can specify the data they need to render a page in a query document that is sent as a single request to a GraphQL service. A client app can also subscribe to data updates pushed from the GraphQL service in real time. ++## Schema and operation types ++In API Management, add a GraphQL API from a GraphQL schema, either retrieved from a backend GraphQL API endpoint or uploaded by you. A GraphQL schema describes: ++* Data object types and fields that clients can request from a GraphQL API +* Operation types allowed on the data, such as queries ++For example, a basic GraphQL schema for user data and a query for all users might look like: ++``` +type Query { + users: [User] +} ++type User { + id: String! + name: String! +} +``` ++API Management supports the following operation types in GraphQL schemas. For more information about these operation types, see the [GraphQL specification](https://spec.graphql.org/October2021/#sec-Subscription-Operation-Definitions). ++* **Query** - Fetches data, similar to a `GET` operation in REST +* **Mutation** - Modifies server-side data, similar to a `PUT` or `PATCH` operation in REST +* **Subscription** - Enables notifying subscribed clients in real time about changes to data on the GraphQL service ++ For example, when data is modified via a GraphQL mutation, subscribed clients could be automatically notified about the change. 
++> [!IMPORTANT] +> API Management supports subscriptions implemented using the [graphql-ws](https://github.com/enisdenjo/graphql-ws) WebSocket protocol. Queries and mutations aren't supported over WebSocket. +> ++## Resolvers ++*Resolvers* take care of mapping the GraphQL schema to backend data, producing the data for each field in an object type. The data source could be an API, a database, or another service. For example, a resolver function would be responsible for returning data for the `users` query in the preceding example. ++In API Management, you can create a *custom resolver* to map a field in an object type to a backend data source. You configure resolvers for fields in synthetic GraphQL API schemas, but you can also configure them to override the default field resolvers used by pass-through GraphQL APIs. ++API Management currently supports HTTP-based resolvers to return the data for fields in a GraphQL schema. To use an HTTP-based resolver, configure a [`http-data-source`](http-data-source-policy.md) policy that transforms the API request (and optionally the response) into an HTTP request/response. ++For example, a resolver for the preceding `users` query might map to a `GET` operation in a backend REST API: ++```xml +<http-data-source> + <http-request> + <set-method>GET</set-method> + <set-url>https://myapi.contoso.com/api/users</set-url> + </http-request> +</http-data-source> +``` ++For more information, see [Configure a GraphQL resolver](configure-graphql-resolver.md). ++## Manage GraphQL APIs ++* Secure GraphQL APIs by applying both existing access control policies and a [GraphQL validation policy](validate-graphql-request-policy.md) to secure and protect against GraphQL-specific attacks. +* Explore the GraphQL schema and run test queries against the GraphQL APIs in the Azure and developer portals. +++## Next steps ++- [Import a GraphQL API](graphql-api.md) +- [Import a GraphQL schema and set up field resolvers](graphql-schema-resolve-api.md) |
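To illustrate the query flow against the sample `users` schema above, a client could send a single request such as the following sketch, and the GraphQL service returns only the requested fields under a standard `data` key. The values in the example response are invented.

```graphql
query {
  users {
    id
    name
  }
}
```

```json
{
  "data": {
    "users": [
      { "id": "1", "name": "Sample User" }
    ]
  }
}
```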
api-management | Graphql Schema Resolve Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-schema-resolve-api.md | Title: Import GraphQL schema and set up field resolvers | Microsoft Docs + Title: Add a synthetic GraphQL API to Azure API Management | Microsoft Docs -description: Import a GraphQL schema to API Management and configure a policy to resolve a GraphQL query using an HTTP-based data source. +description: Add a synthetic GraphQL API by importing a GraphQL schema to API Management and configuring field resolvers that use HTTP-based data sources. Previously updated : 05/17/2022 Last updated : 02/21/2023 -# Import a GraphQL schema and set up field resolvers +# Add a synthetic GraphQL API and set up field resolvers [!INCLUDE [api-management-graphql-intro.md](../../includes/api-management-graphql-intro.md)] - In this article, you'll: > [!div class="checklist"] > * Import a GraphQL schema to your API Management instance-> * Set up a resolver for a GraphQL query using an existing HTTP endpoints +> * Set up a resolver for a GraphQL query using an existing HTTP endpoint > * Test your GraphQL API If you want to expose an existing GraphQL endpoint as an API, see [Import a GraphQL API](graphql-api.md). If you want to expose an existing GraphQL endpoint as an API, see [Import a Grap ## Add a GraphQL schema 1. From the side navigation menu, under the **APIs** section, select **APIs**.-1. Under **Define a new API**, select the **Synthetic GraphQL** icon. +1. Under **Define a new API**, select the **GraphQL** icon. - :::image type="content" source="media/graphql-schema-resolve-api/import-graphql-api.png" alt-text="Screenshot of selecting Synthetic GraphQL icon from list of APIs."::: + :::image type="content" source="media/graphql-api/import-graphql-api.png" alt-text="Screenshot of selecting GraphQL icon from list of APIs."::: 1. In the dialog box, select **Full** and complete the required form fields. :::image type="content" source="media/graphql-schema-resolve-api/create-from-graphql-schema.png" alt-text="Screenshot of fields for creating a GraphQL API."::: - | Field | Description | + | Field | Description | |-|-| | **Display name** | The name by which your GraphQL API will be displayed. | | **Name** | Raw name of the GraphQL API. Automatically populates as you type the display name. |- | **Fallback GraphQL endpoint** | For this scenario, optionally enter a URL with a GraphQL API endpoint name. API Management passes GraphQL queries to this endpoint when a custom resolver isn't set for a field. | - | **Upload schema file** | Select to browse and upload a valid GraphQL schema file with the `.graphql` extension. | - | Description | Add a description of your API. | - | URL scheme | Select **HTTP**, **HTTPS**, or **Both**. Default selection: *Both*. | + | **GraphQL type** | Select **Synthetic GraphQL** to import from a GraphQL schema file. | + | **Fallback GraphQL endpoint** | Optionally enter a URL with a GraphQL API endpoint name. API Management passes GraphQL queries to this endpoint when a custom resolver isn't set for a field. | + | **Description** | Add a description of your API. | + | **URL scheme** | Make a selection based on your GraphQL endpoint. Select one of the options that includes a WebSocket scheme (**WS** or **WSS**) if your GraphQL API includes the subscription type. Default selection: *HTTP(S)*. | | **API URL suffix**| Add a URL suffix to identify this specific API in this API Management instance. 
It has to be unique in this API Management instance. | | **Base URL** | Uneditable field displaying your API base URL | | **Tags** | Associate your GraphQL API with new or existing tags. | | **Products** | Associate your GraphQL API with a product to publish it. |- | **Gateways** | Associate your GraphQL API with existing gateways. Default gateway selection: *Managed*. | | **Version this API?** | Select to apply a versioning scheme to your GraphQL API. |+ 1. Select **Create**. -1. After the API is created, browse the schema on the **Design** tab, in the **Frontend** section. +1. After the API is created, browse or modify the schema on the **Design** tab. ## Configure resolver -Configure the [set-graphql-resolver](set-graphql-resolver-policy.md) policy to map a field in the schema to an existing HTTP endpoint. +Configure a resolver to map a field in the schema to an existing HTTP endpoint. ++<!-- Add link to resolver how-to article for details --> Suppose you imported the following basic GraphQL schema and wanted to set up a resolver for the *users* query. type User { ``` 1. From the side navigation menu, under the **APIs** section, select **APIs** > your GraphQL API.-1. On the **Design** tab of your GraphQL API, select **All operations**. -1. In the **Backend** processing section, select **+ Add policy**. -1. Configure the `set-graphql-resolver` policy to resolve the *users* query using an HTTP data source. +1. On the **Design** tab, review the schema for a field in an object type where you want to configure a resolver. + 1. Select a field, and then in the left margin, hover the pointer. + 1. Select **+ Add Resolver** ++ :::image type="content" source="media/graphql-schema-resolve-api/add-resolver.png" alt-text="Screenshot of adding a GraphQL resolver in the portal."::: ++1. On the **Create Resolver** page, update the **Name** property if you want to, optionally enter a **Description**, and confirm or update the **Type** and **Field** selections. - For example, the following `set-graphql-resolver` policy retrieves the *users* field by using a `GET` call on an existing HTTP data source. +1. In the **Resolver policy** editor, update the `<http-data-source>` element with child elements for your scenario. For example, the following resolver retrieves the *users* field by using a `GET` call on an existing HTTP data source. + ```xml- <set-graphql-resolver parent-type="Query" field="users"> <http-data-source> <http-request> <set-method>GET</set-method> <set-url>https://myapi.contoso.com/users</set-url> </http-request> </http-data-source>- </set-graphql-resolver> ```-1. To resolve data for other fields in the schema, repeat the preceding step. -1. Select **Save**. ++ :::image type="content" source="media/graphql-schema-resolve-api/configure-resolver-policy.png" alt-text="Screenshot of configuring resolver policy in the portal."::: +1. Select **Create**. +1. To resolve data for another field in the schema, repeat the preceding steps to create a resolver. [!INCLUDE [api-management-graphql-test.md](../../includes/api-management-graphql-test.md)] +## Secure your GraphQL API ++Secure your GraphQL API by applying both existing [access control policies](api-management-policies.md#access-restriction-policies) and a [GraphQL validation policy](validate-graphql-request-policy.md) to protect against GraphQL-specific attacks. ++ [!INCLUDE [api-management-define-api-topics.md](../../includes/api-management-define-api-topics.md)] ## Next steps |
api-management | Http Data Source Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/http-data-source-policy.md | + + Title: Azure API Management policy reference - http-data-source | Microsoft Docs +description: Reference for the http-data-source resolver policy available for use in Azure API Management. Provides policy usage, settings, and examples. +++++ Last updated : 03/07/2023++++# HTTP data source for a resolver ++The `http-data-source` resolver policy configures the HTTP request and optionally the HTTP response to resolve data for an object type and field in a GraphQL schema. The schema must be imported to API Management. +++## Policy statement ++```xml +<http-data-source> + <http-request> + <get-authorization-context>...get-authorization-context policy configuration...</get-authorization-context> + <set-backend-service>...set-backend-service policy configuration...</set-backend-service> + <set-method>...set-method policy configuration...</set-method> + <set-url>URL</set-url> + <include-fragment>...include-fragment policy configuration...</include-fragment> + <set-header>...set-header policy configuration...</set-header> + <set-body>...set-body policy configuration...</set-body> + <authentication-certificate>...authentication-certificate policy configuration...</authentication-certificate> + </http-request> + <backend> + <forward-request>...forward-request policy configuration...</forward-request> + </backend> + <http-response> + <set-body>...set-body policy configuration...</set-body> + <xml-to-json>...xml-to-json policy configuration...</xml-to-json> + <find-and-replace>...find-and-replace policy configuration...</find-and-replace> + <publish-event>...publish-event policy configuration...</publish-event> + <include-fragment>...include-fragment policy configuration...</include-fragment> + </http-response> +</http-data-source> +``` ++## Elements ++|Name|Description|Required| +|-|--|--| +| http-request | Specifies a URL and child policies to configure the resolver's HTTP request. | Yes | +| backend | Optionally forwards the resolver's HTTP request to a backend service, if specified. | No | +| http-response | Optionally specifies child policies to configure the resolver's HTTP response. If not specified, the response is returned as a raw string. | No | ++### http-request elements ++> [!NOTE] +> Except where noted, each child element may be specified at most once. Specify elements in the order listed. +++|Element|Description|Required| +|-|--|--| +| [get-authorization-context](get-authorization-context-policy.md) | Gets an authorization context for the resolver's HTTP request. | No | +| [set-backend-service](set-backend-service-policy.md) | Redirects the resolver's HTTP request to the specified backend. | No | +| [include-fragment](include-fragment-policy.md) | Inserts a policy fragment in the policy definition. If there are multiple fragments, then add additional `include-fragment` elements. | No | +| [set-method](set-method-policy.md) | Sets the method of the resolver's HTTP request. | Yes | +| set-url | Sets the URL of the resolver's HTTP request. | Yes | +| [set-header](set-header-policy.md) | Sets a header in the resolver's HTTP request. If there are multiple headers, then add additional `header` elements. | No | +| [set-body](set-body-policy.md) | Sets the body in the resolver's HTTP request. | No | +| [authentication-certificate](authentication-certificate-policy.md) | Authenticates using a client certificate in the resolver's HTTP request. 
| No | ++### backend element ++| Element|Description|Required| +|-|--|--| +| [forward-request](forward-request-policy.md) | Forwards the resolver's HTTP request to a configured backend service. | No | ++### http-response elements ++> [!NOTE] +> Except where noted, each child element may be specified at most once. Specify elements in the order listed. ++|Name|Description|Required| +|-|--|--| +| [set-body](set-body-policy.md) | Sets the body in the resolver's HTTP response. | No | +| [xml-to-json](xml-to-json-policy.md) | Transforms the resolver's HTTP response from XML to JSON. | No | +| [find-and-replace](find-and-replace-policy.md) | Finds a substring in the resolver's HTTP response and replaces it with a different substring. | No | +| [publish-event](publish-event-policy.md) | Publishes an event to one or more subscriptions specified in the GraphQL API schema. | No | +| [include-fragment](include-fragment-policy.md) | Inserts a policy fragment in the policy definition. If there are multiple fragments, then add additional `include-fragment` elements. | No | ++## Usage ++- [**Policy scopes:**](./api-management-howto-policies.md#scopes) GraphQL resolver +- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption ++### Usage notes ++* This policy is invoked only when resolving a single field in a matching GraphQL query, mutation, or subscription. ++## Examples ++### Resolver for GraphQL query ++The following example resolves a query by making an HTTP `GET` call to a backend data source. ++#### Example schema ++``` +type Query { + users: [User] +} ++type User { + id: String! + name: String! +} +``` ++#### Example policy ++```xml +<http-data-source> + <http-request> + <set-method>GET</set-method> + <set-url>https://data.contoso.com/get/users</set-url> + </http-request> +</http-data-source> +``` ++### Resolver for a GraqhQL query that returns a list, using a liquid template ++The following example uses a liquid template, supported for use in the [set-body](set-body-policy.md) policy, to return a list in the HTTP response to a query. It also renames the `username` field in the response from the REST API to `name` in the GraphQL response. ++#### Example schema ++``` +type Query { + users: [User] +} ++type User { + id: String! + name: String! +} +``` ++#### Example policy ++```xml +<http-data-source> + <http-request> + <set-method>GET</set-method> + <set-url>https://data.contoso.com/users</set-url> + </http-request> + <http-response> + <set-body template="liquid"> + [ + {% JSONArrayFor elem in body %} + { + "name": "{{elem.username}}" + } + {% endJSONArrayFor %} + ] + </set-body> + </http-response> +</http-data-source> +``` ++### Resolver for GraphQL mutation ++The following example resolves a mutation that inserts data by making a `POST` request to an HTTP data source. The policy expression in the `set-body` policy of the HTTP request modifies a `name` argument that is passed in the GraphQL query as its body. The body that is sent will look like the following JSON: ++``` json +{ + "name": "the-provided-name" +} +``` ++#### Example schema ++``` +type Query { + users: [User] +} ++type Mutation { + makeUser(name: String!): User +} ++type User { + id: String! + name: String! 
+} +``` ++#### Example policy ++```xml +<http-data-source> + <http-request> + <set-method>POST</set-method> + <set-url> https://data.contoso.com/user/create </set-url> + <set-header name="Content-Type" exists-action="override"> + <value>application/json</value> + </set-header> + <set-body>@{ + var args = context.Request.Body.As<JObject>(true)["arguments"]; + JObject jsonObject = new JObject(); + jsonObject.Add("name", args["name"]) + return jsonObject.ToString(); + }</set-body> + </http-request> +</http-data-source> +``` ++## Related policies ++* [GraphQL resolver policies](api-management-policies.md#graphql-resolver-policies) + |
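To exercise a resolver such as the mutation example above, you can post a GraphQL request to the API Management endpoint that exposes the GraphQL API. The following is a minimal sketch, not taken from the article: the gateway host name, API path, and subscription key are placeholders, and it assumes the API requires the standard `Ocp-Apim-Subscription-Key` header.

```bash
# Sketch: invoke the makeUser mutation through the API Management GraphQL endpoint.
# <apim-name>, <graphql-api-path>, and <subscription-key> are placeholders.
curl -X POST "https://<apim-name>.azure-api.net/<graphql-api-path>" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: <subscription-key>" \
  -d '{"query": "mutation { makeUser(name: \"Contoso User\") { id name } }"}'
```

A successful call returns the `id` and `name` fields resolved by the `http-data-source` policy configured for the mutation.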
api-management | Migrate Stv1 To Stv2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/migrate-stv1-to-stv2.md | + + Title: Migrate Azure API Management instance to stv2 platform | Microsoft Docs +description: Follow the steps in this article to migrate your Azure API Management instance from the stv1 compute platform to the stv2 compute platform. Migration steps depend on whether the instance is deployed (injected) in a VNet. ++++ Last updated : 04/17/2023+++++# Migrate an API Management instance hosted on the stv1 platform to stv2 ++You can migrate an API Management instance hosted on the `stv1` compute platform to the `stv2` platform. This article provides migration steps for two scenarios, depending on whether or not your API Management instance is currently deployed (injected) in an [external](api-management-using-with-vnet.md) or [internal](api-management-using-with-internal-vnet.md) VNet. ++* **Non-VNet-injected API Management instance** - Use the [Migrate to stv2](/rest/api/apimanagement/current-ga/api-management-service/migratetostv2) REST API ++* **VNet-injected API Management instance** - Manually update the VNet configuration settings ++For more information about the `stv1` and `stv2` platforms and the benefits of using the `stv2` platform, see [Compute platform for API Management](compute-infrastructure.md). ++> [!IMPORTANT] +> * Migration is a long-running operation. Your instance will experience downtime during the last 10-15 minutes of migration. Plan your migration accordingly. +> * The VIP address(es) of your API Management will change. +> * Migration to `stv2` is not reversible. ++> [!IMPORTANT] +> Support for API Management instances hosted on the `stv1` platform will be [retired by 31 August 2024](breaking-changes/stv1-platform-retirement-august-2024.md). To ensure proper operation of your API Management instance, you should migrate any instance hosted on the `stv1` platform to `stv2` before that date. +++## Prerequisites ++* An API Management instance hosted on the `stv1` compute platform. To confirm that your instance is hosted on the `stv1` platform, see [How do I know which platform hosts my API Management instance?](compute-infrastructure.md#how-do-i-know-which-platform-hosts-my-api-management-instance). +++## Scenario 1: Migrate API Management instance, not injected in a VNet ++For an API Management instance that's not deployed in a VNet, invoke the Migrate to `stv2` REST API. For example, run the following Azure CLI commands, setting variables where indicated with the name of your API Management instance and the name of the resource group in which it was created. ++> [!NOTE] +> The Migrate to `stv2` REST API is available starting in API Management REST API version `2022-04-01-preview`. 
+++```azurecli +# Verify currently selected subscription +az account show ++# View other available subscriptions +az account list --output table ++# Set correct subscription, if needed +az account set --subscription {your subscription ID} ++# Update these variables with the name and resource group of your API Management instance +APIM_NAME={name of your API Management instance} +RG_NAME={name of your resource group} ++# Get resource ID of API Management instance +APIM_RESOURCE_ID=$(az apim show --name $APIM_NAME --resource-group $RG_NAME --query id --output tsv) ++# Call REST API to migrate to stv2 +az rest --method post --uri "$APIM_RESOURCE_ID/migrateToStv2?api-version=2022-08-01" +``` ++## Scenario 2: Migrate a network-injected API Management instance ++Trigger migration of a network-injected API Management instance to the `stv2` platform by updating the existing network configuration (see the following section). You can also cause migrate to the `stv2` platform by enabling [zone redundancy](../reliability/migrate-api-mgt.md). ++### Update VNet configuration ++Update the configuration of the VNet in each location (region) where the API Management instance is deployed. ++#### Prerequisites ++* A new subnet in the current virtual network. (Alternatively, set up a subnet in a different virtual network in the same region and subscription as your API Management instance). A network security group must be attached to the subnet. ++* A Standard SKU [public IPv4 address](../virtual-network/ip-services/public-ip-addresses.md#sku) resource in the same region and subscription as your API Management instance. ++For details, see [Prerequisites for network connections](api-management-using-with-vnet.md#prerequisites). ++#### Update VNet configuration ++To update the existing external or internal VNet configuration: ++1. In the [portal](https://portal.azure.com), navigate to your API Management instance. +1. In the left menu, select **Network** > **Virtual network**. +1. Select the network connection in the location you want to update. +1. Select the virtual network, subnet, and IP address resources you want to configure, and select **Apply**. +1. Continue configuring VNet settings for the remaining locations of your API Management instance. +1. In the top navigation bar, select **Save**, then select **Apply network configuration**. ++The virtual network configuration is updated, and the instance is migrated to the `stv2` platform. ++## Verify migration ++To verify that the migration was successful, check the [platform version](compute-infrastructure.md#how-do-i-know-which-platform-hosts-my-api-management-instance) of your API Management instance. After successful migration, the value is `stv2`. ++## Next steps ++* Learn about [stv1 platform retirement](breaking-changes/stv1-platform-retirement-august-2024.md). +* For instances deployed in a VNet, see the [Virtual network configuration reference](virtual-network-reference.md). |
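If you prefer to confirm the platform version from the command line rather than the portal, one option is to query the API Management resource directly. This is a sketch that assumes the `APIM_RESOURCE_ID` variable from the earlier script is still set; after a successful migration the expected value is `stv2`.

```azurecli
# Sketch: read the platformVersion property of the API Management instance.
# Assumes APIM_RESOURCE_ID was set as shown in the migration script above.
az rest --method get \
  --uri "$APIM_RESOURCE_ID?api-version=2022-08-01" \
  --query "properties.platformVersion" --output tsv
```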
api-management | Publish Event Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/publish-event-policy.md | + + Title: Azure API Management policy reference - publish-event | Microsoft Docs +description: Reference for the publish-event policy available for use in Azure API Management. Provides policy usage, settings, and examples. +++++ Last updated : 02/23/2023++++# Publish event to GraphQL subscription ++The `publish-event` policy publishes an event to one or more subscriptions specified in a GraphQL API schema. Configure the policy using an [http-data-source](http-data-source-policy.md) GraphQL resolver for a related field in the schema for another operation type such as a mutation. At runtime, the event is published to connected GraphQL clients. Learn more about [GraphQL APIs in API Management](graphql-apis-overview.md). +++<!--Link to resolver configuration article --> ++## Policy statement ++```xml +<http-data-source + <http-request> + [...] + </http-request> + <http-response> + [...] + <publish-event> + <targets> + <graphql-subscription id="subscription field" /> + </targets> + </publish-event> + </http-response> +</http-data-source> +``` ++## Elements ++|Name|Description|Required| +|-|--|--| +| targets | One or more subscriptions in the GraphQL schema, specified in `target` subelements, to which the event is published. | Yes | +++## Usage ++- [**Policy sections:**](./api-management-howto-policies.md#sections) `http-response` element in `http-data-source` resolver +- [**Policy scopes:**](./api-management-howto-policies.md#scopes) GraphQL resolver only +- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption ++### Usage notes ++* This policy is invoked only when a related GraphQL query or mutation is executed. ++## Example ++The following example policy definition is configured in a resolver for the `createUser` mutation. It publishes an event to the `onUserCreated` subscription. ++### Example schema ++``` +type User { + id: Int! + name: String! +} +++type Mutation { + createUser(id: Int!, name: String!): User +} ++type Subscription { + onUserCreated: User! +} +``` ++### Example policy ++```xml +<http-data-source> + <http-request> + <set-method>POST</set-method> + <set-url>https://contoso.com/api/user</set-url> + <set-body template="liquid">{ "id" : {{body.arguments.id}}, "name" : "{{body.arguments.name}}"}</set-body> + </http-request> + <http-response> + <publish-event> + <targets> + <graphql-subscription id="onUserCreated" /> + </targets> + </publish-event> + </http-response> +</http-data-source> +``` ++## Related policies ++* [GraphQL resolver policies](api-management-policies.md#graphql-resolver-policies) + |
api-management | Set Graphql Resolver Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-graphql-resolver-policy.md | Title: Azure API Management policy reference - set-graphql-resolver | Microsoft Docs -description: Reference for the set-graphql-resolver policy available for use in Azure API Management. Provides policy usage, settings, and examples. +description: Reference for the set-graphql-resolver policy in Azure API Management. Provides policy usage, settings, and examples. This policy is retired. - Previously updated : 12/07/2022+ Last updated : 03/07/2023 -# Set GraphQL resolver +# Set GraphQL resolver (retired) -The `set-graphql-resolver` policy retrieves or sets data for a GraphQL field in an object type specified in a GraphQL schema. The schema must be imported to API Management. Currently the data must be resolved using an HTTP-based data source (REST or SOAP API). +> [!IMPORTANT] +> * The `set-graphql-resolver` policy is retired. Customers using the `set-graphql-resolver` policy must migrate to the [managed resolvers](configure-graphql-resolver.md) for GraphQL APIs, which provide enhanced functionality. +> * After you configure a managed resolver for a GraphQL field, the gateway skips the `set-graphql-resolver` policy in any policy definitions. You can't combine use of managed resolvers and the `set-graphql-resolver` policy in your API Management instance. +The `set-graphql-resolver` policy retrieves or sets data for a GraphQL field in an object type specified in a GraphQL schema. The schema must be imported to API Management. Currently the data must be resolved using an HTTP-based data source (REST or SOAP API). ## Policy statement The `set-graphql-resolver` policy retrieves or sets data for a GraphQL field in <authentication-certificate>...authentication-certificate policy configuration...</authentication-certificate> </http-request> <http-response>- <json-to-xml>...json-to-xml policy configuration...</json-to-xml> + <set-body>...set-body policy configuration...</set-body> <xml-to-json>...xml-to-json policy configuration...</xml-to-json> <find-and-replace>...find-and-replace policy configuration...</find-and-replace> </http-response> The `set-graphql-resolver` policy retrieves or sets data for a GraphQL field in |Name|Description|Required| |-|--|--| | http-data-source | Configures the HTTP request and optionally the HTTP response that are used to resolve data for the given `parent-type` and `field`. | Yes |-| http-request | Specifies a URL and child policies to configure the resolver's HTTP request. Each child element can be specified at most once. | Yes | -| set-method| Method of the resolver's HTTP request, configured using the [set-method](set-method-policy.md) policy. | Yes | -| set-url | URL of the resolver's HTTP request. | Yes | -| set-header | Header set in the resolver's HTTP request, configured using the [set-header](set-header-policy.md) policy. | No | -| set-body | Body set in the resolver's HTTP request, configured using the [set-body](set-body-policy.md) policy. | No | -| authentication-certificate | Client certificate presented in the resolver's HTTP request, configured using the [authentication-certificate](authentication-certificate-policy.md) policy. | No | -| http-response | Optionally specifies child policies to configure the resolver's HTTP response. If not specified, the response is returned as a raw string. Each child element can be specified at most once. 
| -| json-to-xml | Transforms the resolver's HTTP response using the [json-to-xml](json-to-xml-policy.md) policy. | No | -| xml-to-json | Transforms the resolver's HTTP response using the [xml-to-json](xml-to-json-policy.md) policy. | No | -| find-and-replace | Transforms the resolver's HTTP response using the [find-and-replace](find-and-replace-policy.md) policy. | No | +| http-request | Specifies a URL and child policies to configure the resolver's HTTP request. | Yes | +| http-response | Optionally specifies child policies to configure the resolver's HTTP response. If not specified, the response is returned as a raw string. | ++### http-request elements ++> [!NOTE] +> Except where noted, each child element may be specified at most once. Specify elements in the order listed. ++|Element|Description|Required| +|-|--|--| +| [set-method](set-method-policy.md) | Sets the method of the resolver's HTTP request. | Yes | +| set-url | Sets the URL of the resolver's HTTP request. | Yes | +| [set-header](set-header-policy.md) | Sets a header in the resolver's HTTP request. If there are multiple headers, then add additional `header` elements. | No | +| [set-body](set-body-policy.md) | Sets the body in the resolver's HTTP request. | No | +| [authentication-certificate](authentication-certificate-policy.md) | Authenticates using a client certificate in the resolver's HTTP request. | No | ++### http-response elements ++> [!NOTE] +> Each child element may be specified at most once. Specify elements in the order listed. ++|Name|Description|Required| +|-|--|--| +| [set-body](set-body-policy.md) | Sets the body in the resolver's HTTP response. | No | +| [xml-to-json](xml-to-json-policy.md) | Transforms the resolver's HTTP response from XML to JSON. | No | +| [find-and-replace](find-and-replace-policy.md) | Finds a substring in the resolver's HTTP response and replaces it with a different substring. | No | ## Usage The `set-graphql-resolver` policy retrieves or sets data for a GraphQL field in * This policy is invoked only when a matching GraphQL query is executed. * The policy resolves data for a single field. To resolve data for multiple fields, configure multiple occurrences of this policy in a policy definition. - ## GraphQL context * The context for the HTTP request and HTTP response (if specified) differs from the context for the original gateway API request: |
app-service | Quickstart Dotnetcore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-dotnetcore.md | If you've already installed Visual Studio 2022: ### [.NET 6.0](#tab/net60) - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).-- A GitHub account [Create an account for free](http://github.com/).+- A GitHub account [Create an account for free](https://github.com/). ### [.NET Framework 4.8](#tab/netframework48) - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).-- A GitHub account [Create an account for free](http://github.com/).+- A GitHub account [Create an account for free](https://github.com/). :::zone-end |
application-gateway | Application Gateway Private Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-private-deployment.md | AGIC v1.7 must be used to introduce support for private frontend IP only. If Application Gateway has a backend target or key vault reference to a private endpoint located in a VNet that is accessible via global VNet peering, traffic is dropped, resulting in an unhealthy status. +### Network watcher integration ++Connection Troubleshoot and NSG Diagnostics will return an error when running check and diagnostic tests. + ### Coexisting v2 Application Gateways created prior to enablement of enhanced network control If a subnet shares Application Gateway v2 deployments that were created both prior to and after enablement of the enhanced network control functionality, Network Security Group (NSG) and Route Table functionality is limited to the prior gateway deployment. Application gateways provisioned prior to enablement of the new functionality must either be reprovisioned, or newly created gateways must use a different subnet to enable enhanced network security group and route table features. |
application-gateway | Migrate V1 V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/migrate-v1-v2.md | An Azure PowerShell script is available that does the following: * [Virtual network service endpoint policies](../virtual-network/virtual-network-service-endpoint-policies-overview.md) are currently not supported in an Application Gateway subnet. * To migrate a TLS/SSL configuration, you must specify all the TLS/SSL certs used in your v1 gateway. * If you have FIPS mode enabled for your V1 gateway, it won't be migrated to your new v2 gateway. FIPS mode isn't supported in v2.-* v2 doesn't support IPv6, so IPv6 enabled v1 gateways aren't migrated. If you run the script, it may not complete. -* If the v1 gateway has only a private IP address, the script creates a public IP address and a private IP address for the new v2 gateway. v2 gateways currently don't support only private IP addresses. +* In case of Private IP only V1 gateway, the script will generate a private and public IP address for the new V2 gateway. The Private IP only V2 gateway is currently in public preview. Once it becomes generally available, customers can utilize the script to transfer their private IP only V1 gateway to a private IP only V2 gateway. * Headers with names containing anything other than letters, digits, and hyphens are not passed to your application. This only applies to header names, not header values. This is a breaking change from v1. * NTLM and Kerberos authentication is not supported by Application Gateway v2. The script is unable to detect if the gateway is serving this type of traffic and may pose as a breaking change from v1 to v2 gateways if run. Here are a few scenarios where your current application gateway (Standard) may r Update your clients to use the IP address(es) associated with the newly created v2 application gateway. We recommend that you don't use IP addresses directly. Consider using the DNS name label (for example, yourgateway.eastus.cloudapp.azure.com) associated with your application gateway that you can CNAME to your own custom DNS zone (for example, contoso.com). +## ApplicationGateway V2 pricing ++The pricing models are different for the Application Gateway v1 and v2 SKUs. Please review the pricing at [Application Gateway pricing](https://azure.microsoft.com/pricing/details/application-gateway/) page before migrating from V1 to V2. + ## Common questions ### Are there any limitations with the Azure PowerShell script to migrate the configuration from v1 to v2? |
application-gateway | Overview V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview-v2.md | This section describes features and limitations of the v2 SKU that differ from t |Performance logs in Azure diagnostics|Not supported.<br>Azure metrics should be used.| |FIPS mode|Currently not supported.| |Private frontend configuration only mode|Currently in public preview [Learn more](application-gateway-private-deployment.md).|-|Azure Network Watcher integration|Not supported.| |Microsoft Defender for Cloud integration|Not yet available. ## Migrate from v1 to v2 |
applied-ai-services | Create A Form Recognizer Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/create-a-form-recognizer-resource.md | recommendations: false [!INCLUDE [applies to v3.0 and v2.1](includes/applies-to-v3-0-and-v2-1.md)] -Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) that uses machine-learning models to extract key-value pairs, text, and tables from your documents. Here, you'll learn how to create a Form Recognizer resource in the Azure portal. +Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) that uses machine-learning models to extract key-value pairs, text, and tables from your documents. In this article, learn how to create a Form Recognizer resource in the Azure portal. ## Visit the Azure portal Let's get started: 1. Next, you're going to fill out the **Create Form Recognizer** fields with the following values: * **Subscription**. Select your current subscription.- * **Resource group**. The [Azure resource group](/azure/cloud-adoption-framework/govern/resource-consistency/resource-access-management#what-is-an-azure-resource-group) that will contain your resource. You can create a new group or add it to a pre-existing group. + * **Resource group**. The [Azure resource group](/azure/cloud-adoption-framework/govern/resource-consistency/resource-access-management#what-is-an-azure-resource-group) that contains your resource. You can create a new group or add it to a pre-existing group. * **Region**. Select your local region. * **Name**. Enter a name for your resource. We recommend using a descriptive name, for example *YourNameFormRecognizer*. * **Pricing tier**. The cost of your resource depends on the pricing tier you choose and your usage. For more information, see [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production. Let's get started: 1. Once you receive the *deployment is complete* message, select the **Go to resource** button. -1. Copy the key and endpoint values from your Form Recognizer resource paste them in a convenient location, such as *Microsoft Notepad*. You'll need the key and endpoint values to connect your application to the Form Recognizer API. +1. Copy the key and endpoint values from your Form Recognizer resource paste them in a convenient location, such as *Microsoft Notepad*. You need the key and endpoint values to connect your application to the Form Recognizer API. 1. If your overview page doesn't have the keys and endpoint visible, you can select the **Keys and Endpoint** button, on the left navigation bar, and retrieve them there. |
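If you prefer the command line to the portal, the endpoint and key values can also be retrieved with the Azure CLI. The following sketch uses placeholder resource and resource group names.

```azurecli
# Sketch: retrieve the endpoint and keys for a Form Recognizer resource.
# <your-resource-name> and <your-resource-group> are placeholders.
az cognitiveservices account show \
  --name <your-resource-name> --resource-group <your-resource-group> \
  --query "properties.endpoint" --output tsv

az cognitiveservices account keys list \
  --name <your-resource-name> --resource-group <your-resource-group>
```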
applied-ai-services | Project Share Custom Classifier | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/project-share-custom-classifier.md | + + Title: "Share custom model projects using Form Recognizer Studio" ++description: Learn how to share custom model projects using Form Recognizer Studio. +++++ Last updated : 04/17/2023++monikerRange: 'form-recog-3.0.0' +recommendations: false +++# Share custom model projects using Form Recognizer Studio ++Form Recognizer Studio is an online tool to visually explore, understand, train, and integrate features from the Form Recognizer service into your applications. Form Recognizer Studio enables project sharing feature within the custom extraction model. Projects can be shared easily via a project token. The same project token can also be used to import a project. ++## Prerequisite ++In order to share and import your custom extraction projects seamlessly, both users (user who shares and user who imports) need an An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/). Also, both users need to configure permissions to grant access to the Form Recognizer and storage resources. ++## Granted access and permissions ++ > [!IMPORTANT] + > Custom model projects can be imported only if you have the access to the storage account that is associated with the project you are trying to import. Check your storage account permission before starting to share or import projects with others. ++### Managed identity ++Enable a system-assigned managed identity for your Form Recognizer resource. A system-assigned managed identity is enabled directly on a service instance. It isn't enabled by default; you must go to your resource and update the identity setting. ++For more information, *see*, [Enable a system-assigned managed identity](../managed-identities.md#enable-a-system-assigned-managed-identity) ++### Role-based access control (RBAC) ++Grant your Form Recognizer managed identity access to your storage account using Azure role-based access control (Azure RBAC). The [Storage Blob Data Contributor](../../..//role-based-access-control/built-in-roles.md#storage-blob-data-reader) role grants read, write, and delete permissions for Azure Storage containers and blobs. ++For more information, *see*, [Grant access to your storage account](../managed-identities.md#grant-access-to-your-storage-account) ++### Configure cross origin resource sharing (CORS) ++CORS needs to be configured in your Azure storage account for it to be accessible to the Form Recognizer Studio. You can update the CORS setting in the Azure portal. ++Form more information, *see* [Configure CORS](../quickstarts/try-form-recognizer-studio.md#configure-cors) ++### Virtual networks and firewalls ++If your storage account VNet is enabled or if there are any firewall constraints, the project can't be shared. If you want to bypass those restrictions, ensure that those settings are turned off. ++A workaround is to manually create a project using the same settings as the project being shared. 
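The managed identity, role assignment, and CORS prerequisites described in the preceding sections can also be configured from the Azure CLI. The following is a minimal sketch: the resource names are placeholders, and it assumes the Form Recognizer Studio origin `https://formrecognizer.appliedai.azure.com`; adjust the values for your environment.

```azurecli
# Sketch: enable a system-assigned identity, grant storage access, and configure CORS.
# <fr-resource>, <storage-account>, and <resource-group> are placeholders.
az cognitiveservices account identity assign \
  --name <fr-resource> --resource-group <resource-group>

PRINCIPAL_ID=$(az cognitiveservices account show \
  --name <fr-resource> --resource-group <resource-group> \
  --query "identity.principalId" --output tsv)

STORAGE_ID=$(az storage account show \
  --name <storage-account> --resource-group <resource-group> \
  --query "id" --output tsv)

az role assignment create \
  --assignee-object-id $PRINCIPAL_ID --assignee-principal-type ServicePrincipal \
  --role "Storage Blob Data Contributor" --scope $STORAGE_ID

az storage cors add --account-name <storage-account> --services b \
  --methods GET PUT OPTIONS --origins "https://formrecognizer.appliedai.azure.com" \
  --allowed-headers "*" --exposed-headers "*" --max-age 200
```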
++### User sharing requirements ++Users sharing the project need to create a project [**`ListAccountSAS`**](/rest/api/storagerp/storage-accounts/list-account-sas) to configure the storage account CORS and a [**`ListServiceSAS`**](/rest/api/storagerp/storage-accounts/list-service-sas) to generate a SAS token for *read*, *write* and *list* container's file in addition to blob storage data *update* permissions. ++### User importing requirements ++Users who want to import the project need a [**`ListServiceSAS`**](/rest/api/storagerp/storage-accounts/list-service-sas) to generate a SAS token for *read*, *write* and *list* container's file in addition to blob storage data *update* permissions. ++## Share a custom extraction model with Form Recognizer studio ++Follow these steps to share your project using Form Recognizer studio: ++1. Start by navigating to the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio). ++1. In the Studio, select the **Custom extraction models** tile, under the custom models section. ++ :::image type="content" source="../media/how-to/studio-custom-extraction.png" alt-text="Screenshot showing how to select a custom extraction model in the Studio."::: ++1. On the custom extraction models page, select the desired model to share and then select the **Share** button. ++ :::image type="content" source="../media/how-to/studio-project-share.png" alt-text="Screenshot showing how to select the desired model and select the share option."::: ++1. On the share project dialog, copy the project token for the selected project. +++## Import custom extraction model with Form Recognizer studio ++Follow these steps to import a project using Form Recognizer studio. ++1. Start by navigating to the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio). ++1. In the Studio, select the **Custom extraction models** tile, under the custom models section. ++ :::image type="content" source="../media/how-to/studio-custom-extraction.png" alt-text="Screenshot: Select custom extraction model in the Studio."::: ++1. On the custom extraction models page, select the **Import** button. ++ :::image type="content" source="../media/how-to/studio-project-import.png" alt-text="Screenshot: Select import within custom extraction model page."::: ++1. On the import project dialog, paste the project token shared with you and select import. +++## Next steps ++> [!div class="nextstepaction"] +> [Back up and recover models](../disaster-recovery.md) |
applied-ai-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/language-support.md | |
applied-ai-services | V3 Error Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/v3-error-guide.md | |
applied-ai-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md | |
automation | Automation Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-services.md | Replaces repetitive, day-to-day operational tasks with an exception-only managem ### Azure Policy based Guest Configuration -Azure Policy based Guest configuration is the next iteration of Azure Automation State configuration. [Learn more](../governance/machine-configuration/machine-configuration-policy-effects.md). +Azure Policy based Guest configuration is the next iteration of Azure Automation State configuration. [Learn more](../governance/machine-configuration/remediation-options.md). You can check on what is installed in: Azure Policy based Guest configuration is the next iteration of Azure Automation | **Scenarios** | **Users** | | - | - |- | Obtain compliance data that may include: The configuration of the operating system ΓÇô files, registry, and services, Application configuration or presence, Check environment settings. </br> </br> Audit or deploy settings to all machines (Set) in scope either reactively to existing machines or proactively to new machines as they are deployed. </br> </br> Respond to policy events to provide [remediation on demand or continuous remediation.](../governance/machine-configuration/machine-configuration-policy-effects.md#remediation-on-demand-applyandmonitor) | The Central IT, Infrastructure Administrators, Auditors (Cloud custodians) are working towards the regulatory requirements at scale and ensuring that servers' end state looks as desired. </br> </br> The application teams validate compliance before releasing change. | + | Obtain compliance data that may include: The configuration of the operating system ΓÇô files, registry, and services, Application configuration or presence, Check environment settings. </br> </br> Audit or deploy settings to all machines (Set) in scope either reactively to existing machines or proactively to new machines as they are deployed. </br> </br> Respond to policy events to provide [remediation on demand or continuous remediation.](../governance/machine-configuration/remediation-options.md#remediation-on-demand-applyandmonitor) | The Central IT, Infrastructure Administrators, Auditors (Cloud custodians) are working towards the regulatory requirements at scale and ensuring that servers' end state looks as desired. </br> </br> The application teams validate compliance before releasing change. | ### Azure Automation - Process Automation |
automation | Automation Use Azure Ad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-use-azure-ad.md | Before installing the Azure AD modules on your computer: 3. Run Windows PowerShell as an administrator to create an elevated Windows PowerShell command prompt. -4. Deploy Azure Active Directory from [MSOnline 1.0](http://www.powershellgallery.com/packages/MSOnline/1.0). +4. Deploy Azure Active Directory from [MSOnline 1.0](https://www.powershellgallery.com/packages/MSOnline/1.0). 5. If you're prompted to install the NuGet provider, type Y and press ENTER. |
azure-arc | Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md | Title: "Troubleshoot common Azure Arc-enabled Kubernetes issues" Previously updated : 03/28/2023 Last updated : 04/18/2023 description: "Learn how to resolve common issues with Azure Arc-enabled Kubernetes clusters and GitOps." az k8s-extension create --resource-group <resource-group> --cluster-name <cluste ### Flux v2 - `microsoft.flux` extension installation CPU and memory limits -The controllers installed in your Kubernetes cluster with the Microsoft Flux extension require the following CPU and memory resource limits to properly schedule on Kubernetes cluster nodes. +The controllers installed in your Kubernetes cluster with the Microsoft Flux extension require CPU and memory resources to properly schedule on Kubernetes cluster nodes. This table shows the minimum memory and CPU resources that may be requested, along with the maximum limits for potential CPU and memory resource requirements. -| Container Name | CPU limit | Memory limit | +| Container Name | Minimum CPU | Minimum memory | Maximum CPU | Maximum memory | | -- | -- | -- |-| fluxconfig-agent | 50 m | 150 Mi | -| fluxconfig-controller | 100 m | 150 Mi | -| fluent-bit | 20 m | 150 Mi | -| helm-controller | 1000 m | 1 Gi | -| source-controller | 1000 m | 1 Gi | -| kustomize-controller | 1000 m | 1 i | -| notification-controller | 1000 m | 1 Gi | -| image-automation-controller | 1000 m | 1 Gi | -| image-reflector-controller | 1000 m | 1 Gi | +| fluxconfig-agent | 5 m | 30 Mi | 50 m | 150 Mi | +| fluxconfig-controller | 5 m | 30 Mi | 100 m | 150 Mi | +| fluent-bit | 5 m | 30 Mi | 20 m | 150 Mi | +| helm-controller | 100 m | 64 Mi | 1000 m | 1 Gi | +| source-controller | 50 m | 64 Mi | 1000 m | 1 Gi | +| kustomize-controller | 100 m | 64 Mi | 1000 m | 1 Gi | +| notification-controller | 100 m | 64 Mi | 1000 m | 1 Gi | +| image-automation-controller | 100 m | 64 Mi | 1000 m | 1 Gi | +| image-reflector-controller | 100 m | 64 Mi | 1000 m | 1 Gi | If you've enabled a custom or built-in Azure Gatekeeper Policy that limits the resources for containers on Kubernetes clusters, such as `Kubernetes cluster containers CPU and memory resource limits should not exceed the specified limits`, ensure that either the resource limits on the policy are greater than the limits shown above or that the `flux-system` namespace is part of the `excludedNamespaces` parameter in the policy assignment. |
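To see what the Flux controllers are actually requesting on your cluster, you can inspect the deployments in the `flux-system` namespace. This is a sketch that assumes `kubectl` access to the connected cluster; the controller names correspond to the table above.

```bash
# Sketch: inspect CPU/memory requests and limits for the Flux controllers.
# Assumes kubectl is configured for the connected cluster.
kubectl get deployments --namespace flux-system

kubectl get deployment source-controller --namespace flux-system \
  --output jsonpath='{.spec.template.spec.containers[0].resources}'
```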
azure-arc | Agent Release Notes Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes-archive.md | Download for [Windows](https://download.microsoft.com/download/1/c/4/1c4a0bde-0b ### Fixed -- The guest configuration policy agent can now configure and remediate system settings. Existing policy assignments continue to be audit-only. Learn more about the Azure Policy [guest configuration remediation options](../../governance/machine-configuration/machine-configuration-policy-effects.md).+- The guest configuration policy agent can now configure and remediate system settings. Existing policy assignments continue to be audit-only. Learn more about the Azure Policy [guest configuration remediation options](../../governance/machine-configuration/remediation-options.md). - The guest configuration policy agent now restarts every 48 hours instead of every 6 hours. ## Version 1.9 - July 2021 |
azure-arc | Manage Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md | Title: Managing the Azure Arc-enabled servers agent description: This article describes the different management tasks that you will typically perform during the lifecycle of the Azure Connected Machine agent. Previously updated : 04/07/2023 Last updated : 04/19/2023 The proxy bypass feature does not require you to enter specific URLs to bypass. | | | | `AAD` | `login.windows.net`, `login.microsoftonline.com`, `pas.windows.net` | | `ARM` | `management.azure.com` |-| `Arc` | `his.arc.azure.com`, `guestconfiguration.azure.com`, `guestnotificationservice.azure.com`, `servicebus.windows.net` | +| `Arc` | `his.arc.azure.com`, `guestconfiguration.azure.com` | To send Azure Active Directory and Azure Resource Manager traffic through a proxy server but skip the proxy for Azure Arc traffic, run the following command: |
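As a sketch of how a proxy URL and a bypass list can be combined on the Connected Machine agent, assuming a hypothetical proxy address rather than a value from this article:

```bash
# Sketch: route agent traffic through a proxy but bypass it for the Azure Arc endpoints.
# http://proxy.contoso.com:8080 is a hypothetical proxy address.
azcmagent config set proxy.url "http://proxy.contoso.com:8080"
azcmagent config set proxy.bypass "Arc"
```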
azure-arc | Manage Vm Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions.md | In this release, we support the following VM extensions on Windows and Linux mac To learn about the Azure Connected Machine agent package and details about the Extension agent component, see [Agent overview](agent-overview.md). > [!NOTE]-> The Desired State Configuration VM extension is no longer available for Azure Arc-enabled servers. Alternatively, we recommend [migrating to machine configuration](../../governance/machine-configuration/machine-configuration-azure-automation-migration.md) or using the Custom Script Extension to manage the post-deployment configuration of your server. +> The Desired State Configuration VM extension is no longer available for Azure Arc-enabled servers. Alternatively, we recommend [migrating to machine configuration](../../governance/machine-configuration/migrate-from-azure-automation.md) or using the Custom Script Extension to manage the post-deployment configuration of your server. Arc-enabled servers support moving machines with one or more VM extensions installed between resource groups or another Azure subscription without experiencing any impact to their configuration. The source and destination subscriptions must exist within the same [Azure Active Directory tenant](../../active-directory/develop/quickstart-create-new-tenant.md). This support is enabled starting with the Connected Machine agent version **1.8.21197.005**. For more information about moving resources and considerations before proceeding, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md). |
azure-functions | Durable Functions Troubleshooting Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-troubleshooting-guide.md | + + Title: Durable Functions Troubleshooting Guide - Azure Functions +description: Guide to troubleshoot common issues with durable functions. ++ Last updated : 03/10/2023++++# Durable Functions Troubleshooting Guide ++Durable Functions is an extension of [Azure Functions](../functions-overview.md) that lets you build serverless orchestrations using ordinary code. For more information on Durable Functions, see the [Durable Functions overview](./durable-functions-overview.md). ++This article provides a guide for troubleshooting common scenarios in Durable Functions apps. ++> [!NOTE] +> Microsoft support engineers are available to assist in diagnosing issues with your application. If you're not able to diagnose your problem using this guide, you can file a support ticket by accessing the **New Support request** blade in the **Support + troubleshooting** section of your function app page in the Azure portal. ++ ++> [!TIP] +> When debugging and diagnosing issues, it's recommended that you start by ensuring your app is using the latest Durable Functions extension version. Most of the time, using the latest version mitigates known issues already reported by other users. Please read the [Upgrade Durable Functions extension version](./durable-functions-extension-upgrade.md) article for instructions on how to upgrade your extension version. ++The **Diagnose and solve problems** tab in the Azure portal is a useful resource to monitor and diagnose possible issues related to your application. It also supplies potential solutions to your problems based on the diagnosis. See [Azure Function app diagnostics](./function-app-diagnostics.md) for more details. ++If the resources above didn't solve your problem, the following sections provide advice for specific application symptoms: ++## Orchestration is stuck in the `Pending` state ++When you start an orchestration, a "start" message gets written to an internal queue managed by the Durable extension, and the status of the orchestration gets set to "Pending". After the orchestration message gets picked up and successfully processed by an available app instance, the status will transition to "Running" (or to some other non-"Pending" state). ++Use the following steps to troubleshoot orchestration instances that remain stuck indefinitely in the "Pending" state. ++* Check the Durable Task Framework traces for warnings or errors for the impacted orchestration instance ID. A sample query can be found in the [Trace Errors/Warnings section](#trace-errorswarnings). ++* Check the Azure Storage control queues assigned to the stuck orchestrator to see if its "start message" is still there For more information on control queues, see the [Azure Storage provider control queue documentation](durable-functions-azure-storage-provider.md#control-queues). ++* Change your app's [platform configuration](../../app-service/configure-common.md#configure-general-settings) version to ΓÇ£64 BitΓÇ¥. + Sometimes orchestrations don't start because the app is running out of memory. Switching to 64-bit process allows the app to allocate more total memory. This only applies to App Service Basic, Standard, Premium, and Elastic Premium plans. Free or Consumption plans **do not** support 64-bit processes. 
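One way to check the control queues mentioned above is with the Azure CLI. The following sketch assumes the default Azure Storage provider, a placeholder storage account name, and the default control-queue naming pattern (`<taskhub>-control-NN`); queue names are lowercase.

```azurecli
# Sketch: list the task hub queues and peek at pending control-queue messages.
# <storage-account> and <taskhub> are placeholders.
az storage queue list --account-name <storage-account> --auth-mode login --output table

az storage message peek --queue-name <taskhub>-control-00 \
  --num-messages 32 --account-name <storage-account> --auth-mode login
```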
++## Orchestration starts after a long delay ++Normally, orchestrations start within a few seconds after they're scheduled. However, there are certain cases where orchestrations may take longer to start. Use the following steps to troubleshoot when orchestrations take more than a few seconds to start executing. ++* Refer to the [documentation on delayed orchestrations in Azure Storage](./durable-functions-azure-storage-provider.md#orchestration-start-delays) to check whether the delay may be caused by known limitations. ++* Check the Durable Task Framework traces for warnings or errors with the impacted orchestration instance ID. A sample query can be found in [Trace Errors/Warnings section](#trace-errorswarnings). ++## Orchestration doesn't complete / is stuck in the `Running` state ++If an orchestration remains in the "Running" state for a long period of time, it usually means that it's waiting for a long-running task that is scheduled to complete. For example, it could be waiting for a durable timer task, an activity task, or an external event task to be completed. However, if you observe that scheduled tasks have completed successfully but the orchestration still isn't making progress, then there might be a problem preventing the orchestration from proceeding to its next task. We often refer to orchestrations in this state as "stuck orchestrations". ++Use the following steps to troubleshoot stuck orchestrations: ++* Try restarting the function app. This step can help if the orchestration gets stuck due to a transient bug or deadlock in either the app or the extension code. ++* Check the Azure Storage account control queues to see if any queues are growing continuously. [This Azure Storage messaging KQL query](./durable-functions-troubleshooting-guide.md#azure-storage-messaging) can help identify problems with dequeuing orchestration messages. If the problem impacts only a single control queue, it might indicate a problem that exists only on a specific app instance, in which case scaling up or down to move off the unhealthy VM instance could help. ++* Use the Application Insights query in the [Azure Storage Messaging section](./durable-functions-troubleshooting-guide.md#azure-storage-messaging) to filter on that queue name as the Partition ID and look for any problems related to that control queue partition. ++* Check the guidance in [Durable Functions Best Practice and Diagnostic Tools](./durable-functions-best-practice-reference.md). Some problems may be caused by known Durable Functions anti-patterns. ++* Check the [Durable Functions Versioning documentation](durable-functions-versioning.md). Some problems may be caused by breaking changes to in-flight orchestration instances. ++## Orchestration runs slowly ++Heavy data processing, internal errors, and insufficient compute resources can cause orchestrations to execute slower than normal. Use the following steps to troubleshoot orchestrations that are taking longer than expected to execute: ++* Check the Durable Task Framework traces for warnings or errors for the impacted orchestration instance ID. A sample query can be found in the [Trace Errors/Warnings section](#trace-errorswarnings). ++* If your app utilizes the .NET in-process model, consider enabling [extended sessions](./durable-functions-azure-storage-provider.md#extended-sessions). + Extended sessions can minimize history loads, which can slow down processing. ++* Check for performance and scalability bottlenecks. + Application performance depends on many factors. 
For example, high CPU usage, or large memory consumption can result in delays. Read [Performance and scale in Durable Functions](./durable-functions-perf-and-scale.md) for detailed guidance. ++## Sample Queries ++This section shows how to troubleshoot issues by writing custom [KQL queries](/azure/data-explorer/kusto/query/) in the Azure Application Insights instance configured for your Azure Functions app. ++### Azure Storage Messaging ++When using the default Azure Storage provider, all Durable Functions behavior is driven by Azure Storage queue messages and all state related to an orchestration is stored in table storage and blob storage. When Durable Task Framework tracing is enabled, all Azure Storage interactions are logged to Application Insights, and this data is critically important for debugging execution and performance problems. ++Starting in v2.3.0 of the Durable Functions extension, you can have these Durable Task Framework logs published to your Application Insights instance by updating your logging configuration in the host.json file. See the [Durable Task Framework logging article](./durable-functions-diagnostics.md) for information and instructions on how to do this. ++The following query is for inspecting end-to-end Azure Storage interactions for a specific orchestration instance. Edit `start` and `orchestrationInstanceID` to filter by time range and instance ID. ++```kusto +let start = datetime(XXXX-XX-XXTXX:XX:XX); // edit this +let orchestrationInstanceID = "XXXXXXX"; //edit this +traces +| where timestamp > start and timestamp < start + 1h +| where customDimensions.Category == "DurableTask.AzureStorage" +| extend taskName = customDimensions["EventName"] +| extend eventType = customDimensions["prop__EventType"] +| extend extendedSession = customDimensions["prop__IsExtendedSession"] +| extend account = customDimensions["prop__Account"] +| extend details = customDimensions["prop__Details"] +| extend instanceId = customDimensions["prop__InstanceId"] +| extend messageId = customDimensions["prop__MessageId"] +| extend executionId = customDimensions["prop__ExecutionId"] +| extend age = customDimensions["prop__Age"] +| extend latencyMs = customDimensions["prop__LatencyMs"] +| extend dequeueCount = customDimensions["prop__DequeueCount"] +| extend partitionId = customDimensions["prop__PartitionId"] +| extend eventCount = customDimensions["prop__TotalEventCount"] +| extend taskHub = customDimensions["prop__TaskHub"] +| extend pid = customDimensions["ProcessId"] +| extend appName = cloud_RoleName +| extend newEvents = customDimensions["prop__NewEvents"] +| where instanceId == orchestrationInstanceID +| sort by timestamp asc +| project timestamp, appName, severityLevel, pid, taskName, eventType, message, details, messageId, partitionId, instanceId, executionId, age, latencyMs, dequeueCount, eventCount, newEvents, taskHub, account, extendedSession, sdkVersion +``` ++### Trace Errors/Warnings ++The following query searches for errors and warnings for a given orchestration instance. You'll need to provide a value for `orchestrationInstanceID`. 
++```kusto +let orchestrationInstanceID = "XXXXXX"; // edit this +let start = datetime(XXXX-XX-XXTXX:XX:XX); +traces +| where timestamp > start and timestamp < start + 1h +| extend instanceId = iif(isnull(customDimensions["prop__InstanceId"] ) , customDimensions["prop__instanceId"], customDimensions["prop__InstanceId"] ) +| extend logLevel = customDimensions["LogLevel"] +| extend functionName = customDimensions["prop__functionName"] +| extend status = customDimensions["prop__status"] +| extend details = customDimensions["prop__Details"] +| extend reason = customDimensions["prop__reason"] +| where severityLevel > 1 // to see all logs of severity level "Information" or greater. +| where instanceId == orchestrationInstanceID +| sort by timestamp asc +``` ++### Control queue / Partition ID logs ++The following query searches for all activity associated with an instanceId's control queue. You'll need to provide the value for the instanceID in `orchestrationInstanceID` and the query's start time in `start`. ++```kusto +let orchestrationInstanceID = "XXXXXX"; // edit this +let start = datetime(XXXX-XX-XXTXX:XX:XX); // edit this +traces // determine control queue for this orchestrator +| where timestamp > start and timestamp < start + 1h +| extend instanceId = customDimensions["prop__TargetInstanceId"] +| extend partitionId = tostring(customDimensions["prop__PartitionId"]) +| where partitionId contains "control" +| where instanceId == orchestrationInstanceID +| join kind = rightsemi( +traces +| where timestamp > start and timestamp < start + 1h +| where customDimensions.Category == "DurableTask.AzureStorage" +| extend taskName = customDimensions["EventName"] +| extend eventType = customDimensions["prop__EventType"] +| extend extendedSession = customDimensions["prop__IsExtendedSession"] +| extend account = customDimensions["prop__Account"] +| extend details = customDimensions["prop__Details"] +| extend instanceId = customDimensions["prop__InstanceId"] +| extend messageId = customDimensions["prop__MessageId"] +| extend executionId = customDimensions["prop__ExecutionId"] +| extend age = customDimensions["prop__Age"] +| extend latencyMs = customDimensions["prop__LatencyMs"] +| extend dequeueCount = customDimensions["prop__DequeueCount"] +| extend partitionId = tostring(customDimensions["prop__PartitionId"]) +| extend eventCount = customDimensions["prop__TotalEventCount"] +| extend taskHub = customDimensions["prop__TaskHub"] +| extend pid = customDimensions["ProcessId"] +| extend appName = cloud_RoleName +| extend newEvents = customDimensions["prop__NewEvents"] +) on partitionId +| sort by timestamp asc +| project timestamp, appName, severityLevel, pid, taskName, eventType, message, details, messageId, partitionId, instanceId, executionId, age, latencyMs, dequeueCount, eventCount, newEvents, taskHub, account, extendedSession, sdkVersion +``` ++### Application Insights column reference ++Below is a list of the columns projected by the queries above and their respective descriptions. ++|Column |Description | +|-|| +|pid|Process ID of the function app instance. This is useful for determining if the process was recycled while an orchestration was executing.| +|taskName|The name of the event being logged.| +|eventType|The type of message, which usually represents work done by an orchestrator. 
A full list of its possible values, and their descriptions, is [here](https://github.com/Azure/durabletask/blob/main/src/DurableTask.Core/History/EventType.cs)| +|extendedSession|Boolean value indicating whether [extended sessions](durable-functions-azure-storage-provider.md#extended-sessions) is enabled.| +|account|The storage account used by the app.| +|details|Additional information about a particular event, if available.| +|instanceId|The ID for a given orchestration or entity instance.| +|messageId|The unique Azure Storage ID for a given queue message. This value most commonly appears in ReceivedMessage, ProcessingMessage, and DeletingMessage trace events. Note that it's NOT present in SendingMessage events because the message ID is generated by Azure Storage _after_ we send the message.| +|executionId|The ID of the orchestrator execution, which changes whenever `continue-as-new` is invoked.| +|age|The number of milliseconds since a message was enqueued. Large numbers often indicate performance problems. An exception is the TimerFired message type, which may have a large Age value depending on timer's duration.| +|latencyMs|The number of milliseconds taken by some storage operation.| +|dequeueCount|The number of times a message has been dequeued. Under normal circumstances, this value is always 1. If it's more than one, then there might be a problem.| +|partitionId|The name of the queue associated with this log.| +|totalEventCount|The number of history events involved in the current action.| +|taskHub|The name of your [task hub](./durable-functions-task-hubs.md).| +|newEvents|A comma-separated list of history events that are being written to the History table in storage.| |
azure-maps | How To Render Custom Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-render-custom-data.md | The following are examples of custom data: This article uses the [Postman] application, but you may use a different API development environment. -We'll use the Azure Maps [Data service] to store and render overlays. +Use the Azure Maps [Data service] to store and render overlays. ## Render pushpins with labels and a custom image To get a static image with custom pins and labels: > [!NOTE] > The procedure in this section requires an Azure Maps account Gen 1 (S1) or Gen 2 pricing tier. -In this section, we'll upload path and pin data to Azure Map data storage. +In this section, you upload path and pin data to Azure Map data storage. To upload pins and path data: To render a polygon with color and opacity: > [!NOTE] > The procedure in this section requires an Azure Maps account Gen 1 (S1) or Gen 2 pricing tier. -You can modify the appearance of the pins by adding style modifiers. For example, to make pushpins and their labels larger or smaller, use the `sc` "scale style" modifier. This modifier takes a value that's greater than zero. A value of 1 is the standard scale. Values larger than 1 will make the pins larger, and values smaller than 1 will make them smaller. For more information about style modifiers, see [static image service path parameters]. +You can modify the appearance of the pins by adding style modifiers. For example, to make pushpins and their labels larger or smaller, use the `sc` "scale style" modifier. This modifier takes a value that's greater than zero. A value of 1 is the standard scale. Values larger than 1 makes the pins larger, and values smaller than 1 makes them smaller. For more information about style modifiers, see [static image service path parameters]. To render a circle and pushpins with custom labels: To render a circle and pushpins with custom labels: :::image type="content" source="./media/how-to-render-custom-data/circle-custom-pins.png" alt-text="Render a circle with custom pushpins."::: -8. Now we'll change the color of the pushpins by modifying the `co` style modifier. If you look at the value of the `pins` parameter (`pins=default|la15+50|al0.66|lc003C62|co002D62|`), you'll see that the current color is `#002D62`. To change the color to `#41d42a`, we'll replace `#002D62` with `#41d42a`. Now the `pins` parameter is `pins=default|la15+50|al0.66|lc003C62|co41D42A|`. The request looks like the following URL: +8. Next, change the color of the pushpins by modifying the `co` style modifier. If you look at the value of the `pins` parameter (`pins=default|la15+50|al0.66|lc003C62|co002D62|`), notice that the current color is `#002D62`. To change the color to `#41d42a`, replace `#002D62` with `#41d42a`. Now the `pins` parameter is `pins=default|la15+50|al0.66|lc003C62|co41D42A|`. 
The request looks like the following URL: ```HTTP https://atlas.microsoft.com/map/static/png?api-version=1.0&style=main&layer=basic&zoom=14&height=700&Width=700¢er=-122.13230609893799,47.64599069048016&path=lcFF0000|lw2|la0.60|ra1000||-122.13230609893799 47.64599069048016&pins=default|la15+50|al0.66|lc003C62|co41D42A||'Microsoft Corporate Headquarters'-122.14131832122801 47.64690503939462|'Microsoft Visitor Center'-122.136828 47.642224|'Microsoft Conference Center'-122.12552547454833 47.642940335653996|'Microsoft The Commons'-122.13687658309935 47.64452336193245&subscription-key={Your-Azure-Maps-Subscription-key} Similarly, you can change, add, and remove other style modifiers. ## Next steps -- Explore the [Azure Maps Get Map Image API] documentation.-- To learn more about Azure Maps Data service, see the [service documentation].+> [!div class="nextstepaction"] +> [Render - Get Map Image] ++> [!div class="nextstepaction"] +> [Data service] + [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account-[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account -[Postman]: https://www.postman.com/ +[Render - Get Map Image]: /rest/api/maps/render/getmapimage [Data service]: /rest/api/maps/data-[static image service]: /rest/api/maps/render/getmapimage [Data Upload]: /rest/api/maps/data-v2/upload-[Render service]: /rest/api/maps/render/get-map-image [path parameter]: /rest/api/maps/render/getmapimage#uri-parameters-[Azure Maps Get Map Image API]: /rest/api/maps/render/getmapimage -[service documentation]: /rest/api/maps/data +[Postman]: https://www.postman.com/ +[Render service]: /rest/api/maps/render/get-map-image [static image service path parameters]: /rest/api/maps/render/getmapimage#uri-parameters+[static image service]: /rest/api/maps/render/getmapimage +[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account |
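To save one of these static map images locally, you can request the URL directly. The following is a sketch that assumes your subscription key is exported in an environment variable named `AZURE_MAPS_SUBSCRIPTION_KEY`; the map parameters are taken from the example above.

```bash
# Sketch: download the rendered static map image to a local PNG file.
# Assumes the subscription key is exported as AZURE_MAPS_SUBSCRIPTION_KEY.
curl -o map.png "https://atlas.microsoft.com/map/static/png?api-version=1.0&style=main&layer=basic&zoom=14&height=700&width=700&center=-122.13230609893799,47.64599069048016&subscription-key=$AZURE_MAPS_SUBSCRIPTION_KEY"
```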
azure-maps | How To Secure Webapp Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-webapp-users.md | Title: How to secure a web application with interactive single-sign-in + Title: How to secure a web application with interactive single sign-in -description: How to configure a web application which supports Azure AD single-sign-on with Azure Maps Web SDK using OpenID Connect protocol. +description: How to configure a web application that supports Azure AD single sign-in with Azure Maps Web SDK using OpenID Connect protocol. Last updated 06/12/2020-The following guide pertains to an application which is hosted on web servers, maintains multiple business scenarios, and deploys to web servers. The application has the requirement to provide protected resources secured only to Azure AD users. The objective of the scenario is to enable the web application to authenticate to Azure AD and call Azure Maps REST APIs on behalf of the user. +The following guide pertains to an application that is hosted on web servers, maintains multiple business scenarios, and deploys to web servers. The application has the requirement to provide protected resources secured only to Azure AD users. The objective of the scenario is to enable the web application to authenticate to Azure AD and call Azure Maps REST APIs on behalf of the user. [!INCLUDE [authentication details](./includes/view-authentication-details.md)] ## Create an application registration in Azure AD -You must create the web application in Azure AD for users to sign in. This web application will then delegate user access to Azure Maps REST APIs. +You must create the web application in Azure AD for users to sign in. This web application then delegates user access to Azure Maps REST APIs. 1. In the Azure portal, in the list of Azure services, select **Azure Active Directory** > **App registrations** > **New registration**. - > [!div class="mx-imgBorder"] - >  + :::image type="content" source="./media/how-to-manage-authentication/app-registration.png" alt-text="A screenshot showing App registration." lightbox="./media/how-to-manage-authentication/app-registration.png"::: -2. Enter a **Name**, choose a **Support account type**, provide a redirect URI which will represent the url which Azure AD will issue the token and is the url where the map control is hosted. For more details please see Azure AD [Scenario: Web app that signs in users](../active-directory/develop/scenario-web-app-sign-user-overview.md). Complete the provided steps from the Azure AD scenario. +2. Enter a **Name**, choose a **Support account type**, provide a redirect URI that represents the url to which Azure AD issues the token, which is the url where the map control is hosted. For more information, see Azure AD [Scenario: Web app that signs in users](../active-directory/develop/scenario-web-app-sign-user-overview.md). Complete the provided steps from the Azure AD scenario. -3. Once the application registration is complete, Confirm that application sign-in works for users. Once sign-in works, then the application can be granted delegated access to Azure Maps REST APIs. - -4. To assign delegated API permissions to Azure Maps, go to the application. Then select **API permissions** > **Add a permission**. Under **APIs my organization uses**, search for and select **Azure Maps**. +3. Once the application registration is complete, confirm that application sign-in works for users. 
Once sign-in works, the application can be granted delegated access to Azure Maps REST APIs. - > [!div class="mx-imgBorder"] - >  +4. To assign delegated API permissions to Azure Maps, go to the application and select **API permissions** > **Add a permission**. select **Azure Maps** in the **APIs my organization uses** list. ++ :::image type="content" source="./media/how-to-manage-authentication/app-permissions.png" alt-text="A screenshot showing add app API permissions." lightbox="./media/how-to-manage-authentication/app-permissions.png"::: 5. Select the check box next to **Access Azure Maps**, and then select **Add permissions**. - > [!div class="mx-imgBorder"] - >  + :::image type="content" source="./media/how-to-manage-authentication/select-app-permissions.png" alt-text="A screenshot showing select app API permissions." lightbox="./media/how-to-manage-authentication/select-app-permissions.png"::: ++6. Enable the web application to call Azure Maps REST APIs by configuring the app registration with an application secret, For detailed steps, see [A web app that calls web APIs: App registration](../active-directory/develop/scenario-web-app-call-api-app-registration.md). A secret is required to authenticate to Azure AD on-behalf of the user. The app registration certificate or secret should be stored in a secure store for the web application to retrieve to authenticate to Azure AD. ++ * This step may be skipped if the application already has an Azure AD app registration and secret configured. ++ > [!TIP] + > If the application is hosted in an Azure environment, we recommend using [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) and an Azure Key Vault instance to access secrets by [acquiring an access token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md) for accessing Azure Key Vault secrets or certificates. To connect to Azure Key Vault to retrieve secrets, see [tutorial to connect through managed identity](../key-vault/general/tutorial-net-create-vault-azure-web-app.md). -6. Enable the web application to call Azure Maps REST APIs by configuring the app registration with an application secret, For detailed steps, see [A web app that calls web APIs: App registration](../active-directory/develop/scenario-web-app-call-api-app-registration.md). A secret is required to authenticate to Azure AD on-behalf of the user. The app registration certificate or secret should be stored in a secure store for the web application to retrieve to authenticate to Azure AD. - - * If the application already has configured an Azure AD app registration and a secret this step may be skipped. +7. Implement a secure token endpoint for the Azure Maps Web SDK to access a token. -> [!Tip] -> If the application is hosted in an Azure environment, we recommend using [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) and an Azure Key Vault instance to access secrets by [acquiring an access token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md) for accessing Azure Key Vault secrets or certificates. To connect to Azure Key Vault to retrieve secrets, see [tutorial to connect through managed identity](../key-vault/general/tutorial-net-create-vault-azure-web-app.md). - -7. Implement a secure token endpoint for the Azure Maps Web SDK to access a token. 
- - * For a sample token controller, see [Azure Maps Azure AD Samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples/blob/master/src/OpenIdConnect/AzureMapsOpenIdConnectv1/AzureMapsOpenIdConnect/Controllers/TokenController.cs). + * For a sample token controller, see [Azure Maps Azure AD Samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples/blob/master/src/OpenIdConnect/AzureMapsOpenIdConnectv1/AzureMapsOpenIdConnect/Controllers/TokenController.cs). * For a non-AspNetCore implementation or other, see [Acquire token for the app](../active-directory/develop/scenario-web-app-call-api-acquire-token.md) from Azure AD documentation. * The secured token endpoint is responsible to return an access token for the authenticated and authorized user to call Azure Maps REST APIs. -8. Configure Azure role-based access control (Azure RBAC) for users or groups. See [grant role-based access for users](#grant-role-based-access-for-users-to-azure-maps). +8. To configure Azure role-based access control (Azure RBAC) for users or groups, see [grant role-based access for users](#grant-role-based-access-for-users-to-azure-maps). -9. Configure the web application page with the Azure Maps Web SDK to access the secure token endpoint. +9. Configure the web application page with the Azure Maps Web SDK to access the secure token endpoint. ```javascript var map = new atlas.Map("map", { Find the API usage metrics for your Azure Maps account: Explore samples that show how to integrate Azure AD with Azure Maps: > [!div class="nextstepaction"]-> [Azure Maps Azure AD Web App Samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples/tree/master/src/OpenIdConnect) +> [Azure Maps Azure AD Web App Samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples/tree/master/src/OpenIdConnect) |
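To make step 9 concrete, the following is a minimal JavaScript sketch of pointing the Azure Maps Web SDK at a secured token endpoint. The endpoint route `/api/GetAzureMapsToken`, the center coordinates, and the client ID placeholder are assumptions; substitute the route of the token controller you implemented in step 7 and the client ID of your Azure Maps account.

```javascript
// Minimal sketch: the map control requests Azure AD access tokens from your secured endpoint.
var map = new atlas.Map("map", {
    center: [-122.33, 47.6],
    zoom: 12,
    authOptions: {
        authType: "anonymous",                   // the app supplies tokens via getToken
        clientId: "<your-azure-maps-client-id>", // client ID of the Azure Maps account
        getToken: function (resolve, reject, map) {
            // Call the secured token endpoint (assumed route) implemented in step 7.
            fetch("/api/GetAzureMapsToken")
                .then(function (response) { return response.text(); })
                .then(function (token) { resolve(token); })
                .catch(function (error) { reject(new Error("Failed to get Azure Maps token: " + error)); });
        }
    }
});
```

The `anonymous` authentication type simply means the token is provided by the hosting web application rather than embedded in the page.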
azure-monitor | Agents Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md | Using Azure Monitor agent, you get immediate benefits as shown below: - **Security and Performance** - Enhanced security through Managed Identity and Azure Active Directory (Azure AD) tokens (for clients). - Higher event throughput that is 25% better than the legacy Log Analytics (MMA/OMS) agents.-- **A single agent** that serves all data collection needs across [supported](https://learn.microsoft.com/azure/azure-monitor/agents/agents-overview#supported-operating-systems) servers and client devices. A single agent is the goal, although Azure Monitor Agent is currently converging with the Log Analytics agents.+- **A single agent** that serves all data collection needs across [supported](#supported-operating-systems) servers and client devices. A single agent is the goal, although Azure Monitor Agent is currently converging with the Log Analytics agents. ## Consolidating legacy agents In addition to the generally available data collection listed above, Azure Monit | [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Public preview | <ul><li>Azure Security Agent extension</li><li>SQL Advanced Threat Protection extension</li><li>SQL Vulnerability Assessment extension</li></ul> | [Auto-deployment of Azure Monitor Agent (Preview)](../../defender-for-cloud/auto-deploy-azure-monitoring-agent.md) | | [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors/windows-forwarded-events.md)</li><li>Windows DNS logs: [Public preview](../../sentinel/connect-dns-ama.md)</li><li>Linux Syslog CEF: [Public preview](../../sentinel/connect-cef-ama.md#set-up-the-common-event-format-cef-via-ama-connector)</li></ul> | Sentinel DNS extension, if you're collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension.
| - | | [Change Tracking and Inventory Management](../../automation/change-tracking/overview.md) | Public preview | Change Tracking extension | [Change Tracking and Inventory using Azure Monitor Agent](../../automation/change-tracking/overview-monitoring-agent.md) |-| [Update Management](../../automation/update-management/overview.md) (available without Azure Monitor Agent) | Use Update Management v2 - Public preview | None | [Update management center (Public preview) documentation](../../update-center/index.yml) | -| [Automation Hybrid Runbook Worker overview](../../automation/automation-hybrid-runbook-worker.md) (available without Azure Monitor Agent) | Migrate to Azure Automation Hybrid Worker Extension - Generally available | None | [Migrate an existing Agent based to Extension based Hybrid Workers](../../automation/extension-based-hybrid-runbook-worker-install.md#migrate-an-existing-agent-based-to-extension-based-hybrid-workers) | | [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Connection Monitor: Public preview | Azure NetworkWatcher extension | [Monitor network connectivity by using Azure Monitor Agent](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) | | Azure Stack HCI Insights | private preview | No additional extension installed | [Sign up here](https://aka.ms/amadcr-privatepreviews) | | Azure Virtual Desktop (AVD) Insights | private preview | No additional extension installed | [Sign up here](https://aka.ms/amadcr-privatepreviews) | |
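Because the table above is ultimately about moving these workloads onto Azure Monitor Agent, a hedged Azure CLI sketch of deploying the agent extension to an existing Azure VM may be useful. The resource group and VM names are placeholders, and a data collection rule still needs to be created and associated separately.

```azurecli
# Install the Azure Monitor Agent extension on an existing Linux VM (placeholder names).
az vm extension set \
  --resource-group myResourceGroup \
  --vm-name myLinuxVm \
  --name AzureMonitorLinuxAgent \
  --publisher Microsoft.Azure.Monitor \
  --enable-auto-upgrade true
```

For Windows VMs, the extension name is `AzureMonitorWindowsAgent` with the same publisher.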
azure-monitor | Alerts Common Schema | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-common-schema.md | If the custom properties are not set in the Alert rule, this field will be null. "metricValue": 7.727 } ]- } + }, "customProperties":{ "Key1": "Value1", "Key2": "Value2" |
azure-monitor | Proactive Failure Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-failure-diagnostics.md | Click the alert to configure it. ## Delete alerts -You can disable or delete a Failure Anomalies alert rule, but once deleted you can't create another one for the same Application Insights resource. +You can disable or delete a Failure Anomalies alert rule. -Notice that if you delete an Application Insights resource, the associated Failure Anomalies alert rule doesn't get deleted automatically. You can do so manually on the Alert rules page or with the following Azure CLI command: +You can do so manually on the Alert rules page or with the following Azure CLI command: ```azurecli az resource delete --ids <Resource ID of Failure Anomalies alert rule> ```+Notice that if you delete an Application Insights resource, the associated Failure Anomalies alert rule doesn't get deleted automatically. ## Example of Failure Anomalies alert webhook payload |
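If you don't already have the resource ID that the delete command above expects, a sketch like the following can help locate orphaned Failure Anomalies rules first. The `microsoft.alertsmanagement/smartDetectorAlertRules` resource type and the resource group name are assumptions to verify against your environment.

```azurecli
# List smart detector alert rules; Failure Anomalies rules are created as this resource type.
az resource list \
  --resource-group myResourceGroup \
  --resource-type microsoft.alertsmanagement/smartDetectorAlertRules \
  --query "[].{name:name, id:id}" --output table

# Delete the orphaned rule by its full resource ID, as shown in the article.
az resource delete --ids "<Resource ID of Failure Anomalies alert rule>"
```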
azure-monitor | Azure Ad Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-ad-authentication.md | Application Insights now supports [Azure Active Directory (Azure AD) authenticat Using various authentication systems can be cumbersome and risky because it's difficult to manage credentials at scale. You can now choose to [opt out of local authentication](#disable-local-authentication) to ensure only telemetry exclusively authenticated by using [managed identities](../../active-directory/managed-identities-azure-resources/overview.md) and [Azure AD](../../active-directory/fundamentals/active-directory-whatis.md) is ingested in your resource. This feature is a step to enhance the security and reliability of the telemetry used to make critical operational ([alerting](../alerts/alerts-overview.md#what-are-azure-monitor-alerts) and [autoscale](../autoscale/autoscale-overview.md#overview-of-autoscale-in-azure)) and business decisions. +> [!NOTE] +> This article covers data ingestion into Application Insights using Azure AD authentication. For information on querying data within Application Insights, see **[Query Application Insights using Azure AD Authentication](/azure/azure-monitor/logs/api/app-insights-azure-ad-api)**. + ## Prerequisites+ The following prerequisites enable Azure AD authenticated ingestion. You need to: tracer = Tracer( ) ... ```- +- ## Disable local authentication This error usually occurs when the provided credentials don't grant access to in ## Next steps+ * [Monitor your telemetry in the portal](overview-dashboard.md) * [Diagnose with Live Metrics Stream](live-stream.md)+* [Query Application Insights using Azure AD Authentication](/azure/azure-monitor/logs/api/app-insights-azure-ad-api) ++ |
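As a companion to the opt-out guidance above, the following is a hedged Azure CLI sketch that sets the `DisableLocalAuth` property on an Application Insights component by using the generic `az resource update` command. The resource names are placeholders, and the exact property name should be confirmed against the current `microsoft.insights/components` schema.

```azurecli
# Turn off local (instrumentation key) authentication so only Azure AD-authenticated telemetry is accepted.
az resource update \
  --resource-group myResourceGroup \
  --name myAppInsightsComponent \
  --resource-type microsoft.insights/components \
  --set properties.DisableLocalAuth=true
```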
azure-monitor | Container Insights Manage Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-manage-agent.md | To reenable discovery of the environmental variables, apply the same process you - name: AZMON_COLLECT_ENV value: "True" ``` +## Semantic version update of container insights agent version ++Container Insights has shifted the image version and naming convention to [semver format](https://semver.org/). SemVer helps developers keep track of every change made to software during its development and ensures that versioning is consistent and meaningful. The old versions were in the format ciprod<timestamp>-<commitId> and win-ciprod<timestamp>-<commitId>. The first image versions using the SemVer format are 3.1.4 for Linux and win-3.1.4 for Windows. ++SemVer is a universal software versioning schema defined in the format MAJOR.MINOR.PATCH, which follows these constraints: ++1. Increment the MAJOR version when you make incompatible API changes. +2. Increment the MINOR version when you add functionality in a backwards compatible manner. +3. Increment the PATCH version when you make backwards compatible bug fixes. + +With the rise of Kubernetes and the OSS ecosystem, Container Insights migrates to SemVer image versioning, following the Kubernetes recommended standard in which all breaking changes introduced with each minor version must be publicly documented with each new Kubernetes release. ## Next steps |
azure-monitor | Cost Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md | The count of monitored servers is calculated on an hourly granularity. The daily Subscriptions that contained a Log Analytics workspace or Application Insights resource on April 2, 2018, or are linked to an Enterprise Agreement that started before February 1, 2019, and is still active, will continue to have access to use the following legacy pricing tiers: - Standalone (Per GB)-- Per Node (Operations Management Suite [OMS])+- Per Node (Operations Management Suite [OMS]) -Access to the legacy Free Trial pricing tier was limited on July 1, 2022. +Access to the legacy Free Trial pricing tier was limited on July 1, 2022. Pricing information for the Standalone and Per Node pricing tiers is available [here](https://aka.ms/OMSpricing). ### Free Trial pricing tier Usage on the Standalone pricing tier is billed by the ingested data volume. It's ### Per Node pricing tier -The Per Node pricing tier charges per monitored VM (node) on an hour granularity. For each monitored node, the workspace is allocated 500 MB of data per day that's not billed. This allocation is calculated with hourly granularity and is aggregated at the workspace level each day. Data ingested above the aggregate daily data allocation is billed per GB as data overage. +The Per Node pricing tier charges per monitored VM (node) on an hour granularity. For each monitored node, the workspace is allocated 500 MB of data per day that's not billed. This allocation is calculated with hourly granularity and is aggregated at the workspace level each day. Data ingested above the aggregate daily data allocation is billed per GB as data overage. The Per Node pricing tier should only be used by customers with active Operations Management Suite (OMS) licenses. On your bill, the service will be **Insight and Analytics** for Log Analytics usage if the workspace is in the Per Node pricing tier. Workspaces in the Per Node pricing tier have user-configurable retention from 30 to 730 days. Workspaces in the Per Node pricing tier don't support the use of [Basic Logs](basic-logs-configure.md). Usage is reported on three meters: |
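To make the Per Node allocation concrete with purely illustrative numbers: a workspace with 10 monitored nodes for a full day accrues 10 x 500 MB = 5 GB of unbilled daily allocation. If those nodes send 7 GB of data that day, roughly 2 GB is billed as data overage at the per-GB rate. The node count and data volumes here are hypothetical, and the actual calculation is performed at hourly granularity and aggregated at the workspace level as described above.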
azure-netapp-files | Backup Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-introduction.md | -> The Azure NetApp Files backup feature is currently in preview. You need to submit a waitlist request for accessing the feature through the **[Azure NetApp Files Backup Public Preview](https://aka.ms/anfbackuppreviewsignup)** page. Wait for an official confirmation email from the Azure NetApp Files team before using the Azure NetApp Files backup feature. +> The Azure NetApp Files backup feature is currently in preview. You need to submit a waitlist request for accessing the feature through the **[Azure NetApp Files Backup Public Preview](https://aka.ms/anfbackuppreviewsignup)** page. The Azure NetApp Files backup feature is expected to be enabled within a week after you submit the waitlist request. You can check the status of feature registration by using the following command: +> +> ```azurepowershell-interactive +> Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFBackupPreview +> +> FeatureName ProviderName RegistrationState +> -- -- +> ANFBackupPreview Microsoft.NetApp Registered +> ``` ## Supported regions |
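If you work from the Azure CLI rather than PowerShell, an equivalent check of the feature registration state looks like the following sketch. The waitlist request described above is still required; this command only reports the current state.

```azurecli
# Azure CLI equivalent of the Get-AzProviderFeature check shown above.
az feature show --namespace Microsoft.NetApp --name ANFBackupPreview --query "{name:name, state:properties.state}" --output table
```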
azure-netapp-files | Configure Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md | The following diagram demonstrates how customer-managed keys work with Azure Net ## Considerations > [!IMPORTANT]-> Customer-managed keys for Azure NetApp Files volume encryption is currently in preview. You need to submit a waitlist request for accessing the feature through the **[Customer-managed keys for Azure NetApp Files volume encryption](https://aka.ms/anfcmkpreviewsignup)** page. Customer-managed keys feature is expected to be enabled within a week from submitting waitlist request. +> Customer-managed keys for Azure NetApp Files volume encryption is currently in preview. You need to submit a waitlist request for accessing the feature through the **[Customer-managed keys for Azure NetApp Files volume encryption](https://aka.ms/anfcmkpreviewsignup)** page. Customer-managed keys feature is expected to be enabled within a week after you submit the waitlist request. You can check the status of feature registration by using the following command: +> +> ```azurepowershell-interactive +> Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFAzureKeyVaultEncryption +> +> FeatureName ProviderName RegistrationState +> -- -- +> ANFAzureKeyVaultEncryption Microsoft.NetApp Registered +> ``` * Customer-managed keys can only be configured on new volumes. You can't migrate existing volumes to customer-managed key encryption. * To create a volume using customer-managed keys, you must select the *Standard* network features. You can't use customer-managed key volumes with volume configured using Basic network features. Follow instructions in to [Set the Network Features option](configure-network-features.md#set-the-network-features-option) in the volume creation page. |
azure-netapp-files | Enable Continuous Availability Existing SMB | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/enable-continuous-availability-existing-SMB.md | -> The SMB Continuous Availability feature is currently in public preview. You need to submit a waitlist request for accessing the feature through the **[Azure NetApp Files SMB Continuous Availability Shares Public Preview waitlist submission page](https://aka.ms/anfsmbcasharespreviewsignup)**. Wait for an official confirmation email from the Azure NetApp Files team before using the Continuous Availability feature. +> The SMB Continuous Availability feature is currently in public preview. You need to submit a waitlist request for accessing the feature through the **[Azure NetApp Files SMB Continuous Availability Shares Public Preview waitlist submission page](https://aka.ms/anfsmbcasharespreviewsignup)**. The SMB Continuous Availability feature is expected to be enabled within a week after you submit the waitlist request. You can check the status of feature registration by using the following command: +> +> ```azurepowershell-interactive +> Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSMBCAShare > -> See the [**Enable Continuous Availability**](azure-netapp-files-create-volumes-smb.md#continuous-availability) option for additional details and considerations. +> FeatureName ProviderName RegistrationState +> -- -- +> ANFSMBCAShare Microsoft.NetApp Registered +> ``` >[!IMPORTANT] > Custom applications are not supported with SMB Continuous Availability.+> +> See the [**Enable Continuous Availability**](azure-netapp-files-create-volumes-smb.md#continuous-availability) option for additional details and considerations. ## Steps |
azure-resource-manager | Bicep Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-cli.md | Title: Bicep CLI commands and overview description: Describes the commands that you can use in the Bicep CLI. These commands include building Azure Resource Manager templates from Bicep. Previously updated : 01/10/2023 Last updated : 04/18/2023 # Bicep CLI commands The `publish` command adds a module to a registry. The Azure container registry After publishing the file to the registry, you can [reference it in a module](modules.md#file-in-registry). -To use the publish command, you must have Bicep CLI version **0.4.1008 or later**. +To use the publish command, you must have Bicep CLI version **0.4.1008 or later**. To use the `--documentationUri`/`-d` parameter, you must have Bicep CLI version **0.14.46 or later**. To publish a module to a registry, use: ```azurecli-az bicep publish --file <bicep-file> --target br:<registry-name>.azurecr.io/<module-path>:<tag> +az bicep publish --file <bicep-file> --target br:<registry-name>.azurecr.io/<module-path>:<tag> --documentationUri <documentation-uri> ``` For example: ```azurecli-az bicep publish --file storage.bicep --target br:exampleregistry.azurecr.io/bicep/modules/storage:v1 +az bicep publish --file storage.bicep --target br:exampleregistry.azurecr.io/bicep/modules/storage:v1 --documentationUri https://www.contoso.com/exampleregistry.html ``` The `publish` command doesn't recognize aliases that you've defined in a [bicepconfig.json](bicep-config-modules.md) file. Provide the full module path. The local cache is found in: /home/<username>/.bicep ``` +- On Mac ++ ```path + ~/.bicep + ``` + The `restore` command doesn't refresh the cache if a module is already cached. To fresh the cache, you can either delete the module path from the cache or use the `--force` switch with the `restore` command. ## upgrade |
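To illustrate the cache-refresh behavior described above, here's a short sketch of restoring external modules for a Bicep file and then forcing already-cached modules to be refreshed. The file name is a placeholder, and the `--force` switch is assumed to be available in your installed Bicep CLI version.

```azurecli
# Restore external modules referenced by main.bicep into the local cache.
az bicep restore --file main.bicep

# Refresh modules that are already cached by forcing the restore.
az bicep restore --file main.bicep --force
```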
azure-resource-manager | Bicep Extensibility Kubernetes Provider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-extensibility-kubernetes-provider.md | Title: Bicep extensibility Kubernetes provider description: Learn how to Bicep Kubernetes provider to deploy .NET applications to Azure Kubernetes Service clusters. Previously updated : 02/21/2023 Last updated : 04/18/2023 # Bicep extensibility Kubernetes provider (Preview) param kubeConfig string import 'kubernetes@1.0.0' with { namespace: 'default' kubeConfig: kubeConfig-} +} as k8s ``` - **namespace**: Specify the namespace of the provider. |
azure-resource-manager | Installation Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/installation-troubleshoot.md | Title: Troubleshoot problems with Bicep installation description: How to resolve errors and problems with your Bicep installation. Previously updated : 12/15/2021 Last updated : 04/18/2023 # Troubleshoot Bicep installation Failed to install .NET runtime v5.0 Failed to download .NET 5.0.x ....... Error! ``` +> [!WARNING] +> This is a last resort solution that may cause problems when updating versions. + To solve the problem, you can manually install .NET from the [.NET website](https://aka.ms/dotnet-core-download), and then configure Visual Studio Code to reuse an existing installation of .NET with the following settings: **Windows** |
azure-resource-manager | Msbuild Bicep File | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/msbuild-bicep-file.md | description: Use MSBuild to convert a Bicep file to Azure Resource Manager templ Last updated 09/26/2022 --+ # Customer intent: As a developer I want to convert Bicep files to Azure Resource Manager template (ARM template) JSON in an MSBuild pipeline. |
azure-resource-manager | Private Module Registry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/private-module-registry.md | Title: Create private registry for Bicep module description: Learn how to set up an Azure container registry for private Bicep modules Previously updated : 01/10/2023 Last updated : 04/18/2023 # Create private registry for Bicep modules After setting up the container registry, you can publish files to it. Use the [p # [PowerShell](#tab/azure-powershell) ```azurepowershell-Publish-AzBicepModule -FilePath ./storage.bicep -Target br:exampleregistry.azurecr.io/bicep/modules/storage:v1 +Publish-AzBicepModule -FilePath ./storage.bicep -Target br:exampleregistry.azurecr.io/bicep/modules/storage:v1 -DocumentationUri https://www.contoso.com/exampleregistry.html ``` # [Azure CLI](#tab/azure-cli) Publish-AzBicepModule -FilePath ./storage.bicep -Target br:exampleregistry.azure To run this deployment command, you must have the [latest version](/cli/azure/install-azure-cli) of Azure CLI. ```azurecli-az bicep publish --file storage.bicep --target br:exampleregistry.azurecr.io/bicep/modules/storage:v1 +az bicep publish --file storage.bicep --target br:exampleregistry.azurecr.io/bicep/modules/storage:v1 --documentationUri https://www.contoso.com/exampleregistry.html ``` |
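After publishing with one of the commands above, consuming the module from another Bicep file might look like the following sketch. The registry path matches the example target used in this article, while the module symbolic name and parameters are illustrative assumptions; use whatever parameters `storage.bicep` actually exposes.

```bicep
// Reference the module published to the private registry (example registry path from above).
module stgModule 'br:exampleregistry.azurecr.io/bicep/modules/storage:v1' = {
  name: 'storageDeploy'
  params: {
    // Hypothetical parameters for illustration only.
    storagePrefix: 'demo'
    location: resourceGroup().location
  }
}
```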
azure-resource-manager | Quickstart Private Module Registry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-private-module-registry.md | Title: Publish modules to private module registry description: Publish Bicep modules to private module registry and use the modules. Previously updated : 04/01/2022 Last updated : 04/18/2023 #Customer intent: As a developer new to Azure deployment, I want to learn how to publish Bicep modules to private module registry. Use the following syntax to publish a Bicep file as a module to a private module # [Azure CLI](#tab/azure-cli) ```azurecli-az bicep publish --file storage.bicep --target br:exampleregistry.azurecr.io/bicep/modules/storage:v1 +az bicep publish --file storage.bicep --target br:exampleregistry.azurecr.io/bicep/modules/storage:v1 --documentationUri https://www.contoso.com/exampleregistry.html ``` # [Azure PowerShell](#tab/azure-powershell) ```azurepowershell-Publish-AzBicepModule -FilePath ./storage.bicep -Target br:exampleregistry.azurecr.io/bicep/modules/storage:v1 +Publish-AzBicepModule -FilePath ./storage.bicep -Target br:exampleregistry.azurecr.io/bicep/modules/storage:v1 -DocumentationUri https://www.contoso.com/exampleregistry.html ``` |
azure-resource-manager | Template Functions Numeric | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-numeric.md | Title: Template functions - numeric description: Describes the functions to use in an Azure Resource Manager template (ARM template) to work with numbers. Previously updated : 03/10/2022 Last updated : 04/18/2023 # Numeric functions for ARM templates The output from the preceding example with the default values is: | Name | Type | Value | | - | - | -- |-| mulResult | Int | 15 | +| mulResult | Int | 45 | ## sub |
azure-video-indexer | Monitor Video Indexer Data Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer-data-reference.md | The following schemas are in use by Azure Video Indexer "Filename": "1 Second Video 1.mp4", "AnimationModelId": null, "BrandsCategories": null,- "CustomLanguages": null, - "ExcludedAIs": "Face", + "CustomLanguages": "en-US,ar-BH,hi-IN,es-MX", + "ExcludedAIs": "Faces", "LogoGroupId": "ea9d154d-0845-456c-857e-1c9d5d925d95" } } |
azure-video-indexer | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md | -* [Important notice](#important-notice) about planned changes * The latest releases * Known issues * Bug fixes * Deprecated functionality -## Important notice +## April 2023 +### The animation character recognition model has been retired -## April 2023 +The **animation character recognition** model was retired on March 1, 2023. For any related issues, [open a support ticket via the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview). ### Excluding sensitive AI models |
backup | Archive Tier Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/archive-tier-support.md | Title: Azure Backup - Archive tier overview description: Learn about Archive tier support for Azure Backup. Previously updated : 04/06/2023 Last updated : 04/15/2023 When you move recovery points to archive, they're subjected to an early deletion Stop protection and delete data deletes all recovery points. For recovery points in archive that haven't stayed for a duration of 180 days in archive tier, deletion of recovery points leads to early deletion cost. +## Stop protection and retain data ++Azure Backup now supports tiering to archive when you choose to *Stop protection and retain data*. If the backup item is associated with a long term retention policy and is moved to *Stop protection and retain data* state, you can choose to move recommended recovery points to vault-archive tier. ++>[!Note] +>For Azure VM backups, moving recommended recovery points to vault-archive saves costs. For other supported workloads, you can choose to move all eligible recovery points to archive to save costs. If backup item is associated with a short term retention policy and it's moved to *Stop protection & retain data* state, you can't tier the recovery points to archive. + ## Archive tier pricing -You can view the Archive tier pricing from our [pricing page](azure-backup-pricing.md). +You can view the Archive tier pricing from our [pricing page](https://azure.microsoft.com/pricing/details/backup/). ## Frequently asked questions |
batch | Error Handling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/error-handling.md | Title: Error handling and detection in Azure Batch description: Learn about error handling in Batch service workflows from a development standpoint. Previously updated : 12/20/2021 Last updated : 04/13/2023 # Error handling and detection in Azure Batch At times, you might need to handle task and application failures in your Azure B ## Error codes -Some general types of errors you might see in Batch are: +Some general types of errors that you might see in Batch are: -- Networking failures for requests that never reached Batch. Or, networking failures when the Batch response didn't reach the client in time.+- Networking failures for requests that never reached Batch, or networking failures when the Batch response didn't reach the client in time. - Internal server errors. These errors have a standard `5xx` status code HTTP response. - Throttling-related errors. These errors include `429` or `503` status code HTTP responses with the `Retry-after` header. - `4xx` errors such as `AlreadyExists` and `InvalidOperation`. These errors indicate that the resource isn't in the correct state for the state transition. -For detailed information about specific error codes, see [Batch Status and Error Codes](/rest/api/batchservice/batch-status-and-error-codes). This reference includes error codes for REST API, Batch service, and job tasks and scheduling. +For detailed information about specific error codes, see [Batch status and error codes](/rest/api/batchservice/batch-status-and-error-codes). This reference includes error codes for REST API, Batch service, and for job tasks and scheduling. ## Application failures -During execution, an application might produce diagnostic output. You can use this output to troubleshoot issues. The Batch service writes standard output and standard error output to the `stdout.txt` and `stderr.txt` files in the task directory on the compute node. For more information, see [Files and directories in Batch](files-and-directories.md). +During execution, an application might produce diagnostic output. You can use this output to troubleshoot issues. The Batch service writes standard output and standard error output to the *stdout.txt* and *stderr.txt* files in the task directory on the compute node. For more information, see [Files and directories in Batch](files-and-directories.md). To download these output files, use the Azure portal or one of the Batch SDKs. For example, to retrieve files for troubleshooting purposes, use [ComputeNode.GetNodeFile](/dotnet/api/microsoft.azure.batch.computenode) and [CloudTask.GetNodeFile](/dotnet/api/microsoft.azure.batch.cloudtask) in the Batch .NET library. If files that you specified for a task fail to upload for any reason, a file upl - The shared access signature (SAS) token supplied for accessing Azure Storage is invalid. - The SAS token doesn't provide write permissions.-- The storage account is no longer available+- The storage account is no longer available. - Another issue happened that prevented the successful copying of files from the node. ### Application errors -The process that the task's command line specifies can also fail. For more information, see [Task exit codes](#task-exit-codes). +The process specified by the task's command line can also fail. For more information, see [Task exit codes](#task-exit-codes). 
For application errors, configure Batch to automatically retry the task up to a specified number of times. ### Constraint errors -To specify the maximum execution duration for a job or task, set the **maxWallClockTime** constraint. Use this setting to terminate tasks that fail to progress. +To specify the maximum execution duration for a job or task, set the `maxWallClockTime` constraint. Use this setting to terminate tasks that fail to progress. When the task exceeds the maximum time: -- The task is marked as **completed**.-- The exit code is set to `0xC000013A`+- The task is marked as *completed*. +- The exit code is set to `0xC000013A`. - The **schedulingError** field is marked as `{ category:"ServerError", code="TaskEnded"}`. ## Task exit codes When a task executes a process, Batch populates the task's exit code property with the return code of the process. If the process returns a nonzero exit code, the Batch service marks the task as failed. -The Batch service doesn't determine a task's exit code. The process itself, or the operating system on which the process executed, determines the exit code. +The Batch service doesn't determine a task's exit code. The process itself, or the operating system on which the process executes, determines the exit code. ## Task failures or interruptions It's also possible for an intermittent issue to cause a task to stop responding ## Connect to compute nodes -You can perform additional debugging and troubleshooting by signing in to a compute node remotely. Use the Azure portal to download a Remote Desktop Protocol (RDP) file for Windows nodes, and obtain Secure Shell (SSH) connection information for Linux nodes. You can also download this information using the [Batch .NET](/dotnet/api/microsoft.azure.batch.computenode) or [Batch Python](batch-linux-nodes.md#connect-to-linux-nodes-using-ssh) APIs. +You can perform debugging and troubleshooting by signing in to a compute node remotely. Use the Azure portal to download a Remote Desktop Protocol (RDP) file for Windows nodes, and obtain Secure Shell (SSH) connection information for Linux nodes. You can also download this information using the [Batch .NET](/dotnet/api/microsoft.azure.batch.computenode) or [Batch Python](batch-linux-nodes.md#connect-to-linux-nodes-using-ssh) APIs. To connect to a node via RDP or SSH, first create a user on the node. Use one of the following methods: -- The Azure portal+- The [Azure portal](https://portal.azure.com) - Batch REST API: [adduser](/rest/api/batchservice/computenode/adduser) - Batch .NET API: [ComputeNode.CreateComputeNodeUser](/dotnet/api/microsoft.azure.batch.computenode) - Batch Python module: [add_user](batch-linux-nodes.md#connect-to-linux-nodes-using-ssh) -If necessary, [restrict or disable RDP or SSH access to compute nodes](pool-endpoint-configuration.md). +If necessary, [configure or disable access to compute nodes](pool-endpoint-configuration.md). + ## Troubleshoot problem nodes Your Batch client application or service can examine the metadata of failed tasks to identify a problem node. Each node in a pool has a unique ID. Task metadata includes the node where a task runs. After you find the problem node, try the following methods to resolve the failure. Reimaging a node reinstalls the operating system. Start tasks and job preparatio Removing the node from the pool is sometimes necessary. 
- Batch REST API: [removenodes](/rest/api/batchservice/pool/remove-nodes)-- Batch .NET API: [pooloperations](/dotnet/api/microsoft.azure.batch.pooloperations)+- Batch .NET API: [PoolOperations](/dotnet/api/microsoft.azure.batch.pooloperations) ### Disable task scheduling on node -Disabling task scheduling on a node effectively takes the node offline. Batch assigns no further tasks to the node. However, the node continues running in the pool. You can then further investigate the failures without losing the failed tasks's data. The node also won't cause additional task failures. +Disabling task scheduling on a node effectively takes the node offline. Batch assigns no further tasks to the node. However, the node continues running in the pool. You can then further investigate the failures without losing the failed task's data. The node also won't cause more task failures. For example, disable task scheduling on the node. Then, sign in to the node remotely. Examine the event logs, and do other troubleshooting. After you solve the problems, enable task scheduling again to bring the node back online. For example, disable task scheduling on the node. Then, sign in to the node remo You can use these actions to specify Batch handles tasks currently running on the node. For example, when you disable task scheduling with the Batch .NET API, you can specify an enum value for [DisableComputeNodeSchedulingOption](/dotnet/api/microsoft.azure.batch.common.disablecomputenodeschedulingoption). You can choose to: -- Terminate running tasks (`Terminate`).-- Requeue tasks for scheduling on other nodes (`Requeue`).-- Allow running tasks to complete before performing the action (`TaskCompletion`).+- Terminate running tasks: `Terminate` +- Requeue tasks for scheduling on other nodes: `Requeue` +- Allow running tasks to complete before performing the action: `TaskCompletion` ## Retry after errors After a failure, wait several seconds before retrying. If you retry too frequent ## Next steps -- [Check for Batch pool and node errors](batch-pool-node-error-checking.md).-- [Check for Batch job and task errors](batch-job-task-error-checking.md).+- [Check for Batch pool and node errors](batch-pool-node-error-checking.md) +- [Check for Batch job and task errors](batch-job-task-error-checking.md) |
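To tie together the constraint and retry guidance in this article, here's a hedged Azure CLI sketch that creates a task with a maximum wall-clock time and automatic retries by supplying a JSON definition. The job ID, task ID, command line, and limit values are illustrative, and authentication to the Batch account (for example, via `az batch account login`) is assumed to be in place.

```azurecli
# Task definition with a 2-hour wall-clock limit and up to 3 retries (illustrative values).
cat > task.json <<'EOF'
{
  "id": "mytask",
  "commandLine": "/bin/bash -c 'python process.py'",
  "constraints": {
    "maxWallClockTime": "PT2H",
    "maxTaskRetryCount": 3
  }
}
EOF

# Add the task to an existing job in the authenticated Batch account.
az batch task create --job-id myjob --json-file task.json
```

If the task exceeds the wall-clock limit or exhausts its retries, the failure surfaces through the exit code and scheduling error fields described earlier.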
batch | Simplified Compute Node Communication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/simplified-compute-node-communication.md | Title: Use simplified compute node communication description: Learn about the simplified compute node communication mode in the Azure Batch service and how to enable it. Previously updated : 03/29/2023 Last updated : 04/14/2023 -Batch supports two types of node communication modes: -- Classic: where the Batch service initiates communication to the compute nodes-- Simplified: where the compute nodes initiate communication to the Batch service+Batch supports two types of communication modes: +- **Classic**: the Batch service initiates communication with the compute nodes. +- **Simplified**: the compute nodes initiate communication with the Batch service. -This document describes the simplified compute node communication mode and the associated network configuration requirements. +This article describes the *simplified* communication mode and the associated network configuration requirements. > [!TIP]-> Information in this document pertaining to networking resources and rules such as NSGs does not apply to -> Batch pools with [no public IP addresses](simplified-node-communication-pool-no-public-ip.md) using the -> node management private endpoint without Internet outbound access. +> Information in this document pertaining to networking resources and rules such as NSGs doesn't apply to Batch pools with [no public IP addresses](simplified-node-communication-pool-no-public-ip.md) that use the node management private endpoint without internet outbound access. > [!WARNING]-> The classic compute node communication model will be retired on **31 March 2026** and will be replaced with -> the simplified compute node communication model as described in this document. For more information, see the -> classic compute node communication mode -> [migration guide](batch-pools-to-simplified-compute-node-communication-model-migration-guide.md). +> The *classic* compute node communication mode will be retired on **31 March 2026** and replaced with the *simplified* communication mode described in this document. For more information, see the communication mode [migration guide](batch-pools-to-simplified-compute-node-communication-model-migration-guide.md). ## Supported regions Simplified compute node communication in Azure Batch is currently available for the following regions: -- Public: all public regions where Batch is present except for West India and France South.+- **Public**: all public regions where Batch is present except for West India and France South. +- **Government**: USGov Arizona, USGov Virginia, USGov Texas. +- **China**: all China regions where Batch is present except for China North 1 and China East 1. -- Government: USGov Arizona, USGov Virginia, USGov Texas.+## Differences between classic and simplified modes -- China: all China regions where Batch is present except for China North 1 and China East 1.+The simplified compute node communication mode streamlines the way Batch pool infrastructure is managed on behalf of users. This communication mode reduces the complexity and scope of inbound and outbound networking connections required in baseline operations. -## Compute node communication differences between classic and simplified modes --The simplified compute node communication mode streamlines the way Batch pool infrastructure is -managed on behalf of users. 
This communication mode reduces the complexity and scope of inbound -and outbound networking connections required in baseline operations. --Batch pools with the `classic` communication mode require the following networking rules in network -security groups (NSGs), user-defined routes (UDRs), and firewalls when -[creating a pool in a virtual network](batch-virtual-network.md): +Batch pools with the *classic* communication mode require the following networking rules in network security groups (NSGs), user-defined routes (UDRs), and firewalls when [creating a pool in a virtual network](batch-virtual-network.md): - Inbound:- - Destination ports 29876, 29877 over TCP from BatchNodeManagement.*region* + - Destination ports `29876`, `29877` over TCP from `BatchNodeManagement.<region>` - Outbound:- - Destination port 443 over TCP to Storage.*region* - - Destination port 443 over TCP to BatchNodeManagement.*region* for certain workloads that require communication back to the Batch Service, such as Job Manager tasks + - Destination port `443` over TCP to `Storage.<region>` + - Destination port `443` over TCP to `BatchNodeManagement.<region>` for certain workloads that require communication back to the Batch Service, such as Job Manager tasks -Batch pools with the `simplified` communication mode require the following networking rules in -NSGs, UDRs, and firewalls: +Batch pools with the *simplified* communication mode require the following networking rules in NSGs, UDRs, and firewalls: - Inbound: - None - Outbound:- - Destination port 443 over ANY to BatchNodeManagement.*region* + - Destination port `443` over ANY to `BatchNodeManagement.<region>` -Outbound requirements for a Batch account can be discovered using the -[List Outbound Network Dependencies Endpoints API](/rest/api/batchmanagement/batch-account/list-outbound-network-dependencies-endpoints) -This API reports the base set of dependencies, depending upon the Batch account pool communication mode. -User-specific workloads may need extra rules such as opening traffic to other Azure resources (such as Azure -Storage for Application Packages, Azure Container Registry, etc.) or endpoints like the Microsoft package -repository for virtual file system mounting functionality. +Outbound requirements for a Batch account can be discovered using the [List Outbound Network Dependencies Endpoints API](/rest/api/batchmanagement/batch-account/list-outbound-network-dependencies-endpoints). This API reports the base set of dependencies, depending upon the Batch account pool communication mode. User-specific workloads might need extra rules such as opening traffic to other Azure resources (such as Azure Storage for Application Packages, Azure Container Registry) or endpoints like the Microsoft package repository for virtual file system mounting functionality. -## Benefits of the simplified communication mode +## Benefits of simplified mode -Azure Batch users utilizing the simplified mode benefit from simplification of networking connections and -rules. Simplified compute node communication helps reduce security risks by removing the requirement to open -ports for inbound communication from the internet. Only a single outbound rule to a well-known Service Tag is -required for baseline operation. +Azure Batch users utilizing the simplified mode benefit from simplification of networking connections and rules. Simplified compute node communication helps reduce security risks by removing the requirement to open ports for inbound communication from the internet. 
Only a single outbound rule to a well-known Service Tag is required for baseline operation. -The `simplified` mode also provides more fine-grained data exfiltration control over the `classic` -communication mode since outbound communication to Storage.*region* is no longer required. You can -explicitly lock down outbound communication to Azure Storage if necessary for your workflow. For -example, you can scope your outbound communication rules to Azure Storage to enable your AppPackage -storage accounts or other storage accounts for resource files or output files. +The *simplified* mode also provides more fine-grained data exfiltration control over the *classic* communication mode since outbound communication to `Storage.<region>` is no longer required. You can explicitly lock down outbound communication to Azure Storage if necessary for your workflow. For example, you can scope your outbound communication rules to Azure Storage to enable your AppPackage storage accounts or other storage accounts for resource files or output files. -Even if your workloads aren't currently impacted by the changes (as described in the next section), it's -recommended to move to the `simplified` mode. Future improvements in the Batch service may only be functional -with simplified compute node communication. +Even if your workloads aren't currently impacted by the changes (as described in the following section), it's recommended to move to the simplified mode. Future improvements in the Batch service might only be functional with simplified compute node communication. ## Potential impact between classic and simplified communication modes -In many cases, the `simplified` communication mode doesn't directly affect your Batch workloads. However, -simplified compute node communication has an impact for the following cases: +In many cases, the simplified communication mode doesn't directly affect your Batch workloads. However, simplified compute node communication has an impact for the following cases: -- Users who specify a Virtual Network as part of creating a Batch pool and do one or both of the following actions:+- Users who specify a virtual network as part of creating a Batch pool and do one or both of the following actions: - Explicitly disable outbound network traffic rules that are incompatible with simplified compute node communication. - Use UDRs and firewall rules that are incompatible with simplified compute node communication. - Users who enable software firewalls on compute nodes and explicitly disable outbound traffic in software firewall rules that are incompatible with simplified compute node communication. -If either of these cases applies to you, then follow the steps outlined in the next section to ensure that -your Batch workloads can still function under the `simplified` mode. We strongly recommend that you test and -verify all of your changes in a dev and test environment first before pushing your changes into production. +If either of these cases applies to you, then follow the steps outlined in the next section to ensure that your Batch workloads can still function in simplified mode. It's strongly recommended that you test and verify all of your changes in a dev and test environment first before pushing your changes into production. 
-### Required network configuration changes for simplified communication mode +### Required network configuration changes for simplified mode -The following set of steps is required to migrate to the new communication mode: +The following steps are required to migrate to the new communication mode: -1. Ensure your networking configuration as applicable to Batch pools (NSGs, UDRs, firewalls, etc.) includes a union of the modes (that is, the combined network rules of both `classic` and `simplified` modes). At a minimum, these rules would be: +1. Ensure your networking configuration as applicable to Batch pools (NSGs, UDRs, firewalls, etc.) includes a union of the modes, that is, the combined network rules of both classic and simplified modes. At a minimum, these rules would be: - Inbound:- - Destination ports 29876, 29877 over TCP from BatchNodeManagement.*region* + - Destination ports `29876`, `29877` over TCP from `BatchNodeManagement.<region>` - Outbound:- - Destination port 443 over TCP to Storage.*region* - - Destination port 443 over ANY to BatchNodeManagement.*region* + - Destination port `443` over TCP to `Storage.<region>` + - Destination port `443` over ANY to `BatchNodeManagement.<region>` 1. If you have any other inbound or outbound scenarios required by your workflow, you need to ensure that your rules reflect these requirements. 1. Use one of the following options to update your workloads to use the new communication mode.- - Create new pools with the `targetNodeCommunicationMode` set to `simplified` and validate that the new pools are working correctly. Migrate your workload to the new pools and delete any earlier pools. - - Update existing pools `targetNodeCommunicationMode` property to `simplified` and then resize all existing pools to zero nodes and scale back out. -1. Use the [Get Pool](/rest/api/batchservice/pool/get), [List Pool](/rest/api/batchservice/pool/list) API or Portal to confirm the `currentNodeCommunicationMode` is set to the desired communication mode of `simplified`. -1. Modify all applicable networking configuration to the Simplified Compute Node Communication rules, at the minimum (note any extra rules needed as discussed above): + - Create new pools with the `targetNodeCommunicationMode` set to *simplified* and validate that the new pools are working correctly. Migrate your workload to the new pools and delete any earlier pools. + - Update existing pools `targetNodeCommunicationMode` property to *simplified* and then resize all existing pools to zero nodes and scale back out. +1. Use the [Get Pool](/rest/api/batchservice/pool/get) API, [List Pool](/rest/api/batchservice/pool/list) API, or the Azure portal to confirm the `currentNodeCommunicationMode` is set to the desired communication mode of *simplified*. +1. Modify all applicable networking configuration to the simplified communication rules, at the minimum (note any extra rules needed as discussed above): - Inbound: - None - Outbound:- - Destination port 443 over ANY to BatchNodeManagement.*region* + - Destination port `443` over ANY to `BatchNodeManagement.<region>` -If you follow these steps, but later want to switch back to `classic` compute node communication, you need to take the following actions: +If you follow these steps, but later want to switch back to *classic* compute node communication, you need to take the following actions: -1. Revert any networking configuration operating exclusively in `simplified` compute node communication mode. -1. 
Create new pools or update existing pools `targetNodeCommunicationMode` property set to `classic`. +1. Revert any networking configuration operating exclusively in *simplified* compute node communication mode. +1. Create new pools with the `targetNodeCommunicationMode` property set to *classic*, or update the `targetNodeCommunicationMode` property on existing pools to *classic*. 1. Migrate your workload to these pools, or resize existing pools and scale back out (see step 3 above).-1. See step 4 above to confirm that your pools are operating in `classic` communication mode. +1. See step 4 above to confirm that your pools are operating in *classic* communication mode. 1. Optionally restore your networking configuration. -## Specifying the node communication mode on a Batch pool +## Specify the communication mode on a Batch pool -The [`targetNodeCommunicationMode`](/rest/api/batchservice/pool/add) property on Batch pools allows you to indicate a preference -to the Batch service for which communication mode to utilize between the Batch service and compute nodes. The following are -the allowable options on this property: +The [targetNodeCommunicationMode](/rest/api/batchservice/pool/add) property on Batch pools allows you to indicate a preference to the Batch service for which communication mode to utilize between the Batch service and compute nodes. The following are the allowable options on this property: -- `classic`: create the pool using classic compute node communication.-- `simplified`: create the pool using simplified compute node communication.-- `default`: allow the Batch service to select the appropriate compute node communication mode. For pools without a virtual-network, the pool may be created in either `classic` or `simplified` mode. For pools with a virtual network, the pool will always -default to `classic` until **30 September 2024**. For more information, see the classic compute node communication mode -[migration guide](batch-pools-to-simplified-compute-node-communication-model-migration-guide.md). +- **Classic**: creates the pool using classic compute node communication. +- **Simplified**: creates the pool using simplified compute node communication. +- **Default**: allows the Batch service to select the appropriate compute node communication mode. For pools without a virtual network, the pool may be created in either classic or simplified mode. For pools with a virtual network, the pool always defaults to classic until **30 September 2024**. For more information, see the classic compute node communication mode [migration guide](batch-pools-to-simplified-compute-node-communication-model-migration-guide.md). > [!TIP]-> Specifying the target node communication mode is a preference indication for the Batch service and not a guarantee that it -> will be honored. Certain configurations on the pool may prevent the Batch service from honoring the specified target node -> communication mode, such as interaction with No public IP address, virtual networks, and the pool configuration type. +> Specifying the target node communication mode indicates a preference for the Batch service, but doesn't guarantee that it will be honored. Certain configurations on the pool might prevent the Batch service from honoring the specified target node communication mode, such as interaction with no public IP address, virtual networks, and the pool configuration type. -The following are examples of how to create a Batch pool with `simplified` compute node communication. 
+The following are examples of how to create a Batch pool with simplified compute node communication. ### Azure portal -Navigate to the Pools blade of your Batch account and click the Add button. Under `OPTIONAL SETTINGS`, you can -select `Simplified` as an option from the pull-down of `Node communication mode` as shown below. +First, sign in to the [Azure portal](https://portal.azure.com). Then, navigate to the **Pools** blade of your Batch account and select the **Add** button. Under **OPTIONAL SETTINGS**, you can select **Simplified** as an option from the pull-down of **Node communication mode** as shown: :::image type="content" source="media/simplified-compute-node-communication/add-pool-simplified-mode.png" alt-text="Screenshot that shows creating a pool with simplified mode."::: -To update an existing pool to simplified communication mode, navigate to the Pools blade of your Batch account and -click on the pool to update. On the left-side navigation, select `Node communication mode`. There you're able -to select a new target node communication mode as shown below. After selecting the appropriate communication mode, -click the `Save` button to update. You need to scale the pool down to zero nodes first, and then back out -for the change to take effect, if conditions allow. +To update an existing pool to simplified communication mode, navigate to the **Pools** blade of your Batch account and select the pool to update. On the left-side navigation, select **Node communication mode**. There you can select a new target node communication mode as shown below. After selecting the appropriate communication mode, select the **Save** button to update. You need to scale the pool down to zero nodes first, and then back out for the change to take effect, if conditions allow. :::image type="content" source="media/simplified-compute-node-communication/update-pool-simplified-mode.png" alt-text="Screenshot that shows updating a pool to simplified mode."::: -To display the current node communication mode for a pool, navigate to the Pools blade of your Batch account, and -click on the pool to view. Select `Properties` on the left-side navigation and the pool node communication mode -will be shown under the General section. +To display the current node communication mode for a pool, navigate to the **Pools** blade of your Batch account, and select the pool to view. Select **Properties** on the left-side navigation and the pool node communication mode appears under the **General** section. :::image type="content" source="media/simplified-compute-node-communication/get-pool-simplified-mode.png" alt-text="Screenshot that shows properties with a pool with simplified mode."::: ### REST API -This example shows how to use the [Batch Service REST API](/rest/api/batchservice/pool/add) to create a pool with -`simplified` compute node communication. +This example shows how to use the [Batch Service REST API](/rest/api/batchservice/pool/add) to create a pool with simplified compute node communication. ```http POST {batchURL}/pools?api-version=2022-10-01.16.0 client-request-id: 00000000-0000-0000-0000-000000000000 ## Limitations -The following are known limitations of the `simplified` communication mode: --- Limited migration support for previously created pools without public IP addresses-([V1 preview](batch-pool-no-public-ip-address.md)). 
These pools can only be migrated if created in a -[virtual network](batch-virtual-network.md), otherwise they won't use simplified compute node communication, even -if specified on the pool. For more information, see the -[migration guide](batch-pools-without-public-ip-addresses-classic-retirement-migration-guide.md). -- Cloud Service Configuration pools are currently not supported for simplified compute node communication and are-[deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). -Specifying a communication mode for these types of pools aren't honored and always results in `classic` -communication mode. We recommend using Virtual Machine Configuration for your Batch pools. For more information, see -[Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md). +The following are known limitations of the simplified communication mode: +- Limited migration support for previously created pools [without public IP addresses](batch-pool-no-public-ip-address.md). These pools can only be migrated if created in a [virtual network](batch-virtual-network.md), otherwise they won't use simplified compute node communication, even if specified on the pool. For more information, see the [migration guide](batch-pools-without-public-ip-addresses-classic-retirement-migration-guide.md). +- Cloud Service Configuration pools are currently not supported for simplified compute node communication and are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). Specifying a communication mode for these types of pools isn't honored and always results in *classic* communication mode. We recommend using Virtual Machine Configuration for your Batch pools. For more information, see [Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md). ## Next steps |
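To complement the REST API example above, the following is a minimal sketch of a pool creation request that opts into simplified compute node communication. The pool ID, VM size, image reference, and authorization token are hypothetical placeholders, not values from the article:

```bash
# Sketch: create a Batch pool with targetNodeCommunicationMode set to "simplified".
# {batchURL} follows the article's placeholder notation; $AAD_TOKEN, the pool ID,
# VM size, and image reference are hypothetical illustration values.
curl -X POST "{batchURL}/pools?api-version=2022-10-01.16.0" \
  -H "Authorization: Bearer $AAD_TOKEN" \
  -H "Content-Type: application/json; odata=minimalmetadata" \
  -d '{
    "id": "simplified-mode-pool",
    "vmSize": "STANDARD_D2S_V3",
    "virtualMachineConfiguration": {
      "imageReference": {
        "publisher": "canonical",
        "offer": "0001-com-ubuntu-server-focal",
        "sku": "20_04-lts",
        "version": "latest"
      },
      "nodeAgentSKUId": "batch.node.ubuntu 20.04"
    },
    "targetDedicatedNodes": 2,
    "targetNodeCommunicationMode": "simplified"
  }'
```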
cognitive-services | Storage Lab Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/Tutorials/storage-lab-tutorial.md | In this section, you'll create a new Web app in Visual Studio and add code to im } ``` - The language used here is [Razor](http://www.asp.net/web-pages/overview/getting-started/introducing-razor-syntax-c), which lets you embed executable code in HTML markup. The ```@foreach``` statement in the middle of the file enumerates the **BlobInfo** objects passed from the controller in **ViewBag** and creates HTML ```<img>``` elements from them. The ```src``` property of each element is initialized with the URI of the blob containing the image thumbnail. + The language used here is [Razor](https://www.asp.net/web-pages/overview/getting-started/introducing-razor-syntax-c), which lets you embed executable code in HTML markup. The ```@foreach``` statement in the middle of the file enumerates the **BlobInfo** objects passed from the controller in **ViewBag** and creates HTML ```<img>``` elements from them. The ```src``` property of each element is initialized with the URI of the blob containing the image thumbnail. 1. Download and unzip the _photos.zip_ file from the [GitHub sample data repository](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/ComputerVision/storage-lab-tutorial). This is an assortment of different photos you can use to test the app. |
cognitive-services | Language Identification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-identification.md | speechRecognizer.recognizeOnceAsync((result: SpeechSDK.SpeechRecognitionResult) ::: zone-end -### Using Speech-to-text custom models +### Speech-to-text custom models > [!NOTE] > Language detection with custom models can only be used with real-time speech to text and speech translation. Batch transcription only supports language detection for base models. var autoDetectSourceLanguageConfig = SpeechSDK.AutoDetectSourceLanguageConfig.fr ::: zone-end -### Using Speech-to-text batch transcription --To identify languages in [Batch transcription](batch-transcription.md), you need to use `languageIdentification` property in the body of your [transcription REST request](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create). The example in this section shows the usage of `languageIdentification` property with four candidate languages. --> [!WARNING] -> Batch transcription only supports language identification for base models. If both language identification and a custom model are specified in the transcription request, the service will fall back to use the base models for the specified candidate languages. This may result in unexpected recognition results. -> -> If your speech to text scenario requires both language identification and custom models, use [real-time speech to text](#using-speech-to-text-custom-models) instead of batch transcription. --```json -{ - <...> - - "properties": { - <...> - - "languageIdentification": { - "candidateLocales": [ - "en-US", - "ja-JP", - "zh-CN", - "hi-IN" - ] - }, - <...> - } -} -``` - ## Speech translation You use Speech translation when you need to identify the language in an audio source and then translate it to another language. For more information, see [Speech translation overview](speech-translation.md). recognizer.stop_continuous_recognition() ::: zone-end +## Run and use a container ++Speech containers provide websocket-based query endpoint APIs that are accessed through the Speech SDK and Speech CLI. By default, the Speech SDK and Speech CLI use the public Speech service. To use the container, you need to change the initialization method. Use a container host URL instead of key and region. ++When you run language ID in a container, use the `SourceLanguageRecognizer` object instead of `SpeechRecognizer` or `TranslationRecognizer`. ++For more information about containers, see the [language identification speech containers](speech-container-lid.md#use-the-container) how-to guide. +++## Speech-to-text batch transcription ++To identify languages with [Batch transcription REST API](batch-transcription.md), you need to use `languageIdentification` property in the body of your [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) request. ++> [!WARNING] +> Batch transcription only supports language identification for base models. If both language identification and a custom model are specified in the transcription request, the service will fall back to use the base models for the specified candidate languages. This may result in unexpected recognition results. 
+> +> If your speech to text scenario requires both language identification and custom models, use [real-time speech to text](#speech-to-text-custom-models) instead of batch transcription. ++The following example shows the usage of the `languageIdentification` property with four candidate languages. For more information about request properties see [Create a batch transcription](batch-transcription-create.md#request-configuration-options). ++```json +{ + <...> + + "properties": { + <...> + + "languageIdentification": { + "candidateLocales": [ + "en-US", + "ja-JP", + "zh-CN", + "hi-IN" + ] + }, + <...> + } +} +``` + ## Next steps * [Try the speech to text quickstart](get-started-speech-to-text.md) |
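To show how the `languageIdentification` property travels in a [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) request, here is a minimal sketch using curl. The region, key variable, display name, and audio URL are hypothetical placeholders:

```bash
# Sketch: create a batch transcription (Speech-to-text REST API v3.1) with
# language identification. YOUR_REGION, SPEECH_KEY, the display name, locale,
# and contentUrls value are hypothetical placeholders.
curl -X POST "https://YOUR_REGION.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions" \
  -H "Ocp-Apim-Subscription-Key: $SPEECH_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "displayName": "LID batch transcription",
    "locale": "en-US",
    "contentUrls": ["https://example.com/audio/sample.wav"],
    "properties": {
      "languageIdentification": {
        "candidateLocales": ["en-US", "ja-JP", "zh-CN", "hi-IN"]
      }
    }
  }'
```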
cognitive-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md | You can also get a list of locales and voices supported for each specific region Language support varies by Speech service functionality. > [!NOTE]-> See [Speech Containers](speech-container-howto.md#available-speech-containers) and [Embedded Speech](embedded-speech.md#models-and-voices) separately for their supported languages. +> See [Speech Containers](speech-container-overview.md#available-speech-containers) and [Embedded Speech](embedded-speech.md#models-and-voices) separately for their supported languages. **Choose a Speech feature** Each prebuilt neural voice model is available at 24kHz and high-fidelity 48kHz. Please note that the following neural voices are retired. -- The English (United Kingdom) voice `en-GB-MiaNeural` retired on October 30, 2021. All service requests to `en-GB-MiaNeural` will be redirected to `en-GB-SoniaNeural` automatically as of October 30, 2021. If you're using container Neural TTS, [download](speech-container-howto.md#get-the-container-image-with-docker-pull) and deploy the latest version. Starting from October 30, 2021, all requests with previous versions will not succeed.+- The English (United Kingdom) voice `en-GB-MiaNeural` retired on October 30, 2021. All service requests to `en-GB-MiaNeural` will be redirected to `en-GB-SoniaNeural` automatically as of October 30, 2021. If you're using container Neural TTS, [download](speech-container-ntts.md#get-the-container-image-with-docker-pull) and deploy the latest version. Starting from October 30, 2021, all requests with previous versions will not succeed. - The `en-US-JessaNeural` voice is retired and replaced by `en-US-AriaNeural`. If you were using "Jessa" before, convert to "Aria." ### Custom Neural Voice |
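To confirm which prebuilt neural voices a region currently offers, for example to verify that a redirect target such as `en-GB-SoniaNeural` is available, you can query the voices list endpoint. A minimal sketch, assuming a hypothetical region and a `SPEECH_KEY` environment variable:

```bash
# Sketch: list the prebuilt neural voices available in a region.
# YOUR_REGION and SPEECH_KEY are hypothetical placeholders.
curl -s "https://YOUR_REGION.tts.speech.microsoft.com/cognitiveservices/voices/list" \
  -H "Ocp-Apim-Subscription-Key: $SPEECH_KEY"
```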
cognitive-services | Openai Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/openai-speech.md | +zone_pivot_groups: programming-languages-csharp-python keywords: speech to text, openai # Azure OpenAI speech to speech chat + [!INCLUDE [Python include](./includes/quickstarts/openai-speech/python.md)] ## Next steps |
cognitive-services | Speech Container Batch Processing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-batch-processing.md | Use the batch processing kit to complement and scale out workloads on Speech con :::image type="content" source="media/containers/general-diagram.png" alt-text="A diagram showing an example batch-kit container workflow."::: -The batch kit container is available for free on [GitHub](https://github.com/microsoft/batch-processing-kit) and [Docker hub](https://hub.docker.com/r/batchkit/speech-batch-kit/tags). You are only [billed](speech-container-howto.md#billing) for the Speech containers you use. +The batch kit container is available for free on [GitHub](https://github.com/microsoft/batch-processing-kit) and [Docker hub](https://hub.docker.com/r/batchkit/speech-batch-kit/tags). You are only [billed](speech-container-overview.md#billing) for the Speech containers you use. | Feature | Description | ||| Use the Docker `run` command to start the container. This will start an interact -```Docker +```bash docker run --network host --rm -ti -v /mnt/my_nfs:/my_nfs --entrypoint /bin/bash /mnt/my_nfs:/my_nfs docker.io/batchkit/speech-batch-kit:latest ``` To run the batch client: -```Docker +```bash run-batch-client -config /my_nfs/config.yaml -input_folder /my_nfs/audio_files -output_folder /my_nfs/transcriptions -log_folder /my_nfs/logs -file_log_level DEBUG -nbest 1 -m ONESHOT -diarization None -language en-US -strict_config ``` To run the batch client and container in a single command: -```Docker +```bash docker run --network host --rm -ti -v /mnt/my_nfs:/my_nfs docker.io/batchkit/speech-batch-kit:latest -config /my_nfs/config.yaml -input_folder /my_nfs/audio_files -output_folder /my_nfs/transcriptions -log_folder /my_nfs/logs ``` |
cognitive-services | Speech Container Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-configuration.md | -Speech containers enable customers to build one speech application architecture that is optimized to take advantage of both robust cloud capabilities and edge locality. The supported speech containers are **speech-to-text**, **Custom speech-to-text**, **speech language identification** and **Neural text-to-speech**. +Speech containers enable customers to build one speech application architecture that is optimized to take advantage of both robust cloud capabilities and edge locality. -The **Speech** container runtime environment is configured using the `docker run` command arguments. This container has several required settings, along with a few optional settings. Several [examples](#example-docker-run-commands) of the command are available. The container-specific settings are the billing settings. +The Speech container runtime environment is configured using the `docker run` command arguments. This container has several required settings, along with a few optional settings. The container-specific settings are the billing settings. ## Configuration settings [!INCLUDE [Container shared configuration settings table](../../../includes/cognitive-services-containers-configuration-shared-settings-table.md)] > [!IMPORTANT]-> The [`ApiKey`](#apikey-configuration-setting), [`Billing`](#billing-configuration-setting), and [`Eula`](#eula-setting) settings are used together, and you must provide valid values for all three of them; otherwise your container won't start. For more information about using these configuration settings to instantiate a container, see [Billing](speech-container-howto.md#billing). +> The [`ApiKey`](#apikey-configuration-setting), [`Billing`](#billing-configuration-setting), and [`Eula`](#eula-setting) settings are used together, and you must provide valid values for all three of them; otherwise your container won't start. For more information about using these configuration settings to instantiate a container, see [Billing](speech-container-overview.md#billing). ## ApiKey configuration setting The `ApiKey` setting specifies the Azure resource key used to track billing info This setting can be found in the following place: -- Azure portal: **Speech's** Resource Management, under **Keys**+- Azure portal: **Speech** Resource Management, under **Keys** ## ApplicationInsights setting The `Billing` setting specifies the endpoint URI of the _Speech_ resource on Azu This setting can be found in the following place: -- Azure portal: **Speech's** Overview, labeled `Endpoint`+- Azure portal: **Speech** Overview, labeled `Endpoint` | Required | Name | Data type | Description | | -- | - | | -- |-| Yes | `Billing` | String | Billing endpoint URI. For more information on obtaining the billing URI, see [gather required parameters](speech-container-howto.md#gather-required-parameters). For more information and a complete list of regional endpoints, see [Custom subdomain names for Cognitive Services](../cognitive-services-custom-subdomains.md). | +| Yes | `Billing` | String | Billing endpoint URI. For more information on obtaining the billing URI, see [billing](speech-container-overview.md#billing). For more information and a complete list of regional endpoints, see [Custom subdomain names for Cognitive Services](../cognitive-services-custom-subdomains.md). 
| ## Eula setting The exact syntax of the host mount location varies depending on the host operati The custom speech containers use [volume mounts](https://docs.docker.com/storage/volumes/) to persist custom models. You can specify a volume mount by adding the `-v` (or `--volume`) option to the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command. +> [!NOTE] +> The volume mount settings are only applicable for [Custom Speech-to-text](speech-container-cstt.md) containers. + Custom models are downloaded the first time that a new model is ingested as part of the custom speech container docker run command. Sequential runs of the same `ModelId` for a custom speech container will use the previously downloaded model. If the volume mount is not provided, custom models cannot be persisted. The volume mount setting consists of three color `:` separated fields: The volume mount setting consists of three color `:` separated fields: 2. The second field is the directory in the container, for example _/usr/local/models_. 3. The third field (optional) is a comma-separated list of options, for more information see [use volumes](https://docs.docker.com/storage/volumes/). -### Volume mount example +Here's a volume mount example that mounts the host machine _C:\input_ directory to the containers _/usr/local/models_ directory. ```bash -v C:\input:/usr/local/models ``` -This command mounts the host machine _C:\input_ directory to the containers _/usr/local/models_ directory. --> [!IMPORTANT] -> The volume mount settings are only applicable to **Custom Speech-to-text** containers. The **Speech-to-text**, **Neural Text-to-speech** and **Speech language identification** containers do not use volume mounts. --## Example docker run commands --The following examples use the configuration settings to illustrate how to write and use `docker run` commands. Once running, the container continues to run until you [stop](speech-container-howto.md#stop-the-container) it. --- **Line-continuation character**: The Docker commands in the following sections use the back slash, `\`, as a line continuation character. Replace or remove this based on your host operating system's requirements.-- **Argument order**: Do not change the order of the arguments unless you are familiar with Docker containers.--Replace {_argument_name_} with your own values: --| Placeholder | Value | Format or example | -| -- | -- | -- | -| **{API_KEY}** | The endpoint key of the `Speech` resource on the Azure `Speech` Keys page. | `xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx` | -| **{ENDPOINT_URI}** | The billing endpoint value is available on the Azure `Speech` Overview page. | See [gather required parameters](speech-container-howto.md#gather-required-parameters) for explicit examples. | ---> [!IMPORTANT] -> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing-configuration-setting). -> The ApiKey value is the **Key** from the Azure Speech Resource keys page. --## Speech container Docker examples --The following Docker examples are for the Speech container. 
--## [Speech-to-text](#tab/stt) --### Basic example for Speech-to-text ---```Docker -docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \ -mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text \ -Eula=accept \ -Billing={ENDPOINT_URI} \ -ApiKey={API_KEY} -``` --### Logging example for Speech-to-text ---```Docker -docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \ -mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \ -Eula=accept \ -Billing={ENDPOINT_URI} \ -ApiKey={API_KEY} \ -Logging:Console:LogLevel:Default=Information -``` --## [Custom Speech-to-text](#tab/cstt) --### Basic example for Custom Speech-to-text ---```Docker -docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \ --v {VOLUME_MOUNT}:/usr/local/models \-mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \ -ModelId={MODEL_ID} \ -Eula=accept \ -Billing={ENDPOINT_URI} \ -ApiKey={API_KEY} -``` --### Logging example for Custom Speech-to-text ---```Docker -docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \ --v {VOLUME_MOUNT}:/usr/local/models \-mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \ -ModelId={MODEL_ID} \ -Eula=accept \ -Billing={ENDPOINT_URI} \ -ApiKey={API_KEY} \ -Logging:Console:LogLevel:Default=Information -``` --## [Neural Text-to-speech](#tab/ntts) --### Basic example for Neural Text-to-speech --```Docker -docker run --rm -it -p 5000:5000 --memory 12g --cpus 6 \ -mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech \ -Eula=accept \ -Billing={ENDPOINT_URI} \ -ApiKey={API_KEY} -``` --### Logging example for Neural Text-to-speech -```Docker -docker run --rm -it -p 5000:5000 --memory 12g --cpus 6 \ -mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech \ -Eula=accept \ -Billing={ENDPOINT_URI} \ -ApiKey={API_KEY} \ -Logging:Console:LogLevel:Default=Information -``` --## [Speech Language Identification](#tab/lid) --### Basic example for Speech language identification ---```Docker -docker run --rm -it -p 5000:5000 --memory 1g --cpus 1 \ -mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection \ -Eula=accept \ -Billing={ENDPOINT_URI} \ -ApiKey={API_KEY} -``` --### Logging example for Speech language identification ---```Docker -docker run --rm -it -p 5000:5000 --memory 1g --cpus 1 \ -mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection \ -Eula=accept \ -Billing={ENDPOINT_URI} \ -ApiKey={API_KEY} \ -Logging:Console:LogLevel:Default=Information -``` -- ## Next steps - Review [How to install and run containers](speech-container-howto.md) |
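The `Eula`, `Billing`, and `ApiKey` settings described above always travel together on the `docker run` command line. As a reminder of how they fit together, here is a minimal sketch for the speech-to-text image; `{ENDPOINT_URI}` and `{API_KEY}` are placeholders for your own Speech resource values:

```bash
# Sketch: the three required billing settings passed to a Speech container.
# {ENDPOINT_URI} and {API_KEY} are placeholders for your Speech resource values.
docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \
mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text \
Eula=accept \
Billing={ENDPOINT_URI} \
ApiKey={API_KEY}
```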
cognitive-services | Speech Container Cstt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-cstt.md | + + Title: Custom speech-to-text containers - Speech service ++description: Install and run custom speech-to-text containers with Docker to perform speech recognition, transcription, generation, and more on-premises. ++++++ Last updated : 04/18/2023++zone_pivot_groups: programming-languages-speech-sdk-cli +keywords: on-premises, Docker, container +++# Custom speech-to-text containers with Docker ++The Custom speech-to-text container transcribes real-time speech or batch audio recordings with intermediate results. You can use a custom model that you created in the [Custom Speech portal](https://speech.microsoft.com/customspeech). In this article, you'll learn how to download, install, and run a Custom speech-to-text container. ++> [!NOTE] +> You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container. ++For more information about prerequisites, validating that a container is running, running multiple containers on the same host, and running disconnected containers, see [Install and run Speech containers with Docker](speech-container-howto.md). ++## Container images ++The Custom speech-to-text container image for all supported versions and locales can be found on the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `custom-speech-to-text`. +++The fully qualified container image name is, `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text`. Either append a specific version or append `:latest` to get the most recent version. ++| Version | Path | +|--|| +| Latest | `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:latest` | +| 3.12.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:3.12.0-amd64` | ++All tags, except for `latest`, are in the following format and are case sensitive: ++``` +<major>.<minor>.<patch>-<platform>-<prerelease> +``` ++> [!NOTE] +> The `locale` and `voice` for custom speech-to-text containers is determined by the custom model ingested by the container. ++The tags are also available [in JSON format](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/custom-speech-to-text/tags/list) for your convenience. The body includes the container path and list of tags. The tags aren't sorted by version, but `"latest"` is always included at the end of the list as shown in this snippet: ++```json +{ + "name": "azure-cognitive-services/speechservices/custom-speech-to-text", + "tags": [ + "2.10.0-amd64", + "2.11.0-amd64", + "2.12.0-amd64", + "2.12.1-amd64", + <--redacted for brevity--> + "latest" + ] +} +``` ++### Get the container image with docker pull ++You need the [prerequisites](speech-container-howto.md#prerequisites) including required hardware. Please also see the [recommended allocation of resources](speech-container-howto.md#container-requirements-and-recommendations) for each Speech container. 
++Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry: ++```bash +docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:latest +``` ++> [!NOTE] +> The `locale` and `voice` for custom Speech containers is determined by the custom model ingested by the container. ++## Get the model ID ++Before you can [run](#run-the-container-with-docker-run) the container, you need to know the model ID of your custom model or a base model ID. When you run the container you specify one of the model IDs to download and use. ++# [Custom model ID](#tab/custom-model) ++The custom model has to have been [trained](how-to-custom-speech-train-model.md) by using the [Speech Studio](https://aka.ms/speechstudio/customspeech). For information about how to get the model ID, see [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md). ++ ++Obtain the **Model ID** to use as the argument to the `ModelId` parameter of the `docker run` command. ++ +++# [Base model ID](#tab/base-model) ++You can get the available base model information by using option `BaseModelLocale={LOCALE}`. This option gives you a list of available base models on that locale under your billing account. ++To get base model IDs, you use the `docker run` command. For example: ++```bash +docker run --rm -it \ +mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \ +BaseModelLocale={LOCALE} \ +Eula=accept \ +Billing={ENDPOINT_URI} \ +ApiKey={API_KEY} +``` ++This command checks the container image and returns the available base models of the target locale. ++> [!NOTE] +> Although you use the `docker run` command, the container isn't started for service. ++The output gives you a list of base models with the information locale, model ID, and creation date time. 
For example: ++``` +Checking available base model for en-us +2020/10/30 21:54:20 [Info] Searching available base models for en-us +2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2016-11-04T08:23:42Z, Id: a3d8aab9-6f36-44cd-9904-b37389ce2bfa +2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2016-11-04T12:01:02Z, Id: cc7826ac-5355-471d-9bc6-a54673d06e45 +2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2017-08-17T12:00:00Z, Id: a1f8db59-40ff-4f0e-b011-37629c3a1a53 +2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-04-16T11:55:00Z, Id: c7a69da3-27de-4a4b-ab75-b6716f6321e5 +2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-09-21T15:18:43Z, Id: da494a53-0dad-4158-b15f-8f9daca7a412 +2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-10-19T11:28:54Z, Id: 84ec130b-d047-44bf-a46d-58c1ac292ca7 +2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-11-26T07:59:09Z, Id: ee5c100f-152f-4ae5-9e9d-014af3c01c56 +2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-11-26T09:21:55Z, Id: d04959a6-71da-4913-9997-836793e3c115 +2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2019-01-11T10:04:19Z, Id: 488e5f23-8bc5-46f8-9ad8-ea9a49a8efda +2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2019-02-18T14:37:57Z, Id: 0207b3e6-92a8-4363-8c0e-361114cdd719 +2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2019-03-03T17:34:10Z, Id: 198d9b79-2950-4609-b6ec-f52254074a05 +2020/10/30 21:54:21 [Fatal] Please run this tool again and assign --modelId '<one above base model id>'. If no model id listed above, it means currently there is no available base model for en-us +``` ++++## Display model download ++Before you [run](#run-the-container-with-docker-run) the container, you can optionally get the available display models information and choose to download those models into your speech-to-text container to get highly improved final display output. Display model download is available with custom-speech-to-text container version 3.1.0 and later. ++> [!NOTE] +> Although you use the `docker run` command, the container isn't started for service. ++You can query or download any or all of these display model types: Rescoring (`Rescore`), Punctuation (`Punct`), resegmentation (`Resegment`), and wfstitn (`Wfstitn`). Otherwise, you can use the `FullDisplay` option (with or without the other types) to query or download all types of display models. ++Set the `BaseModelLocale` to query the latest available display model on the target locale. If you include multiple display model types, the command will return the latest available display models for each type. For example: ++```bash +docker run --rm -it \ +mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \ +Punct Rescore Resegment Wfstitn \ # Specify `FullDisplay` or a space-separated subset of display models +BaseModelLocale={LOCALE} \ +Eula=accept \ +Billing={ENDPOINT_URI} \ +ApiKey={API_KEY} +``` ++Set the `DisplayLocale` to download the latest available display model on the target locale. When you set `DisplayLocale`, you must also specify `FullDisplay` or a space-separated subset of display models. The command will download the latest available display model for each specified type. 
For example: ++```bash +docker run --rm -it \ +mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \ +Punct Rescore Resegment Wfstitn \ # Specify `FullDisplay` or a space-separated subset of display models +DisplayLocale={LOCALE} \ +Eula=accept \ +Billing={ENDPOINT_URI} \ +ApiKey={API_KEY} +``` ++Set one model ID parameter to download a specific display model: Rescoring (`RescoreId`), Punctuation (`PunctId`), resegmentation (`ResegmentId`), or wfstitn (`WfstitnId`). This is similar to how you would download a base model via the `ModelId` parameter. For example, to download a rescoring display model, you can use the following command with the `RescoreId` parameter: ++```bash +docker run --rm -it \ +mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \ +RescoreId={RESCORE_MODEL_ID} \ +Eula=accept \ +Billing={ENDPOINT_URI} \ +ApiKey={API_KEY} +``` ++> [!NOTE] +> If you set more than one query or download parameter, the command will prioritize in this order: `BaseModelLocale`, model ID, and then `DisplayLocale` (only applicable for display models). ++## Run the container with docker run ++Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container for service. ++# [Custom speech to text](#tab/container) +++# [Disconnected custom speech to text](#tab/disconnected) ++To run disconnected containers (not connected to the internet), you must submit [this request form](https://aka.ms/csdisconnectedcontainers) and wait for approval. For more information about applying and purchasing a commitment plan to use containers in disconnected environments, see [Use containers in disconnected environments](../containers/disconnected-containers.md) in the Azure Cognitive Services documentation. ++If you have been approved to run the container disconnected from the internet, the following example shows the formatting of the `docker run` command to use, with placeholder values. Replace these placeholder values with your own values. ++In order to prepare and configure a disconnected custom speech-to-text container you will need two separate speech resources: ++- A regular Azure Speech Service resource which is either configured to use a "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan. This is used to train, download, and configure your custom speech models for use in your container. +- An Azure Speech Service resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan. This is used to download your disconnected container license file required to run the container in disconnected mode. ++Follow these steps to download and run the container in disconnected environments. +1. [Download a model for the disconnected container](#download-a-model-for-the-disconnected-container). For this step, use a regular Azure Speech Service resource which is either configured to use a "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan. +1. [Download the disconnected container license](#download-the-disconnected-container-license). For this step, use an Azure Speech Service resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan. +1. [Run the disconnected container for service](#run-the-disconnected-container). For this step, use an Azure Speech Service resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan. 
++### Download a model for the disconnected container ++For this step, use a regular Azure Speech Service resource which is either configured to use a "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan. +++### Download the disconnected container license ++Next, you download your disconnected license file. The `DownloadLicense=True` parameter in your `docker run` command will download a license file that will enable your Docker container to run when it isn't connected to the internet. It also contains an expiration date, after which the license file will be invalid to run the container. ++You can only use a license file with the appropriate container that you've been approved for. For example, you can't use a license file for a `speech-to-text` container with a `neural-text-to-speech` container. ++| Placeholder | Description | +|-|-| +| `{IMAGE}` | The container image you want to use.<br/><br/>For example: `mcr.microsoft.com/azure-cognitive-services/custom-speech-to-text:latest` | +| `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted.<br/><br/>For example: `/host/license:/path/to/license/directory` | +| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal.<br/><br/>For example: `https://<your-resource-name>.cognitiveservices.azure.com` | +| `{API_KEY}` | The key for your Speech resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | +| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` | ++For this step, use an Azure Speech Service resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan. ++```bash +docker run --rm -it -p 5000:5000 \ +-v {LICENSE_MOUNT} \ +{IMAGE} \ +eula=accept \ +billing={ENDPOINT_URI} \ +apikey={API_KEY} \ +DownloadLicense=True \ +Mounts:License={CONTAINER_LICENSE_DIRECTORY} +``` ++### Run the disconnected container ++Once the license file has been downloaded, you can run the container in a disconnected environment. The following example shows the formatting of the `docker run` command you'll use, with placeholder values. Replace these placeholder values with your own values. ++Wherever the container is run, the license file must be mounted to the container and the location of the license folder on the container's local filesystem must be specified with `Mounts:License=`. An output mount must also be specified so that billing usage records can be written. ++| Placeholder | Description | +|-|-| +| `{IMAGE}` | The container image you want to use.<br/><br/>For example: `mcr.microsoft.com/azure-cognitive-services/custom-speech-to-text:latest` | +| `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container.<br/><br/>For example: `4g` | +| `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container.<br/><br/>For example: `4` | +| `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted.<br/><br/>For example: `/host/license:/path/to/license/directory` | +| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal.<br/><br/>For example: `https://<your-resource-name>.cognitiveservices.azure.com` | +| `{API_KEY}` | The key for your Speech resource. 
You can find it on your resource's **Key and endpoint** page, on the Azure portal. | +| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` | +| `{CONTAINER_OUTPUT_DIRECTORY}` | Location of the output folder on the container's local filesystem.<br/><br/>For example: `/path/to/output/directory` | +| `{OUTPUT_PATH}` | The output path for logging.<br/><br/>For example: `/host/output:/path/to/output/directory`<br/><br/>For more information, see [usage records](../containers/disconnected-containers.md#usage-records) in the Azure Cognitive Services documentation. | +| `{MODEL_PATH}` | The path where the model is located.<br/><br/>For example: `/path/to/model/` | ++For this step, use an Azure Speech Service resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan. ++```bash +docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \ +-v {LICENSE_MOUNT} \ +-v {OUTPUT_PATH} \ +-v {MODEL_PATH} \ +{IMAGE} \ +eula=accept \ +Mounts:License={CONTAINER_LICENSE_DIRECTORY} +Mounts:Output={CONTAINER_OUTPUT_DIRECTORY} +``` ++The Custom Speech-to-Text container provides a default directory for writing the license file and billing log at runtime. The default directories are /license and /output respectively. ++When you're mounting these directories to the container with the `docker run -v` command, make sure the local machine directory is set ownership to `user:group nonroot:nonroot` before running the container. ++Below is a sample command to set file/directory ownership. ++```bash +sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ... +``` +++++## Use the container +++[Try the speech-to-text quickstart](get-started-speech-to-text.md) using host authentication instead of key and region. ++## Next steps ++* See the [Speech containers overview](speech-container-overview.md) +* Review [configure containers](speech-container-configuration.md) for configuration settings +* Use more [Azure Cognitive Services containers](../cognitive-services-container-support.md) ++ |
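For comparison with the disconnected run above, a connected Custom speech-to-text container passes the model ID together with the standard billing settings and a volume mount so that the downloaded model persists between runs. A minimal sketch with placeholder values:

```bash
# Sketch: run a connected Custom speech-to-text container.
# {VOLUME_MOUNT}, {MODEL_ID}, {ENDPOINT_URI}, and {API_KEY} are placeholders.
docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \
-v {VOLUME_MOUNT}:/usr/local/models \
mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \
ModelId={MODEL_ID} \
Eula=accept \
Billing={ENDPOINT_URI} \
ApiKey={API_KEY}
```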
cognitive-services | Speech Container Howto | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md | Title: Install and run Docker containers for the Speech service APIs + Title: Install and run Speech containers with Docker - Speech service -description: Use the Docker containers for the Speech service to perform speech recognition, transcription, generation, and more on-premises. +description: Use the Speech containers with Docker to perform speech recognition, transcription, generation, and more on-premises. Previously updated : 03/02/2023 Last updated : 04/18/2023 - keywords: on-premises, Docker, container -# Install and run Docker containers for the Speech service APIs +# Install and run Speech containers with Docker -By using containers, you can run _some_ of the Azure Cognitive Services Speech service APIs in your own environment. Containers are great for specific security and data governance requirements. In this article, you'll learn how to download, install, and run a Speech container. +By using containers, you can use a subset of the Speech service features in your own environment. In this article, you'll learn how to download, install, and run a Speech container. -With Speech containers, you can build a speech application architecture that's optimized for both robust cloud capabilities and edge locality. Several containers are available, which use the same [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) as the cloud-based Azure Speech services. --## Available Speech containers --> [!IMPORTANT] -> We retired the standard speech synthesis voices and text-to-speech container on August 31, 2021. Consider migrating your applications to use the neural text-to-speech container instead. For more information on updating your application, see [Migrate from standard voice to prebuilt neural voice](./how-to-migrate-to-prebuilt-neural-voice.md). --| Container | Features | Supported versions and locales | -|--|--|--| -| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | Latest: 3.13.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list).| -| Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | Latest: 3.13.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list). | -| Speech language identification | Detects the language spoken in audio files. | Latest: 1.11.0<sup>1</sup><br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/language-detection/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/language-detection/tags/list). 
| -| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | Latest: 2.12.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list). | --<sup>1</sup> The container is available in public preview. Containers in preview are still under development and don't meet Microsoft's stability and support requirements. +> [!NOTE] +> Disconnected container pricing and commitment tiers vary from standard containers. For more information, see [Speech Services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). ## Prerequisites -> [!IMPORTANT] -> To use the Speech containers, you must submit an online request and have it approved. For more information, see the "Request approval to run the container" section. - You must meet the following prerequisites before you use Speech service containers. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin. You need: +* You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container. * [Docker](https://docs.docker.com/) installed on a host computer. Docker must be configured to allow the containers to connect with and send billing data to Azure. * On Windows, Docker must also be configured to support Linux containers. * You should have a basic understanding of [Docker concepts](https://docs.docker.com/get-started/overview/). * A <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech service resource" target="_blank">Speech service resource </a> with the free (F0) or standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). +### Billing arguments -## Host computer requirements and recommendations +Speech containers aren't licensed to run without being connected to Azure for metering. You must configure your container to communicate billing information with the metering service at all times. ++Three primary parameters for all Cognitive Services containers are required. The Microsoft Software License Terms must be present with a value of **accept**. An Endpoint URI and API key are also needed. ++Queries to the container are billed at the pricing tier of the Azure resource that's used for the `ApiKey` parameter. ++The <a href="https://docs.docker.com/engine/reference/commandline/run/" target="_blank">`docker run` <span class="docon docon-navigate-external x-hidden-focus"></span></a> command will start the container when all three of the following options are provided with valid values: ++| Option | Description | +|--|-| +| `ApiKey` | The API key of the Speech resource that's used to track billing information.<br/>The `ApiKey` value is used to start the container and is available on the Azure portal's **Keys** page of the corresponding Speech resource. 
Go to the **Keys** page, and select the **Copy to clipboard** <span class="docon docon-edit-copy x-hidden-focus"></span> icon.| +| `Billing` | The endpoint of the Speech resource that's used to track billing information.<br/>The endpoint is available on the Azure portal **Overview** page of the corresponding Speech resource. Go to the **Overview** page, hover over the endpoint, and a **Copy to clipboard** <span class="docon docon-edit-copy x-hidden-focus"></span> icon appears. Copy and use the endpoint where needed.| +| `Eula` | Indicates that you accepted the license for the container.<br/>The value of this option must be set to **accept**. | ++> [!IMPORTANT] +> These subscription keys are used to access your Cognitive Services API. Don't share your keys. Store them securely. For example, use Azure Key Vault. We also recommend that you regenerate these keys regularly. Only one key is necessary to make an API call. When you regenerate the first key, you can use the second key for continued access to the service. ++The container needs the billing argument values to run. These values allow the container to connect to the billing endpoint. The container reports usage about every 10 to 15 minutes. If the container doesn't connect to Azure within the allowed time window, the container continues to run but doesn't serve queries until the billing endpoint is restored. The connection is attempted 10 times at the same time interval of 10 to 15 minutes. If it can't connect to the billing endpoint within the 10 tries, the container stops serving requests. For an example of the information sent to Microsoft for billing, see the [Cognitive Services container FAQ](../containers/container-faq.yml#how-does-billing-work) in the Azure Cognitive Services documentation. +For more information about these options, see [Configure containers](speech-container-configuration.md). ### Container requirements and recommendations Core and memory correspond to the `--cpus` and `--memory` settings, which are us > [!NOTE] > The minimum and recommended allocations are based on Docker limits, *not* the host machine resources.-> For example, speech-to-text containers memory map portions of a large language model. We recommend that the entire file should fit in memory. You need to add an additional 4 to 8 GB to load the speech modesl (see above table). +> For example, speech-to-text containers memory map portions of a large language model. We recommend that the entire file should fit in memory. You need to add an additional 4 to 8 GB to load the speech models (see above table). > Also, the first run of either container might take longer because models are being paged into memory. -### Advanced Vector Extension support --The *host* is the computer that runs the Docker container. The host *must support* [Advanced Vector Extensions](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX2) (AVX2). You can check for AVX2 support on Linux hosts with the following command: --```console -grep -q avx2 /proc/cpuinfo && echo AVX2 supported || echo No AVX2 support detected -``` -> [!WARNING] -> The host computer is *required* to support AVX2. The container *will not* function correctly without AVX2 support. --## Request approval to run the container --Fill out and submit the [request form](https://aka.ms/csgate) to request access to the container. ---## Speech container images --# [Speech-to-text](#tab/stt) --The Speech-to-text container image can be found on the `mcr.microsoft.com` container registry syndicate. 
It resides within the `azure-cognitive-services/speechservices/` repository and is named `speech-to-text`. The fully qualified container image name is, `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text`. You can find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags). --| Container | Repository | -|--|| -| Speech-to-text | `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest` | --# [Custom speech-to-text](#tab/cstt) --The Custom Speech-to-text container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `custom-speech-to-text`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text`. --To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags). --| Container | Repository | -|--|| -| Custom speech-to-text | `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:latest` | --# [Neural text-to-speech](#tab/ntts) --The Neural Text-to-speech container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `neural-text-to-speech`. The fully qualified container image name is, `mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech`. --To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/about). --| Container | Repository | -|--|| -| Neural text-to-speech | `mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:latest` | --# [Speech language identification](#tab/lid) --The Speech language detection container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `language-detection`. The fully qualified container image name is, `mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection`. --To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/language-detection/tags). --> [!TIP] -> To get the most useful results, use the Speech language identification container with the speech-to-text or custom speech-to-text containers. --| Container | Repository | -|--|| -| Speech language identification | `mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection:latest` | --*** --### Get the container image with docker pull --# [Speech-to-text](#tab/stt) --#### Docker pull for the speech-to-text container --Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry: --```Docker -docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest -``` --> [!IMPORTANT] -> The `latest` tag pulls the `en-US` locale. 
For additional locales, see [Speech-to-text locales](#speech-to-text-locales). --#### Speech-to-text locales --All tags, except for `latest`, are in the following format and are case sensitive: --``` -<major>.<minor>.<patch>-<platform>-<locale>-<prerelease> -``` --The following tag is an example of the format: --``` -2.6.0-amd64-en-us -``` --For all the supported locales of the speech-to-text container, see [Speech-to-text image tags](../containers/container-image-tags.md#speech-to-text). --# [Custom speech-to-text](#tab/cstt) --#### Docker pull for the custom speech-to-text container --Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry: --```Docker -docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:latest -``` --> [!NOTE] -> The `locale` and `voice` for custom Speech containers is determined by the custom model ingested by the container. --# [Neural text-to-speech](#tab/ntts) --#### Docker pull for the neural text-to-speech container --Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry: --```Docker -docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:latest -``` --> [!IMPORTANT] -> The `latest` tag pulls the `en-US` locale and `arianeural` voice. For more locales, see [Neural text-to-speech locales](#neural-text-to-speech-locales). --#### Neural text-to-speech locales --All tags, except for `latest`, are in the following format and are case sensitive: --``` -<major>.<minor>.<patch>-<platform>-<locale>-<voice> -``` --The following tag is an example of the format: --``` -1.3.0-amd64-en-us-arianeural -``` --For all the supported locales and corresponding voices of the neural text-to-speech container, see [Neural text-to-speech image tags](../containers/container-image-tags.md#neural-text-to-speech). --> [!IMPORTANT] -> When you construct a neural text-to-speech HTTP POST, the [SSML](speech-synthesis-markup.md) message requires a `voice` element with a `name` attribute. The value is the corresponding container [locale and voice](language-support.md?tabs=tts). For example, the `latest` tag would have a voice name of `en-US-AriaNeural`. --# [Speech language identification](#tab/lid) --#### Docker pull for the Speech language identification container --Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry: --```Docker -docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection:latest -``` --*** -## Use the container --After the container is on the [host computer](#host-computer-requirements-and-recommendations), use the following process to work with the container. --1. [Run the container](#run-the-container-with-docker-run) with the required billing settings. More [examples](speech-container-configuration.md#example-docker-run-commands) of the `docker run` command are available. -1. [Query the container's prediction endpoint](#query-the-containers-prediction-endpoint). --## Run the container with docker run --Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. For more information on how to get the `{Endpoint_URI}` and `{API_Key}` values, see [Gather required parameters](#gather-required-parameters). 
More [examples](speech-container-configuration.md#example-docker-run-commands) of the `docker run` command are also available. --> [!NOTE] -> For general container requirements, see [Container requirements and recommendations](#container-requirements-and-recommendations). +## Host computer requirements and recommendations -# [Speech-to-text](#tab/stt) +The host is an x64-based computer that runs the Docker container. It can be a computer on your premises or a Docker hosting service in Azure, such as: -### Run the container connected to the internet +* [Azure Kubernetes Service](~/articles/aks/index.yml). +* [Azure Container Instances](~/articles/container-instances/index.yml). +* A [Kubernetes](https://kubernetes.io/) cluster deployed to [Azure Stack](/azure-stack/operator). For more information, see [Deploy Kubernetes to Azure Stack](/azure-stack/user/azure-stack-solution-template-kubernetes-deploy). -To run the standard speech-to-text container, execute the following `docker run` command: --```bash -docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \ -mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text \ -Eula=accept \ -Billing={ENDPOINT_URI} \ -ApiKey={API_KEY} -``` --This command: --* Runs a *speech-to-text* container from the container image. -* Allocates 4 CPU cores and 8 GB of memory. -* Exposes TCP port 5000 and allocates a pseudo-TTY for the container. -* Automatically removes the container after it exits. The container image is still available on the host computer. > [!NOTE] > Containers support compressed audio input to the Speech SDK by using GStreamer.-> To install GStreamer in a container, -> follow Linux instructions for GStreamer in [Use codec compressed audio input with the Speech SDK](how-to-use-codec-compressed-audio-input-streams.md). --### Run the container disconnected from the internet +> To install GStreamer in a container, follow Linux instructions for GStreamer in [Use codec compressed audio input with the Speech SDK](how-to-use-codec-compressed-audio-input-streams.md). --The speech-to-text container provide a default directory for writing the license file and billing log at runtime. The default directories are /license and /output respectively. --When you're mounting these directories to the container with the `docker run -v` command, make sure the local machine directory is set ownership to `user:group nonroot:nonroot` before running the container. +### Advanced Vector Extension support -Below is a sample command to set file/directory ownership. +The *host* is the computer that runs the Docker container. The host *must support* [Advanced Vector Extensions](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX2) (AVX2). You can check for AVX2 support on Linux hosts with the following command: -```bash -sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ... +```console +grep -q avx2 /proc/cpuinfo && echo AVX2 supported || echo No AVX2 support detected ```+> [!WARNING] +> The host computer is *required* to support AVX2. The container *will not* function correctly without AVX2 support. -### Diarization on the speech-to-text output --Diarization is enabled by default. To get diarization in your response, use `diarize_speech_config.set_service_property`. --1. Set the phrase output format to `Detailed`. -2. Set the mode of diarization. The supported modes are `Identity` and `Anonymous`. 
- - ```python - diarize_speech_config.set_service_property( - name='speechcontext-PhraseOutput.Format', - value='Detailed', - channel=speechsdk.ServicePropertyChannel.UriQueryParameter - ) - - diarize_speech_config.set_service_property( - name='speechcontext-phraseDetection.speakerDiarization.mode', - value='Identity', - channel=speechsdk.ServicePropertyChannel.UriQueryParameter - ) - ``` -- > [!NOTE] - > "Identity" mode returns `"SpeakerId": "Customer"` or `"SpeakerId": "Agent"`. - > "Anonymous" mode returns `"SpeakerId": "Speaker 1"` or `"SpeakerId": "Speaker 2"`. - -### Analyze sentiment on the speech-to-text output --Starting in v2.6.0 of the speech-to-text container, you should use Language service 3.0 API endpoint instead of the preview one. For example: --* `https://eastus.api.cognitive.microsoft.com/text/analytics/v3.0/sentiment` -* `https://localhost:5000/text/analytics/v3.0/sentiment` --> [!NOTE] -> The Language service `v3.0` API isn't backward compatible with `v3.0-preview.1`. To get the latest sentiment feature support, use `v2.6.0` of the speech-to-text container image and Language service `v3.0`. +## Run the container -Starting in v2.2.0 of the speech-to-text container, you can call the [sentiment analysis v3 API](../text-analytics/how-tos/text-analytics-how-to-sentiment-analysis.md) on the output. To call sentiment analysis, you'll need a Language service API resource endpoint. For example: +Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. Once running, the container continues to run until you [stop the container](#stop-the-container). -* `https://eastus.api.cognitive.microsoft.com/text/analytics/v3.0-preview.1/sentiment` -* `https://localhost:5000/text/analytics/v3.0-preview.1/sentiment` +Take note the following best practices with the `docker run` command: -If you're accessing a Language service endpoint in the cloud, you'll need a key. If you're running Language service features locally, you might not need to provide this. +- **Line-continuation character**: The Docker commands in the following sections use the back slash, `\`, as a line continuation character. Replace or remove this based on your host operating system's requirements. +- **Argument order**: Do not change the order of the arguments unless you are familiar with Docker containers. -The key and endpoint are passed to the Speech container as arguments, as in the following example: +You can use the [docker images](https://docs.docker.com/engine/reference/commandline/images/) command to list your downloaded container images. The following command lists the ID, repository, and tag of each downloaded container image, formatted as a table: ```bash-docker run -it --rm -p 5000:5000 \ -mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest \ -Eula=accept \ -Billing={ENDPOINT_URI} \ -ApiKey={API_KEY} \ -CloudAI:SentimentAnalysisSettings:TextAnalyticsHost={TEXT_ANALYTICS_HOST} \ -CloudAI:SentimentAnalysisSettings:SentimentAnalysisApiKey={SENTIMENT_APIKEY} -``` --This command: --* Performs the same steps as the preceding command. -* Stores a Language service API endpoint and key, for sending sentiment analysis requests. --### Phraselist v2 on the speech-to-text output --Starting in v2.6.0 of the speech-to-text container, you can get the output with your own phrases, either the whole sentence or phrases in the middle. 
For example, *the tall man* in the following sentence: --* "This is a sentence **the tall man** this is another sentence." --To configure a phrase list, you need to add your own phrases when you make the call. For example: --```python - phrase="the tall man" - recognizer = speechsdk.SpeechRecognizer( - speech_config=dict_speech_config, - audio_config=audio_config) - phrase_list_grammer = speechsdk.PhraseListGrammar.from_recognizer(recognizer) - phrase_list_grammer.addPhrase(phrase) - - dict_speech_config.set_service_property( - name='setflight', - value='xonlineinterp', - channel=speechsdk.ServicePropertyChannel.UriQueryParameter - ) +docker images --format "table {{.ID}}\t{{.Repository}}\t{{.Tag}}" ``` -If you have multiple phrases to add, call `.addPhrase()` for each phrase to add it to the phrase list. --# [Custom speech-to-text](#tab/cstt) --The custom speech-to-text container relies on a Custom Speech model. The custom model has to have been [trained](how-to-custom-speech-train-model.md) by using the [Speech Studio](https://aka.ms/speechstudio/customspeech). --The custom speech **Model ID** is required to run the container. For more information about how to get the model ID, see [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md). -- --Obtain the **Model ID** to use as the argument to the `ModelId` parameter of the `docker run` command. -- --The following table represents the various `docker run` parameters and their corresponding descriptions: --| Parameter | Description | -||| -| `{VOLUME_MOUNT}` | The host computer [volume mount](https://docs.docker.com/storage/volumes/), which Docker uses to persist the custom model. An example is *C:\CustomSpeech* where the C drive is located on the host machine. | -| `{MODEL_ID}` | The custom speech model ID. For more information, see [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md). | -| `{ENDPOINT_URI}` | The endpoint is required for metering and billing. For more information, see [Gather required parameters](#gather-required-parameters). | -| `{API_KEY}` | The API key is required. For more information, see [Gather required parameters](#gather-required-parameters). | --To run the custom speech-to-text container, execute the following `docker run` command: --```bash -docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \ --v {VOLUME_MOUNT}:/usr/local/models \-mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \ -ModelId={MODEL_ID} \ -Eula=accept \ -Billing={ENDPOINT_URI} \ -ApiKey={API_KEY} +Here's an example result: ```--This command: --* Runs a custom speech-to-text container from the container image. -* Allocates 4 CPU cores and 8 GB of memory. -* Loads the custom speech-to-text model from the volume input mount, for example, *C:\CustomSpeech*. -* Exposes TCP port 5000 and allocates a pseudo-TTY for the container. -* Downloads the model given the `ModelId` (if not found on the volume mount). -* If the custom model was previously downloaded, the `ModelId` is ignored. -* Automatically removes the container after it exits. The container image is still available on the host computer. --#### Base model download on the custom speech-to-text container --Starting in v2.6.0 of the custom-speech-to-text container, you can get the available base model information by using option `BaseModelLocale={LOCALE}`. This option gives you a list of available base models on that locale under your billing account. 
For example: --```bash -docker run --rm -it \ -mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \ -BaseModelLocale={LOCALE} \ -Eula=accept \ -Billing={ENDPOINT_URI} \ -ApiKey={API_KEY} +IMAGE ID REPOSITORY TAG +<image-id> <repository-path/name> <tag-name> ``` -This command: +## Validate that a container is running -* Runs a custom speech-to-text container from the container image. -* Checks and returns the available base models of the target locale. +There are several ways to validate that the container is running. Locate the *External IP* address and exposed port of the container in question, and open your favorite web browser. Use the various request URLs that follow to validate the container is running. -The output gives you a list of base models with the information locale, model ID, and creation date time. You can use the model ID to download and use the specific base model you prefer. For example: -``` -Checking available base model for en-us -2020/10/30 21:54:20 [Info] Searching available base models for en-us -2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2016-11-04T08:23:42Z, Id: a3d8aab9-6f36-44cd-9904-b37389ce2bfa -2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2016-11-04T12:01:02Z, Id: cc7826ac-5355-471d-9bc6-a54673d06e45 -2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2017-08-17T12:00:00Z, Id: a1f8db59-40ff-4f0e-b011-37629c3a1a53 -2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-04-16T11:55:00Z, Id: c7a69da3-27de-4a4b-ab75-b6716f6321e5 -2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-09-21T15:18:43Z, Id: da494a53-0dad-4158-b15f-8f9daca7a412 -2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-10-19T11:28:54Z, Id: 84ec130b-d047-44bf-a46d-58c1ac292ca7 -2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-11-26T07:59:09Z, Id: ee5c100f-152f-4ae5-9e9d-014af3c01c56 -2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2018-11-26T09:21:55Z, Id: d04959a6-71da-4913-9997-836793e3c115 -2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2019-01-11T10:04:19Z, Id: 488e5f23-8bc5-46f8-9ad8-ea9a49a8efda -2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2019-02-18T14:37:57Z, Id: 0207b3e6-92a8-4363-8c0e-361114cdd719 -2020/10/30 21:54:21 [Info] [Base model] Locale: en-us, CreatedDate: 2019-03-03T17:34:10Z, Id: 198d9b79-2950-4609-b6ec-f52254074a05 -2020/10/30 21:54:21 [Fatal] Please run this tool again and assign --modelId '<one above base model id>'. If no model id listed above, it means currently there is no available base model for en-us -``` +The example request URLs listed here are `http://localhost:5000`, but your specific container might vary. Make sure to rely on your container's *External IP* address and exposed port. -#### Display model download on the custom speech-to-text container -Starting in v3.1.0 of the custom-speech-to-text container, you can get the available display models information and choose to download those models into your speech-to-text container to get highly improved final display output. +| Request URL | Purpose | +|--|--| +| `http://localhost:5000/` | The container provides a home page. | +| `http://localhost:5000/ready` | Requested with GET, this URL provides a verification that the container is ready to accept a query against the model. 
This request can be used for Kubernetes [liveness and readiness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/). | +| `http://localhost:5000/status` | Also requested with GET, this URL verifies if the api-key used to start the container is valid without causing an endpoint query. This request can be used for Kubernetes [liveness and readiness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/). | +| `http://localhost:5000/swagger` | The container provides a full set of documentation for the endpoints and a **Try it out** feature. With this feature, you can enter your settings into a web-based HTML form and make the query without having to write any code. After the query returns, an example CURL command is provided to demonstrate the HTTP headers and body format that's required. | -You can query or download any or all of these display model types: Rescoring (`Rescore`), Punctuation (`Punct`), resegmentation (`Resegment`), and wfstitn (`Wfstitn`). Otherwise, you can use the `FullDisplay` option (with or without the other types) to query or download all types of display models. --Set the `BaseModelLocale` to query the latest available display model on the target locale. If you include multiple display model types, the command will return the latest available display models for each type. For example: +## Stop the container -```bash -docker run --rm -it \ -mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \ -Punct Rescore Resegment Wfstitn \ # Specify `FullDisplay` or a space-separated subset of display models -BaseModelLocale={LOCALE} \ -Eula=accept \ -Billing={ENDPOINT_URI} \ -ApiKey={API_KEY} -``` +To shut down the container, in the command-line environment where the container is running, select <kbd>Ctrl+C</kbd>. -Set the `DisplayLocale` to download the latest available display model on the target locale. When you set `DisplayLocale`, you must also specify `FullDisplay` or a space-separated subset of display models. The command will download the latest available display model for each specified type. For example: +## Run multiple containers on the same host -```bash -docker run --rm -it \ -mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \ -Punct Rescore Resegment Wfstitn \ # Specify `FullDisplay` or a space-separated subset of display models -DisplayLocale={LOCALE} \ -Eula=accept \ -Billing={ENDPOINT_URI} \ -ApiKey={API_KEY} -``` +If you intend to run multiple containers with exposed ports, make sure to run each container with a different exposed port. For example, run the first container on port 5000 and the second container on port 5001. -Set one model ID parameter to download a specific display model: Rescoring (`RescoreId`), Punctuation (`PunctId`), resegmentation (`ResegmentId`), or wfstitn (`WfstitnId`). This is similar to how you would download a base model via the `ModelId` parameter. For example, to download a rescoring display model, you can use the following command with the `RescoreId` parameter: +You can have this container and a different Cognitive Services container running on the HOST together. You also can have multiple containers of the same Cognitive Services container running. 
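For example, here's a minimal sketch that starts two instances of the speech-to-text image shown earlier, publishes them on host ports 5000 and 5001, and then checks the `/ready` endpoint of each. It assumes detached (`-d`) containers and the same placeholder billing values used throughout this article:

```bash
# Sketch: two speech-to-text containers side by side, each published on its own host port.
docker run -d --rm -p 5000:5000 --memory 8g --cpus 4 \
mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text \
Eula=accept Billing={ENDPOINT_URI} ApiKey={API_KEY}

docker run -d --rm -p 5001:5000 --memory 8g --cpus 4 \
mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text \
Eula=accept Billing={ENDPOINT_URI} ApiKey={API_KEY}

# Confirm that each instance reports ready on its own host port.
curl http://localhost:5000/ready
curl http://localhost:5001/ready
```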
-```bash -docker run --rm -it \ -mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \ -RescoreId={RESCORE_MODEL_ID} \ -Eula=accept \ -Billing={ENDPOINT_URI} \ -ApiKey={API_KEY} -``` +## Host URLs > [!NOTE]-> If you set more than one query or download parameter, the command will prioritize in this order: `BaseModelLocale`, model ID, and then `DisplayLocale` (only applicable for display models). --#### Custom pronunciation on the custom speech-to-text container --Starting in v2.5.0 of the custom-speech-to-text container, you can get custom pronunciation results in the output. All you need to do is have your own custom pronunciation rules set up in your custom model and mount the model to a custom-speech-to-text container. ---### Run the container disconnected from the internet --To use this container disconnected from the internet, you must first request access by filling out an application, and purchasing a commitment plan. See [Use Docker containers in disconnected environments](../containers/disconnected-containers.md) for more information. --In order to prepare and configure the Custom Speech-to-Text container you will need two separate speech resources: --1. A regular Azure Speech Service resource which is either configured to use a "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan. This will be used to train, download, and configure your custom speech models for use in your container. -1. An Azure Speech Service resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan. This is used to download your disconnected container license file required to run the container in disconnected mode. --Download the docker container and run it to get the required speech model as [described above](#get-the-container-image-with-docker-pull) using the regular Azure Speech resource. Next, you will need to download your disconnected license file. --The `DownloadLicense=True` parameter in your `docker run` command will download a license file that will enable your Docker container to run when it isn't connected to the internet. It also contains an expiration date, after which the license file will be invalid to run the container. You can only use a license file with the appropriate container that you've been approved for. For example, you can't use a license file for a speech-to-text container with a form recognizer container. --| Placeholder | Value | Format or example | -|-|-|| -| `{IMAGE}` | The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice` | -| `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted. | `/host/license:/path/to/license/directory` | -| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` | -| `{API_KEY}` | The key for your Text Analytics resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`| -| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. 
| `/path/to/license/directory` | --```bash -docker run --rm -it -p 5000:5000 \ --v {LICENSE_MOUNT} \-{IMAGE} \ -eula=accept \ -billing={ENDPOINT_URI} \ -apikey={API_KEY} \ -DownloadLicense=True \ -Mounts:License={CONTAINER_LICENSE_DIRECTORY} -``` --Once the license file has been downloaded, you can run the container in a disconnected environment. The following example shows the formatting of the `docker run` command you'll use, with placeholder values. Replace these placeholder values with your own values. --Wherever the container is run, the license file must be mounted to the container and the location of the license folder on the container's local filesystem must be specified with `Mounts:License=`. An output mount must also be specified so that billing usage records can be written. +> Use a unique port number if you're running multiple containers. -Placeholder | Value | Format or example | -|-|-|| -| `{IMAGE}` | The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice` | - `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container. | `4g` | -| `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container. | `4` | -| `{LICENSE_MOUNT}` | The path where the license will be located and mounted. | `/host/license:/path/to/license/directory` | -| `{OUTPUT_PATH}` | The output path for logging [usage records](../containers/disconnected-containers.md#usage-records). | `/host/output:/path/to/output/directory` | -| `{MODEL_PATH}` | The path where the model is located. | `/path/to/model/` | -| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` | -| `{CONTAINER_OUTPUT_DIRECTORY}` | Location of the output folder on the container's local filesystem. | `/path/to/output/directory` | +| Protocol | Host URL | Containers | +|--|--|--| +| WS | `ws://localhost:5000` | [Speech-to-text](speech-container-stt.md#use-the-container)<br/><br/>[Custom speech-to-text](speech-container-cstt.md#use-the-container) | +| HTTP | `http://localhost:5000` | [Neural text-to-speech](speech-container-ntts.md#use-the-container)<br/><br/>[Speech language identification](speech-container-lid.md#use-the-container) | -```bash -docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \ --v {LICENSE_MOUNT} \ --v {OUTPUT_PATH} \--v {MODEL_PATH} \-{IMAGE} \ -eula=accept \ -Mounts:License={CONTAINER_LICENSE_DIRECTORY} -Mounts:Output={CONTAINER_OUTPUT_DIRECTORY} -``` +For more information on using WSS and HTTPS protocols, see [Container security](../cognitive-services-container-support.md#azure-cognitive-services-container-security) in the Azure Cognitive Services documentation. -The [Custom Speech-to-Text](../speech-service/speech-container-howto.md?tabs=cstt) container provides a default directory for writing the license file and billing log at runtime. The default directories are /license and /output respectively. +## Troubleshooting -When you're mounting these directories to the container with the `docker run -v` command, make sure the local machine directory is set ownership to `user:group nonroot:nonroot` before running the container. +When you start or run the container, you might experience issues. Use an output [mount](speech-container-configuration.md#mount-settings) and enable logging. Doing so allows the container to generate log files that are helpful when you troubleshoot issues. -Below is a sample command to set file/directory ownership. 
+> [!TIP] +> For more troubleshooting information and guidance, see [Cognitive Services containers frequently asked questions (FAQ)](../containers/container-faq.yml) in the Azure Cognitive Services documentation. -```bash -sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ... -``` -# [Neural text-to-speech](#tab/ntts) +### Logging settings -To run the neural text-to-speech container, execute the following `docker run` command: +Speech containers come with ASP.NET Core logging support. Here's an example of the `neural-text-to-speech container` started with default logging to the console: ```bash docker run --rm -it -p 5000:5000 --memory 12g --cpus 6 \ mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech \ Eula=accept \ Billing={ENDPOINT_URI} \-ApiKey={API_KEY} +ApiKey={API_KEY} \ +Logging:Console:LogLevel:Default=Information ``` -This command: --* Runs a neural text-to-speech container from the container image. -* Allocates 6 CPU cores and 12 GB of memory. -* Exposes TCP port 5000 and allocates a pseudo-TTY for the container. -* Automatically removes the container after it exits. The container image is still available on the host computer. ---### Run the container disconnected from the internet -+For more information about logging, see [Configure Speech containers](speech-container-configuration.md#logging-settings) and [usage records](../containers/disconnected-containers.md#usage-records) in the Azure Cognitive Services documentation. +## Microsoft diagnostics container -The neural text-to-speech container provide a default directory for writing the license file and billing log at runtime. The default directories are /license and /output respectively. +If you're having trouble running a Cognitive Services container, you can try using the Microsoft diagnostics container. Use this container to diagnose common errors in your deployment environment that might prevent Cognitive Services containers from functioning as expected. -When you're mounting these directories to the container with the `docker run -v` command, make sure the local machine directory is set ownership to `user:group nonroot:nonroot` before running the container. --Below is a sample command to set file/directory ownership. +To get the container, use the following `docker pull` command: ```bash-sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ... +docker pull mcr.microsoft.com/azure-cognitive-services/diagnostic ``` --# [Speech language identification](#tab/lid) --To run the Speech language identification container, execute the following `docker run` command: +Then run the container. Replace `{ENDPOINT_URI}` with your endpoint, and replace `{API_KEY}` with your key to your resource: ```bash-docker run --rm -it -p 5003:5003 --memory 1g --cpus 1 \ -mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection \ -Eula=accept \ +docker run --rm mcr.microsoft.com/azure-cognitive-services/diagnostic \ +eula=accept \ Billing={ENDPOINT_URI} \ ApiKey={API_KEY} ``` -This command: --* Runs a Speech language-detection container from the container image. Currently, you won't be charged for running this image. -* Allocates 1 CPU core and 1 GB of memory. -* Exposes TCP port 5003 and allocates a pseudo-TTY for the container. -* Automatically removes the container after it exits. The container image is still available on the host computer. 
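Because the containers write ASP.NET Core log output to the console by default (as in the logging example above), you can also inspect a running container with standard Docker tooling. This is a minimal sketch; the `--name speech-container` value is an assumption you'd set yourself at `docker run` time:

```bash
# List running containers to find the name or ID of the Speech container.
docker ps

# Follow the console log output of a container started with --name speech-container.
docker logs -f speech-container
```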
--If you want to run this container with the speech-to-text container, you can use this [docker image](https://hub.docker.com/r/antsu/on-prem-client). After both containers have been started, use this `docker run` command to execute `speech-to-text-with-languagedetection-client`: --```Docker -docker run --rm -v ${HOME}:/root -ti antsu/on-prem-client:latest ./speech-to-text-with-languagedetection-client ./audio/LanguageDetection_en-us.wav --host localhost --lport 5003 --sport 5000 -``` --Increasing the number of concurrent calls can affect reliability and latency. For language identification, we recommend a maximum of four concurrent calls using 1 CPU with 1 GB of memory. For hosts with 2 CPUs and 2 GB of memory, we recommend a maximum of six concurrent calls. --*** -> [!IMPORTANT] -> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container. Otherwise, the container won't start. For more information, see [Billing](#billing). --## Query the container's prediction endpoint --> [!NOTE] -> Use a unique port number if you're running multiple containers. --| Containers | SDK Host URL | Protocol | -|--|--|--| -| Standard speech-to-text and custom speech-to-text | `ws://localhost:5000` | WS | -| Neural Text-to-speech, Speech language identification | `http://localhost:5000` | HTTP | --For more information on using WSS and HTTPS protocols, see [Container security](../cognitive-services-container-support.md#azure-cognitive-services-container-security). --### Speech-to-text (standard and custom) ---#### Analyze sentiment --If you provided your Language service API credentials [to the container](#analyze-sentiment-on-the-speech-to-text-output), you can use the Speech SDK to send speech recognition requests with sentiment analysis. You can configure the API responses to use either a *simple* or *detailed* format. --> [!NOTE] -> v1.13 of the Speech Service Python SDK has an identified issue with sentiment analysis. Use v1.12.x or earlier if you're using sentiment analysis in the Speech Service Python SDK. --# [Simple format](#tab/simple-format) --To configure the Speech client to use a simple format, add `"Sentiment"` as a value for `Simple.Extensions`. If you want to choose a specific Language service model version, replace `'latest'` in the `speechcontext-phraseDetection.sentimentAnalysis.modelversion` property configuration. --```python -speech_config.set_service_property( - name='speechcontext-PhraseOutput.Simple.Extensions', - value='["Sentiment"]', - channel=speechsdk.ServicePropertyChannel.UriQueryParameter -) -speech_config.set_service_property( - name='speechcontext-phraseDetection.sentimentAnalysis.modelversion', - value='latest', - channel=speechsdk.ServicePropertyChannel.UriQueryParameter -) -``` --`Simple.Extensions` returns the sentiment result in the root layer of the response. --```json -{ - "DisplayText":"What's the weather like?", - "Duration":13000000, - "Id":"6098574b79434bd4849fee7e0a50f22e", - "Offset":4700000, - "RecognitionStatus":"Success", - "Sentiment":{ - "Negative":0.03, - "Neutral":0.79, - "Positive":0.18 - } -} -``` --# [Detailed format](#tab/detailed-format) --To configure the Speech client to use a detailed format, add `"Sentiment"` as a value for `Detailed.Extensions`, `Detailed.Options`, or both. If you want to choose a specific sentiment analysis model version, replace `'latest'` in the `speechcontext-phraseDetection.sentimentAnalysis.modelversion` property configuration. 
--```python -speech_config.set_service_property( - name='speechcontext-PhraseOutput.Detailed.Options', - value='["Sentiment"]', - channel=speechsdk.ServicePropertyChannel.UriQueryParameter -) -speech_config.set_service_property( - name='speechcontext-PhraseOutput.Detailed.Extensions', - value='["Sentiment"]', - channel=speechsdk.ServicePropertyChannel.UriQueryParameter -) -speech_config.set_service_property( - name='speechcontext-phraseDetection.sentimentAnalysis.modelversion', - value='latest', - channel=speechsdk.ServicePropertyChannel.UriQueryParameter -) -``` --`Detailed.Extensions` provides the sentiment result in the root layer of the response. `Detailed.Options` provides the result in the `NBest` layer of the response. They can be used separately or together. --```json -{ - "DisplayText":"What's the weather like?", - "Duration":13000000, - "Id":"6a2aac009b9743d8a47794f3e81f7963", - "NBest":[ - { - "Confidence":0.973695, - "Display":"What's the weather like?", - "ITN":"what's the weather like", - "Lexical":"what's the weather like", - "MaskedITN":"What's the weather like", - "Sentiment":{ - "Negative":0.03, - "Neutral":0.79, - "Positive":0.18 - } - }, - { - "Confidence":0.9164971, - "Display":"What is the weather like?", - "ITN":"what is the weather like", - "Lexical":"what is the weather like", - "MaskedITN":"What is the weather like", - "Sentiment":{ - "Negative":0.02, - "Neutral":0.88, - "Positive":0.1 - } - } - ], - "Offset":4700000, - "RecognitionStatus":"Success", - "Sentiment":{ - "Negative":0.03, - "Neutral":0.79, - "Positive":0.18 - } -} -``` ---If you want to completely disable sentiment analysis, add a `false` value to `sentimentanalysis.enabled`. --```python -speech_config.set_service_property( - name='speechcontext-phraseDetection.sentimentanalysis.enabled', - value='false', - channel=speechsdk.ServicePropertyChannel.UriQueryParameter -) -``` --### Neural Text-to-Speech ---### Run multiple containers on the same host --If you intend to run multiple containers with exposed ports, make sure to run each container with a different exposed port. For example, run the first container on port 5000 and the second container on port 5001. --You can have this container and a different Cognitive Services container running on the HOST together. You also can have multiple containers of the same Cognitive Services container running. ---## Stop the container ---## Troubleshooting --When you start or run the container, you might experience issues. Use an output [mount](speech-container-configuration.md#mount-settings) and enable logging. Doing so allows the container to generate log files that are helpful when you troubleshoot issues. -+The container will test for network connectivity to the billing endpoint. +## Run disconnected containers -## Billing +Tu run disconnected containers (not connected to the internet), you must submit [this request form](https://aka.ms/csdisconnectedcontainers) and wait for approval. For more information about applying and purchasing a commitment plan to use containers in disconnected environments, see [Use containers in disconnected environments](../containers/disconnected-containers.md) in the Azure Cognitive Services documentation. -The Speech containers send billing information to Azure by using a Speech resource on your Azure account. ---For more information about these options, see [Configure containers](speech-container-configuration.md). 
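As a troubleshooting aid, the output mount and logging settings referenced above can be combined in a single `docker run` call. The following is only a sketch that reuses the `Mounts:Output` and `Logging` arguments shown elsewhere in this article; the host path `/host/output` is a placeholder, and [Configure containers](speech-container-configuration.md) remains the authoritative reference for these settings:

```bash
# Sketch: mount a host directory for log output and raise console logging to Information.
docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \
-v /host/output:/output \
mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text \
Eula=accept \
Billing={ENDPOINT_URI} \
ApiKey={API_KEY} \
Mounts:Output=/output \
Logging:Console:LogLevel:Default=Information
```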
--## Summary --In this article, you learned concepts and workflow for how to download, install, and run Speech containers. In summary: --* Speech provides four Linux containers for Docker that have various capabilities: - * Speech-to-text - * Custom speech-to-text - * Neural text-to-speech - * Speech language identification -* Container images are downloaded from the container registry in Azure. -* Container images run in Docker. -* Whether you use the REST API (text-to-speech only) or the SDK (speech-to-text or text-to-speech), you specify the host URI of the container. -* You're required to provide billing information when you instantiate a container. --> [!IMPORTANT] -> Cognitive Services containers aren't licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Cognitive Services containers don't send customer data (for example, the image or text that's being analyzed) to Microsoft. ## Next steps * Review [configure containers](speech-container-configuration.md) for configuration settings. * Learn how to [use Speech service containers with Kubernetes and Helm](speech-container-howto-on-premises.md).-* Use more [Cognitive Services containers](../cognitive-services-container-support.md). +* Deploy and run containers on [Azure Container Instance](../containers/azure-container-instance-recipe.md) +* Use more [Azure Cognitive Services containers](../cognitive-services-container-support.md). |
cognitive-services | Speech Container Lid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-lid.md | + + Title: Language identification containers - Speech service ++description: Install and run language identification containers with Docker to perform speech recognition, transcription, generation, and more on-premises. ++++++ Last updated : 04/18/2023++zone_pivot_groups: programming-languages-speech-sdk-cli +keywords: on-premises, Docker, container +++# Language identification containers with Docker ++The Speech language identification container detects the language spoken in audio files. You can get real-time speech or batch audio recordings with intermediate results. In this article, you'll learn how to download, install, and run a language identification container. ++> [!NOTE] +> You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container. +> +> The Speech language identification container is available in public preview. Containers in preview are still under development and don't meet Microsoft's stability and support requirements. ++For more information about prerequisites, validating that a container is running, running multiple containers on the same host, and running disconnected containers, see [Install and run Speech containers with Docker](speech-container-howto.md). ++> [!TIP] +> To get the most useful results, use the Speech language identification container with the [speech-to-text](speech-container-stt.md) or [custom speech-to-text](speech-container-cstt.md) containers. ++## Container images ++The Speech language identification container image for all supported versions and locales can be found on the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/language-detection/tags) syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `language-detection`. +++The fully qualified container image name is, `mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection`. Either append a specific version or append `:latest` to get the most recent version. ++| Version | Path | +|--|| +| Latest | `mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection:latest` | +| 1.11.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection:1.11.0-amd64-preview` | ++All tags, except for `latest`, are in the following format and are case sensitive: ++``` +<major>.<minor>.<patch>-<platform>-<prerelease> +``` ++The tags are also available [in JSON format](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/language-detection/tags/list) for your convenience. The body includes the container path and list of tags. The tags aren't sorted by version, but `"latest"` is always included at the end of the list as shown in this snippet: ++```json +{ + "name": "azure-cognitive-services/speechservices/language-detection", + "tags": [ + "1.1.0-amd64-preview", + "1.11.0-amd64-preview", + "1.3.0-amd64-preview", + "1.5.0-amd64-preview", + <--redacted for brevity--> + "1.8.0-amd64-preview", + "latest" + ] +} +``` ++## Get the container image with docker pull ++You need the [prerequisites](speech-container-howto.md#prerequisites) including required hardware. 
Please also see the [recommended allocation of resources](speech-container-howto.md#container-requirements-and-recommendations) for each Speech container. ++Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry: ++```bash +docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection:latest +``` +++## Run the container with docker run ++Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. ++The following table represents the various `docker run` parameters and their corresponding descriptions: ++| Parameter | Description | +||| +| `{ENDPOINT_URI}` | The endpoint is required for metering and billing. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). | +| `{API_KEY}` | The API key is required. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). | ++When you run the Speech language identification container, configure the port, memory, and CPU according to the language identification container [requirements and recommendations](speech-container-howto.md#container-requirements-and-recommendations). ++Here's an example `docker run` command with placeholder values. You must specify the `ENDPOINT_URI` and `API_KEY` values: ++```bash +docker run --rm -it -p 5000:5003 --memory 1g --cpus 1 \ +mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection \ +Eula=accept \ +Billing={ENDPOINT_URI} \ +ApiKey={API_KEY} +``` ++This command: ++* Runs a Speech language identification container from the container image. +* Allocates 1 CPU core and 1 GB of memory. +* Exposes TCP port 5000 and allocates a pseudo-TTY for the container. +* Automatically removes the container after it exits. The container image is still available on the host computer. ++For more information about `docker run` with Speech containers, see [Install and run Speech containers with Docker](speech-container-howto.md#run-the-container). ++## Run with the speech-to-text container ++If you want to run the language identification container with the [speech-to-text](speech-container-stt.md) container, you can use this [docker image](https://hub.docker.com/r/antsu/on-prem-client). After both containers have been started, use this `docker run` command to execute `speech-to-text-with-languagedetection-client`: ++```bash +docker run --rm -v ${HOME}:/root -ti antsu/on-prem-client:latest ./speech-to-text-with-languagedetection-client ./audio/LanguageDetection_en-us.wav --host localhost --lport 5003 --sport 5000 +``` ++Increasing the number of concurrent calls can affect reliability and latency. For language identification, we recommend a maximum of four concurrent calls using 1 CPU with 1 GB of memory. For hosts with 2 CPUs and 2 GB of memory, we recommend a maximum of six concurrent calls. ++## Use the container +++[Try language identification](language-identification.md) using host authentication instead of key and region. When you run language ID in a container, use the `SourceLanguageRecognizer` object instead of `SpeechRecognizer` or `TranslationRecognizer`. ++## Next steps ++* See the [Speech containers overview](speech-container-overview.md) +* Review [configure containers](speech-container-configuration.md) for configuration settings +* Use more [Azure Cognitive Services containers](../cognitive-services-container-support.md) |
cognitive-services | Speech Container Ntts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-ntts.md | + + Title: Neural text-to-speech containers - Speech service ++description: Install and run neural text-to-speech containers with Docker to perform speech synthesis and more on-premises. ++++++ Last updated : 04/18/2023++zone_pivot_groups: programming-languages-speech-sdk-cli +keywords: on-premises, Docker, container +++# Text-to-speech containers with Docker ++The neural text-to-speech container converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech.. In this article, you'll learn how to download, install, and run a Text-to-speech container. ++> [!NOTE] +> You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container. ++For more information about prerequisites, validating that a container is running, running multiple containers on the same host, and running disconnected containers, see [Install and run Speech containers with Docker](speech-container-howto.md). ++## Container images ++The neural text-to-speech container image for all supported versions and locales can be found on the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `neural-text-to-speech`. +++The fully qualified container image name is, `mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech`. Either append a specific version or append `:latest` to get the most recent version. ++| Version | Path | +|--|| +| Latest | `mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:latest`<br/><br/>The `latest` tag pulls the `en-US` locale and `en-us-arianeural` voice. | +| 2.12.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:2.12.0-amd64-mr-in` | ++All tags, except for `latest`, are in the following format and are case sensitive: ++``` +<major>.<minor>.<patch>-<platform>-<voice>-<preview> +``` ++The tags are also available [in JSON format](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list) for your convenience. The body includes the container path and list of tags. The tags aren't sorted by version, but `"latest"` is always included at the end of the list as shown in this snippet: ++```json +{ + "name": "azure-cognitive-services/speechservices/neural-text-to-speech", + "tags": [ + "1.10.0-amd64-cs-cz-antoninneural", + "1.10.0-amd64-cs-cz-vlastaneural", + "1.10.0-amd64-de-de-conradneural", + "1.10.0-amd64-de-de-katjaneural", + "1.10.0-amd64-en-au-natashaneural", + <--redacted for brevity--> + "latest" + ] +} +``` ++> [!IMPORTANT] +> We retired the standard speech synthesis voices and standard [text-to-speech](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/text-to-speech/tags) container on August 31, 2021. You should use neural voices with the [neural-text-to-speech](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) container instead. For more information on updating your application, see [Migrate from standard voice to prebuilt neural voice](./how-to-migrate-to-prebuilt-neural-voice.md). 
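Before you pull an image, you can check which tags are currently published by fetching the JSON tag list mentioned above from the command line. This is a minimal sketch; it assumes `curl` is available on the host, and `jq` only for the optional filtering step:

```bash
# List all published neural-text-to-speech tags from the Microsoft Container Registry.
curl -s https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list

# Optional: pretty-print and filter for en-us voices (requires jq).
curl -s https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list \
  | jq -r '.tags[]' | grep en-us
```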
++## Get the container image with docker pull ++You need the [prerequisites](speech-container-howto.md#prerequisites) including required hardware. Please also see the [recommended allocation of resources](speech-container-howto.md#container-requirements-and-recommendations) for each Speech container. ++Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry: ++```bash +docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:latest +``` ++> [!IMPORTANT] +> The `latest` tag pulls the `en-US` locale and `en-us-arianeural` voice. For additional locales and voices, see [text-to-speech container images](#container-images). ++## Run the container with docker run ++Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. ++# [Neural text to speech](#tab/container) ++The following table represents the various `docker run` parameters and their corresponding descriptions: ++| Parameter | Description | +||| +| `{ENDPOINT_URI}` | The endpoint is required for metering and billing. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). | +| `{API_KEY}` | The API key is required. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). | ++When you run the text-to-speech container, configure the port, memory, and CPU according to the text-to-speech container [requirements and recommendations](speech-container-howto.md#container-requirements-and-recommendations). ++Here's an example `docker run` command with placeholder values. You must specify the `ENDPOINT_URI` and `API_KEY` values: ++```bash +docker run --rm -it -p 5000:5000 --memory 12g --cpus 6 \ +mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech \ +Eula=accept \ +Billing={ENDPOINT_URI} \ +ApiKey={API_KEY} +``` ++This command: ++* Runs a neural text-to-speech container from the container image. +* Allocates 6 CPU cores and 12 GB of memory. +* Exposes TCP port 5000 and allocates a pseudo-TTY for the container. +* Automatically removes the container after it exits. The container image is still available on the host computer. ++# [Disconnected neural text to speech](#tab/disconnected) ++To run disconnected containers (not connected to the internet), you must submit [this request form](https://aka.ms/csdisconnectedcontainers) and wait for approval. For more information about applying and purchasing a commitment plan to use containers in disconnected environments, see [Use containers in disconnected environments](../containers/disconnected-containers.md) in the Azure Cognitive Services documentation. ++If you have been approved to run the container disconnected from the internet, the following example shows the formatting of the `docker run` command to use, with placeholder values. Replace these placeholder values with your own values. ++The `DownloadLicense=True` parameter in your `docker run` command will download a license file that will enable your Docker container to run when it isn't connected to the internet. It also contains an expiration date, after which the license file will be invalid to run the container. You can only use a license file with the appropriate container that you've been approved for. For example, you can't use a license file for a `speech-to-text` container with a `neural-text-to-speech` container. 
++| Placeholder | Description | +|-|-| +| `{IMAGE}` | The container image you want to use.<br/><br/>For example: `mcr.microsoft.com/azure-cognitive-services/neural-text-to-speech:latest` | +| `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted.<br/><br/>For example: `/host/license:/path/to/license/directory` | +| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal.<br/><br/>For example: `https://<your-resource-name>.cognitiveservices.azure.com` | +| `{API_KEY}` | The key for your Speech resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | +| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` | ++```bash +docker run --rm -it -p 5000:5000 \ +-v {LICENSE_MOUNT} \ +{IMAGE} \ +eula=accept \ +billing={ENDPOINT_URI} \ +apikey={API_KEY} \ +DownloadLicense=True \ +Mounts:License={CONTAINER_LICENSE_DIRECTORY} +``` ++Once the license file has been downloaded, you can run the container in a disconnected environment. The following example shows the formatting of the `docker run` command you'll use, with placeholder values. Replace these placeholder values with your own values. ++Wherever the container is run, the license file must be mounted to the container and the location of the license folder on the container's local filesystem must be specified with `Mounts:License=`. An output mount must also be specified so that billing usage records can be written. ++Placeholder | Value | Format or example | +|-|-|| +| `{IMAGE}` | The container image you want to use.<br/><br/>For example: `mcr.microsoft.com/azure-cognitive-services/neural-text-to-speech:latest` | + `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container.<br/><br/>For example: `4g` | +| `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container.<br/><br/>For example: `4` | +| `{LICENSE_MOUNT}` | The path where the license will be located and mounted.<br/><br/>For example: `/host/license:/path/to/license/directory` | +| `{OUTPUT_PATH}` | The output path for logging.<br/><br/>For example: `/host/output:/path/to/output/directory`<br/><br/>For more information, see [usage records](../containers/disconnected-containers.md#usage-records) in the Azure Cognitive Services documentation. | +| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` | +| `{CONTAINER_OUTPUT_DIRECTORY}` | Location of the output folder on the container's local filesystem.<br/><br/>For example: `/path/to/output/directory` | ++```bash +docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \ +-v {LICENSE_MOUNT} \ +-v {OUTPUT_PATH} \ +{IMAGE} \ +eula=accept \ +Mounts:License={CONTAINER_LICENSE_DIRECTORY} +Mounts:Output={CONTAINER_OUTPUT_DIRECTORY} +``` ++Speech containers provide a default directory for writing the license file and billing log at runtime. The default directories are /license and /output respectively. ++When you're mounting these directories to the container with the `docker run -v` command, make sure the local machine directory is set ownership to `user:group nonroot:nonroot` before running the container. ++Below is a sample command to set file/directory ownership. 
++```bash +sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ... +``` ++++For more information about `docker run` with Speech containers, see [Install and run Speech containers with Docker](speech-container-howto.md#run-the-container). ++## Use the container +++[Try the text-to-speech quickstart](get-started-text-to-speech.md) using host authentication instead of key and region. ++### SSML voice element ++When you construct a neural text-to-speech HTTP POST, the [SSML](speech-synthesis-markup.md) message requires a `voice` element with a `name` attribute. The [locale of the voice](language-support.md?tabs=tts) must correspond to the locale of the container model. ++For example, a model that was downloaded via the `latest` tag (defaults to "en-US") would have a voice name of `en-US-AriaNeural`. ++```xml +<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US"> + <voice name="en-US-AriaNeural"> + This is the text that is spoken. + </voice> +</speak> +``` ++## Next steps ++* See the [Speech containers overview](speech-container-overview.md) +* Review [configure containers](speech-container-configuration.md) for configuration settings +* Use more [Azure Cognitive Services containers](../cognitive-services-container-support.md) |
cognitive-services | Speech Container Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-overview.md | + + Title: Speech containers overview - Speech service ++description: Use the Docker containers for the Speech service to perform speech recognition, transcription, generation, and more on-premises. ++++++ Last updated : 04/18/2023++keywords: on-premises, Docker, container +++# Speech containers overview ++By using containers, you can use a subset of the Speech service features in your own environment. With Speech containers, you can build a speech application architecture that's optimized for both robust cloud capabilities and edge locality. Containers are great for specific security and data governance requirements. ++> [!NOTE] +> You must [request and get approval](#request-approval-to-run-the-container) to use a Speech container. ++## Available Speech containers ++The following table lists the Speech containers available in the Microsoft Container Registry (MCR). The table also lists the features supported by each container and the latest version of the container. ++| Container | Features | Supported versions and locales | +|--|--|--| +| [Speech-to-text](speech-container-stt.md) | Transcribes continuous real-time speech or batch audio recordings with intermediate results. | Latest: 3.12.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list).| +| [Custom speech-to-text](speech-container-cstt.md) | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | Latest: 3.12.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list). | +| [Speech language identification](speech-container-lid.md)<sup>1, 2</sup> | Detects the language spoken in audio files. | Latest: 1.11.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/language-detection/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/language-detection/tags/list). | +| [Neural text-to-speech](speech-container-ntts.md) | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | Latest: 2.11.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list). | ++<sup>1</sup> The container is available in public preview. Containers in preview are still under development and don't meet Microsoft's stability and support requirements. +<sup>2</sup> Not available as a disconnected container. 
++## Request approval to run the container ++To use the Speech containers, you must submit one of the following request forms and wait for approval: +- [Connected containers request form](https://aka.ms/csgate) if you want to run containers in environments that are connected to the internet. +- [Disconnected Container request form](https://aka.ms/csdisconnectedcontainers) if you want to run containers in environments that can be disconnected from the internet. For more information about applying and purchasing a commitment plan to use containers in disconnected environments, see [Use containers in disconnected environments](../containers/disconnected-containers.md) in the Azure Cognitive Services documentation. ++The form requests information about you, your company, and the user scenario for which you'll use the container. ++* On the form, you must use an email address associated with an Azure subscription ID. +* The Azure resource you use to run the container must have been created with the approved Azure subscription ID. +* Check your email for updates on the status of your application from Microsoft. ++After you submit the form, the Azure Cognitive Services team reviews it and emails you with a decision within 10 business days. ++> [!IMPORTANT] +> To use the Speech containers, your request must be approved. ++While you're waiting for approval, you can [set up the prerequisites](speech-container-howto.md#prerequisites) on your host computer. You can also download the container from the Microsoft Container Registry (MCR). You can run the container after your request is approved. ++## Billing ++The Speech containers send billing information to Azure by using a Speech resource on your Azure account. ++> [!NOTE] +> Connected and disconnected container pricing and commitment tiers vary. For more information, see [Speech Services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). ++Speech containers aren't licensed to run without being connected to Azure for metering. You must configure your container to communicate billing information with the metering service at all times. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). ++## Container recipes and other container services ++You can use container recipes to create containers that can be reused. Containers can be built with some or all of the configuration settings baked in, so that you don't need to supply those settings when the container is started. 
For container recipes see the following Azure Cognitive Services articles: +- [Create containers for reuse](../containers/container-reuse-recipe.md) +- [Deploy and run container on Azure Container Instance](../containers/azure-container-instance-recipe.md) +- [Deploy a language detection container to Azure Kubernetes Service](../containers/azure-kubernetes-recipe.md) +- [Use Docker Compose to deploy multiple containers](../containers/docker-compose-recipe.md) ++For information about other container services, see the following Azure Cognitive Services articles: +- [Tutorial: Create a container image for deployment to Azure Container Instances](../../container-instances/container-instances-tutorial-prepare-app.md) +- [Quickstart: Create a private container registry using the Azure CLI](../../container-registry/container-registry-get-started-azure-cli.md) +- [Tutorial: Prepare an application for Azure Kubernetes Service (AKS)](../../aks/tutorial-kubernetes-prepare-app.md) ++## Next steps ++* [Install and run Speech containers](speech-container-howto.md) ++ |
cognitive-services | Speech Container Stt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-stt.md | + + Title: Speech-to-text containers - Speech service ++description: Install and run speech-to-text containers with Docker to perform speech recognition, transcription, generation, and more on-premises. ++++++ Last updated : 04/18/2023++zone_pivot_groups: programming-languages-speech-sdk-cli +keywords: on-premises, Docker, container +++# Speech-to-text containers with Docker ++The Speech-to-text container transcribes real-time speech or batch audio recordings with intermediate results. In this article, you'll learn how to download, install, and run a Speech-to-text container. ++> [!NOTE] +> You must [request and get approval](speech-container-overview.md#request-approval-to-run-the-container) to use a Speech container. ++For more information about prerequisites, validating that a container is running, running multiple containers on the same host, and running disconnected containers, see [Install and run Speech containers with Docker](speech-container-howto.md). ++## Container images ++The Speech-to-text container image for all supported versions and locales can be found on the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `speech-to-text`. +++The fully qualified container image name is, `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text`. Either append a specific version or append `:latest` to get the most recent version. ++| Version | Path | +|--|| +| Latest | `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest`<br/><br/>The `latest` tag pulls the latest image for the `en-US` locale. | +| 3.12.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:3.12.0-amd64-mr-in` | ++All tags, except for `latest`, are in the following format and are case sensitive: ++``` +<major>.<minor>.<patch>-<platform>-<locale>-<prerelease> +``` ++The tags are also available [in JSON format](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list) for your convenience. The body includes the container path and list of tags. The tags aren't sorted by version, but `"latest"` is always included at the end of the list as shown in this snippet: ++```json +{ + "name": "azure-cognitive-services/speechservices/speech-to-text", + "tags": [ + "2.10.0-amd64-ar-ae", + "2.10.0-amd64-ar-bh", + "2.10.0-amd64-ar-eg", + "2.10.0-amd64-ar-iq", + "2.10.0-amd64-ar-jo", + <--redacted for brevity--> + "latest" + ] +} +``` ++## Get the container image with docker pull ++You need the [prerequisites](speech-container-howto.md#prerequisites) including required hardware. Please also see the [recommended allocation of resources](speech-container-howto.md#container-requirements-and-recommendations) for each Speech container. ++Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry: ++```bash +docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest +``` ++> [!IMPORTANT] +> The `latest` tag pulls the latest image for the `en-US` locale. For additional versions and locales, see [speech-to-text container images](#container-images). 
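If you need a locale other than `en-US`, pull a locale-specific tag instead of `latest`. The following is a quick sketch using the `3.12.0-amd64-mr-in` (Marathi, India) tag from the version table above; check the MCR tag list linked above for the exact version and locale you actually need.

```bash
# Pull a specific version and locale instead of the en-US `latest` image.
docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:3.12.0-amd64-mr-in
```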
++## Run the container with docker run ++Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. ++# [Speech to text](#tab/container) ++The following table represents the various `docker run` parameters and their corresponding descriptions: ++| Parameter | Description | +||| +| `{ENDPOINT_URI}` | The endpoint is required for metering and billing. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). | +| `{API_KEY}` | The API key is required. For more information, see [billing arguments](speech-container-howto.md#billing-arguments). | ++When you run the speech-to-text container, configure the port, memory, and CPU according to the speech-to-text container [requirements and recommendations](speech-container-howto.md#container-requirements-and-recommendations). ++Here's an example `docker run` command with placeholder values. You must specify the `ENDPOINT_URI` and `API_KEY` values: ++```bash +docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \ +mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text \ +Eula=accept \ +Billing={ENDPOINT_URI} \ +ApiKey={API_KEY} +``` ++This command: +* Runs a `speech-to-text` container from the container image. +* Allocates 4 CPU cores and 8 GB of memory. +* Exposes TCP port 5000 and allocates a pseudo-TTY for the container. +* Automatically removes the container after it exits. The container image is still available on the host computer. ++# [Disconnected speech to text](#tab/disconnected) ++To run disconnected containers (not connected to the internet), you must submit [this request form](https://aka.ms/csdisconnectedcontainers) and wait for approval. For more information about applying and purchasing a commitment plan to use containers in disconnected environments, see [Use containers in disconnected environments](../containers/disconnected-containers.md) in the Azure Cognitive Services documentation. ++If you have been approved to run the container disconnected from the internet, the following example shows the formatting of the `docker run` command to use, with placeholder values. Replace these placeholder values with your own values. ++The `DownloadLicense=True` parameter in your `docker run` command will download a license file that will enable your Docker container to run when it isn't connected to the internet. It also contains an expiration date, after which the license file will be invalid to run the container. You can only use a license file with the appropriate container that you've been approved for. For example, you can't use a license file for a `speech-to-text` container with a `neural-text-to-speech` container. ++| Placeholder | Description | +|-|-| +| `{IMAGE}` | The container image you want to use.<br/><br/>For example: `mcr.microsoft.com/azure-cognitive-services/speech-to-text:latest` | +| `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted.<br/><br/>For example: `/host/license:/path/to/license/directory` | +| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal.<br/><br/>For example: `https://<your-resource-name>.cognitiveservices.azure.com` | +| `{API_KEY}` | The key for your Speech resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. 
| +| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` | ++```bash +docker run --rm -it -p 5000:5000 \ +-v {LICENSE_MOUNT} \ +{IMAGE} \ +eula=accept \ +billing={ENDPOINT_URI} \ +apikey={API_KEY} \ +DownloadLicense=True \ +Mounts:License={CONTAINER_LICENSE_DIRECTORY} +``` ++Once the license file has been downloaded, you can run the container in a disconnected environment. The following example shows the formatting of the `docker run` command you'll use, with placeholder values. Replace these placeholder values with your own values. ++Wherever the container is run, the license file must be mounted to the container and the location of the license folder on the container's local filesystem must be specified with `Mounts:License=`. An output mount must also be specified so that billing usage records can be written. ++| Placeholder | Description | +|-|-| +| `{IMAGE}` | The container image you want to use.<br/><br/>For example: `mcr.microsoft.com/azure-cognitive-services/speech-to-text:latest` | +| `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container.<br/><br/>For example: `4g` | +| `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container.<br/><br/>For example: `4` | +| `{LICENSE_MOUNT}` | The path where the license will be located and mounted.<br/><br/>For example: `/host/license:/path/to/license/directory` | +| `{OUTPUT_PATH}` | The output path for logging.<br/><br/>For example: `/host/output:/path/to/output/directory`<br/><br/>For more information, see [usage records](../containers/disconnected-containers.md#usage-records) in the Azure Cognitive Services documentation. | +| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` | +| `{CONTAINER_OUTPUT_DIRECTORY}` | Location of the output folder on the container's local filesystem.<br/><br/>For example: `/path/to/output/directory` | ++```bash +docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \ +-v {LICENSE_MOUNT} \ +-v {OUTPUT_PATH} \ +{IMAGE} \ +eula=accept \ +Mounts:License={CONTAINER_LICENSE_DIRECTORY} \ +Mounts:Output={CONTAINER_OUTPUT_DIRECTORY} +``` ++Speech containers provide a default directory for writing the license file and billing log at runtime. The default directories are `/license` and `/output`, respectively. ++When you're mounting these directories to the container with the `docker run -v` command, make sure the ownership of the local machine directory is set to `user:group nonroot:nonroot` before running the container. ++Below is a sample command to set file/directory ownership. ++```bash +sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ... +``` ++++For more information about `docker run` with Speech containers, see [Install and run Speech containers with Docker](speech-container-howto.md#run-the-container). +++## Use the container +++[Try the speech-to-text quickstart](get-started-speech-to-text.md) using host authentication instead of key and region. ++## Next steps ++* See the [Speech containers overview](speech-container-overview.md) +* Review [configure containers](speech-container-configuration.md) for configuration settings +* Use more [Azure Cognitive Services containers](../cognitive-services-container-support.md) + |
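Before pointing the quickstart at a locally running speech-to-text container, you can confirm the container is healthy with the standard Cognitive Services container endpoints. This is a minimal sketch that assumes the container was started with `-p 5000:5000` as in the examples above; see the how-to linked above for the full guidance on validating a running container.

```bash
# Readiness/liveness checks exposed by Azure Cognitive Services containers.
curl -s http://localhost:5000/ready    # reports whether the model is loaded and ready to accept requests
curl -s http://localhost:5000/status   # reports container status (and, for connected containers, that the key is valid)
```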
cognitive-services | Speech Services Quotas And Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md | This section describes text-to-speech quotas and limits per Speech resource. Unl | Quota | Free (F0)| Standard (S0) | |--|--|--|-| File size | 3,000 characters per file | 20,000 characters per file | +| File size (plain text in SSML)<sup>1</sup> | 3,000 characters per file | 20,000 characters per file | +| File size (lexicon file)<sup>2</sup> | 3,000 characters per file | 20,000 characters per file | +| Billable characters in SSML| 15,000 characters per file | 100,000 characters per file | | Export to audio library | 1 concurrent task | N/A | +<sup>1</sup> The limit only applies to plain text in SSML and doesn't include tags. ++<sup>2</sup> The limit includes all text, including tags. The characters of the lexicon file aren't charged. Only the lexicon elements in SSML are counted as billable characters. Refer to [billable characters](text-to-speech.md#billable-characters) to learn more. + ### Speaker recognition quotas and limits per resource Speaker recognition is limited to 20 transactions per second (TPS). |
cognitive-services | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/faq.md | |
cognitive-services | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/release-notes.md | |
cognitive-services | Use Rest Api Programmatically | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/how-to-guides/use-rest-api-programmatically.md | The `sourceUrl` , `targetUrl` , and optional `glossaryUrl` must include a Share A batch Document Translation request is submitted to your Translator service endpoint via a POST request. If successful, the POST method returns a `202 Accepted` response code and the service creates a batch request. The translated documents are listed in your target container. +For detailed information regarding Azure Translator Service request limits, _see_ [**Document Translation request limits**](../../request-limits.md#document-translation). + ### HTTP headers The following headers are included with each Document Translation API request: func main() { -## Content limits --This table lists the limits for data that you send to Document Translation: --|Attribute | Limit| -||| -|Document size| ≤ 40 MB | -|Total number of files.|≤ 1000 | -|Total content size in a batch | ≤ 250 MB| -|Number of target languages in a batch| ≤ 10 | -|Size of Translation memory file| ≤ 10 MB| --Document Translation can't be used to translate secured documents such as those with an encrypted password or with restricted access to copy content. --## Troubleshooting - ### Common HTTP status codes | HTTP status code | Description | Possible reason | Document Translation can't be used to translate secured documents such as those > [!div class="nextstepaction"] > [Create a customized language system using Custom Translator](../../custom-translator/overview.md)-> -> |
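To make the `202 Accepted` flow above concrete, here's a hedged curl sketch of a batch submission. The endpoint host, the `v1.0` API version, and the SAS URLs are placeholders and assumptions — take the exact request path and payload shape from the Document Translation reference linked in the article.

```bash
# Hypothetical example: submit a batch Document Translation job.
# Replace the resource name, key, SAS URLs, and API version with your own values.
curl -i -X POST "https://<your-resource-name>.cognitiveservices.azure.com/translator/text/batch/v1.0/batches" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{
        "inputs": [
          {
            "source":  { "sourceUrl": "<source-container-SAS-URL>" },
            "targets": [ { "targetUrl": "<target-container-SAS-URL>", "language": "fr" } ]
          }
        ]
      }'
# A successful submission returns HTTP 202 Accepted; the job can then be tracked
# via the URL returned in the Operation-Location response header.
```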
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/overview.md | Document Translation is a cloud-based feature of the [Azure Translator](../trans > [!NOTE] > When translating documents with content in multiple languages, the feature is intended for complete sentences in a single language. If sentences are composed of more than one language, the content may not all translate into the target language.-> For more information on input requirements, *see* [content limits](get-started-with-document-translation.md#content-limits) +> For more information on input requirements, *see* [Document Translation request limits](../request-limits.md#document-translation) ## Document Translation development options Document Translation supports the following document file types: |Tab Separated Values/TAB|`tsv`/`tab`| A tab-delimited raw-data file used by spreadsheet programs.| |Text|`txt`| An unformatted text document.| +## Request limits ++For detailed information regarding Azure Translator Service request limits, *see* [**Document Translation request limits**](../request-limits.md#document-translation). + ### Legacy file types Source file types are preserved during the document translation with the following **exceptions**: |
cognitive-services | Get Started With Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/quickstarts/get-started-with-rest-api.md | For this project, you need a **source document** uploaded to your **source conta A batch Document Translation request is submitted to your Translator service endpoint via a POST request. If successful, the POST method returns a `202 Accepted` response code and the service creates a batch request. The translated documents are listed in your target container. +For detailed information regarding Azure Translator Service request limits, *see* [**Document Translation request limits**](../../request-limits.md#document-translation). + ### Headers The following headers are included with each Document Translation API request: |
cognitive-services | Quickstart Translator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/quickstart-translator.md | Header|Value| Condition | The core operation of the Translator service is translating text. In this quickstart, you build a request using a programming language of your choice that takes a single source (`from`) and provides two outputs (`to`). Then we review some parameters that can be used to adjust both the request and the response. +For detailed information regarding Azure Translator Service request limits, *see* [**Text translation request limits**](request-limits.md#text-translation). + ### [C#: Visual Studio](#tab/csharp) ### Set up your Visual Studio project |
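As a minimal curl sketch of the "single source (`from`), two outputs (`to`)" request described in the quickstart above — substitute your own key and resource region, and note that the characters are counted once per target language against the per-request limit:

```bash
# Translate one source text into French and German in a single request.
curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=fr&to=de" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Ocp-Apim-Subscription-Region: <your-resource-region>" \
  -H "Content-Type: application/json" \
  -d '[{ "Text": "Hello, what is your name?" }]'
```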
cognitive-services | Request Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/request-limits.md | Title: Request limits - Translator + Title: Request limits - Translator Service -description: This article lists request limits for the Translator. Charges are incurred based on character count, not request frequency with a limit of 50,000 characters per request. Character limits are subscription-based, with F0 limited to 2 million characters per hour. +description: This article lists request limits for the Translator text and document translation. Charges are incurred based on character count, not request frequency with a limit of 50,000 characters per request. Character limits are subscription-based, with F0 limited to 2 million characters per hour. Previously updated : 08/17/2022 Last updated : 04/17/2023 -# Request limits for Translator +# Request limits for Azure Translator Service -This article provides throttling limits for the Translator translation, transliteration, sentence length detection, language detection, and alternate translations. +This article provides both a quick reference and detailed description of Azure Translator Service character and array limits for text and document translation. -## Character and array limits per request +## Text translation -Each translate request is limited to 50,000 characters, across all the target languages you're translating to. For example, sending a translate request of 3,000 characters to translate to three different languages results in a request size of 3000x3 = 9,000 characters, which satisfy the request limit. You're charged per character, not by the number of requests. It's recommended to send shorter requests. +Charges are incurred based on character count, not request frequency. Character limits are subscription-based. -The following table lists array element and character limits for each operation of the Translator. +### Character and array limits per request ++Each translate request is limited to 50,000 characters, across all the target languages. For example, sending a translate request of 3,000 characters to translate to three different languages results in a request size of 3,000 × 3 = 9,000 characters and meets the request limit. You're charged per character, not by the number of requests, therefore, it's recommended that you send shorter requests. ++The following table lists array element and character limits for each text translation operation. | Operation | Maximum Size of Array Element | Maximum Number of Array Elements | Maximum Request Size (characters) | |:-|:-|:-|:-|-| Translate | 50,000| 1,000| 50,000 | -| Transliterate | 5,000| 10| 5,000 | -| Detect | 50,000 |100 |50,000 | -| BreakSentence | 50,000| 100 |50,000 | -| Dictionary Lookup| 100 |10| 1,000 | -| Dictionary Examples | 100 for text and 100 for translation (200 total)| 10|2,000 | +| **Translate** | 50,000| 1,000| 50,000 | +| **Transliterate** | 5,000| 10| 5,000 | +| **Detect** | 50,000 |100 |50,000 | +| **BreakSentence** | 50,000| 100 |50,000 | +| **Dictionary Lookup** | 100 |10| 1,000 | +| **Dictionary Examples** | 100 for text and 100 for translation (200 total)| 10|2,000 | -## Character limits per hour +### Character limits per hour Your character limit per hour is based on your Translator subscription tier. Limits for [multi-service subscriptions](./reference/v3-0-reference.md#authentic These limits are restricted to Microsoft's standard translation models. 
Custom translation models that use Custom Translator are limited to 3,600 characters per second, per model. -## Latency +### Latency ++The Translator has a maximum latency of 15 seconds using standard models and 120 seconds when using custom models. Typically, responses *for text within 100 characters* are returned in 150 milliseconds to 300 milliseconds. The custom translator models have similar latency characteristics on sustained request rate and may have a higher latency when your request rate is intermittent. Response times vary based on the size of the request and language pair. If you don't receive a translation or an [error response](./reference/v3-0-reference.md#errors) within that time frame, check your code, your network connection, and retry. ++## Document Translation ++This table lists the content limits for data sent using Document Translation: ++|Attribute | Limit| +||| +|Document size| ≤ 40 MB | +|Total number of files.|≤ 1000 | +|Total content size in a batch | ≤ 250 MB| +|Number of target languages in a batch| ≤ 10 | +|Size of Translation memory file| ≤ 10 MB| -The Translator has a maximum latency of 15 seconds using standard models and 120 seconds when using custom models. Typically, responses *for text within 100 characters* are returned in 150 milliseconds to 300 milliseconds. The custom translator models have similar latency characteristics on sustained request rate and may have a higher latency when your request rate is intermittent. Response times will vary based on the size of the request and language pair. If you don't receive a translation or an [error response](./reference/v3-0-reference.md#errors) within that timeframe, check your code, your network connection, and retry. +> [!NOTE] +> Document Translation can't be used to translate secured documents such as those with an encrypted password or with restricted access to copy content. ## Next steps |
cognitive-services | Translator Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/translator-faq.md | Translator counts the following input: * An individual letter. * Punctuation. * A space, tab, markup, or any white-space character.-* A repeated translation, even if you've previously translated the same text. Every character submitted to the translate function is counted even when the content is unchanged or the source and target language are the same. +* A repeated translation, even if you have previously translated the same text. Every character submitted to the translate function is counted even when the content is unchanged or the source and target language are the same. For scripts based on graphic symbols, such as written Chinese and Japanese Kanji, the Translator service counts the number of Unicode code points. One character per symbol. Exception: Unicode surrogate pairs count as two characters. Calls to the **Detect** and **BreakSentence** methods aren't counted in the character consumption. However, we do expect calls to the Detect and BreakSentence methods to be reasonably proportionate to the use of other counted functions. If the number of Detect or BreakSentence calls exceeds the number of other counted methods by 100 times, Microsoft reserves the right to restrict your use of the Detect and BreakSentence methods. +For detailed information regarding Azure Translator Service request limits, *see* [**Text translation request limits**](request-limits.md#text-translation). + ## Where can I see my monthly usage? The [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) can be used to estimate your costs. You can also monitor, view, and add Azure alerts for your Azure services in your user account in the Azure portal: The [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) :::image type="content" source="media/azure-portal-overview.png" alt-text="Screenshot of the subscription link on overview page in the Azure portal."::: -2. In the left rail, make your selection under **Cost Management**: +1. In the left rail, make your selection under **Cost Management**: :::image type="content" source="media/azure-portal-cost-management.png" alt-text="Screenshot of the cost management resources links in the Azure portal."::: ## Is attribution required when using Translator? -Attribution isn't required when using Translator for text and speech translation. It is recommended that you inform users that the content they're viewing is machine translated. +Attribution isn't required when using Translator for text and speech translation. It's recommended that you inform users that the content they're viewing is machine translated. If attribution is present, it must conform to the [Translator attribution guidelines](https://www.microsoft.com/translator/business/attribution/). |
cognitive-services | Translator Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/translator-overview.md | -Translator Service is a cloud-based neural machine translation service that is part of the [Azure Cognitive Services](../what-are-cognitive-services.md) family of REST APIs and can be used with any operating system. Translator powers many Microsoft products and services used by thousands of businesses worldwide to perform language translation and other language-related operations. In this overview, you'll learn how Translator can enable you to build intelligent, multi-language solutions for your applications across all [supported languages](./language-support.md). +Translator Service is a cloud-based neural machine translation service that is part of the [Azure Cognitive Services](../what-are-cognitive-services.md) family of REST APIs and can be used with any operating system. Translator powers many Microsoft products and services used by thousands of businesses worldwide to perform language translation and other language-related operations. In this overview, you learn how Translator can enable you to build intelligent, multi-language solutions for your applications across all [supported languages](./language-support.md). Translator documentation contains the following article types: Translator documentation contains the following article types: ## Translator features and development options -The following features are supported by the Translator service. Use the links in this table to learn more about each feature and browse the API references. +Translator service supports the following features. Use the links in this table to learn more about each feature and browse the API references. | Feature | Description | Development options | |-|-|--| The following features are supported by the Translator service. Use the links in | [**Document Translation**](document-translation/overview.md) | Translate batch and complex files while preserving the structure and format of the original documents. | <ul><li>[**REST API**](document-translation/reference/rest-api-guide.md)</li><li>[**Client-library SDK**](document-translation/how-to-guides/use-client-sdks.md)</li></ul> | | [**Custom Translator**](custom-translator/overview.md) | Build customized models to translate domain- and industry-specific language, terminology, and style. | <ul><li>[**Custom Translator portal**](https://portal.customtranslator.azure.ai/)</li></ul> | +For detailed information regarding Azure Translator Service request limits, *see* [**Text translation request limits**](request-limits.md#text-translation). + ## Try the Translator service for free -First, you'll need a Microsoft account; if you don't have one, you can sign up for free at the [**Microsoft account portal**](https://account.microsoft.com/account). Select **Create a Microsoft account** and follow the steps to create and verify your new account. +First, you need a Microsoft account; if you don't have one, you can sign up for free at the [**Microsoft account portal**](https://account.microsoft.com/account). Select **Create a Microsoft account** and follow the steps to create and verify your new account. -Next, you'll need to have an Azure accountΓÇönavigate to the [**Azure sign-up page**](https://azure.microsoft.com/free/ai/), select the **Start free** button, and create a new Azure account using your Microsoft account credentials. 
+Next, you need to have an Azure account. Navigate to the [**Azure sign-up page**](https://azure.microsoft.com/free/ai/), select the **Start free** button, and create a new Azure account using your Microsoft account credentials. Now, you're ready to get started! [**Create a Translator service**](how-to-create-translator-resource.md "Go to the Azure portal."), [**get your access keys and API endpoint**](how-to-create-translator-resource.md#authentication-keys-and-endpoint-url "An endpoint URL and read-only key are required for authentication."), and try our [**quickstart**](quickstart-translator.md "Learn to use Translator via REST."). |
cognitive-services | Cognitive Services Container Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-container-support.md | Azure Cognitive Services containers provide the following set of Docker containe | Service | Container | Description | Availability | |--|--|--|--|-| [Speech Service API][sp-containers-stt] | **Speech-to-text** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-custom-speech-to-text)) | Transcribes continuous real-time speech into text. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). | +| [Speech Service API][sp-containers-stt] | **Speech-to-text** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-speech-to-text)) | Transcribes continuous real-time speech into text. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). | | [Speech Service API][sp-containers-cstt] | **Custom Speech-to-text** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-custom-speech-to-text)) | Transcribes continuous real-time speech into text using a custom model. | Generally available <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). | | [Speech Service API][sp-containers-ntts] | **Neural Text-to-speech** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-neural-text-to-speech)) | Converts text to natural-sounding speech using deep neural network technology, allowing for more natural synthesized speech. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). | | [Speech Service API][sp-containers-lid] | **Speech language detection** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-language-detection)) | Determines the language of spoken audio. | Gated preview | Install and explore the functionality provided by containers in Azure Cognitive [lu-containers]: luis/luis-container-howto.md [sp-containers]: speech-service/speech-container-howto.md [spa-containers]: ./computer-vision/spatial-analysis-container.md-[sp-containers-lid]: speech-service/speech-container-howto.md?tabs=lid -[sp-containers-stt]: speech-service/speech-container-howto.md?tabs=stt -[sp-containers-cstt]: speech-service/speech-container-howto.md?tabs=cstt -[sp-containers-tts]: speech-service/speech-container-howto.md?tabs=tts -[sp-containers-ctts]: speech-service/speech-container-howto.md?tabs=ctts -[sp-containers-ntts]: speech-service/speech-container-howto.md?tabs=ntts +[sp-containers-lid]: speech-service/speech-container-lid.md +[sp-containers-stt]: speech-service/speech-container-stt.md +[sp-containers-cstt]: speech-service/speech-container-cstt.md +[sp-containers-ntts]: speech-service/speech-container-ntts.md [ta-containers]: language-service/overview.md#deploy-on-premises-using-docker-containers [ta-containers-keyphrase]: language-service/key-phrase-extraction/how-to/use-containers.md [ta-containers-language]: language-service/language-detection/how-to/use-containers.md |
cognitive-services | Container Reuse Recipe | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/container-reuse-recipe.md | -# SME: Siddhartha Prasad <siprasa@microsoft.com> # Create containers for reuse |
cognitive-services | Disconnected Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/disconnected-containers.md | Access is limited to customers that meet the following requirements: **Speech service** - * [Speech-to-Text](../speech-service/speech-container-howto.md?tabs=stt#run-the-container-disconnected-from-the-internet) - * [Custom Speech-to-Text](../speech-service/speech-container-howto.md?tabs=cstt#run-the-container-disconnected-from-the-internet-1) - * [Neural Text-to-Speech](../speech-service/speech-container-howto.md?tabs=ntts#run-the-container-disconnected-from-the-internet-2) + * [Speech-to-Text](../speech-service/speech-container-stt.md?tabs=disconnected#run-the-container-with-docker-run) + * [Custom Speech-to-Text](../speech-service/speech-container-cstt.md?tabs=disconnected#run-the-container-with-docker-run) + * [Neural Text-to-Speech](../speech-service/speech-container-ntts.md?tabs=disconnected#run-the-container-with-docker-run) **Language service** |
cognitive-services | Tag Utterances | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/tag-utterances.md | To delete an entity: In CLU, use Azure OpenAI to suggest utterances to add to your project using GPT models. You first need to get access and create a resource in Azure OpenAI. You'll then need to create a deployment for the GPT models. Follow the pre-requisite steps [here](../../../openai/how-to/create-resource.md). +Before you get started, the suggest utterances feature is only available if your Language resource is in the following regions: +* East US +* South Central US +* West Europe + In the Data Labeling page: 1. Click on the **Suggest utterances** button. A pane will open up on the right side prompting you to select your Azure OpenAI resource and deployment. |
cognitive-services | Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md | These models can be used with Completion API requests. `gpt-35-turbo` is the onl | Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | | | | - | -- | - |-| ada | N/A | East US <sup>2</sup> | 2,049 | Oct 2019| +| ada | N/A | South Central US, West Europe <sup>2</sup> | 2,049 | Oct 2019| | text-ada-001 | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019|-| babbage | N/A | East US<sup>2</sup> | 2,049 | Oct 2019 | +| babbage | N/A | South Central US, West Europe<sup>2</sup> | 2,049 | Oct 2019 | | text-babbage-001 | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019 |-| curie | N/A | East US<sup>2</sup> | 2,049 | Oct 2019 | +| curie | N/A | South Central US, West Europe<sup>2</sup> | 2,049 | Oct 2019 | | text-curie-001 | East US, South Central US, West Europe | N/A | 2,049 | Oct 2019 | | davinci<sup>1</sup> | N/A | Currently unavailable | 2,049 | Oct 2019| | text-davinci-001 | South Central US, West Europe | N/A | | | These models can be used with Completion API requests. `gpt-35-turbo` is the onl | gpt-35-turbo<sup>3</sup> (ChatGPT) (preview) | East US, South Central US | N/A | 4,096 | Sep 2021 | <sup>1</sup> The model is available by request only. Currently we aren't accepting new requests to use the model.-<br><sup>2</sup> South Central US and West Europe were previously available, but due to high demand they are currently unavailable for new customers to use for fine-tuning. Please use the East US region for fine-tuning. +<br><sup>2</sup> East US was previously available, but due to high demand this region is currently unavailable for new customers to use for fine-tuning. Please use the South Central US, and West Europe regions for fine-tuning. <br><sup>3</sup> Currently, only version `0301` of this model is available. This version of the model will be deprecated on 8/1/2023 in favor of newer version of the gpt-35-model. See [ChatGPT model versioning](../how-to/chatgpt.md#model-versioning) for more details. ### GPT-4 Models |
cognitive-services | Chatgpt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/chatgpt.md | description: Learn about the options for how to use the ChatGPT and GPT-4 models -+ Last updated 03/21/2023 keywords: ChatGPT |
cognitive-services | Quotas Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quotas-limits.md | The following sections provide you with a quick guide to the quotas and limits t | Limit Name | Limit Value | |--|--| | OpenAI resources per region | 2 | -| Requests per minute per model* | Davinci-models (002 and later): 120 <br> ChatGPT model (preview): 300 <br> GPT-4 models (preview): 12 <br> All other models: 300 | +| Requests per minute per model* | Davinci-models (002 and later): 120 <br> ChatGPT model (preview): 300 <br> GPT-4 models (preview): 18 <br> All other models: 300 | | Tokens per minute per model* | Davinci-models (002 and later): 40,000 <br> ChatGPT model: 120,000 <br> All other models: 120,000 | | Max fine-tuned model deployments* | 2 | | Ability to deploy same model to multiple deployments | Not allowed | The following sections provide you with a quick guide to the quotas and limits t *The limits are subject to change. We anticipate that you will need higher limits as you move toward production and your solution scales. When you know your solution requirements, please reach out to us by applying for a quota increase here: <https://aka.ms/oai/quotaincrease> + For information on max tokens for different models, consult the [models article](./concepts/models.md#model-summary-table-and-region-availability) ### General best practices to mitigate throttling during autoscaling The next sections describe specific cases of adjusting quotas. If you need to increase the limit, you can apply for a quota increase here: <https://aka.ms/oai/quotaincrease> +### How to request an increase to the number of resources per region ++If you need to increase the number of resources, you can apply for a resource increase here: <https://aka.ms/oai/resourceincrease> ++> [!NOTE] +> Ensure that you thoroughly assess your current resource utilization, approaching its full capacity. Be aware that we will not grant additional resources if efficient usage of existing resources is not observed. + ## Next steps Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md). |
cognitive-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/whats-new.md | |
communication-services | Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md | In this article, you will learn which capabilities are supported for Teams exter | Screen sharing | Share the entire screen from within the application | ✔️ | | | Share a specific application (from the list of running applications) | ✔️ | | | Share a web browser tab from the list of open tabs | ✔️ |+| | Receive your screen sharing stream | ❌ | | | Share content in "content-only" mode | ✔️ | | | Receive video stream with content for "content-only" screen sharing experience | ✔️ | | | Share content in "standout" mode | ❌ | |
communication-services | Teams User Calling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user-calling.md | The following list presents the set of features that are currently available in | Screen sharing | Share the entire screen from within the application | ✔️ | | | Share a specific application (from the list of running applications) | ✔️ | | | Share a web browser tab from the list of open tabs | ✔️ |+| | Receive your screen sharing stream | ❌ | | | Share content in "content-only" mode | ✔️ | | | Receive video stream with content for "content-only" screen sharing experience | ✔️ | | | Share content in "standout" mode | ❌ | |
communication-services | Meeting Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/meeting-capabilities.md | The following list of capabilities is allowed when Teams user participates in Te | Screen sharing | Share the entire screen from within the application | ✔️ | | | Share a specific application (from the list of running applications) | ✔️ | | | Share a web browser tab from the list of open tabs | ✔️ |+| | Receive your screen sharing stream | ❌ | | | Share content in "content-only" mode | ✔️ | | | Receive video stream with content for "content-only" screen sharing experience | ✔️ | | | Share content in "standout" mode | ❌ | |
communication-services | Send Email | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/send-email.md | In this quick start, you'll learn about how to send email using our Email SDKs. [!INCLUDE [Send Email with Python SDK](./includes/send-email-python.md)] ::: zone-end [!INCLUDE [Azure Logic Apps](./includes/send-email-logic-app.md)] ::: zone-end |
communication-services | Send | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/send.md | -zone_pivot_groups: acs-azcli-js-csharp-java-python-power-platform +zone_pivot_groups: acs-azcli-js-csharp-java-python-logic-apps # Quickstart: Send an SMS message |
communication-services | Click To Call Widget | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/widgets/click-to-call-widget.md | -## Architecture overview ## Prerequisites - An Azure account with an active subscription. For details, see [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Follow instructions from our [trusted user access service tutorial](../trusted-s 1. Create an HTML file named `https://docsupdatetracker.net/index.html` and add the following code to it: -``` html -- <!DOCTYPE html> - <html> - <head> - <meta charset="utf-8"> - <title>Call Widget App - Vanilla</title> - <link rel="stylesheet" href="style.css"> - </head> - <body> - <div id="call-widget"> - <div id="call-widget-header"> - <div id="call-widget-header-title">Call Widget App</div> - <button class='widget'> ? </button > - <div class='callWidget'></div> + ``` html ++ <!DOCTYPE html> + <html> + <head> + <meta charset="utf-8"> + <title>Call Widget App - Vanilla</title> + <link rel="stylesheet" href="style.css"> + </head> + <body> + <div id="call-widget"> + <div id="call-widget-header"> + <div id="call-widget-header-title">Call Widget App</div> + <button class='widget'> ? </button > + <div class='callWidget'></div> + </div> </div>- </div> - </body> - </html> + </body> + </html> -``` + ``` 2. Create a CSS file named `style.css` and add the following code to it: -``` css -- .widget { - height: 75px; - width: 75px; - position: absolute; - right: 0; - bottom: 0; - background-color: blue; - margin-bottom: 35px; - margin-right: 35px; - border-radius: 50%; - text-align: center; - vertical-align: middle; - line-height: 75px; - color: white; - font-size: 30px; - } - - .callWidget { - height: 400px; - width: 600px; - background-color: blue; - position: absolute; - right: 35px; - bottom: 120px; - z-index: 10; - display: none; - border-radius: 5px; - border-style: solid; - border-width: 5px; - } --``` --1. Configure the call window to be hidden by default. We show it when the user clicks the button. --``` html + ``` css ++ .widget { + height: 75px; + width: 75px; + position: absolute; + right: 0; + bottom: 0; + background-color: blue; + margin-bottom: 35px; + margin-right: 35px; + border-radius: 50%; + text-align: center; + vertical-align: middle; + line-height: 75px; + color: white; + font-size: 30px; + } ++ .callWidget { + height: 400px; + width: 600px; + background-color: blue; + position: absolute; + right: 35px; + bottom: 120px; + z-index: 10; + display: none; + border-radius: 5px; + border-style: solid; + border-width: 5px; + } ++ ``` ++3. Configure the call window to be hidden by default. We show it when the user clicks the button. 
++ ``` html ++ <script> + var open = false; + const button = document.querySelector('.widget'); + const content = document.querySelector('.callWidget'); + button.addEventListener('click', async function() { + if(!open){ + open = !open; + content.style.display = 'block'; + button.innerHTML = 'X'; + //Add code to initialize call widget here + } else if (open) { + open = !open; + content.style.display = 'none'; + button.innerHTML = '?'; + } + }); - <script> - var open = false; - const button = document.querySelector('.widget'); - const content = document.querySelector('.callWidget'); - button.addEventListener('click', async function() { - if(!open){ - open = !open; - content.style.display = 'block'; - button.innerHTML = 'X'; - //Add code to initialize call widget here - } else if (open) { - open = !open; - content.style.display = 'none'; - button.innerHTML = '?'; + async function getAccessToken(){ + //Add code to get access token here }- }); - - async function getAccessToken(){ - //Add code to get access token here - } - </script> + </script> -``` + ``` At this point, we have set up a static HTML page with a button that opens a call widget when clicked. Next, we add the widget script code. It makes a call to our Azure Function to get the access token and then use it to initialize our call client for Azure Communication Services and start the call to the Teams user we define. Add the following code to the `getAccessToken()` function: } ```+ You need to add the URL of your Azure Function. You can find these values in the Azure portal under your Azure Function resource. You need to add the URL of your Azure Function. You can find these values in the 1. Add a script tag to load the call widget script: -``` html + ``` html - <script src="https://github.com/ddematheu2/ACS-UI-Library-Widget/releases/download/widget/callComposite.js"></script> + <script src="https://github.com/ddematheu2/ACS-UI-Library-Widget/releases/download/widget/callComposite.js"></script> -``` + ``` We provide a test script hosted on GitHub for you to use for testing. For production scenarios, we recommend hosting the script on your own CDN. For more information on how to build your own bundle, see [this article](https://azure.github.io/communication-ui-library/?path=/docs/use-composite-in-non-react-environment--page#build-your-own-composite-js-bundle-files). -1. Add the following code under the button event listener: +2. 
Add the following code under the button event listener: -``` javascript + ``` javascript - button.addEventListener('click', async function() { - if(!open){ - open = !open; - content.style.display = 'block'; - button.innerHTML = 'X'; - let response = await getChatContext(); - console.log(response); - const callAdapter = await callComposite.loadCallComposite( - { - displayName: "Test User", - locator: { participantIds: ['INSERT USER UNIQUE IDENTIFIER FROM MICROSOFT GRAPH']}, - userId: response.user, - token: response.userToken - }, - content, - { - formFactor: 'mobile', - key: new Date() - } - ); - } else if (open) { - open = !open; - content.style.display = 'none'; - button.innerHTML = '?'; - } - }); + button.addEventListener('click', async function() { + if(!open){ + open = !open; + content.style.display = 'block'; + button.innerHTML = 'X'; + let response = await getChatContext(); + console.log(response); + const callAdapter = await callComposite.loadCallComposite( + { + displayName: "Test User", + locator: { participantIds: ['INSERT USER UNIQUE IDENTIFIER FROM MICROSOFT GRAPH']}, + userId: response.user, + token: response.userToken + }, + content, + { + formFactor: 'mobile', + key: new Date() + } + ); + } else if (open) { + open = !open; + content.style.display = 'none'; + button.innerHTML = '?'; + } + }); -``` + ``` Add a Microsoft Graph [User](https://learn.microsoft.com/graph/api/resources/user?view=graph-rest-1.0) ID to the `participantIds` array. You can find this value through [Microsoft Graph](https://learn.microsoft.com/graph/api/user-get?view=graph-rest-1.0&tabs=http) or through [Microsoft Graph explorer](https://developer.microsoft.com/graph/graph-explorer) for testing purposes. There you can grab the `id` value from the response. |
communications-gateway | Prepare To Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-to-deploy.md | You must have signed an Operator Connect agreement with Microsoft. For more info You need an onboarding partner for integrating with Microsoft Phone System. If you're not eligible for onboarding to Microsoft Teams through Azure Communications Gateway's [Basic Integration Included Benefit](onboarding.md) or you haven't arranged alternative onboarding with Microsoft through a separate arrangement, you need to arrange an onboarding partner yourself. -You must ensure you've got two or more numbers that you own which are globally routable. Your onboarding team needs these numbers to configure test lines. +You must own globally routable numbers that you can use for testing, as follows. ++|Type of testing|Numbers required | +||| +|Automated validation testing by Microsoft Teams test suites|Minimum: 3. Recommended: 6 (to run tests simultaneously).| +|Manual test calls made by you and/or Microsoft staff during integration testing |Minimum: 1| We strongly recommend that you have a support plan that includes technical support, such as [Microsoft Unified Support](https://www.microsoft.com/en-us/unifiedsupport/overview) or [Premier Support](https://www.microsoft.com/en-us/unifiedsupport/premier). Collect all of the values in the following table for both service regions in whi ## 6. Collect Test Lines configuration values -Collect all of the values in the following table for all test lines you want to configure for Azure Communications Gateway. You must configure at least one test line. +Collect all of the values in the following table for all the test lines that you want to configure for Azure Communications Gateway. |**Value**|**Field name(s) in Azure portal**| ||| |The name of the test line. |**Name**|- |The phone number of the test line. |**Phone Number**| - |Whether the test line is manual or automated: **Manual** test lines will be used by you and Microsoft staff to make test calls during integration testing. **Automated** test lines will be assigned to Microsoft Teams test suites for validation testing. |**Testing purpose**| + |The phone number of the test line, in E.164 format and including the country code. |**Phone Number**| + |The purpose of the test line: **Manual** (for manual test calls by you and/or Microsoft staff during integration testing) or **Automated** (for automated validation with Microsoft Teams test suites).|**Testing purpose**| ++> [!IMPORTANT] +> You must configure at least three automated test lines. We recommend six automated test lines (to allow simultaneous tests). ## 7. Decide if you want tags |
container-apps | Background Processing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/background-processing.md | Now you can create the message queue. ```azurecli az storage queue create \- --name 'myqueue" \ + --name "myqueue" \ --account-name $STORAGE_ACCOUNT_NAME \ --connection-string $QUEUE_CONNECTION_STRING ``` Create a file named *queue.json* and paste the following configuration code into "type": "String" }, "environment_name": {- "defaultValue": "", "type": "String" }, "queueconnection": {- "defaultValue": "", - "type": "String" + "type": "secureString" } }, "variables": {}, |
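If you want to sanity-check the corrected queue name outside the Azure CLI, a small script like the following can enqueue a test message for the background processor to pick up. This is only a sketch: it assumes the `azure-storage-queue` package is installed and that the same connection string collected earlier is available in the `QUEUE_CONNECTION_STRING` environment variable.

``` python
import os
from azure.storage.queue import QueueClient

# Reuse the connection string collected earlier in the walkthrough (placeholder here).
connection_string = os.environ["QUEUE_CONNECTION_STRING"]

# Connect to the queue created with `az storage queue create --name "myqueue"`.
queue = QueueClient.from_connection_string(connection_string, queue_name="myqueue")

# Enqueue a test message; the background container app should dequeue and process it.
queue.send_message("Hello, queue reader")
print(f"Approximate message count: {queue.get_queue_properties().approximate_message_count}")
```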
container-registry | Container Registry Get Started Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-bicep.md | param location string = resourceGroup().location @description('Provide a tier of your Azure Container Registry.') param acrSku string = 'Basic' -resource acrResource 'Microsoft.ContainerRegistry/registries@2021-06-01-preview' = { +resource acrResource 'Microsoft.ContainerRegistry/registries@2023-01-01-preview' = { name: acrName location: location sku: { |
cosmos-db | Analytical Store Change Data Capture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-change-data-capture.md | Last updated 04/03/2023 [!INCLUDE[NoSQL, MongoDB](includes/appliesto-nosql-mongodb.md)] -Change data capture (CDC) in [Azure Cosmos DB analytical store](analytical-store-introduction.md) allows you to efficiently consume a continuous and incremental feed of changed (inserted, updated, and deleted) data from analytical store. The change data capture feature of the analytical store is seamlessly integrated with Azure Synapse and Azure Data Factory, providing you with a scalable no-code experience for high data volume. As the change data capture feature is based on analytical store, it [doesn't consume provisioned RUs, doesn't affect your transactional workloads](analytical-store-introduction.md#decoupled-performance-for-analytical-workloads), provides lower latency, and has lower TCO. --> [!IMPORTANT] -> This feature is currently in preview. +Change data capture (CDC) in [Azure Cosmos DB analytical store](analytical-store-introduction.md) allows you to efficiently consume a continuous and incremental feed of changed (inserted, updated, and deleted) data from analytical store. Seamlessly integrated with Azure Synapse and Azure Data Factory, it provides you with a scalable no-code experience for high data volume. As the change data capture feature is based on analytical store, it [doesn't consume provisioned RUs, doesn't affect your transactional workloads](analytical-store-introduction.md#decoupled-performance-for-analytical-workloads), provides lower latency, and has lower TCO. The change data capture feature in Azure Cosmos DB analytical store can write to various sinks using an Azure Synapse or Azure Data Factory data flow. For more information on supported sink types in a mapping data flow, see [data f In addition to providing incremental data feed from analytical store to diverse targets, change data capture supports the following capabilities: -- Supports applying filters, projections and transformations on the Change feed via source query - Supports capturing deletes and intermediate updates - Ability to filter the change feed for a specific type of operation (**Insert** | **Update** | **Delete** | **TTL**)-- Each change in Container appears exactly once in the change data capture feed, and the checkpoints are managed internally for you-- Changes can be synchronized from "the Beginning" or "from a given timestamp" or "from now"-- There's no limitation around the fixed data retention period for which changes are available+- Supports applying filters, projections and transformations on the Change feed via source query - Multiple change feeds on the same container can be consumed simultaneously+- Each change in container appears exactly once in the change data capture feed, and the checkpoints are managed internally for you +- Changes can be synchronized "from the Beginning" or "from a given timestamp" or "from now" +- There's no limitation around the fixed data retention period for which changes are available ++> [!IMPORTANT] +> "From the beginning" means that all data and all transactions since the container creation are available for CDC, including deletes and updates. To ingest and process deletes and updates, you have to use specific settings in your CDC processes in Azure Synapse or Azure Data Factory. These settings are turned off by default. 
For more information, click [here](get-started-change-data-capture.md) ## Features WHERE Category = 'Urban' > [!NOTE] > If you would like to enable source-query based change data capture on Azure Data Factory data flows during preview, please email [cosmosdbsynapselink@microsoft.com](mailto:cosmosdbsynapselink@microsoft.com) and share your **subscription Id** and **region**. This is not necessary to enable source-query based change data capture on an Azure Synapse data flow. +### Multiple CDC processes ++You can create multiple processes to consume CDC in analytical store. This approach brings flexibility to support different scenarios and requirements. While one process may have no data transformations and multiple sinks, another one can have data flattening and one sink. And they can run in parallel. ++ ### Throughput isolation, lower latency and lower TCO Operations on Cosmos DB analytical store don't consume the provisioned RUs and so don't affect your transactional workloads. change data capture with analytical store also has lower latency and lower TCO. The lower latency is attributed to analytical store enabling better parallelism for data processing and reduces the overall TCO enabling you to drive cost efficiencies in these rapidly shifting economic conditions. |
cosmos-db | Analytical Store Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md | Title: What is Azure Cosmos DB analytical store? -description: Learn about Azure Cosmos DB transactional (row-based) and analytical(column-based) store. Benefits of analytical store, performance impact for large-scale workloads, and auto sync of data from transactional store to analytical store +description: Learn about Azure Cosmos DB transactional (row-based) and analytical(column-based) store. Benefits of analytical store, performance impact for large-scale workloads, and auto sync of data from transactional store to analytical store. Previously updated : 03/24/2022 Last updated : 04/18/2023 Azure Cosmos DB transactional store is schema-agnostic, and it allows you to ite The multi-model operational data in an Azure Cosmos DB container is internally stored in an indexed row-based "transactional store". Row store format is designed to allow fast transactional reads and writes in the order-of-milliseconds response times, and operational queries. If your dataset grows large, complex analytical queries can be expensive in terms of provisioned throughput on the data stored in this format. High consumption of provisioned throughput in turn, impacts the performance of transactional workloads that are used by your real-time applications and services. -Traditionally, to analyze large amounts of data, operational data is extracted from Azure Cosmos DB's transactional store and stored in a separate data layer. For example, the data is stored in a data warehouse or data lake in a suitable format. This data is later used for large-scale analytics and analyzed using compute engine such as the Apache Spark clusters. This separation of analytical storage and compute layers from operational data results in additional latency, because the ETL(Extract, Transform, Load) pipelines are run less frequently to minimize the potential impact on your transactional workloads. +Traditionally, to analyze large amounts of data, operational data is extracted from Azure Cosmos DB's transactional store and stored in a separate data layer. For example, the data is stored in a data warehouse or data lake in a suitable format. This data is later used for large-scale analytics and analyzed using compute engines such as the Apache Spark clusters. The separation of analytical from operational data results in delays for analysts that want to use the most recent data. The ETL pipelines also become complex when handling updates to the operational data when compared to handling only newly ingested operational data. There's no impact on the performance of your transactional workloads due to anal ## Auto-Sync -Auto-Sync refers to the fully managed capability of Azure Cosmos DB where the inserts, updates, deletes to operational data are automatically synced from transactional store to analytical store in near real time. Auto-sync latency is usually within 2 minutes. In cases of shared throughput database with a large number of containers, auto-sync latency of individual containers could be higher and take up to 5 minutes. We would like to learn more how this latency fits your scenarios. For that, please reach out to the [Azure Cosmos DB Team](mailto:cosmosdbsynapselink@microsoft.com). 
+Auto-Sync refers to the fully managed capability of Azure Cosmos DB where the inserts, updates, deletes to operational data are automatically synced from transactional store to analytical store in near real time. Auto-sync latency is usually within 2 minutes. In cases of shared throughput database with a large number of containers, auto-sync latency of individual containers could be higher and take up to 5 minutes. At the end of each execution of the automatic sync process, your transactional data will be immediately available for Azure Synapse Analytics runtimes: The following constraints are applicable on the operational data in Azure Cosmos * Sample scenarios:- * If your document's first level has 2000 properties, only the first 1000 will be represented. - * If your documents have five levels with 200 properties in each one, all properties will be represented. - * If your documents have 10 levels with 400 properties in each one, only the two first levels will be fully represented in analytical store. Half of the third level will also be represented. + * If your document's first level has 2000 properties, the sync process will represent the first 1000 of them. + * If your documents have five levels with 200 properties in each one, the sync process will represent all properties. + * If your documents have 10 levels with 400 properties in each one, the sync process will fully represent the two first levels and only half of the third level. * The hypothetical document below contains four properties and three levels. * The levels are `root`, `myArray`, and the nested structure within the `myArray`. df = spark.read\ * MinKey/MaxKey * When using DateTime strings that follow the ISO 8601 UTC standard, expect the following behavior:- * Spark pools in Azure Synapse will represent these columns as `string`. - * SQL serverless pools in Azure Synapse will represent these columns as `varchar(8000)`. + * Spark pools in Azure Synapse represent these columns as `string`. + * SQL serverless pools in Azure Synapse represent these columns as `varchar(8000)`. * Properties with `UNIQUEIDENTIFIER (guid)` types are represented as `string` in analytical store and should be converted to `VARCHAR` in **SQL** or to `string` in **Spark** for correct visualization. -* SQL serverless pools in Azure Synapse support result sets with up to 1000 columns, and exposing nested columns also counts towards that limit. Please consider this information when designing your data architecture and modeling your transactional data. +* SQL serverless pools in Azure Synapse support result sets with up to 1000 columns, and exposing nested columns also counts towards that limit. It is a good practice to consider this information in your transactional data architecture and modeling. * If you rename a property, in one or many documents, it will be considered a new column. If you execute the same rename in all documents in the collection, all data will be migrated to the new column and the old column will be represented with `NULL` values. ### Schema representation -There are two types of schema representation in the analytical store. These types define the schema representation method for all containers in the database account and have tradeoffs between the simplicity of query experience versus the convenience of a more inclusive columnar representation for polymorphic schemas. +There are two methods of schema representation in the analytical store, valid for all containers in the database account. 
They have tradeoffs between the simplicity of query experience versus the convenience of a more inclusive columnar representation for polymorphic schemas. * Well-defined schema representation, default option for API for NoSQL and Gremlin accounts. * Full fidelity schema representation, default option for API for MongoDB accounts. The well-defined schema representation creates a simple tabular representation o * The first document defines the base schema and properties must always have the same type across all documents. The only exceptions are: * From `NULL` to any other data type. The first non-null occurrence defines the column data type. Any document not following the first non-null datatype won't be represented in analytical store.- * From `float` to `integer`. All documents will be represented in analytical store. - * From `integer` to `float`. All documents will be represented in analytical store. However, to read this data with Azure Synapse SQL serverless pools, you must use a WITH clause to convert the column to `varchar`. And after this initial conversion, it's possible to convert it again to a number. Please check the example below, where **num** initial value was an integer and the second one was a float. + * From `float` to `integer`. All documents are represented in analytical store. + * From `integer` to `float`. All documents are represented in analytical store. However, to read this data with Azure Synapse SQL serverless pools, you must use a WITH clause to convert the column to `varchar`. And after this initial conversion, it's possible to convert it again to a number. Please check the example below, where **num** initial value was an integer and the second one was a float. ```SQL SELECT CAST (num as float) as num WITH (num varchar(100)) AS [IntToFloat] > If the Azure Cosmos DB analytical store follows the well-defined schema representation and the specification above is violated by certain items, those items won't be included in the analytical store. * Expect different behavior in regard to different types in well-defined schema:- * Spark pools in Azure Synapse will represent these values as `undefined`. - * SQL serverless pools in Azure Synapse will represent these values as `NULL`. + * Spark pools in Azure Synapse represent these values as `undefined`. + * SQL serverless pools in Azure Synapse represent these values as `NULL`. * Expect different behavior in regard to explicit `NULL` values:- * Spark pools in Azure Synapse will read these values as `0` (zero). And it will change to `undefined` as soon as the column has a non-null value. - * SQL serverless pools in Azure Synapse will read these values as `NULL`. + * Spark pools in Azure Synapse read these values as `0` (zero), and as `undefined` as soon as the column has a non-null value. + * SQL serverless pools in Azure Synapse read these values as `NULL`. * Expect different behavior in regard to missing columns:- * Spark pools in Azure Synapse will represent these columns as `undefined`. - * SQL serverless pools in Azure Synapse will represent these columns as `NULL`. + * Spark pools in Azure Synapse represent these columns as `undefined`. + * SQL serverless pools in Azure Synapse represent these columns as `NULL`. ##### Representation challenges workarounds It is possible that an old document, with an incorrect schema, was used to create your container's analytical store base schema. 
Based on all the rules presented above, you may be receiving `NULL` for certain properties when querying your analytical store using Azure Synapse Link. To delete or update the problematic documents won't help because base schema reset isn't currently supported. The possible solutions are: * To migrate the data to a new container, making sure that all documents have the correct schema.- * To abandon the property with the wrong schema and add a new one, with another name, that has the correct schema in all documents. Example: You have billions of documents in the **Orders** container where the **status** property is a string. But the first document in that container has **status** defined with integer. So, one document will have **status** correctly represented and all other documents will have `NULL`. You can add the **status2** property to all documents and start to use it, instead of the original property. + * To abandon the property with the wrong schema and add a new one with another name that has the correct schema in all documents. Example: You have billions of documents in the **Orders** container where the **status** property is a string. But the first document in that container has **status** defined with integer. So, one document will have **status** correctly represented and all other documents will have `NULL`. You can add the **status2** property to all documents and start to use it, instead of the original property. #### Full fidelity schema representation the MongoDB `_id` field is fundamental to every collection in MongoDB and origin ###### Working with the MongoDB `_id` field in Spark -```Python -import org.apache.spark.sql.types._ -val simpleSchema = StructType(Array( -    StructField("_id", StructType(Array(StructField("objectId",BinaryType,true)) ),true), -    StructField("id", StringType, true) -  )) --df = spark.read.format("cosmos.olap")\ - .option("spark.synapse.linkedService", "<enter linked service name>")\ - .option("spark.cosmos.container", "<enter container name>")\ - .schema(simpleSchema) - .load() +The example below works on Spark 2.x and 3.x versions: -df.select("id", "_id.objectId").show() -``` +```Scala +val df = spark.read.format("cosmos.olap").option("spark.synapse.linkedService", "xxxx").option("spark.cosmos.container", "xxxx").load() -> [!NOTE] -> This workaround was designed to work with Spark 2.4. +val convertObjectId = udf((bytes: Array[Byte]) => { + val builder = new StringBuilder ++ for (b <- bytes) { + builder.append(String.format("%02x", Byte.box(b))) + } + builder.toString +} + ) ++val dfConverted = df.withColumn("objectId", col("_id.objectId")).withColumn("convertedObjectId", convertObjectId(col("_id.objectId"))).select("id", "objectId", "convertedObjectId") +display(dfConverted) +``` ###### Working with the MongoDB `_id` field in SQL It's possible to use full fidelity Schema for API for NoSQL accounts, instead of * Currently, if you enable Synapse Link in your NoSQL API account using the Azure Portal, it will be enabled as well-defined schema. * Currently, if you want to use full fidelity schema with NoSQL or Gremlin API accounts, you have to set it at account level in the same CLI or PowerShell command that will enable Synapse Link at account level.-* Currently Azure Cosmso DB for MongoDB isn't compatible with this possibility of changing the schema representation. All MongoDB accounts will always have full fidelity schema representation type. 
+* Currently Azure Cosmos DB for MongoDB isn't compatible with this possibility of changing the schema representation. All MongoDB accounts have full fidelity schema representation type. * It's not possible to reset the schema representation type, from well-defined to full fidelity or vice-versa.-* Currently, containers schema in analytical store are defined when the container is created, even if Synapse Link has not been enabled in the database account. +* Currently, containers schemas in analytical store are defined when the container is created, even if Synapse Link has not been enabled in the database account. * Containers or graphs created before Synapse Link was enabled with full fidelity schema at account level will have well-defined schema. * Containers or graphs created after Synapse Link was enabled with full fidelity schema at account level will have full fidelity schema. After the analytical store is enabled, based on the data retention needs of the Analytical store relies on Azure Storage and offers the following protection against physical failure: - * By default, Azure Cosmos DB database accounts allocate analytical store in Locally Redundant Storage (LRS) accounts. - * If any geo-region of the database account is configured for zone-redundancy, it is allocated in Zone-redundant Storage (ZRS) accounts. Customers need to enable Availability Zones on a region of their Azure Cosmos DB database account to have analytical data of that region stored in ZRS. + * By default, Azure Cosmos DB database accounts allocate analytical store in Locally Redundant Storage (LRS) accounts. LRS provides at least 99.999999999% (11 nines) durability of objects over a given year. + * If any geo-region of the database account is configured for zone-redundancy, it is allocated in Zone-redundant Storage (ZRS) accounts. Customers need to enable Availability Zones on a region of their Azure Cosmos DB database account to have analytical data of that region stored in Zone-redundant Storage. ZRS offers durability for storage resources of at least 99.9999999999% (12 9's) over a given year. ++For more information about Azure Storage durability, click [here](https://learn.microsoft.com/azure/storage/common/storage-redundancy). ## Backup Synapse Link, and analytical store by consequence, has different compatibility l * Periodic backup mode is fully compatible with Synapse Link and these 2 features can be used in the same database account. * Currently Continuous backup mode and Synapse Link aren't supported in the same database account. Customers have to choose one of these two features and this decision can't be changed. -### Backup Polices +### Backup policies There are two possible backup polices and to understand how to use them, the following details about Azure Cosmos DB backups are very important: If you want to delete the original container but don't want to lose its analytic It's important to note that the data in the analytical store has a different schema than what exists in the transactional store. While you can generate snapshots of your analytical store data, and export it to any Azure Data service, at no RUs costs, we can't guarantee the use of this snapshot to back feed the transactional store. This process isn't supported. -## Global Distribution +## Global distribution If you have a globally distributed Azure Cosmos DB account, after you enable analytical store for a container, it will be available in all regions of that account. 
Any changes to operational data are globally replicated in all regions. You can run analytical queries effectively against the nearest regional copy of your data in Azure Cosmos DB. In order to get a high-level cost estimate to enable analytical store on an Azur Analytical store read operations estimates aren't included in the Azure Cosmos DB cost calculator since they are a function of your analytical workload. But as a high-level estimate, scan of 1 TB of data in analytical store typically results in 130,000 analytical read operations, and results in a cost of $0.065. As an example, if you use Azure Synapse serverless SQL pools to perform this scan of 1 TB, it will cost $5.00 according to [Azure Synapse Analytics pricing page](https://azure.microsoft.com/pricing/details/synapse-analytics/). The final total cost for this 1 TB scan would be $5.065. -While the above estimate is for scanning 1TB of data in analytical store, applying filters reduces the volume of data scanned and this determines the exact number of analytical read operations given the consumption pricing model. A proof-of-concept around the analytical workload would provide a more finer estimate of analytical read operations. This estimate doesn't include the cost of Azure Synapse Analytics. +While the above estimate is for scanning 1TB of data in analytical store, applying filters reduces the volume of data scanned and this determines the exact number of analytical read operations given the consumption pricing model. A proof-of-concept around the analytical workload would provide a finer estimate of analytical read operations. This estimate doesn't include the cost of Azure Synapse Analytics. ## Next steps |
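To make the arithmetic behind the 1-TB scan example above explicit, here is a small worked calculation that only reuses the numbers quoted in the entry (filters would reduce the data scanned and therefore both meters); the per-operation rate is derived from those figures rather than taken from the pricing page.

``` python
# Worked example using the figures quoted above for a 1-TB analytical store scan.
analytical_read_ops = 130_000          # operations estimated for scanning 1 TB
cost_per_read_op = 0.065 / 130_000     # derived from the $0.065 figure in the article
synapse_serverless_per_tb = 5.00       # Azure Synapse serverless SQL price per TB scanned

tb_scanned = 1.0
analytical_store_cost = analytical_read_ops * cost_per_read_op * tb_scanned
synapse_cost = synapse_serverless_per_tb * tb_scanned

print(f"Analytical store read cost: ${analytical_store_cost:.3f}")            # $0.065
print(f"Synapse serverless scan cost: ${synapse_cost:.2f}")                    # $5.00
print(f"Total for a 1 TB scan: ${analytical_store_cost + synapse_cost:.3f}")   # $5.065
```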
cosmos-db | Continuous Backup Restore Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md | See [How do customer-managed keys affect continuous backups?](./how-to-setup-cmk Currently the point in time restore functionality has the following limitations: -* Azure Cosmos DB APIs for SQL and MongoDB are supported for continuous backup. API for Cassandra isn't supported now. +* Azure Cosmos DB APIs for SQL, MongoDB, Gremlin, and Table are supported for continuous backup. API for Cassandra isn't supported now. * Multi-region write accounts aren't supported. |
cosmos-db | Get Started Change Data Capture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/get-started-change-data-capture.md | -# Get started with change data capture in the analytical store for Azure Cosmos DB +# Get started with change data capture in the analytical store for Azure Cosmos DB (Preview) [!INCLUDE[NoSQL, MongoDB](includes/appliesto-nosql-mongodb.md)] Use Change data capture (CDC) in Azure Cosmos DB analytical store as a source to [Azure Data Factory](../data-factory/index.yml) or [Azure Synapse Analytics](../synapse-analytics/index.yml) to capture specific changes to your data. ++> [!NOTE] +> The linked service interface for Azure Cosmos DB for MongoDB API isn't available in Data Flow yet. However, you can use your account's document endpoint with the "Azure Cosmos DB for NoSQL" linked service interface as a workaround until the MongoDB linked service is supported. On a NoSQL linked service, choose "Enter Manually" to provide the Cosmos DB account info and use the account's document endpoint (for example, `https://[your-database-account-uri].documents.azure.com:443/`) instead of the MongoDB endpoint (for example, `mongodb://[your-database-account-uri].mongo.cosmos.azure.com:10255/`). + ## Prerequisites - An existing Azure Cosmos DB account. Use Change data capture (CDC) in Azure Cosmos DB analytical store as a source to First, enable Azure Synapse Link at the account level and then enable analytical store for the containers that are appropriate for your workload. -1. Enable Azure Synapse Link: [Enable Azure Synapse Link for an Azure Cosmos DB account](configure-synapse-link.md#enable-synapse-link) | +1. Enable Azure Synapse Link: [Enable Azure Synapse Link for an Azure Cosmos DB account](configure-synapse-link.md#enable-synapse-link) -1. Enable analytical store for your container\[s\]: +1. Enable analytical store for your containers: | Option | Guide | | | | Now create and configure a source to flow data from the Azure Cosmos DB account' | Batchsize in bytes | Specify the size in bytes if you would like to batch the change data capture feeds | | Extra Configs | Extra Azure Cosmos DB analytical store configs and their values. (ex: `spark.cosmos.allowWhiteSpaceInFieldNames -> true`) | +### Working with source options + +When you check any of the `Capture intermediate updates`, `Capture Deletes`, and `Capture Transactional store TTLs` options, your CDC process creates and populates the `__usr_opType` field in the sink with the following values: ++| Value | Description | Option | +| | | | +| 1 | UPDATE | Capture Intermediate updates | +| 2 | INSERT | There isn't an option for inserts; it's on by default | +| 3 | USER_DELETE | Capture Deletes | +| 4 | TTL_DELETE | Capture Transactional store TTLs| + +If you have to differentiate the TTL deleted records from documents deleted by users or applications, you have to check both the `Capture intermediate updates` and `Capture Transactional store TTLs` options. Then adapt your CDC processes, applications, or queries to use `__usr_opType` according to your business needs. + + ## Create and configure sink settings for update and delete operations First, create a straightforward [Azure Blob Storage](../storage/blobs/index.yml) sink and then configure the sink to filter data to only specific operations. |
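As an illustration of how a downstream consumer might use the `__usr_opType` column described in the entry above, the following PySpark sketch reads a hypothetical Parquet sink produced by the CDC data flow and separates user deletes from TTL deletes. The sink path and session setup are placeholders; only the numeric operation codes come from the table above.

``` python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("cdc-optype-demo").getOrCreate()

# Hypothetical path to the Blob Storage / ADLS sink written by the CDC data flow.
sink = spark.read.parquet("abfss://cdc@<storage-account>.dfs.core.windows.net/orders-feed/")

# Operation codes written to __usr_opType (see the table above).
USER_DELETE, TTL_DELETE = 3, 4

user_deletes = sink.filter(col("__usr_opType") == USER_DELETE)
ttl_deletes = sink.filter(col("__usr_opType") == TTL_DELETE)

print("User-initiated deletes:", user_deletes.count())
print("TTL-expired deletes:", ttl_deletes.count())
```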
cosmos-db | Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/introduction.md | Cosmos DB for MongoDB has numerous benefits compared to other MongoDB service of - **Role Based Access Control**: With Azure Cosmos DB for MongoDB, you can assign granular roles and permissions to users to control access to your data and audit user actions- all using native Azure tooling. -- **Flexible single-field indexes**: Unlike single field indexes in MongoDB Atlas, [single field indexes in Cosmos DB for MongoDB](indexing.md) cover multi-field filter queries. There is no need to create compound indexes for each multi-field filter query. This increases developer productivity.- - **In-depth monitoring capabilities**: Cosmos DB for MongoDB integrates natively with [Azure Monitor](../../azure-monitor/overview.md) to provide in-depth monitoring capabilities. ## How Cosmos DB for MongoDB works Cosmos DB for MongoDB implements the wire protocol for MongoDB. This implementat Cosmos DB for MongoDB is compatible with the following MongoDB server versions: -- [Version 5.0 (limited preview)](../access-previews.md)+- [Version 5.0 (vCore preview)](./vcore/quickstart-portal.md) - [Version 4.2](feature-support-42.md) - [Version 4.0](feature-support-40.md) - [Version 3.6](feature-support-36.md) |
cosmos-db | Reference Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-limits.md | be scaled down (decreased). ### Storage size -Up to 2 TiB of storage is supported on coordinator and worker nodes. See the -available storage options and IOPS calculation [above](resources-compute.md) -for node and cluster sizes. +Up to 16 TiB of storage is supported on coordinator and worker nodes in multi-node configuration. Up to 2 TiB of storage is supported for single node configurations. See [the available storage options and IOPS calculation](resources-compute.md) +for various node and cluster sizes. ## Compute |
cosmos-db | Restore Account Continuous Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/restore-account-continuous-backup.md | Before restoring the account, install the [latest version of Azure PowerShell](/ ### <a id="trigger-restore-ps"></a>Trigger a restore operation for API for NoSQL account -The following cmdlet is an example to trigger a restore operation with the restore command by using the target account, source account, location, resource group, and timestamp: +The following cmdlet is an example to trigger a restore operation with the restore command by using the target account, source account, location, resource group, PublicNetworkAccess, and timestamp: ```azurepowershell Restore-AzCosmosDBAccount ` -SourceDatabaseAccountName "SourceDatabaseAccountName" ` -RestoreTimestampInUtc "UTCTime" ` -Location "AzureRegionName"+ -PublicNetworkAccess Disabled ``` Restore-AzCosmosDBAccount ` -SourceDatabaseAccountName "source-sql" ` -RestoreTimestampInUtc "2021-01-05T22:06:00" ` -Location "West US"+ -PublicNetworkAccess Disabled ```+If `PublicNetworkAccess` isn't set, the restored account is accessible from the public network. Make sure to pass Disabled to the `PublicNetworkAccess` option to disable public network access for the restored account. +> [!NOTE] +> For restoring with public network access disabled, you'll need to install the preview version of the Az.CosmosDB PowerShell module by executing `Install-Module -Name Az.CosmosDB -AllowPrerelease`. You also need PowerShell version 5.1. +> **Example 2:** Restoring specific collections and databases. This example restores the collections *MyCol1*, *MyCol2* from *MyDB1* and the entire database *MyDB2*, which includes all the containers under it. ```azurepowershell The simplest way to trigger a restore is by issuing the restore command with nam --restore-timestamp 2020-07-13T16:03:41+0000 \ --resource-group MyResourceGroup \ --location "West US"+ --public-network-access Disabled ```+If `public-network-access` isn't set, the restored account is accessible from the public network. Make sure to pass Disabled to the `public-network-access` option to disable public network access for the restored account. ++> [!NOTE] +> For restoring with public network access disabled, you'll need to install version 0.23.0 of the cosmosdb-preview CLI extension by executing `az extension update --name cosmosdb-preview`. You also need version 2.17.1 of the Azure CLI. + #### Create a new Azure Cosmos DB account by restoring only selected databases and containers from an existing database account |
cost-management-billing | Purchase Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/purchase-recommendations.md | Finally, we present a differentiated set of one-year and three-year recommendati To account for scenarios where there were significant reductions in your usage, including recently decommissioned services, we run more simulations using only the last three days of usage. The lower of the three day and 30-day recommendations are highlighted, even in situations where the 30-day recommendation may appear to provide greater savings. The lower recommendation is to ensure that we don't encourage overcommitment based on stale data. -Recommendations are refreshed several times a day. However, it may take up to five days for the newly purchased savings plans and reservations to begin to be reflected in recommendations. +Note the following points: ++- Recommendations are refreshed several times a day. +- The recommended quantity for a scope is reduced on the same day that you purchase a savings plan for the scope. However, an update for the savings plan recommendation across scopes can take up to 25 days. + - For example, if you purchase based on shared scope recommendations, the single subscription scope recommendations can take up to 25 days to adjust down. ## Recommendations in Azure Advisor The minimum hourly commitment must be at least equal to the outstanding amount d As part of the trade in, the outstanding commitment is automatically included in your new savings plan. We do it by dividing the outstanding commitment by the number of hours in the term of the new savings plan. For example, 24 times the term length in days. And by making the value the minimum hourly commitment you can make during as part of the trade-in. Using the previous example, the $250 amount would be converted into an hourly commitment of about $0.029 for a new one-year savings plan. -If you're trading multiple reservations, the aggregate outstanding commitment is used. You may choose to increase the value, but you can't decrease it. The new savings plan will be used to cover usage of eligible resources. +If you're trading multiple reservations, the aggregate outstanding commitment is used. You may choose to increase the value, but you can't decrease it. The new savings plan is used to cover usage of eligible resources. The minimum value doesn't necessarily represent the hourly commitment necessary to cover the resources that were covered by the exchanged reservation. If you want to cover those resources, you'll most likely have to increase the hourly commitment. To determine the appropriate hourly commitment: |
data-factory | How To Schedule Azure Ssis Integration Runtime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-schedule-azure-ssis-integration-runtime.md | You will need an instance of Azure Data Factory to implement this walk through. If you have not provisioned your Azure-SSIS IR already, provision it by following instructions in the [tutorial](./tutorial-deploy-ssis-packages-azure.md). ## Create and schedule ADF pipelines that start and or stop Azure-SSIS IR+> [!NOTE] +> This section is not supported for Azure-SSIS in **Azure Synapse** with [data exfiltration protection](/azure/synapse-analytics/security/workspace-data-exfiltration-protection) enabled. + This section shows you how to use Web activities in ADF pipelines to start/stop your Azure-SSIS IR on schedule or start & stop it on demand. We will guide you to create three pipelines: 1. The first pipeline contains a Web activity that starts your Azure-SSIS IR. If you create a third trigger that is scheduled to run daily at midnight and ass 2. In the **Activities** toolbox, expand **General** menu, and drag & drop a **Web** activity onto the pipeline designer surface. In **General** tab of the activity properties window, change the activity name to **startMyIR**. Switch to **Settings** tab, and do the following actions: > [!NOTE]- > For Azure-SSIS in Azure Synapse, use corresponding Azure Synapse REST API to [Get Integration Runtime status](/rest/api/synapse/integration-runtimes/get), [Start Integration Runtime](/rest/api/synapse/integration-runtimes/start) and [Stop Integration Runtime](/rest/api/synapse/integration-runtimes/stop). + > For Azure-SSIS in **Azure Synapse**, use corresponding Azure Synapse REST API to [Get Integration Runtime status](/rest/api/synapse/integration-runtimes/get), [Start Integration Runtime](/rest/api/synapse/integration-runtimes/start) and [Stop Integration Runtime](/rest/api/synapse/integration-runtimes/stop). 1. For **URL**, enter the following URL for REST API that starts Azure-SSIS IR, replacing `{subscriptionId}`, `{resourceGroupName}`, `{factoryName}`, and `{integrationRuntimeName}` with the actual values for your IR: `https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataFactory/factories/{factoryName}/integrationRuntimes/{integrationRuntimeName}/start?api-version=2018-06-01`. Alternatively, you can also copy & paste the resource ID of your IR from its monitoring page on ADF UI/app to replace the following part of the above URL: `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataFactory/factories/{factoryName}/integrationRuntimes/{integrationRuntimeName}` |
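The Web activity described in the entry above simply issues an authenticated POST against the ADF REST API. If you prefer to try the same call outside a pipeline, a sketch like the following exercises the identical start endpoint. It assumes the `azure-identity` and `requests` packages, and the subscription, resource group, factory, and IR names are placeholders you replace with your own values.

``` python
import requests
from azure.identity import DefaultAzureCredential

# Placeholder identifiers - replace with the values for your factory and IR.
subscription_id = "<subscriptionId>"
resource_group = "<resourceGroupName>"
factory = "<factoryName>"
ir_name = "<integrationRuntimeName>"

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.DataFactory"
    f"/factories/{factory}/integrationRuntimes/{ir_name}/start?api-version=2018-06-01"
)

# Acquire an ARM token; locally this can fall back to Azure CLI or VS Code credentials.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

response = requests.post(url, headers={"Authorization": f"Bearer {token}"}, timeout=60)
response.raise_for_status()
print(response.status_code, response.text)
```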
data-manager-for-agri | Concepts Ingest Sensor Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-ingest-sensor-data.md | Gateways collect all essential data from the nodes and push it securely to the c In addition to the above approach, IOT devices (sensors/nodes/gateway) can directly push the data to IOTHub endpoint. In both cases, the data first reaches the IOTHub, post that the next set of processing happens. ->:::image type="content" source="./media/sensor-data-flow-new.png" alt-text="Screenshot showing sensor data flow."::: ## Sensor topology The following diagram depicts the topology of a sensor in Azure Data Manager for Agriculture. Each boundary under a party has a set of devices placed within it. A device can be either be a node or a gateway and each device has a set of sensors associated with it. Sensors send the recordings via gateway to the cloud. Sensors are tagged with GPS coordinates helping in creating a geospatial time series for all measured data. ->:::image type="content" source="./media/sensor-topology-new.png" alt-text="Screenshot showing sensor topology."::: ## Next steps |
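To make the "devices push data to the IoT Hub endpoint" step in the entry above concrete, here is a minimal sketch using the `azure-iot-device` SDK. The connection string and payload shape are placeholders; the actual telemetry schema depends on the sensor data model agreed with Azure Data Manager for Agriculture.

``` python
import json
from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder device connection string issued for the sensor/node/gateway device.
CONNECTION_STRING = "HostName=<iothub-name>.azure-devices.net;DeviceId=<device-id>;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)

# Example soil-moisture reading; field names here are illustrative only.
reading = {"sensorId": "sm-001", "timestamp": "2023-04-18T10:00:00Z", "moisture": 27.4}

client.connect()
client.send_message(Message(json.dumps(reading)))
client.disconnect()
print("Telemetry sent to IoT Hub")
```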
data-manager-for-agri | Concepts Isv Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-isv-solutions.md | The agriculture industry is going through a significant technology transformatio The solution framework is built on top of Data Manager for Agriculture that provides extensibility capabilities. It enables our Independent Software Vendor (ISV) partners to apply their deep domain knowledge and develop specialized domain specific industry solutions to top of the core platform. The solution framework provides below capabilities: ->:::image type="content" source="./media/solution-framework-isv-1.png" alt-text="Screenshot showing ISV solution framework."::: * Enables ISV Partners to easily build industry specific solutions to top of Data Manager for Agriculture. * Helps ISVs generate revenue by monetizing their solution and publishing it on the Azure Marketplace* Provides simplified onboarding experience for ISV Partners and customers. |
data-manager-for-agri | Concepts Understanding Throttling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-understanding-throttling.md | + + Title: API throttling guidance for customers using Azure Data Manager for Agriculture. +description: Provides information on API throttling limits to plan usage. ++++ Last updated : 04/18/2023++++# API throttling guidance for Azure Data Manager for Agriculture. ++The API throttling in Azure Data Manager for Agriculture allows more consistent performance within a time span for customers calling our service APIs. Throttling limits, the number of requests to our service in a time span to prevent overuse of resources. Azure Data Manager for Agriculture is designed to handle a high volume of requests, if an overwhelming number of requests occur by few customers, throttling helps maintain optimal performance and reliability for all customers. ++Throttling limits vary based on product type and capabilities being used. Currently we have two versions, standard and basic (for your POC needs). ++## DPS API limits ++Throttling category | Units available per Standard version| Units available per Basic version | +|:|:|:| +Per Minute | 25,000 | 25,000 | +Per 5 Minutes| 100,000| 100,000 | +Per Month| 25,000,000| 5,000,000| ++### Maximum requests allowed per type for standard version +API Type| Per minute| Per 5 minutes| Per month| +|:|:|:|:| +PUT |5,000 |20,000 |5,000,000 +PATCH |5,000 |20,000 |5,000,000 +POST |5,000 |20,000 |5,000,000 +DELETE |5,000 |20,000 |5,000,000 +GET (single object) |25,000 |100,000 |25,000,000 +LIST with paginated response |25,000 results |100,000 results |25,000,000 results ++### Maximum requests allowed per type for basic version +API Type| Per minute| Per 5 minutes| Per month| +|:|:|:|:| +PUT |5,000 |20,000 |1,000,000 +PATCH |5,000 |20,000 |1,000,000 +POST |5,000 |20,000 |1,000,000 +DELETE |5,000 |20,000 |1,000,000 +GET (single object) |25,000 |100,000 |5,000,000 +LIST with paginated response |25,000 results |100,000 results |5,000,000 results ++### Throttling cost by API type +API Type| Cost per request| +|:|::| +PUT |5 +PATCH |5 +POST |5 +DELETE |5 +GET (single object) |1 +GET Sensor Events |1 + 0.01 per result +LIST with paginated response |1 per request + 1 per result ++## Jobs create limits per instance of our service +The maximum queue size for each job type is 10,000. ++### Total units available +Throttling category| Units available per Standard version| Units available per Basic version| +|:|:|:| +Per 5 Minutes |1,000 |1,000 +Per Month |1,000,000 |200,000 +++### Maximum create job requests allowed for standard version +Job Type| Per 5 mins| Per month| +|:|:|:| +Cascade delete| 1,000| 500,000 +Satellite| 1,000| 500,000 +Model inference| 200| 100,000 +Farm Operation| 200| 100,000 +Rasterize| 500| 250,000 +Weather| 500| 250,000 +++### Maximum create job requests allowed for basic version +Job Type| Per 5 mins| Per month +|:|:|:| +Cascade delete| 1,000| 100,000 +Satellite| 1,000| 100,000 +Model inference| 200| 20,000 +Farm Operation| 200| 20,000 +Rasterize| 500| 50,000 +Weather| 500| 50,000 ++### Sensor events limits +100,000 event ingestion per hour by our sensor job. ++## Error code +When you reach the limit, you receive the HTTP status code **429 Too many requests**. The response includes a **Retry-After** value, which specifies the number of seconds your application should wait (or sleep) before sending the next request. 
If you send a request before the retry value has elapsed, your request isn't processed and a new retry value is returned. ++After waiting for the specified time, you can also close and reopen your connection to Azure Data Manager for Agriculture. ++## Next steps +* See the Hierarchy Model and learn how to create and organize your agriculture data [here](./concepts-hierarchy-model.md). +* Understand our APIs [here](/rest/api/data-manager-for-agri). |
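A client that respects the `Retry-After` value described in the throttling entry above can be as simple as the following sketch. The endpoint and token are placeholders; the loop just waits the number of seconds returned in the header before retrying.

``` python
import time
import requests

def call_with_retry(url: str, token: str, max_attempts: int = 5) -> requests.Response:
    """GET a Data Manager for Agriculture endpoint, honoring 429 Retry-After hints."""
    headers = {"Authorization": f"Bearer {token}"}
    for attempt in range(1, max_attempts + 1):
        response = requests.get(url, headers=headers, timeout=30)
        if response.status_code != 429:
            return response
        # The service indicates how long to sleep before the next request.
        wait_seconds = int(response.headers.get("Retry-After", "1"))
        print(f"Throttled (attempt {attempt}); sleeping {wait_seconds}s")
        time.sleep(wait_seconds)
    raise RuntimeError("Still throttled after maximum retry attempts")

# Placeholder endpoint and token for illustration only.
# resp = call_with_retry("https://<your-data-manager-endpoint>/parties?api-version=<version>", "<token>")
```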
data-manager-for-agri | How To Set Up Isv Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-isv-solution.md | Once you've installed an ISV solution from Azure portal, use this document to un A high level view of how you can create a new request and get responses from the ISV partners solution: ->:::image type="content" source="./media/3p-solutions-new.png" alt-text="Screenshot showing access flow for ISV API."::: * Step 1: You make an API call for a PUT request with the required parameters (for example Job ID, Farm details) * The Data Manager API receives this request and authenticates it. If the request is invalid, you'll get an error code back. |
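The request/response flow in the entry above boils down to an authenticated PUT followed by polling (or an event) for the result. Purely as an illustration, and with every URL path, field name, and value below being a hypothetical placeholder rather than the actual ISV solution API, such a call could look like this:

``` python
import requests

# All values below are placeholders for illustration; consult the ISV solution's API reference.
endpoint = "https://<your-data-manager-endpoint>"
token = "<bearer token>"
job_id = "job-1234"

payload = {"farmId": "farm-001", "parameters": {"cropType": "corn"}}  # hypothetical request body

response = requests.put(
    f"{endpoint}/<isv-solution-path>/{job_id}?api-version=<version>",  # hypothetical route
    json=payload,
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
print(response.status_code, response.json() if response.content else "")
```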
data-manager-for-agri | How To Set Up Sensors Customer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-sensors-customer.md | To start using the on-boarded sensor partners, you need to give consent to the s 5. Now, look for `Davis Instruments WeatherLink Data Manager for Agriculture Connector` under All Applications tab in `App Registrations` page (illustrated with a generic Partner in the image). - >:::image type="content" source="./media/sensor-partners.png" alt-text="Screenshot showing the partners message."::: + :::image type="content" source="./media/sensor-partners.png" alt-text="Screenshot showing the partners message."::: 6. Copy the Application (client) ID for the specific partner app that you want to provide access to. Log in to <a href="https://portal.azure.com" target=" blank">Azure portal</a> an You find the IAM (Identity Access Management) menu option on the left hand side of the option pane as shown in the image: ->:::image type="content" source="./media/role-assignment-1.png" alt-text="Screenshot showing role assignment."::: Click **Add > Add role assignment**, this action opens up a pane on the right side of the portal, choose the role from the dropdown: To complete the role assignment, do the following steps: 4. Click **Save** to assign the role. ->:::image type="content" source="./media/sensor-partner-role.png" alt-text="Screenshot showing app selection for authorization."::: This step ensures that the sensor partner app has been granted access (based on the role assigned) to Azure Data Manager for Agriculture Resource. |
data-manager-for-agri | How To Set Up Sensors Partner | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-sensors-partner.md | The below section of this document talks about the onboarding steps needed to in Onboarding covers the steps required by both customers & partners to integrate with Data Manager for Agriculture and start receiving/sending sensor telemetry respectively. ->:::image type="content" source="./media/sensor-partners-flow.png" alt-text="Screenshot showing sensor partners flow."::: From the above figure, the blocks highlighted in white are the steps taken by a partner, and the ones highlighted in black are done by customers. |
data-manager-for-agri | How To Use Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-use-events.md | + + Title: Azure Data Manager for Agriculture events with Azure Event Grid. +description: Learn about properties that are provided for Azure Data Manager for Agriculture events with Azure Event Grid. ++++ Last updated : 04/18/2023++++# Azure Data Manager for Agriculture Preview as Event Grid source ++This article provides the properties and schema for Azure Data Manager for Agriculture events. For an introduction to event schemas, see [Azure Event Grid](https://learn.microsoft.com/azure/event-grid/event-schema) event schema. ++## Prerequisites ++Complete the following prerequisites before you begin deploying the Events feature in Azure Data Manager for Agriculture. ++* [An active Azure account](https://azure.microsoft.com/free/search/?OCID=AID2100131_SEM_c4b0772dc7df1f075552174a854fd4bc:G:s&ef_id=c4b0772dc7df1f075552174a854fd4bc:G:s&msclkid=c4b0772dc7df1f075552174a854fd4bc) +* [Microsoft Azure Event Hubs namespace and an event hub deployed in the Azure portal](../event-hubs/event-hubs-create.md) ++## Reacting to Data Manager for Agriculture events ++Data Manager for Agriculture events allow applications to react to the creation, deletion, and updating of resources. Data Manager for Agriculture events are pushed using <a href="https://azure.microsoft.com/services/event-grid/" target="_blank"> Azure Event Grid</a>. ++Azure Functions, Azure Logic Apps, or even your own HTTP listener can subscribe to these events. Azure Event Grid provides reliable event delivery to your applications through rich retry policies and dead-lettering. ++Here are example scenarios for consuming events in our service: +1. When downloading satellite or weather data or executing jobs, you can use events to respond to changes in job status. You can minimize long polling and decrease the number of API calls to the service. You can also get prompt notification of job completion. All our asynchronous ingestion jobs are capable of supporting events. ++> [!NOTE] +> Events related to ISV solutions flow are not currently supported. ++2. If there are modifications to data-plane resources such as parties, fields, farms, and other similar elements, you can react to the changes and trigger workflows. ++## Filtering events +You can filter Data Manager for Agriculture <a href="https://docs.microsoft.com/cli/azure/eventgrid/event-subscription" target="_blank"> events </a> by event type, subject, or fields in the data object. Filters in Event Grid match the beginning or end of the subject so that events that match can go to the subscriber. ++For instance, for the PartyChanged event, to receive notifications for changes for a particular party with ID Party1234, you may use the subject filter "EndsWith" as shown: ++EndsWith- /Party1234 +The subject for this event is of the format +```"/parties/Party1234"``` ++Subjects in an event schema provide 'starts with' and 'exact match' filters as well. ++Similarly, to filter the same event for a group of party IDs, use the Advanced filter on the partyId field in the event data object. In a single subscription, you may add five advanced filters with a limit of 25 values for each key filtered. ++To learn more about how to apply filters, see <a href = "https://docs.microsoft.com/azure/event-grid/how-to-filter-events" target = "_blank"> filter events for Event Grid. 
</a> ++## Subscribing to events +You can subscribe to Data Manager for Agriculture events by using Azure portal or Azure Resource Manager client. Each of these provide the user with a set of functionalities. Refer to following resources to know more about each method. ++<a href = "https://docs.microsoft.com/azure/event-grid/subscribe-through-portal#:~:text=Create%20event%20subscriptions%201%20Select%20All%20services.%202,event%20types%20option%20checked.%20...%20More%20items..." target = "_blank"> Subscribe to events using portal </a> ++<a href = "https://docs.microsoft.com/azure/event-grid/sdk-overview" target = "_blank"> Subscribe to events using the ARM template client </a> ++## Practices for consuming events ++Applications that handle Data Manager for Agriculture events should follow a few recommended practices: ++* Check that the eventType is one you're prepared to process, and don't assume that all events you receive are the types you expect. +* As messages can arrive out of order, use the modifiedTime and etag fields to understand the order of events for any particular object. +* Data Manager for Agriculture events guarantees at-least-once delivery to subscribers, which ensures that all messages are outputted. However due to retries or availability of subscriptions, duplicate messages may occasionally occur. To learn more about message delivery and retry, see <a href = "https://docs.microsoft.com/azure/event-grid/delivery-and-retry" target = "_blank">Event Grid message delivery and retry </a> +* Ignore fields you don't understand. This practice will help keep you resilient to new features that might be added in the future. +++### Available event types ++|Event Name | Description| +|:--|:-| +|Microsoft.AgFoodPlatform.PartyChanged|Published when a party is created /updated/deleted in an Azure Data Manager for Agriculture resource +|Microsoft.AgFoodPlatform.FarmChangedV2| Published when a farm is created /updated/deleted in an Azure Data Manager for Agriculture resource +|Microsoft.AgFoodPlatform.FieldChangedV2|Published when a Field is created /updated/deleted in an Azure Data Manager for Agriculture resource +|Microsoft.AgFoodPlatform.SeasonalFieldChangedV2|Published when a Seasonal Field is created /updated/deleted in an Azure Data Manager for Agriculture resource +|Microsoft.AgFoodPlatform.BoundaryChangedV2|Published when a farm is created /updated/deleted in an Azure Data Manager for Agriculture resource +|Microsoft.AgFoodPlatform.CropChanged|Published when a Crop is created /updated/deleted in an Azure Data Manager for Agriculture resource +|Microsoft.AgFoodPlatform.CropProductChanged|Published when a Crop Product is created /updated/deleted in an Azure Data Manager for Agriculture resource +|Microsoft.AgFoodPlatform.SeasonChanged|Published when a Season is created /updated/deleted in an Azure Data Manager for Agriculture resource +|Microsoft.AgFoodPlatform.SatelliteDataIngestionJobStatusChangedV2| Published when a satellite data ingestion job's status changes, for example, job is created, has progressed or completed. +|Microsoft.AgFoodPlatform.WeatherDataIngestionJobStatusChangedV2|Published when a weather data ingestion job's status changes, for example, job is created, has progressed or completed. +|Microsoft.AgFoodPlatform.WeatherDataRefresherJobStatusChangedV2| Published when Weather Data Refresher job status is changed. 
+|Microsoft.AgFoodPlatform.SensorMappingChangedV2|Published when Sensor Mapping is changed +|Microsoft.AgFoodPlatform.SensorPartnerIntegrationChangedV2|Published when Sensor Partner Integration is changed +|Microsoft.AgFoodPlatform.DeviceDataModelChanged|Published when Device Data Model is changed +|Microsoft.AgFoodPlatform.DeviceChanged|Published when Device is changed +|Microsoft.AgFoodPlatform.SensorDataModelChanged|Published when Sensor Data Model is changed +|Microsoft.AgFoodPlatform.SensorChanged|Published when Sensor is changed +|Microsoft.AgFoodPlatform.FarmOperationDataIngestionJobStatusChangedV2| Published when a farm operations data ingestion job's status changes, for example, job is created, has progressed or completed. +|Microsoft.AgFoodPlatform.ApplicationDataChangedV2|Published when Application Data is created /updated/deleted in an Azure Data Manager for Agriculture resource +|Microsoft.AgFoodPlatform.HarvestDataChangedV2|Published when Harvesting Data is created /updated/deleted in an Azure Data Manager for Agriculture resource +|Microsoft.AgFoodPlatform.TillageDataChangedV2|Published when Tillage Data is created /updated/deleted in an Azure Data Manager for Agriculture resource +|Microsoft.AgFoodPlatform.PlantingDataChangedV2|Published when Planting Data is created /updated/deleted in an Azure Data Manager for Agriculture resource +|Microsoft.AgFoodPlatform.AttachmentChangedV2|Published when an attachment is created/updated/deleted. +|Microsoft.AgFoodPlatform.ZoneChangedV2|Published when a zone is created/updated/deleted. +|Microsoft.AgFoodPlatform.ManagementZoneChangedV2|Published when a management zone is created/updated/deleted. +|Microsoft.AgFoodPlatform.PrescriptionChangedV2|Published when a prescription is created/updated/deleted. +|Microsoft.AgFoodPlatform.PrescriptionMapChangedV2|Published when a prescription map is created/updated/deleted. +|Microsoft.AgFoodPlatform.PlantTissueAnalysisChangedV2|Published when plant tissue analysis data is created/updated/deleted. +|Microsoft.AgFoodPlatform.NutrientAnalysisChangedV2|Published when nutrient analysis data is created/updated/deleted. +|Microsoft.AgFoodPlatform.ImageProcessingRasterizeJobStatusChangedV2|Published when an image processing rasterize job status changes, for example, job is created, has progressed or completed. +|Microsoft.AgFoodPlatform.InsightChangedV2| Published when Insight is created/updated/deleted. +|Microsoft.AgFoodPlatform.InsightAttachmentChangedV2| Published when Insight Attachment is created/updated/deleted. +|Microsoft.AgFoodPlatform.BiomassModelJobStatusChangedV2|Published when Biomass Model job status is changed +|Microsoft.AgFoodPlatform.SoilMoistureModelJobStatusChangedV2|Published when Soil Moisture Model job status is changed +|Microsoft.AgFoodPlatform.SensorPlacementModelJobStatusChangedV2|Published when Sensor Placement Model Job status is changed +++### Event properties ++Each Azure Data Manager for Agriculture event has two parts, the first part is common across events and the second, data object contains properties specific to each event. ++The part common across events is elaborated in the **Event Grid event schema** and has the following top-level data: ++|Property | Type| Description| +|:--| :-| :-| +topic| string| Full resource path to the event source. This field isn't writeable. Event Grid provides this value. +subject| string| Publisher-defined path to the event subject. +eventType | string| One of the registered event types for this event source. 
+eventTime| string| The time the event is generated based on the provider's UTC time. +| ID | string| Unique identifier for the event. +data| object| Data object with properties specific to each event type. +dataVersion| string| The schema version of the data object. The publisher defines the schema version. +metadataVersion| string| The schema version of the event metadata. Event Grid defines the schema of the top-level properties. Event Grid provides this value. ++For party, season, crop, and crop product changed events, the data object contains the following properties: ++|Property | Type| Description| +|:--| :-| :-| +| ID | string| Unique ID of resource. +actionType| string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted +properties| Object| It contains user defined key-value pairs. +modifiedDateTime|string| Indicates the time at which the event was last modified. +createdDateTime| string| Indicates the time at which the resource was created. +status| string| Contains the user defined status of the object. +eTag| string| Implements optimistic concurrency. +description| string| Textual description of the resource. +name| string| Name to identify resource. ++For farm events, the data object contains the following properties: ++|Property | Type| Description| +|:--| :-| :-| +| ID | string| Unique ID of resource. +actionType| string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted +properties| Object| It contains user defined key-value pairs. +modifiedDateTime|string| Indicates the time at which the event was last modified. +createdDateTime| string| Indicates the time at which the resource was created. +status| string| Contains the user defined status of the object. +eTag| string| Implements optimistic concurrency. +description| string| Textual description of the resource. +name| string| Name to identify resource. +partyId| string| ID of the party it belongs to. ++For device data model and sensor data model events, the data object contains the following properties: ++|Property | Type| Description| +|:--| :-| :-| +sensorPartnerId| string| ID associated with the sensorPartner. +| ID | string| Unique ID of resource. +actionType| string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted +properties| Object| It contains user defined key-value pairs. +modifiedDateTime|string| Indicates the time at which the event was last modified. +createdDateTime| string| Indicates the time at which the resource was created. +status| string| Contains the user defined status of the object. +eTag| string| Implements optimistic concurrency. +description| string| Textual description of the resource. +name| string| Name to identify resource. ++For device events, the data object contains the following properties: ++|Property | Type| Description| +|:--| :-| :-| +deviceDataModelId| string| ID associated with the deviceDataModel. +integrationId| string| ID associated with the integration. +sensorPartnerId| string| ID associated with the sensorPartner. +| ID | string| Unique ID of resource. +actionType| string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted +properties| Object| It contains user defined key-value pairs. +modifiedDateTime|string| Indicates the time at which the event was last modified. +createdDateTime| string| Indicates the time at which the resource was created.
+status| string| Contains the user defined status of the object. +eTag| string| Implements optimistic concurrency. +description| string| Textual description of the resource. +name| string| Name to identify resource. ++For sensor events, the data object contains the following properties: ++|Property | Type| Description| +|:--| :-| :-| +sensorDataModelId| string| ID associated with the sensorDataModel. +integrationId| string| ID associated with the integration. +deviceId| string| ID associated with the device. +sensorPartnerId| string| ID associated with the sensorPartner. +| ID | string| Unique ID of resource. +actionType| string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted +properties| Object| It contains user defined key-value pairs. +modifiedDateTime|string| Indicates the time at which the event was last modified. +createdDateTime| string| Indicates the time at which the resource was created. +status| string| Contains the user defined status of the object. +eTag| string| Implements optimistic concurrency. +description| string| Textual description of the resource. +name| string| Name to identify resource. ++For sensor mapping events, the data object contains the following properties: ++|Property | Type| Description| +|:--| :-| :-| +sensorId| string| ID associated with the sensor. +partyId| string| ID associated with the party. +boundaryId| string| ID associated with the boundary. +sensorPartnerId| string| ID associated with the sensorPartner. +| ID | string| Unique ID of resource. +actionType| string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted +properties| Object| It contains user defined key-value pairs. +modifiedDateTime|string| Indicates the time at which the event was last modified. +createdDateTime| string| Indicates the time at which the resource was created. +status| string| Contains the user defined status of the object. +eTag| string| Implements optimistic concurrency. +description| string| Textual description of the resource. +name| string| Name to identify resource. ++For sensor partner integration events, the data object contains the following properties: ++|Property | Type| Description| +|:--| :-| :-| +integrationId| string| ID associated with the integration. +partyId| string| ID associated with the party. +sensorPartnerId| string| ID associated with the sensorPartner. +| ID | string| Unique ID of resource. +actionType| string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted +properties| Object| It contains user defined key-value pairs. +modifiedDateTime|string| Indicates the time at which the event was last modified. +createdDateTime| string| Indicates the time at which the resource was created. +status| string| Contains the user defined status of the object. +eTag| string| Implements optimistic concurrency. +description| string| Textual description of the resource. +name| string| Name to identify resource. ++Boundary events have the following data object: ++|Property |Type |Description | +|:--|:--|:--| +| ID | string | User defined ID of boundary | +|actionType | string | Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted. | +|modifiedDateTime | string | Indicates the time at which the event was last modified. | +|createdDateTime | string | Indicates the time at which the resource was created.
| +|status | string | Contains the user defined status of the object. | +|eTag | string | Implements optimistic concurrency. | +|partyId | string | ID of the party it belongs to. | +|parentId | string | ID of the parent the boundary belongs to. | +|parentType | string | Type of the parent the boundary belongs to. Applicable values are Field, SeasonalField, Zone, Prescription, PlantTissueAnalysis, ApplicationData, PlantingData, TillageData, HarvestData etc. | +|description | string | Textual description of the resource. | +|properties | string | It contains user defined key-value pairs. | ++Seasonal field events have the following data object: ++Property| Type| Description +|:--| :-| :-| +ID | string| User defined ID of the seasonal field +farmId| string| User defined ID of the farm that seasonal field is associated with. +partyId| string| ID of the party it belongs to. +seasonId| string| User defined ID of the season that seasonal field is associated with. +fieldId| string| User defined ID of the field that seasonal field is associated with. +name| string| User defined name of the seasonal field. +actionType| string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted. +properties| Object| It contains user defined key-value pairs. +modifiedDateTime|string| Indicates the time at which the event was last modified. +createdDateTime| string| Indicates the time at which the resource was created. +status| string| Contains the user defined status of the object. +eTag| string| Implements optimistic concurrency. +description| string| Textual description of the resource. ++Insight events have the following data object: ++Property| Type| Description +|:--| :-| :-| +modelId| string| ID of the associated model.| +resourceId| string| User-defined ID of the resource such as farm, field, boundary etc.| +resourceType| string | Name of the resource type. Applicable values are Party, Farm, Field, SeasonalField, Boundary etc.| +partyId| string| ID of the party it belongs to.| +modelVersion| string| Version of the associated model.| +ID | string| User defined ID of the resource.| +status| string| Contains the status of the job. | +actionType|string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted. | +modifiedDateTime| date-time| Date-time when resource was last modified, sample format: yyyy-MM-ddTHH:mm:ssZ.| +createdDateTime| date-time| Date-time when resource was created, sample format: yyyy-MM-ddTHH:mm:ssZ.| +eTag| string| Implements optimistic concurrency.| +description| string| Textual description of the resource.| +name| string| User-defined name of the resource.| +properties| object| A list of key value pairs that describe the resource. Only string and numerical values are supported.| ++InsightAttachment events have the following data object: ++Property| Type| Description +|:--| :-| :-| +modelId| string| ID of the associated model. +resourceId| string| User-defined ID of the resource such as farm, field, boundary etc. +resourceType| string | Name of the resource type. +partyId| string| ID of the party it belongs to. +insightId| string| ID associated with the insight resource. +ID | string| User defined ID of the resource. +status| string| Contains the status of the job. +actionType|string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted.
+modifiedDateTime| date-time|Date-time when resource was last modified, sample format: yyyy-MM-ddTHH:mm:ssZ. +createdDateTime|date-time|Date-time when resource was created, sample format: yyyy-MM-ddTHH:mm:ssZ. +eTag| string| Implements optimistic concurrency. +description|string| Textual description of the resource. +name| string| User-defined name of the resource. +properties| object| A list of key value pairs that describe the resource. Only string and numerical values are supported. ++Field events have the following data object: ++Property| Type| Description +|:--| :-| :-| +| ID | string| User defined ID of the field. +farmId| string| User defined ID of the farm that field is associated with. +partyId| string| ID of the party it belongs to. +name| string| User defined name of the field. +actionType| string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted. +properties| Object| It contains user defined key-value pairs. +modifiedDateTime|string|Indicates the time at which the event was last modified. +createdDateTime| string| Indicates the time at which the resource was created. +status| string| Contains the user defined status of the object. +eTag| string| Implements optimistic concurrency. +description| string| Textual description of the resource. ++ImageProcessingRasterizeJobStatusChanged event has the following data object: ++Property| Type| Description +|:--| :-| :-| +shapefileAttachmentId | string|User-defined ID of the associated shapefile. +partyId|string| Party ID for which job was created. +| ID |string| Unique ID of the job. +name| string| User-defined name of the job. +status|string|Various states a job can be in. Applicable values are Waiting, Running, Succeeded, Failed, Canceled etc. +isCancellationRequested| boolean|Flag that gets set when job cancellation is requested. +description|string| Textual description of the job. +message|string| Status message to capture more details of the job. +lastActionDateTime|date-time|Date-time when last action was taken on the job, sample format: yyyy-MM-ddTHH:mm:ssZ. +createdDateTime|date-time|Date-time when resource was created, sample format: yyyy-MM-ddTHH:mm:ssZ. +properties| Object| It contains user defined key-value pairs. ++SatelliteDataIngestionJobChanged, WeatherDataIngestionJobChanged, WeatherDataRefresherJobChanged, BiomassModelJobStatusChanged, SoilMoistureModelJobStatusChanged, and FarmOperationDataIngestionJobChanged events have the following data object: ++Property| Type| Description +|:--| :-| :-| +| ID |string| Unique ID of the job. +name| string| User-defined name of the job. +status|string|Various states a job can be in. +isCancellationRequested| boolean|Flag that gets set when job cancellation is requested. +description|string| Textual description of the job. +partyId|string| Party ID for which job was created. +message|string| Status message to capture more details of the job. +lastActionDateTime|date-time|Date-time when last action was taken on the job, sample format: yyyy-MM-ddTHH:mm:ssZ. +createdDateTime|date-time|Date-time when resource was created, sample format: yyyy-MM-ddTHH:mm:ssZ. +properties| Object| It contains user defined key-value pairs. ++Farm operations data events such as application data, harvesting data, planting data, and tillage data have the following data object: ++Property| Type| Description +|:--| :-| :-| +| ID | string| Unique ID of resource. +status| string| Contains the user defined status of the resource.
+partyId| string| ID of the party it belongs to. +source| string| Message from Azure Data Manager for Agriculture giving details about the job. +modifiedDateTime| string| Indicates the time at which the event was last modified. +createdDateTime| string| Indicates the time at which the resource was created. +eTag| string| Implements optimistic concurrency. +name| string| Name to identify resource. +description| string| Textual description of the resource. +actionType| string|Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted. +properties| Object| It contains user defined key-value pairs. +++AttachmentChanged event has the following data object: ++Property| Type| Description +|:--| :-| :-| +resourceId| string| User-defined ID of the resource such as farm, field, boundary etc. +resourceType| string | Name of the resource type. +partyId| string| ID of the party it belongs to. +| ID | string| User defined ID of the resource. +status| string| Contains the status of the job. +actionType|string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted. +modifiedDateTime| date-time|Date-time when resource was last modified, sample format: yyyy-MM-ddTHH:mm:ssZ. +createdDateTime|date-time|Date-time when resource was created, sample format: yyyy-MM-ddTHH:mm:ssZ. +eTag| string| Implements optimistic concurrency. +description|string| Textual description of the resource. +name| string| User-defined name of the resource. +++ZoneChanged event has the following data object: ++Property| Type| Description +|:--| :-| :-| +managementZoneId| string | Management Zone ID associated with the zone. +partyId| string | ID of the party it belongs to. +| ID | string| User-defined ID of the zone. +status| string| Contains the user defined status of the resource. +actionType| string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted. +modifiedDateTime| date-time|Date-time when resource was last modified, sample format: yyyy-MM-ddTHH:mm:ssZ. +createdDateTime|date-time|Date-time when resource was created, sample format: yyyy-MM-ddTHH:mm:ssZ. +eTag| string| Implements optimistic concurrency. +description|string| Textual description of the resource. +name| string| User-defined name of the resource. +properties| object| A list of key value pairs that describe the resource. Only string and numerical values are supported. ++PrescriptionChanged event has the following data object: ++|Property | Type| Description| +|:--| :-| :-| +prescriptionMapId|string| User-defined ID of the associated prescription map. +partyId| string|ID of the party it belongs to. +| ID | string| User-defined ID of the prescription. +actionType| string| Indicates the change that triggered publishing of the event. Applicable values are Created, Updated, Deleted +status| string| Contains the user-defined status of the prescription. +properties| object| It contains user-defined key-value pairs. +modifiedDateTime| date-time|Date-time when resource was last modified, sample format: yyyy-MM-ddTHH:mm:ssZ. +createdDateTime|date-time|Date-time when resource was created, sample format: yyyy-MM-ddTHH:mm:ssZ. +eTag| string| Implements optimistic concurrency. +description| string| Textual description of the resource. +name| string| User-defined name of the prescription.
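As an illustration of the PrescriptionChanged data object above and the consumption practices described earlier, the following minimal Python sketch (a hypothetical handler with an illustrative payload, not an official SDK sample) checks the event type, skips duplicate deliveries by tracking the id and eTag pair, and reads only the documented fields:

```python
import json

# Event type filter; the name matches the table of available event types above.
HANDLED_EVENT_TYPE = "Microsoft.AgFoodPlatform.PrescriptionChangedV2"

def handle_event(event, seen):
    """Process one Event Grid notification that has already been parsed from JSON."""
    # Recommended practice: only process event types you expect.
    if event.get("eventType") != HANDLED_EVENT_TYPE:
        return

    data = event.get("data", {})

    # Recommended practice: at-least-once delivery can produce duplicates,
    # so skip a payload whose (id, eTag) pair was already handled.
    key = (data.get("id"), data.get("eTag"))
    if key in seen:
        return
    seen.add(key)

    # Read only the documented fields; ignore anything you don't understand.
    print(
        f"Prescription '{data.get('id')}' {data.get('actionType')} "
        f"for party '{data.get('partyId')}' "
        f"(prescription map: {data.get('prescriptionMapId')}, "
        f"modified: {data.get('modifiedDateTime')})"
    )

if __name__ == "__main__":
    # Illustrative payload only; the field values don't come from a real resource.
    sample = json.loads("""{
        "eventType": "Microsoft.AgFoodPlatform.PrescriptionChangedV2",
        "data": {
            "id": "prescription-1",
            "partyId": "party-1",
            "prescriptionMapId": "map-1",
            "actionType": "Created",
            "eTag": "0001",
            "modifiedDateTime": "2023-01-01T00:00:00Z"
        }
    }""")
    seen_keys = set()
    handle_event(sample, seen_keys)
    handle_event(sample, seen_keys)  # duplicate delivery is ignored
```

The same pattern applies to the other data objects on this page; only the field names read from the data object change.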
++PrescriptionMapChanged and ManagementZoneChanged events have the following data object: ++Property| Type| Description +|:--| :-| :-| +|seasonId |string | User-defined ID of the associated season. +|cropId |string | User-defined ID of the associated crop. +|fieldId |string | User-defined ID of the associated field. +|partyId |string| ID of the party it belongs to. +| ID | string| User-defined ID of the resource. +|actionType | string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted. +modifiedDateTime | date-time| Date-time when resource was last modified, sample format: yyyy-MM-ddTHH:mm:ssZ. +createdDateTime | date-time| Date-time when resource was created, sample format: yyyy-MM-ddTHH:mm:ssZ. +eTag| string | Implements optimistic concurrency. +description | string| Textual description of the resource. +name| string | User-defined name of the resource. +properties |object| It contains user-defined key-value pairs. +status| string | Status of the resource. ++PlantTissueAnalysisChanged event has the following data object: ++Property| Type| Description +|:--| :-| :-| +|seasonId|string|User-defined ID of the associated season. +|cropId| string | User-defined ID of the associated crop. +|cropProductId | string| Crop Product ID associated with the plant tissue analysis. +|fieldId| string | User-defined ID of the associated field. +|partyId| string | ID of the party it belongs to. +| ID| string | User-defined ID of the resource. +|actionType | string | Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted. +modifiedDateTime| date-time | Date-time when resource was last modified, sample format: yyyy-MM-ddTHH:mm:ssZ. +createdDateTime| date-time | Date-time when resource was created, sample format: yyyy-MM-ddTHH:mm:ssZ. +eTag| string| Implements optimistic concurrency. +description | string| Textual description of the resource. +name| string| User-defined name of the resource. +properties | object| It contains user-defined key-value pairs. +status| string| Status of the resource. ++NutrientAnalysisChanged event has the following data object: ++|Property | Type| Description| +|:--| :-| :-| +parentId| string| ID of the parent the nutrient analysis belongs to. +parentType| string| Type of the parent the nutrient analysis belongs to. Applicable value is PlantTissueAnalysis. +partyId| string|ID of the party it belongs to. +| ID | string| User-defined ID of nutrient analysis. +actionType| string| Indicates the change that triggered publishing of the event. Applicable values are Created, Updated, Deleted. +properties| object| It contains user-defined key-value pairs. +modifiedDateTime| date-time|Date-time when nutrient analysis was last modified, sample format: yyyy-MM-ddTHH:mm:ssZ. +createdDateTime|date-time|Date-time when nutrient analysis was created, sample format: yyyy-MM-ddTHH:mm:ssZ. +status| string| Contains user-defined status of the nutrient analysis. +eTag| string| Implements optimistic concurrency. +description| string| Textual description of resource. +name| string| User-defined name of the nutrient analysis. +++## Sample events +For sample events, see the [sample events](./sample-events.md) page. ++## Next steps +* For an introduction to Azure Event Grid, see [What is Event Grid?](../event-grid/overview.md) +* Test our APIs [here](/rest/api/data-manager-for-agri). |
data-manager-for-agri | How To Use Nutrient Apis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-use-nutrient-apis.md | Analyzing the nutrient composition of the crop is vital to ensure a good harvest. ## Tissue sample model Here's how we have modeled tissue analysis in Azure Data Manager for Agriculture: ->:::image type="content" source="./media/schema-1.png" alt-text="Screenshot showing entity relationships."::: * Step 1: Create a **plant tissue analysis** resource for every sample you get tested. * Step 2: For each nutrient that is being tested, create a nutrient analysis resource with the plant tissue analysis created in step 1 as its parent. |
data-manager-for-agri | Overview Azure Data Manager For Agriculture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/overview-azure-data-manager-for-agriculture.md | Azure Data Manager for Agriculture helps reduce data engineering investments thr ## Our key features ->:::image type="content" source="./media/about-data-manager.png" alt-text="Screenshot showing key features."::: * Ingest, store and manage farm data: Connectors for satellite, weather forecast, farm operations, sensor data and extensibility framework help ingest your farm data. * Run Apps on your farm data: Use REST APIs to power your apps. |
data-manager-for-agri | Sample Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/sample-events.md | + + Title: Sample events for Microsoft Azure Data Manager for Agriculture Preview based on Azure Event Grid #Required; page title is displayed in search results. Include the brand. +description: This article provides samples of Azure Data Manager for Agriculture Preview events. #Required; article description that is displayed in search results. ++++ Last updated : 04/18/2023 #Required; mm/dd/yyyy format.+++# Azure Data Manager for Agriculture sample events +This article provides the Azure Data Manager for Agriculture events samples. To learn more about our event properties that are provided with Azure Event Grid see our [how to use events](./how-to-use-events.md) page. + +The event samples given on this page represent an event notification. ++1. **Event type: Microsoft.AgFoodPlatform.PartyChanged** ++````json + { + "data": { + "actionType": "Deleted", + "modifiedDateTime": "2022-10-17T18:43:37Z", + "eTag": "f700fdd7-0000-0700-0000-634da2550000", + "properties": { + "key1": "value1", + "key2": 123.45 + }, + "id": "<YOUR-PARTY-ID>", + "createdDateTime": "2022-10-17T18:43:30Z" + }, + "id": "23fad010-ec87-40d9-881b-1f2d3ba9600b", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/<YOUR-PARTY-ID>", + "eventType": "Microsoft.AgFoodPlatform.PartyChanged", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-10-17T18:43:37.3306735Z" + } +```` ++ 2. **Event type: Microsoft.AgFoodPlatform.FarmChangedV2** +````json + { + "data": { + "partyId": "<YOUR-PARTY-ID>", + "actionType": "Updated", + "status": "string", + "modifiedDateTime": "2022-11-07T09:20:27Z", + "eTag": "99017a4e-0000-0700-0000-6368cddb0000", + "properties": { + "key1": "value1", + "key2": 123.45 + }, + "id": "<YOUR-FARM-ID>", + "name": "string", + "description": "string", + "createdDateTime": "2022-03-26T12:51:24Z" + }, + "id": "v2-796c89b6-306a-420b-be8f-4cd69df038f6", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/<YOUR-PARTY-ID>/farms/<YOUR-FARM-ID>", + "eventType": "Microsoft.AgFoodPlatform.FarmChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-07T09:20:27.5307566Z" + } +```` ++ 3. **Event type: Microsoft.AgFoodPlatform.FieldChangedV2** ++````json + { + "data": { + "farmId": "<YOUR-FARM-ID>", + "partyId": "<YOUR-PARTY-ID>", + "actionType": "Created", + "status": "string", + "modifiedDateTime": "2022-11-01T10:44:17Z", + "eTag": "af00eaf0-0000-0700-0000-6360f8810000", + "properties": { + "key1": "value1", + "key2": 123.45 + }, + "id": "<YOUR-FIELD-ID>", + "name": "string", + "description": "string", + "createdDateTime": "2022-11-01T10:44:17Z" + }, + "id": "v2-b80e0977-5aeb-47c9-be7b-d6555e1c44f1", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/<YOUR-PARTY-ID>/fields/<YOUR-FIELD-ID>", + "eventType": "Microsoft.AgFoodPlatform.FieldChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-01T10:44:17.162118Z" + } + ```` ++ + + 4. 
**Event type: Microsoft.AgFoodPlatform.CropChanged** ++````json + { + "data": { + "actionType": "Created", + "status": "Sample status", + "modifiedDateTime": "2021-03-05T11:03:48Z", + "eTag": "8601c4e5-0000-0700-0000-604210150000", + "id": "<YOUR-CROP-ID>", + "name": "Display name", + "description": "Sample description", + "createdDateTime": "2021-03-05T11:03:48Z", + "properties": { + "key1": "value1", + "key2": 123.45 + } + }, + "id": "4c59a797-b76d-48ec-8915-ceff58628f35", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/crops/<YOUR-CROP-ID>", + "eventType": "Microsoft.AgFoodPlatform.CropChanged", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2021-03-05T11:03:49.0590658Z" + } + ```` ++ 5. **Event type: Microsoft.AgFoodPlatform.CropProductChanged** ++````json + { + "data": { + "actionType": "Deleted", + "status": "string", + "modifiedDateTime": "2022-11-01T10:41:06Z", + "eTag": "59055238-0000-0700-0000-6360f7080000", + "properties": { + "key1": "value1", + "key2": 123.45 + }, + "id": "amcp", + "name": "stridfng", + "description": "string", + "createdDateTime": "2022-11-01T10:34:54Z" + }, + "id": "v2-a94f4e12-edca-4720-940f-f9d61755d8e2", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/cropProducts/amcp", + "eventType": "Microsoft.AgFoodPlatform.CropProductChanged", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-01T10:41:06.6942143Z" + } +```` ++ 6. **Event type: Microsoft.AgFoodPlatform.BoundaryChangedV2** ++````json + { + "data": { + "parentType": "Field", + "partyId": "amparty", + "actionType": "Created", + "modifiedDateTime": "2022-11-01T10:48:14Z", + "eTag": "af005dfc-0000-0700-0000-6360f96e0000", + "id": "amb", + "name": "string", + "description": "string", + "createdDateTime": "2022-11-01T10:48:14Z" + }, + "id": "v2-25fd01cf-72d4-401d-92ee-146de348e815", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/amparty/boundaries/amb", + "eventType": "Microsoft.AgFoodPlatform.BoundaryChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-01T10:48:14.2385557Z" + } + ```` ++ 7. **Event type: Microsoft.AgFoodPlatform.SeasonChanged** +````json + { + "data": { + "actionType": "Created", + "status": "Sample status", + "modifiedDateTime": "2021-03-05T11:18:38Z", + "eTag": "86019afd-0000-0700-0000-6042138e0000", + "id": "UNIQUE-SEASON-ID", + "name": "Display name", + "description": "Sample description", + "createdDateTime": "2021-03-05T11:18:38Z", + "properties": { + "key1": "value1", + "key2": 123.45 + } + }, + "id": "63989475-397b-4b92-8160-8743bf8e5804", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{FARMBEATS-RESOURCE-NAME}", + "subject": "/seasons/UNIQUE-SEASON-ID", + "eventType": "Microsoft.AgFoodPlatform.SeasonChanged", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2021-03-05T11:18:38.5804699Z" + } + ```` + 8. 
**Event type: Microsoft.AgFoodPlatform.SatelliteDataIngestionJobStatusChangedV2** +```json + { + "data": { + "partyId": "contoso-partyId", + "message": "Created job 'sat-ingestion-job-1' to fetch satellite data for boundary 'contoso-boundary' from startDate '08/07/2022' to endDate '10/07/2022' (both inclusive).", + "status": "Running", + "lastActionDateTime": "2022-11-07T09:35:23.3141004Z", + "isCancellationRequested": false, + "properties": { + "key1": "value1", + "key2": 123.45 + }, + "id": "sat-ingestion-job-1", + "name": "string", + "description": "string", + "createdDateTime": "2022-11-07T09:35:15.8064528Z" + }, + "id": "v2-3cab067b-4227-44c3-bea8-86e1e6d6968d", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/contoso-partyId/satelliteDataIngestionJobs/sat-ingestion-job-1", + "eventType": "Microsoft.AgFoodPlatform.SatelliteDataIngestionJobStatusChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-07T09:35:23.3141452Z" + } +``` + 9. **Event type: Microsoft.AgFoodPlatform.WeatherDataIngestionJobStatusChangedV2** +```json + { + "data": { + "partyId": "partyId1", + "message": "Weather data available from '11/25/2020 00:00:00' to '11/30/2020 00:00:00'.", + "status": "Succeeded", + "lastActionDateTime": "2022-11-01T10:40:58.4472391Z", + "isCancellationRequested": false, + "properties": { + "key1": "value1", + "key2": 123.45 + }, + "id": "newIjJk", + "name": "string", + "description": "string", + "createdDateTime": "2022-11-01T10:40:45.9408927Z" + }, + "id": "0c1507dc-1fe6-4ad5-b2f4-680f3b12b7cf", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/partyId1/weatherDataIngestionJobs/newIjJk", + "eventType": "Microsoft.AgFoodPlatform.WeatherDataIngestionJobStatusChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-01T10:40:58.4472961Z" + } +``` + 10. **Event type: Microsoft.AgFoodPlatform.WeatherDataRefresherJobStatusChangedV2** +```json +{ + "data": { + "message": "Weather data refreshed successfully at '11/01/2022 10:45:57'.", + "status": "Waiting", + "lastActionDateTime": "2022-11-01T10:45:57.5966716Z", + "isCancellationRequested": false, + "id": "IBM.TWC~33.00~-9.00~currents-on-demand", + "createdDateTime": "2022-11-01T10:39:34.2024298Z" + }, + "id": "dff85442-3b9c-4fb0-95da-bda66c994e73", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/weatherDataRefresherJobs/IBM.TWC~33.00~-9.00~currents-on-demand", + "eventType": "Microsoft.AgFoodPlatform.WeatherDataRefresherJobStatusChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-01T10:45:57.596714Z" + } +``` ++ 11. 
**Event type: Microsoft.AgFoodPlatform.FarmOperationDataIngestionJobStatusChangedV2** +```json +{ + "data": { + "partyId": "party-contoso", + "message": "Created job 'ay-1nov' to fetch farm operation data for party id 'party-contoso'.", + "status": "Running", + "lastActionDateTime": "2022-11-01T10:36:58.4373839Z", + "isCancellationRequested": false, + "properties": { + "key1": "value1", + "key2": 123.45 + }, + "id": "ay-1nov", + "name": "string", + "description": "string", + "createdDateTime": "2022-11-01T10:36:54.322847Z" + }, + "id": "fa759285-9737-4636-ae47-8cffe8506986", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/party-contoso/farmOperationDataIngestionJobs/ay-1nov", + "eventType": "Microsoft.AgFoodPlatform.FarmOperationDataIngestionJobStatusChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-01T10:36:58.4379601Z" + } +``` + 12. **Event type: Microsoft.AgFoodPlatform.BiomassModelJobStatusChangedV2** +```json +{ + "data": { + "partyId": "party1", + "message": "Created job 'job-biomass-13sdqwd' to calculate biomass values for boundary 'boundary1' from plantingStartDate '05/03/2020' to inferenceEndDate '10/11/2020' (both inclusive).", + "status": "Waiting", + "lastActionDateTime": "0001-01-01T00:00:00Z", + "isCancellationRequested": false, + "properties": { + "key1": "value1", + "key2": 123.45 + }, + "id": "job-biomass-13sdqwd", + "name": "biomass", + "description": "biomass is awesome", + "createdDateTime": "2022-11-07T15:16:28.3177868Z" + }, + "id": "v2-bbb378f8-91cf-4005-8d1b-fe071d606459", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/party1/biomassModelJobs/job-biomass-13sdqwd", + "eventType": "Microsoft.AgFoodPlatform.BiomassModelJobStatusChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-07T15:16:28.6070116Z" + } +``` ++ 13. **Event type: Microsoft.AgFoodPlatform.SoilMoistureModelJobStatusChangedV2** +```json + { + "data": { + "partyId": "party", + "message": "Created job 'job-soilmoisture-sf332q' to calculate soil moisture values for boundary 'boundary' from inferenceStartDate '05/01/2022' to inferenceEndDate '05/20/2022' (both inclusive).", + "status": "Waiting", + "lastActionDateTime": "0001-01-01T00:00:00Z", + "isCancellationRequested": false, + "properties": { + "key1": "value1", + "key2": 123.45 + }, + "id": "job-soilmoisture-sf332q", + "name": "string", + "description": "string", + "createdDateTime": "2022-11-07T15:11:00.9484192Z" + }, + "id": "v2-575d2196-63f2-44dc-b0f5-e5180b8475f1", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/party/soilMoistureModelJobs/job-soilmoisture-sf332q", + "eventType": "Microsoft.AgFoodPlatform.SoilMoistureModelJobStatusChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-07T15:11:01.2957613Z" + } +``` ++ 14. **Event type: Microsoft.AgFoodPlatform.SensorPlacementModelJobStatusChangedV2** +```json + { + "data": { + "partyId": "pjparty", + "message": "Satellite scenes are available only for '0' days, expected scenes for '133' days. 
Not all scenes are available, please trigger satellite job for the required date range.", + "status": "Running", + "lastActionDateTime": "2022-11-01T10:44:19Z", + "isCancellationRequested": false, + "properties": { + "key1": "value1", + "key2": 123.45 + }, + "id": "pjjob2", + "name": "string", + "description": "string", + "createdDateTime": "2022-11-01T10:44:01Z" + }, + "id": "5d3e0d75-b963-494e-956a-3690b16315ff", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/pjparty/sensorPlacementModelJobs/pjjob2", + "eventType": "Microsoft.AgFoodPlatform.SensorPlacementModelJobStatusChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-01T10:44:19Z" + } +``` ++ 15. **Event type: Microsoft.AgFoodPlatform.SeasonalFieldChangedV2** +````json +{ + "data": { + "seasonId": "unique-season", + "fieldId": "unique-field", + "farmId": "unique-farm", + "partyId": "unique-party", + "actionType": "Created", + "status": "string", + "modifiedDateTime": "2022-11-07T07:40:30Z", + "eTag": "9601f7cc-0000-0700-0000-6368b66e0000", + "properties": { + "key1": "value1", + "key2": 123.45 + }, + "id": "unique-seasonalfield", + "name": "string", + "description": "string", + "createdDateTime": "2022-11-07T07:40:30Z" + }, + "id": "v2-8ac9fa0e-6750-4b9a-a62f-54fdeffb057a", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/unique-party/seasonalFields/unique", + "eventType": "Microsoft.AgFoodPlatform.SeasonalFieldChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-07T07:40:30.1368975Z" + } +```` ++ 16. **Event type: Microsoft.AgFoodPlatform.ZoneChangedV2** +```json +{ + "data": { + "managementZoneId": "contoso-mz", + "partyId": "contoso-party", + "actionType": "Deleted", + "status": "string", + "modifiedDateTime": "2022-11-01T10:50:07Z", + "eTag": "5a058b39-0000-0700-0000-6360f9ae0000", + "properties": { + "key1": "value1", + "key2": 123.45 + }, + "id": "contoso-zone-5764", + "name": "string", + "description": "string", + "createdDateTime": "2022-11-01T10:48:39Z" + }, + "id": "110777ec-e74e-42dd-aa5c-23c72fd2b2bf", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/contoso-party/zones/contoso-zone-5764", + "eventType": "Microsoft.AgFoodPlatform.ZoneChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-01T10:50:07.586658Z" + } + ``` + 17. 
**Event type: Microsoft.AgFoodPlatform.ManagementZoneChangedV2** +```json +{ + "data": { + "seasonId": "season", + "cropId": "crop", + "fieldId": "contoso-field", + "partyId": "contoso-party", + "actionType": "Created", + "status": "string", + "modifiedDateTime": "2022-11-01T10:44:38Z", + "eTag": "af00b1f1-0000-0700-0000-6360f8960000", + "properties": { + "key1": "value1", + "key2": 123.45 + }, + "id": "contoso-mz", + "name": "string", + "description": "string", + "createdDateTime": "2022-11-01T10:44:38Z" + }, + "id": "0ac75094-ffd6-4dbf-847c-d9df03b630f4", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/contoso-party/managementZones/contoso-mz", + "eventType": "Microsoft.AgFoodPlatform.ManagementZoneChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-01T10:44:38.3458983Z" + } + ``` ++ 18. **Event type: Microsoft.AgFoodPlatform.PrescriptionChangedV2** +```json +{ + "data": { + "prescriptionMapId": "contoso-prescriptionmapid123", + "partyId": "contoso-partyId", + "actionType": "Created", + "status": "string", + "modifiedDateTime": "2022-11-07T09:06:30Z", + "eTag": "8f0745e8-0000-0700-0000-6368ca960000", + "properties": { + "key1": "value1", + "key2": 123.45 + }, + "id": "contoso-prescrptionid123", + "name": "string", + "description": "string", + "createdDateTime": "2022-11-07T09:06:30Z" + }, + "id": "v2-f0c1df5d-db19-4bd9-adea-a0d38622d844", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/contoso-partyId/prescriptions/contoso-prescrptionid123", + "eventType": "Microsoft.AgFoodPlatform.PrescriptionChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-07T09:06:30.9331136Z" + } + ``` ++ 19. **Event type: Microsoft.AgFoodPlatform.PrescriptionMapChangedV2** +```json + { + "data": { + "seasonId": "contoso-season", + "cropId": "contoso-crop", + "fieldId": "contoso-field", + "partyId": "contoso-partyId", + "actionType": "Updated", + "status": "string", + "modifiedDateTime": "2022-11-07T09:04:09Z", + "eTag": "8f0722c1-0000-0700-0000-6368ca090000", + "properties": { + "key1": "value1", + "key2": 123.45 + }, + "id": "contoso-prescriptionmapid123", + "name": "string", + "description": "string", + "createdDateTime": "2022-11-07T09:01:25Z" + }, + "id": "v2-625f09bd-c342-4af4-8ae9-0533fe36d8b5", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/contoso-partyId/prescriptionMaps/contoso-prescriptionmapid123", + "eventType": "Microsoft.AgFoodPlatform.PrescriptionMapChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-07T09:04:09.8937395Z" + } + ``` + 20. 
**Event type: Microsoft.AgFoodPlatform.PlantTissueAnalysisChangedV2** +```json + { + "data": { + "fieldId": "contoso-field", + "cropId": "contoso-crop", + "cropProductId": "contoso-cropProduct", + "seasonId": "contoso-season", + "partyId": "contoso-partyId", + "actionType": "Created", + "status": "string", + "modifiedDateTime": "2022-11-07T09:10:12Z", + "eTag": "90078d29-0000-0700-0000-6368cb740000", + "properties": { + "key1": "value1", + "key2": 123.45 + }, + "id": "contoso-planttissueanalysis123", + "name": "string", + "description": "string", + "createdDateTime": "2022-11-07T09:10:12Z" + }, + "id": "v2-1bcc9ef4-51a1-4192-bfbc-64deb3816583", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/contoso-partyId/plantTissueAnalyses/contoso-planttissueanalysis123", + "eventType": "Microsoft.AgFoodPlatform.PlantTissueAnalysisChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-07T09:10:12.1008276Z" + } +``` + 21. **Event type: Microsoft.AgFoodPlatform.NutrientAnalysisChangedV2** +```json + { + "data": { + "parentId": "contoso-planttissueanalysis123", + "parentType": "PlantTissueAnalysis", + "partyId": "contoso-partyId", + "actionType": "Created", + "status": "string", + "modifiedDateTime": "2022-11-07T09:17:21Z", + "eTag": "9901583d-0000-0700-0000-6368cd220000", + "properties": { + "key1": "value1", + "key2": 123.45 + }, + "id": "nutrientAnalysis-123", + "name": "string", + "description": "string", + "createdDateTime": "2022-11-07T09:17:21Z" + }, + "id": "v2-c6eb10eb-27be-480a-bdca-bd8fbef7cfe7", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/contoso-partyId/nutrientAnalyses/nutrientAnalysis-123", + "eventType": "Microsoft.AgFoodPlatform.NutrientAnalysisChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-07T09:17:22.0694093Z" + } + ``` ++ 22. **Event type: Microsoft.AgFoodPlatform.AttachmentChangedV2** +```json + { + "data": { + "resourceId": "NDk5MzQ5XzVmZWQ3ZWQ4ZGQxNzQ0MTI1YzliNjU5Yg", + "resourceType": "ApplicationData", + "partyId": "contoso-432623-party-6", + "actionType": "Updated", + "modifiedDateTime": "2022-10-17T18:56:23Z", + "eTag": "19004980-0000-0700-0000-634da55a0000", + "id": "NDk5MzQ5XzVmZWQ3ZWQ4ZGQxNzQ0MTI1YzliNjU5Yg-AppliedRate-TIF", + "createdDateTime": "2022-06-08T15:03:00Z" + }, + "id": "80542664-b16f-4b0c-9d7e-f453edede5e3", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/contoso-432623-party-6/attachments/NDk5MzQ5XzVmZWQ3ZWQ4ZGQxNzQ0MTI1YzliNjU5Yg-AppliedRate-TIF", + "eventType": "Microsoft.AgFoodPlatform.AttachmentChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-10-17T18:56:23.4832442Z" + } + ``` ++ 23. 
**Event type: Microsoft.AgFoodPlatform.InsightChangedV2** +```json + { + "data": { + "modelId": "Microsoft.SoilMoisture", + "resourceType": "Boundary", + "resourceId": "boundary", + "modelVersion": "1.0", + "partyId": "party", + "actionType": "Updated", + "modifiedDateTime": "2022-11-03T18:21:24Z", + "eTag": "04011838-0000-0700-0000-636406a40000", + "properties": { + "SYSTEM-SENSORDATAMODELID": "pra-sm", + "SYSTEM-INFERENCESTARTDATETIME": "2022-05-01T00:00:00Z", + "SYSTEM-SENSORPARTNERID": "SensorPartner", + "SYSTEM-SATELLITEPROVIDER": "Microsoft", + "SYSTEM-SATELLITESOURCE": "Sentinel_2_L2A", + "SYSTEM-IMAGERESOLUTION": 10, + "SYSTEM-IMAGEFORMAT": "TIF" + }, + "id": "02e96e5e-852b-f895-af1e-c6da309ae345", + "createdDateTime": "2022-07-06T09:06:57Z" + }, + "id": "v2-475358e4-3c8a-4a05-a22c-9fa4da6effc7", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/party/insights/02e96e5e-852b-f895-af1e-c6da309ae345", + "eventType": "Microsoft.AgFoodPlatform.InsightChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-03T18:21:24.7502452Z" + } + ``` ++ 24. **Event type: Microsoft.AgFoodPlatform.InsightAttachmentChangedV2** +```json + { + "data": { + "insightId": "f5c2071c-c7ce-05f3-be4d-952a26f2490a", + "modelId": "Microsoft.SoilMoisture", + "resourceType": "Boundary", + "resourceId": "boundary", + "partyId": "party", + "actionType": "Updated", + "modifiedDateTime": "2022-11-03T18:21:26Z", + "eTag": "5d06cc22-0000-0700-0000-636406a60000", + "id": "f5c2071c-c7ce-05f3-be4d-952a26f2490a-soilMoisture", + "createdDateTime": "2022-07-06T09:07:00Z" + }, + "id": "v2-46881f59-fd5c-48ed-a71f-342c04c75d1f", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/party/insightAttachments/f5c2071c-c7ce-05f3-be4d-952a26f2490a-soilMoisture", + "eventType": "Microsoft.AgFoodPlatform.InsightAttachmentChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-03T18:21:26.9501924Z" + } + ``` ++ 25. **Event type: Microsoft.AgFoodPlatform.ApplicationDataChangedV2** +```json +{ + "data": { + "actionType": "Created", + "partyId": "contoso-partyId", + "status": "string", + "source": "string", + "modifiedDateTime": "2022-11-07T09:23:07Z", + "properties": { + "key1": "value1", + "key2": 123.45 + }, + "eTag": "91072b09-0000-0700-0000-6368ce7b0000", + "id": "applicationData-123", + "name": "string", + "description": "string", + "createdDateTime": "2022-11-07T09:23:07Z" + }, + "id": "v2-2d849164-a773-4926-bcd3-b3884bad5076", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/contoso-partyId/applicationData/applicationData-123", + "eventType": "Microsoft.AgFoodPlatform.ApplicationDataChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-07T09:23:07.078703Z" + } + ``` ++ 26. 
**Event type: Microsoft.AgFoodPlatform.HarvestDataChangedV2** +```json + { + "data": { + "actionType": "Created", + "partyId": "contoso-partyId", + "status": "string", + "source": "string", + "modifiedDateTime": "2022-11-07T09:29:39Z", + "properties": { + "key1": "value1", + "key2": 123.45 + }, + "eTag": "9901037e-0000-0700-0000-6368d0030000", + "id": "harvestData-123", + "name": "string", + "description": "string", + "createdDateTime": "2022-11-07T09:29:39Z" + }, + "id": "v2-bd4c9d63-17f2-4c61-8583-a64e064f06d6", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/contoso-partyId/harvestData/harvestData-123", + "eventType": "Microsoft.AgFoodPlatform.HarvestDataChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-07T09:29:39.3967693Z" + } + ``` ++ 27. **Event type: Microsoft.AgFoodPlatform.TillageDataChangedV2** +```json + { + "data": { + "actionType": "Created", + "partyId": "contoso-partyId", + "status": "string", + "source": "string", + "modifiedDateTime": "2022-11-07T09:32:00Z", + "properties": { + "key1": "value1", + "key2": 123.45 + }, + "eTag": "9107eb95-0000-0700-0000-6368d0900000", + "id": "tillageData-123", + "name": "string", + "description": "string", + "createdDateTime": "2022-11-07T09:32:00Z" + }, + "id": "v2-75b58a0f-00b9-4c73-9733-4caab2343686", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/contoso-partyId/tillageData/tillageData-123", + "eventType": "Microsoft.AgFoodPlatform.TillageDataChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-07T09:32:00.7745737Z" + } + ``` ++ 28. **Event type: Microsoft.AgFoodPlatform.PlantingDataChangedV2** +```json + { + "data": { + "actionType": "Created", + "partyId": "contoso-partyId", + "status": "string", + "source": "string", + "modifiedDateTime": "2022-11-07T09:13:27Z", + "properties": { + "key1": "value1", + "key2": 123.45 + }, + "eTag": "90073465-0000-0700-0000-6368cc370000", + "id": "contoso-plantingdata123", + "name": "string", + "description": "string", + "createdDateTime": "2022-11-07T09:13:27Z" + }, + "id": "v2-1b55076b-d989-4831-81e4-ff8b469dc5f8", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/contoso-partyId/plantingData/contoso-plantingdata123", + "eventType": "Microsoft.AgFoodPlatform.PlantingDataChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-07T09:13:27.9490317Z" + } + ``` ++ 29. 
**Event type: Microsoft.AgFoodPlatform.ImageProcessingRasterizeJobStatusChangedV2** +```json + { + "data": { + "shapefileAttachmentId": "attachment-contoso", + "partyId": "party-contoso", + "message": "Created job 'contoso-nov1-2' to rasterize shapefile attachment with id 'attachment-contoso'.", + "status": "Running", + "lastActionDateTime": "2022-11-01T10:44:44.8186582Z", + "isCancellationRequested": false, + "properties": { + "key1": "value1", + "key2": 123.45 + }, + "id": "contoso-nov1-2", + "name": "string", + "description": "string", + "createdDateTime": "2022-11-01T10:44:39.3098984Z" + }, + "id": "0ad2d5e6-1277-4880-adb6-bf0a621ad59b", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/parties/party-contoso/imageProcessingRasterizeJobs/contoso-nov1-2", + "eventType": "Microsoft.AgFoodPlatform.ImageProcessingRasterizeJobStatusChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-01T10:44:44.8203668Z" + } + ``` ++ 30. **Event type: Microsoft.AgFoodPlatform.DeviceDataModelChanged** +```json + { + "data": { + "sensorPartnerId": "partnerId", + "actionType": "Created", + "modifiedDateTime": "2022-11-03T03:37:42Z", + "eTag": "e50094f2-0000-0700-0000-636337860000", + "id": "synthetics-02a465da-0c85-40cf-b7a8-64e15baae3c4", + "createdDateTime": "2022-11-03T03:37:42Z" + }, + "id": "40ba84c3-b8f4-497d-8d44-1b8df6eb3b7c", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/sensorPartners/partnerId/deviceDataModels/synthetics-02a465da-0c85-40cf-b7a8-64e15baae3c4", + "eventType": "Microsoft.AgFoodPlatform.DeviceDataModelChanged", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-03T03:37:42.4536218Z" + } + ``` ++ 31. **Event type: Microsoft.AgFoodPlatform.DeviceChanged** +```json + { + "data": { + "deviceDataModelId": "test-ddm1", + "integrationId": "ContosoID", + "sensorPartnerId": "SensorPartner", + "actionType": "Created", + "status": "string", + "modifiedDateTime": "2022-11-01T11:29:01Z", + "eTag": "b0000a6f-0000-0700-0000-636102fe0000", + "properties": { + "key1": "value1", + "key2": 123.45 + }, + "id": "dddd1", + "name": "string", + "description": "string", + "createdDateTime": "2022-11-01T11:29:01Z" + }, + "id": "15ab45c7-0f04-4db3-b982-87380b3c1ba4", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/sensorPartners/SensorPartner/devices/dddd1", + "eventType": "Microsoft.AgFoodPlatform.DeviceChanged", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-01T11:29:02.0578111Z" + } + ``` ++ 32. 
**Event type: Microsoft.AgFoodPlatform.SensorDataModelChanged** +```json + { + "data": { + "sensorPartnerId": "partnerId", + "actionType": "Deleted", + "modifiedDateTime": "2022-11-03T03:38:11Z", + "eTag": "e50099f2-0000-0700-0000-636337860000", + "id": "4fb0214a-459c-47b8-8564-b822f263ae12", + "createdDateTime": "2022-11-03T03:37:42Z" + }, + "id": "54fdb552-b5db-45c0-be49-8f4f27f27bde", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/sensorPartners/partnerId/sensorDataModels/4fb0214a-459c-47b8-8564-b822f263ae12", + "eventType": "Microsoft.AgFoodPlatform.SensorDataModelChanged", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-03T03:38:11.7538559Z" + } + ``` ++ 33. **Event type: Microsoft.AgFoodPlatform.SensorChanged** +```json + { + "data": { + "sensorDataModelId": "4fb0214a-459c-47b8-8564-b822f263ae12", + "integrationId": "159ce4e5-878f-4fc7-9bae-16eaf65bfb45", + "sensorPartnerId": "partnerId", + "actionType": "Deleted", + "modifiedDateTime": "2022-11-03T03:38:09Z", + "eTag": "13063e1e-0000-0700-0000-636337970000", + "properties": { + "key-a": "value-a" + }, + "id": "ec1ed9c6-f476-448a-ab07-65e0d71e34d5", + "createdDateTime": "2022-11-03T03:37:59Z" + }, + "id": "b3a0f169-6d28-4e57-b570-6068446b50b4", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/sensorPartners/partnerId/sensors/ec1ed9c6-f476-448a-ab07-65e0d71e34d5", + "eventType": "Microsoft.AgFoodPlatform.SensorChanged", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-03T03:38:09.7932361Z" + } + ``` ++ 34. **Event type: Microsoft.AgFoodPlatform.SensorMappingChangedV2** +```json + { + "data": { + "sensorId": "sensor", + "partyId": "ContosopartyId", + "boundaryId": "ContosoBoundary", + "sensorPartnerId": "sensorpartner", + "actionType": "Created", + "status": "string", + "modifiedDateTime": "2022-11-01T11:08:33Z", + "eTag": "b000ff36-0000-0700-0000-6360fe310000", + "properties": { + "key1": "value1", + "key2": 123.45 + }, + "id": "sensormapping", + "name": "string", + "description": "string", + "createdDateTime": "2022-11-01T11:08:33Z" + }, + "id": "c532ff5c-bfa0-4644-a0bc-14f736ebc07d", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/sensorPartners/sensorpartner/sensorMappings/sensormapping", + "eventType": "Microsoft.AgFoodPlatform.SensorMappingChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-01T11:08:33.3345312Z" + } + ``` ++ 35. 
**Event type: Microsoft.AgFoodPlatform.SensorPartnerIntegrationChangedV2** +```json + { + "data": { + "integrationId": "159ce4e5-878f-4fc7-9bae-16eaf65bfb45", + "sensorPartnerId": "partnerId", + "actionType": "Deleted", + "modifiedDateTime": "2022-11-03T03:38:10Z", + "eTag": "e5009cf2-0000-0700-0000-636337870000", + "id": "159ce4e5-878f-4fc7-9bae-16eaf65bfb45", + "createdDateTime": "2022-11-03T03:37:42Z" + }, + "id": "v2-3e6b1527-7f67-4c7d-b26e-1000a6a97612", + "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", + "subject": "/sensorPartners/partnerId/integrations/159ce4e5-878f-4fc7-9bae-16eaf65bfb45", + "eventType": "Microsoft.AgFoodPlatform.SensorPartnerIntegrationChangedV2", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2022-11-03T03:38:10.9531838Z" + } + ``` +## Next steps +* For an introduction to Azure Event Grid, see [What is Event Grid?](../event-grid/overview.md) +* Test our APIs [here](/rest/api/data-manager-for-agri). |
defender-for-iot | Tutorial Configure Agent Based Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-configure-agent-based-solution.md | There are no resources to clean up. ## Next steps > [!div class="nextstepaction"]-> [Investigate security recommendations](tutorial-investigate-security-recommendations.md) +> [Investigate security recommendations](tutorial-investigate-security-recommendations.md) |
defender-for-iot | Cli Ot Sensor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/cli-ot-sensor.md | To use this command: - Verify that the certificate file you want to import is readable on the appliance. Upload certificate files to the appliance using tools such as WinSCP or Wget. - Confirm with your IT office that the appliance domain as it appears in the certificate is correct for your DNS server and the corresponding IP address. -For more information, see [Certificates for appliance encryption and authentication (OT appliances)](how-to-deploy-certificates.md). +For more information, see [Prepare CA-signed certificates](best-practices/plan-prepare-deploy.md#prepare-ca-signed-certificates) and [Create SSL/TLS certificates for OT appliances](ot-deploy/create-ssl-certificates.md). |User |Command |Full command syntax | |||| |
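Before running the import command, it can help to confirm the two prerequisites above from a shell. The sketch below is illustrative only: the host name, user, and destination path are placeholders, and the copy step could equally be done with WinSCP as the article notes.

```bash
# Copy the certificate to the appliance (placeholder host, user, and path).
scp sensor.contoso.com.crt admin@sensor.contoso.com:/tmp/

# On the appliance: confirm the file is readable, then inspect the subject CN and expiry.
ls -l /tmp/sensor.contoso.com.crt
openssl x509 -in /tmp/sensor.contoso.com.crt -noout -subject -enddate

# Confirm the appliance domain in the certificate resolves to the expected IP on your DNS server.
dig +short sensor.contoso.com
```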
defender-for-iot | Concept Supported Protocols | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-supported-protocols.md | OT network sensors can detect the following protocols when identifying assets an |Brand / Vendor |Protocols | ||| |**ABB** | ABB 800xA DCS (IEC61850 MMS including ABB extension)<br> CNCP<br> RNRP<br> ABB IAC<br> ABB Totalflow |-|**Samsung** | Samsung TV | |**ASHRAE** | BACnet<br> BACnet BACapp<br> BACnet BVLC | |**Beckhoff** | AMS (ADS)<br> Twincat | |**Cisco** | CAPWAP Control<br> CAPWAP Data<br> CDP<br> LWAPP | OT network sensors can detect the following protocols when identifying assets an |**Emerson** | DeltaV<br> DeltaV - Discovery<br> Emerson OpenBSI/BSAP<br> Ovation DCS ADMD<br>Ovation DCS DPUSTAT<br> Ovation DCS SSRPC | |**Emerson Fischer** | ROC | |**Eurocontrol** | ASTERIX |-|**GE** | Bentley Nevada (System 1 / BN3500)<br> EGD<br> GSM (GE MarkVI and MarkVIe)<br> SRTP (GE)<br> GE_CMP | +|**GE** | Bentley Nevada (System 1 / BN3500)<br>ClassicSDI (MarkVIe) <br> EGD<br> GSM (GE MarkVI and MarkVIe)<br> InterSite<br> SDI (MarkVIe) <br> SRTP (GE)<br> GE_CMP | |**Generic Applications** | Active Directory<br> RDP<br> Teamviewer<br> VNC<br> | |**Honeywell** | ENAP<br> Experion DCS CDA<br> Experion DCS FDA<br> Honeywell EUCN <br> Honeywell Discovery | |**IEC** | Codesys V3<br>IEC 60870-5-7 (IEC 62351-3 + IEC 62351-5)<br> IEC 60870-5-101 (encapsulated serial)<br> IEC 60870-5-103 (encapsulated serial)<br> IEC 60870-5-104<br> IEC 60870-5-104 ASDU_APCI<br> IEC 60870 ICCP TASE.2<br> IEC 61850 GOOSE<br> IEC 61850 MMS<br> IEC 61850 SMV (SAMPLED-VALUES)<br> LonTalk (LonWorks) | OT network sensors can detect the following protocols when identifying assets an |**Omron** | FINS | |**OPC** | UA | |**Oracle** | TDS<br> TNS |-|**Rockwell Automation** | ENIP<br> EtherNet/IP CIP (including Rockwell extension)<br> EtherNet/IP CIP FW version 27 and above | +|**Rockwell Automation** | CSP2<br> ENIP<br> EtherNet/IP CIP (including Rockwell extension)<br> EtherNet/IP CIP FW version 27 and above | +|**Samsung** | Samsung TV | |**Schneider Electric** | Modbus/TCP<br> Modbus TCP–Schneider Unity Extensions<br> OASYS (Schneider Electric Telvant)<br> Schneider TSAA | |**Schneider Electric / Invensys** | Foxboro Evo<br> Foxboro I/A<br> Trident<br> TriGP<br> TriStation | |**Schneider Electric / Modicon** | Modbus RTU | |**Schneider Electric / Wonderware** | Wonderware Suitelink |-|**Siemens** | CAMP<br> PCS7<br> PCS7 WinCC – Historian<br> Profinet DCP<br> Profinet Realtime<br> Siemens PHD<br> Siemens S7<br> Siemens S7-Plus<br> Siemens SICAM<br> Siemens WinCC | +|**Siemens** | CAMP<br> PCS7<br> PCS7 WinCC – Historian<br> Profinet DCP<br> Profinet I/O<br> Profinet Realtime<br> Siemens PHD<br> Siemens S7<br> Siemens S7 - Firmware and model extraction<br> Siemens S7 – key state<br> Siemens S7-Plus<br> Siemens SICAM<br> Siemens WinCC | |**Toshiba** |Toshiba Computer Link | |**Yokogawa** | Centum ODEQ (Centum / ProSafe DCS)<br> HIS Equalize<br> FA-M3<br> Vnet/IP | Enterprise IoT network sensors can detect the following protocols when identifyi Asset vendors, partners, or platform owners can use Defender for IoT's Horizon Protocol SDK to secure any OT protocol used in IoT and ICS environments that isn't already supported by default. -Horizon helps you to write plugins for OT sensors that enable Deep Packet Inspection (DPI) on the traffic and detect threats in realtime. 
Customize your plugins to localize and customize text for alerts, events, and protocol parameters. +Horizon helps you to write plugins for OT sensors that enable Deep Packet Inspection (DPI) on the traffic and detect threats in real-time. Customize your plugins to localize and customize text for alerts, events, and protocol parameters. Horizon provides: |
defender-for-iot | Configure Windows Endpoint Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/configure-windows-endpoint-monitoring.md | If you'll be using a non-admin account to run your WEM scans, this procedure is For more information, see: +- [Detect Windows workstations and servers with a local script](detect-windows-endpoints-script.md) - [View your device inventory from a sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md) - [View your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md) - [Configure active monitoring for OT networks](configure-active-monitoring.md) |
defender-for-iot | Detect Windows Endpoints Script | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/detect-windows-endpoints-script.md | The script described in this article returns the following details about each de - Installed programs - Last knowledge base update -If an OT network sensor has already learned the device, running the script outlined in this article retrieves the device's information and enrichment data. +If an OT network sensor has already detected the device, running the script outlined in this article retrieves the device's information and enrichment data. ## Prerequisites The script described in this article is supported for the following Windows oper - Windows 10 - Windows Server 2003/2008/2012/2016/2019 -## Run the script +## Download and run the script -This procedure describes how to obtain, deploy, and run the script on the Windows workstation and servers that you want to monitor in Defender for IoT. +This procedure describes how to deploy and run a script on the Windows workstations and servers that you want to monitor in Defender for IoT. -The script you run to detect enriched Windows data is run as a utility and not as an installed program. Running the script doesn't affect the endpoint. +The script detects enriched Windows data, and is run as a utility and not an installed program. Running the script doesn't affect the endpoint. You may want to deploy the script once or through ongoing automation, using standard automated deployment methods and tools. -1. To acquire the script, [contact customer support](mailto:support.microsoft.com). +1. Sign in to your OT sensor console, and select **System Settings** > **Import Settings** > **Windows Information**. ++1. Select **Download script**. For example: -1. Deploy the script once, or using ongoing automation, using standard automated deployment methods and tools. + :::image type="content" source="media/detect-windows-endpoints-script/download-wmi-script.png" alt-text="Screenshot of where to download WMI script." lightbox="media/detect-windows-endpoints-script/download-wmi-script.png"::: 1. Copy the script to a local drive and unzip it. The following files appear: The script you run to detect enriched Windows data is run as a utility and not a 1. Run the `run.bat` file. - After the script runs to probe the registry, a CX-snapshot file appears with the registry information. The filename indicates the system name, date, and time of the snapshot with the following syntax: `CX-snaphot_SystemName_Month_Year_Time` + After the script runs to probe the registry, a CX-snapshot file appears with the registry information. The filename indicates the machine name and the current date and time of the snapshot with the following syntax: `cx_snapshot_[machinename]_[current date time]`. -Files generated by the script: +Files generated by the script include: - Remain on the local drive until you delete them. - Must remain in the same location. Don't separate the generated files. Files generated by the script: ## Import device details -After having run the script as described [earlier](#run-the-script), import the generated data to your sensor to view the device details in the **Device inventory**. +After having run the script as described [earlier](#download-and-run-the-script), import the generated data to your sensor to view the device details in the **Device inventory**. 
**To import device details to your sensor**: After having run the script as described [earlier](#run-the-script), import the 1. Select **Import File**, and then select all the files (Ctrl+A). -1. Select **Close**. The device registry information is imported and a successful confirmation message is shown. + :::image type="content" source="media/detect-windows-endpoints-script/import-wmi-script.png" alt-text="Screenshot of where to import WMI script." lightbox="media/detect-windows-endpoints-script/import-wmi-script.png"::: ++## View device applications report ++After [downloading and running](#download-and-run-the-script) the script, then [importing](#import-device-details) the generated data to your sensor, you can view your device applications with a custom data mining report. ++**To view the device applications:** - If there's a problem uploading one of the files, you'll be informed which file upload failed. +1. Sign in to your OT sensor console, and select **Data mining**. ++1. Select **+ Create report** to [create a custom report](how-to-create-data-mining-queries.md#create-an-ot-sensor-custom-data-mining-report). In the **Choose Category** field, select **Devices Applications**. For example: ++ :::image type="content" source="media/detect-windows-endpoints-script/devices-applications-report.png" alt-text="Screenshot of creating devices applications custom report." lightbox="media/detect-windows-endpoints-script/devices-applications-report.png"::: ++1. Your device applications report is shown in the **My reports** area. ++Based on this information, if the sensor is cloud-connected, Azure displays the CVE list for the applications installed on the Windows device. ## Next steps For more information, see [Detect Windows workstations and servers with a local script](detect-windows-endpoints-script.md) and [Import extra data for detected OT devices](how-to-import-device-information.md).- |
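Before importing, it can be worth confirming that each generated file follows the `cx_snapshot_[machinename]_[current date time]` naming convention described above. The following is an optional, illustrative check only, run from a staging folder of your choice (the folder name is an example and isn't part of the script).

```bash
# Flag any staged files that don't match the expected cx_snapshot_<machine>_<datetime> pattern.
for f in ./wem-snapshots/cx_snapshot_*; do
  [[ -e "$f" ]] || continue   # skip if no files matched the glob
  if [[ $(basename "$f") =~ ^cx_snapshot_[^_]+_.+$ ]]; then
    echo "OK: $f"
  else
    echo "Check before import: $f"
  fi
done
```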
defender-for-iot | Faqs Ot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/faqs-ot.md | Change network configuration settings before or after you activate your sensor u - **From the sensor UI**: [Update the OT sensor network configuration](how-to-manage-individual-sensors.md#update-the-ot-sensor-network-configuration) - **From the sensor CLI**: [Network configuration](cli-ot-sensor.md#network-configuration) -For more information, see [Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md), [Getting started with advanced CLI commands](references-work-with-defender-for-iot-cli-commands.md), and [CLI command reference from OT network sensors](cli-ot-sensor.md). +For more information, see [Activate and set up your OT network sensor](ot-deploy/activate-deploy-sensor.md), [Getting started with advanced CLI commands](references-work-with-defender-for-iot-cli-commands.md), and [CLI command reference from OT network sensors](cli-ot-sensor.md). ## How do I check the sanity of my deployment |
defender-for-iot | Getting Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md | Before you start, make sure that you have: - Access to the Azure portal as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner). For more information, see [Azure user roles for OT and Enterprise IoT monitoring with Defender for IoT](roles-azure.md). -- A plan for your Defender for IoT deployment, such as any system requirements, [traffic mirroring](best-practices/traffic-mirroring-methods.md), any [SSL/TLS certificates](ot-deploy/create-ssl-certificates.md), and so on. For more information, see [Plan your OT monitoring system](best-practices/plan-corporate-monitoring.md).-- If you want to use on-premises sensors, make sure that you have the [hardware appliances](ot-appliance-sizing.md) for those sensors and any administrative user permissions. - ## Add a trial plan This procedure describes how to add a trial Defender for IoT plan for OT networks to an Azure subscription. |
defender-for-iot | How To Activate And Set Up Your On Premises Management Console | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-on-premises-management-console.md | - Title: Activate and set up your on-premises management console -description: Activating the management console ensures that sensors are registered with Azure and sending information to the on-premises management console, and that the on-premises management console carries out management tasks on connected sensors. Previously updated : 06/06/2022----# Activate and set up your on-premises management console --Activation and setup of the on-premises management console ensures that: --- Network devices that you're monitoring through connected sensors are registered with an Azure account.-- Sensors send information to the on-premises management console.-- The on-premises management console carries out management tasks on connected sensors.-- You've installed an SSL certificate.--## Sign in for the first time --To sign in to the on-premises management console: --1. Go to the IP address you received for the on-premises management console during the system installation. --1. Enter the username and password you received for the on-premises management console during the system installation. --If you forgot your password, select the **Recover Password** option. -## Activate the on-premises management console --After you sign in for the first time, you need to activate the on-premises management console by getting and uploading an activation file. Activation files on the on-premises management console enforce the number of committed devices configured for your subscription and Defender for IoT plan. For more information, see [Manage Defender for IoT subscriptions](how-to-manage-subscriptions.md). --**To activate the on-premises management console**: --1. Sign in to the on-premises management console. --1. In the alert notification at the top of the screen, select **Take Action**. -- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/take-action.png" alt-text="Screenshot that shows the Take Action link in the alert at the top of the screen."::: --1. In the **Activation** pop-up screen, select **Azure portal**. -- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/azure-portal.png" alt-text="Screenshot that shows the Azure portal link in the pop-up message."::: - -1. Select a subscription to associate the on-premises management console to. Then select **Download on-premises management console activation file**. The activation file downloads. -- The on-premises management console can be associated to one or more subscriptions. The activation file is associated with all the selected subscriptions and the number of committed devices at the time of download. -- [!INCLUDE [root-of-trust](includes/root-of-trust.md)] -- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/multiple-subscriptions.png" alt-text="Screenshot that shows selecting multiple subscriptions." lightbox="media/how-to-manage-sensors-from-the-on-premises-management-console/multiple-subscriptions.png"::: -- If you haven't already onboarded Defender for IoT to a subscription, see [Onboard a Defender for IoT plan for OT networks](how-to-manage-subscriptions.md#onboard-a-defender-for-iot-plan-for-ot-networks). 
-- > [!Note] - > If you delete a subscription, you must upload a new activation file to the on-premises management console that was affiliated with the deleted subscription. --1. Go back to the **Activation** pop-up screen and select **CHOOSE FILE**. --1. Select the downloaded file. --After initial activation, the number of monitored devices might exceed the number of committed devices defined during onboarding. This issue occurs if you connect more sensors to the management console. If there's a discrepancy between the number of monitored devices and the number of committed devices, a warning appears on the management console. ---If this warning appears, you need to upload a [new activation file](#activate-the-on-premises-management-console). --### Activation expirations --After activating an on-premises management console, you'll need to apply new activation files on both the on-premises management console and connected sensors as follows: --|Location |Activation process | -||| -|**On-premises management console** | Apply a new activation file on your on-premises management console if you've [modified the number of committed devices](how-to-manage-subscriptions.md#edit-a-plan-for-ot-networks) in your subscription. | -|**Cloud-connected and locally managed sensors** | Cloud-connected and locally managed sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active. <br><br>If you're updating an OT sensor from a legacy version, you'll need to re-activate your updated sensor. | --For more information, see [Manage Defender for IoT subscriptions](how-to-manage-subscriptions.md). --### Activate expired licenses from versions earlier than 10.0 --For users with versions prior to 10.0, your license might expire and the following alert will appear: ---**To activate your license**: --1. Open a case with [support](https://portal.azure.com/?passwordRecovery=true&Microsoft_Azure_IoT_Defender=canary#create/Microsoft.Support). --1. Supply support with your **Activation ID** number. --1. Support will supply you with new license information in the form of a string of letters. --1. Read the terms and conditions, and select the checkbox to approve. --1. Paste the string into the space provided. -- :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/add-license.png" alt-text="Screenshot that shows pasting the string into the box."::: --1. Select **Activate**. --## Set up a certificate --After you install the management console, a local self-signed certificate is generated. This certificate is used to access the console. After an administrator signs in to the management console for the first time, that user is prompted to onboard an SSL/TLS certificate. --Two levels of security are available: --- Meet specific certificate and encryption requirements requested by your organization by uploading the CA-signed certificate.-- Allow validation between the management console and connected sensors. Validation is evaluated against a certificate revocation list and the certificate expiration date. 
*If validation fails, communication between the management console and the sensor is halted and a validation error is presented in the console.* This option is enabled by default after installation.--The console supports the following types of certificates: --- Private and Enterprise Key Infrastructure (private PKI)-- Public Key Infrastructure (public PKI)-- Locally generated on the appliance (locally self-signed)-- > [!IMPORTANT] - > We recommend that you don't use a self-signed certificate. The certificate isn't secure and should be used for test environments only. The owner of the certificate can't be validated, and the security of your system can't be maintained. Never use this option for production networks. --To upload a certificate: --1. When you're prompted after you sign in, define a certificate name. --1. Upload the CRT and key files. --1. Enter a passphrase and upload a PEM file if necessary. --You might need to refresh your screen after you upload the CA-signed certificate. --To disable validation between the management console and connected sensors: --1. Select **Next**. --1. Turn off the **Enable system-wide validation** toggle. --For information about uploading a new certificate, supported certificate files, and related items, see [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md). --## Connect sensors to the on-premises management console --Ensure that sensors send information to the on-premises management console. Make sure that the on-premises management console can perform backups, manage alerts, and carry out other activity on the sensors. Use the following procedures to verify that you make an initial connection between sensors and the on-premises management console. --Two options are available for connecting Microsoft Defender for IoT sensors to the on-premises management console: --- [Connect from the sensor console](#connect-sensors-to-the-on-premises-management-console-from-the-sensor-console)-- [Connect sensors by using tunneling](#connect-sensors-by-using-tunneling)--After connecting, set up sites and zones and assign each sensor to a zone to [monitor detected data segmented separately](monitor-zero-trust.md). --For more information, see [Create OT sites and zones on an on-premises management console](ot-deploy/sites-and-zones-on-premises.md). --### Connect sensors to the on-premises management console from the sensor console --**To connect sensors to the on-premises management console from the sensor console**: --1. In the on-premises management console, select **System Settings**. --1. Copy the string in the **Copy Connection String** box. -- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/connection-string.png" alt-text="Screenshot that shows copying the connection string for the sensor."::: --1. On the sensor, go to **System Settings** > **Connection to Management Console**. --1. Paste the copied connection string from the on-premises management console into the **Connection string** box. -- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/paste-connection-string.png" alt-text="Screenshot that shows pasting the copied connection string into the Connection string box."::: --1. Select **Connect**. --### Connect sensors by using tunneling --Enhance system security by preventing direct user access to the sensor. 
Instead of direct access, use proxy tunneling to let users access the sensor from the on-premises management console with a single firewall rule. This technique narrows the possibility of unauthorized access to the network environment beyond the sensor. The user's experience when signing in to the sensor remains the same. --Using tunneling allows you to connect to the on-premises management console from its IP address and a single port (9000 by default) to any sensor. --For example, the following image shows a sample architecture where users access the sensor consoles via the on-premises management console. ---**To set up tunneling at the on-premises management console**: --1. Sign in to the on-premises management console's CLI with the *cyberx* or the *support* user credentials and run the following command: -- ```bash - sudo cyberx-management-tunnel-enable - - ``` -- For more information on users, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users). --1. Allow a few minutes for the connection to start. - - When tunneling access is configured, the following URL syntax is used to access the sensor consoles: `https://<on-premises management console address>/<sensor address>/<page URL>` --You can also customize the port range to a number other than 9000. An example is 10000. --**To use a new port**: --Sign in to the on-premises management console and run the following command: --```bash -sudo cyberx-management-tunnel-enable --port 10000 - -``` --**To disable the connection**: --Sign in to the on-premises management console and run the following command: --```bash -cyberx-management-tunnel-disable - -``` --No configuration is needed on the sensor. --**To access the tunneling log files**: --1. **From the on-premises management console**: Sign in and go to */var/log/apache2.log*. -1. **From the sensor**: Sign in and go to */var/cyberx/logs/tunnel.log*. --## Next steps ---For more information, see: --- [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)-- [Manage individual sensors](how-to-manage-individual-sensors.md)-- [Create OT sites and zones on an on-premises management console](ot-deploy/sites-and-zones-on-premises.md) |
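Once tunneling is enabled as described above, a quick connectivity check can confirm that a sensor console is reachable through the on-premises management console using the documented URL syntax. The addresses below are placeholders; `-k` skips certificate validation for a quick test only, and depending on your setup you may need to include the configured tunneling port (9000 by default) in the URL.

```bash
# Expect an HTTP status code (for example 200 or 302) if the tunnel is working.
curl -k -s -o /dev/null -w "%{http_code}\n" "https://192.168.1.10/192.168.1.20/"

# If the check fails, review the tunnel log on the on-premises management console.
tail -n 50 /var/log/apache2.log
```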
defender-for-iot | How To Activate And Set Up Your Sensor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-sensor.md | - Title: Activate and set up your sensor -description: This article describes how to sign in and activate a sensor console. Previously updated : 06/06/2022----# Activate and set up your sensor --This article describes how to activate a sensor and perform initial setup. --Administrator users carry out activation when signing in for the first time and when activation management is required. Setup ensures that the sensor is configured to optimally detect and alert. --Security analysts and read-only users can't activate a sensor or generate a new password. --## Sign in and activation for administrator users --Administrators who sign in for the first time should verify that they have access to the activation and password recovery files for this sensor. These files were downloaded during sensor onboarding. If Administrators don't have these files, they can generate new ones via Defender for IoT in the Azure portal. The following Azure permissions are needed to generate the files: --- Azure security administrator-- Subscription contributor-- Subscription owner permissions--### First-time sign in and activation checklist --Before administrators sign in to the sensor console, administrator users should have access to: --- The sensor IP address that was defined during the installation.--- User sign in credentials for the sensor. If you downloaded an ISO for the sensor, use the default credentials that you received during the installation. We recommend that you create a new *Administrator* user after activation.--- An initial password. If you purchased a preconfigured sensor from Arrow, you need to generate a password when signing in for the first time.--- The activation file associated with this sensor. The file was generated and downloaded during sensor onboarding by Defender for IoT.---- An SSL/TLS CA-signed certificate that your company requires.---### About activation files --Your sensor was onboarded to Microsoft Defender for IoT in a specific management mode: --| Mode type | Description | -|--|--| -| **Cloud connected mode** | Information that the sensor detects is displayed in the sensor console. Alert information is also delivered to Azure and can be shared with other Azure services, such as Microsoft Sentinel. You can also enable automatic threat intelligence updates. | -| **Locally connected mode** | Information that the sensor detects is displayed in the sensor console. Detection information is also shared with the on-premises management console, if the sensor is connected to it. | --A locally connected, or cloud-connected activation file was generated and downloaded for this sensor during onboarding. The activation file contains instructions for the management mode of the sensor. *A unique activation file should be uploaded to each sensor you deploy.* The first time you sign in, you need to upload the relevant activation file for this sensor. ---### About certificates --Following sensor installation, a local self-signed certificate is generated. The certificate is used to access the sensor console. After administrators sign in to the console for the first time, they're prompted to onboard an SSL/TLS certificate. 
--Two levels of security are available: --- Meet specific certificate and encryption requirements requested by your organization, by uploading the CA-signed certificate.-- Allow validation between the management console and connected sensors. Validation is evaluated against a certificate revocation list and the certificate expiration date. *If validation fails, communication between the management console and the sensor is halted and a validation error appears in the console.* This option is enabled by default after installation. --The console supports the following certificate types: --- Private and Enterprise Key Infrastructure (private PKI)--- Public Key Infrastructure (public PKI)--- Locally generated on the appliance (locally self-signed) -- > [!IMPORTANT] - > We recommend that you don't use the default self-signed certificate. The certificate is not secure and should be used for test environments only. The owner of the certificate can't be validated, and the security of your system can't be maintained. Never use this option for production networks. --### Sign in and activate the sensor --**To sign in and activate:** --1. Go to the sensor console from your browser by using the IP defined during the installation. The sign-in dialog box opens. -- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/sensor-log-in-1.png" alt-text="Screenshot of a Defender for IoT sensor sign-in page."::: ---1. Enter the credentials defined during the sensor installation, or select the **Password recovery** option. If you purchased a preconfigured sensor from Arrow, generate a password first. For more information on password recovery, see [Investigate password failure at initial sign-in](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#investigate-password-failure-at-initial-sign-in). ---1. Select **Login/Next**. The **Sensor Network Settings** tab opens. -- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/sensor-log-in-wizard-activate.png" alt-text="Screenshot of the sensor network settings options when signing into the sensor."::: --1. Use this tab if you want to change the sensor network configuration before activation. The configuration parameters were defined during the software installation, or when you purchased a preconfigured sensor. The following parameters were defined: -- - IP address - - DNS - - Default gateway - - Subnet mask - - Host name -- You might want to update this information before activating the sensor. For example, you might need to change the preconfigured parameters defined by Arrow. You can also define proxy settings before activating your sensor. - - If you want to work with a proxy, enable the proxy toggle and add the proxy host, port and username. -- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/sensor-log-in-wizard-activate-proxy.png" alt-text="Screenshot of the proxy options for signing in to a sensor."::: --1. Select **Next.** The Activation tab opens. -- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/wizard-upload-activation-file.png" alt-text="Screenshot of a first time activation file upload option."::: --1. Select **Upload** and go to the activation file that you downloaded during the sensor onboarding. --1. Approve the terms and conditions. --1. Select **Activate**. The SSL/TLS certificate tab opens. Before defining certificates, see [Deploy SSL/TLS certificates on OT appliances](how-to-deploy-certificates.md). 
-- It is **not recommended** to use a locally generated certificate in a production environment. -- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/wizard-upload-activation-certificates-1.png" alt-text="Screenshot of the SSL/TLS Certificates page when signing in to a sensor."::: --1. Enable the **Import trusted CA certificate (recommended)** toggle. -1. Define a certificate name. -1. Upload the Key, CRT, and PEM files. -1. Enter a passphrase and upload a PEM file if necessary. -1. It's recommended to select **Enable certificate validation** to validate the connections between management console and connected sensors. --1. Select **Finish**. --You might need to refresh your screen after uploading the CA-signed certificate. --For information about uploading a new certificate, supported certificate parameters, and working with CLI certificate commands, see [Manage individual sensors](how-to-manage-individual-sensors.md). --### Activation expirations --After you've activated your sensor, cloud-connected and locally managed sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active. --If you're updating an OT sensor from a legacy version, you'll need to re-activate your updated sensor. --For more information, see [Manage Defender for IoT subscriptions](how-to-manage-subscriptions.md) and [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md). --### Activate an expired license (versions under 10.0) --For users with versions prior to 10.0, your license may expire, and the following alert will be displayed. -- :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/activation-popup.png" alt-text="Screenshot of a license expiration popup message."::: --**To activate your license:** --1. Open a case with [support](https://portal.azure.com/?passwordRecovery=true&Microsoft_Azure_IoT_Defender=canary#create/Microsoft.Support). --1. Supply support with your Activation ID number. --1. Support will supply you with new license information in the form of a string of letters. --1. Read the terms and conditions, and check the checkbox to approve. --1. Paste the string into space provided. -- :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/add-license.png" alt-text="Screenshot of the license activation box and button."::: --1. Select **Activate**. --### Subsequent sign ins --After first-time activation, the Microsoft Defender for IoT sensor console opens after sign-in without requiring an activation file or certificate definition. You only need your sign-in credentials. ---After your sign-in, the Microsoft Defender for IoT sensor console opens. -- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/initial-dashboard.png" alt-text="Screenshot of the initial sensor console dashboard Overview page." lightbox="media/how-to-activate-and-set-up-your-sensor/initial-dashboard.png"::: --## Initial setup and learning (for administrators) --After your first sign-in, the Microsoft Defender for IoT sensor starts to monitor your network automatically. Network devices will appear in the device map and device inventory sections. Microsoft Defender for IoT will begin to detect and alert you on all security and operational incidents that occur in your network. You can then create reports and queries based on the detected information. 
--Initially this activity is carried out in the Learning mode, which instructs your sensor to learn your network's usual activity. For example, the sensor learns devices discovered in your network, protocols detected in the network, and file transfers that occur between specific devices. This activity becomes your network's baseline activity. --### Review and update basic system settings --Review the sensor's system settings to make sure the sensor is configured to optimally detect and alert. --Define the sensor's system settings. For example: --- Define ICS (or IoT) and segregated subnets.--- Define port aliases for site-specific protocols.--- Define VLANs and names that are in use.--- If DHCP is in use, define legitimate DHCP ranges.--- Define integration with Active Directory and mail server as appropriate.--### Disable Learning mode --After adjusting the system settings, you can let the sensor run in Learning mode until you feel that system detections accurately reflect your network activity. --The learning mode should run for about 2 to 6 weeks, depending on your network size and complexity. After you disable Learning mode, any activity that differs from your baseline activity will trigger an alert. --**To disable learning mode:** --- Select **System Settings**, **Network Monitoring,** **Detection Engines and Network Modeling** and disable the **Learning** toggle.--## First-time sign in for security analysts and read-only users --Before you sign in, verify that you have: --- The sensor IP address.-- Sign in credentials that your administrator provided.- - :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/sensor-log-in-1.png" alt-text="Screenshot of the sensor sign-in page after the initial setup."::: ---## Console tools: Overview --You can access console tools from the side menu. Tools help you: -- Gain deep, comprehensive visibility into your network-- Analyze network risks, vulnerabilities, trends and statistics-- Set up your sensor for maximum performance-- Create and manage users -- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/main-page-side-bar.png" alt-text="Screenshot of the sensor console's main menu on the left."::: --### Discover --| Tools| Description | -| --|--| -| Overview | View a dashboard with high-level information about your sensor deployment, alerts, traffic, and more. | -| Device map | View the network devices, device connections, Purdue levels, and device properties in a map. Various zooms, highlight, and filter options are available to help you gain the insight you need. For more information, see [Investigate devices on a device map](how-to-work-with-the-sensor-device-map.md) | -| Device inventory | The Device inventory displays a list of device attributes that this sensor detects. Options are available to: <br /> - Sort, or filter the information according to the table fields, and see the filtered information displayed. <br /> - Export information to a CSV file. <br /> - Import Windows registry details. For more information, see [Detect Windows workstations and servers with a local script](detect-windows-endpoints-script.md).| -| Alerts | Alerts are triggered when sensor engines detect changes or suspicious activity in network traffic that requires your attention. For more information, see [View and manage alerts on your OT sensor](how-to-view-alerts.md).| --### Analyze --| Tools| Description | -||| -| Event timeline | View a timeline with information about alerts, network events, and user operations. 
For more information, see [Track sensor activity](how-to-track-sensor-activity.md).| -| Data mining | Generate comprehensive and granular information about your network's devices at various layers. For more information, see [Sensor data mining queries](how-to-create-data-mining-queries.md).| -| Trends and Statistics | View trends and statistics about an extensive range of network traffic and activity. As a small example, display charts and graphs showing top traffic by port, connectivity drops by hours, S7 traffic by control function, number of devices per VLAN, SRTP errors by day, or Modbus traffic by function. For more information, see [Sensor trends and statistics reports](how-to-create-trends-and-statistics-reports.md). -| Risk Assessment | Proactively address vulnerabilities, identify risks such as missing patches or unauthorized applications. Detect changes to device configurations, controller logic, and firmware. Prioritize fixes based on risk scoring and automated threat modeling. For more information, see [Risk assessment reporting](how-to-create-risk-assessment-reports.md#create-risk-assessment-reports).| -| Attack Vector | Display a graphical representation of a vulnerability chain of exploitable devices. These vulnerabilities can give an attacker access to key network devices. The Attack Vector Simulator calculates attack vectors in real time and analyzes all attack vectors for a specific target. For more information, see [Attack vector reporting](how-to-create-attack-vector-reports.md#create-attack-vector-reports).| --### Manage --| Tools| Description | -||| -| System settings | Configure the system settings. For example, define DHCP settings, provide mail server details, or create port aliases. | -| Custom alert rules | Use custom alert rules to more specifically pinpoint activity or traffic of interest to you. For more information, see [Create custom alert rules on an OT sensor](how-to-accelerate-alert-incident-response.md#create-custom-alert-rules-on-an-ot-sensor). | -| Users | Define users and roles with various access levels. For more information, see [Create and manage users on an OT network sensor](manage-users-sensor.md). | -| Forwarding | Forward alert information to partners that integrate with Defender for IoT, for example, Microsoft Sentinel, Splunk, ServiceNow. You can also send to email addresses, webhook servers, and more. <br /> See [Forward alert information](how-to-forward-alert-information-to-partners.md) for details. | ---**Support** --| Tool| Description | -|-|| -| Support | Contact [Microsoft Support](https://support.microsoft.com/) for help.| --## Review system messages --System messages provide general information about your sensor that may require your attention, for example if: --- your sensor activation file is expired or will expire soon-- your sensor isn't detecting traffic-- your sensor SSL certificate is expired or will expire soon-- -**To review system messages:** -1. Sign into the sensor -1. Select the **System Messages** icon (Bell icon). ---## Next steps --For more information, see: --- [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md)--- [Onboard a sensor](tutorial-onboarding.md#onboard-and-activate-the-virtual-sensor)--- [Manage sensor activation files](how-to-manage-individual-sensors.md#upload-a-new-activation-file)--- [Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md) |
defender-for-iot | How To Deploy Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-deploy-certificates.md | - Title: Deploy SSL/TLS certificates on OT appliances - Microsoft Defender for IoT. -description: Learn how to deploy SSL/TLS certificates on Microsoft Defender for IoT OT network sensors and on-premises management consoles. Previously updated : 01/05/2023----# Deploy SSL/TLS certificates on OT appliances --This article describes how to create and deploy SSL/TLS certificates on OT network sensors and on-premises management consoles. Defender for IoT uses SSL/TLS certificates to secure communication between the following system components: --- Between users and the OT sensor or on-premises management console UI access-- Between OT sensors and an on-premises management console, including [API communication](references-work-with-defender-for-iot-apis.md)-- Between an on-premises management console and a high availability (HA) server, if configured-- Between OT sensors or on-premises management consoles and partners servers defined in [alert forwarding rules](how-to-forward-alert-information-to-partners.md)--You can deploy SSL/TLS certificates during initial configuration as well as later on. --Defender for IoT validates certificates against the certificate expiration date and against a passphrase, if one is defined. Validations against a Certificate Revocation List (CRL) and the certificate trust chain are available as well, though not mandatory. Invalid certificates can't be uploaded to OT sensors or on-premises management consoles, and will block encrypted communication between Defender for IoT components. --Each certificate authority (CA)-signed certificate must have both a `.key` file and a `.crt` file, which are uploaded to OT network sensors and on-premises management consoles after the first sign-in. While some organizations may also require a `.pem` file, a `.pem` file isn't required for Defender for IoT. --Make sure to create a unique certificate for each OT sensor, on-premises management console, and HA server, where each certificate meets required parameter criteria. --## Prerequisites --To perform the procedures described in this article, make sure that: --- You have a security, PKI or certificate specialist available to oversee the certificate creation-- You can access the OT network sensor or on-premises management console as an **Admin** user.-- For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md). --## Deploy an SSL/TLS certificate --Deploy your SSL/TLS certificate by importing it to your OT sensor or on-premises management console. --Verify that your SSL/TLS certificate [meets the required parameters](#verify-certificate-file-parameter-requirements), and that you have [access to a CRL server](#verify-crl-server-access). --### Deploy a certificate on an OT sensor --1. Sign into your OT sensor and select **System settings** > **Basic** > **SSL/TLS certificate**. --1. In the **SSL/TLS certificate** pane, select one of the following, and then follow the instructions in the relevant tab: -- - **Import a trusted CA certificate (recommended)** - - **Use Locally generated self-signed certificate (Not recommended)** -- # [Trusted CA certificates](#tab/import-trusted-ca-certificate) - - 1. Enter the following parameters: - - | Parameter | Description | - ||| - | **Certificate Name** | Enter your certificate name. 
| - | **Passphrase** - *Optional* | Enter a passphrase. | - | **Private Key (KEY file)** | Upload a Private Key (KEY file). | - | **Certificate (CRT file)** | Upload a Certificate (CRT file). | - | **Certificate Chain (PEM file)** - *Optional* | Upload a Certificate Chain (PEM file). | - - Select **Use CRL (Certificate Revocation List) to check certificate status** to validate the certificate against a [CRL server](#verify-crl-server-access). The certificate is checked once during the import process. -- For example: -- :::image type="content" source="media/how-to-deploy-certificates/recommended-ssl.png" alt-text="Screenshot of importing a trusted CA certificate." lightbox="media/how-to-deploy-certificates/recommended-ssl.png"::: - - # [Locally generated self-signed certificates](#tab/locally-generated-self-signed-certificate) - - > [!NOTE] - > Using self-signed certificates in a production environment is not recommended, as it leads to a less secure environment. - > We recommend using self-signed certificates in test environments only. - > The owner of the certificate cannot be validated and the security of your system cannot be maintained. -- Select **Confirm** to acknowledge the warning. -- --1. In the **Validation for on-premises management console certificates** area, select **Required** if SSL/TLS certificate validation is required. Otherwise, select **None**. --1. Select **Save** to save your certificate settings. --### Deploy a certificate on an on-premises management console --1. Sign into your on-premises management console and select **System settings** > **SSL/TLS certificates**. --1. In the **SSL/TLS certificate** pane, select one of the following, and then follow the instructions in the relevant tab: -- - **Import a trusted CA certificate** - - **Use Locally generated self-signed certificate (Insecure, not recommended)** -- # [Trusted CA certificates](#tab/cm-import-trusted-ca-certificate) - - 1. In the **SSL/TLS Certificates** dialog, select **Add Certificate**. -- 1. Enter the following parameters: - - | Parameter | Description | - ||| - | **Certificate Name** | Enter your certificate name. | - | **Passphrase** - *Optional* | Enter a passphrase. | - | **Private Key (KEY file)** | Upload a Private Key (KEY file). | - | **Certificate (CRT file)** | Upload a Certificate (CRT file). | - | **Certificate Chain (PEM file)** - *Optional* | Upload a Certificate Chain (PEM file). | -- For example: -- :::image type="content" source="media/how-to-deploy-certificates/management-ssl-certificate.png" alt-text="Screenshot of importing a trusted CA certificate." lightbox="media/how-to-deploy-certificates/management-ssl-certificate.png"::: -- # [Locally generated self-signed certificates](#tab/cm-locally-generated-self-signed-certificate) - - > [!NOTE] - > Using self-signed certificates in a production environment is not recommended, as it leads to a less secure environment. - > We recommend using self-signed certificates in test environments only. - > The owner of the certificate cannot be validated and the security of your system cannot be maintained. -- Select **I CONFIRM** to acknowledge the warning. -- --1. Select the **Enable Certificate Validation** option to turn on system-wide validation for SSL/TLS certificates with the issuing [Certificate Authority](#create-ca-signed-ssltls-certificates) and [Certificate Revocation Lists](#verify-crl-server-access). --1. Select **SAVE** to save your certificate settings. 
--You can also [import the certificate to your OT sensor using CLI commands](references-work-with-defender-for-iot-cli-commands.md#tlsssl-certificate-commands). --### Verify certificate file parameter requirements --Verify that the certificates meet the following requirements: --- **CRT file requirements**:-- | Field | Requirement | - ||| - | **Signature Algorithm** | SHA256RSA | - | **Signature Hash Algorithm** | SHA256 | - | **Valid from** | A valid past date | - | **Valid To** | A valid future date | - | **Public Key** | RSA 2048 bits (Minimum) or 4096 bits | - | **CRL Distribution Point** | URL to a CRL server. If your organization doesn't [validate certificates against a CRL server](#verify-crl-server-access), remove this line from the certificate. | - | **Subject CN (Common Name)** | domain name of the appliance, such as *sensor.contoso.com*, or *.contoso.com* | - | **Subject (C)ountry** | Certificate country code, such as `US` | - | **Subject (OU) Org Unit** | The organization's unit name, such as *Contoso Labs* | - | **Subject (O)rganization** | The organization's name, such as *Contoso Inc.* | -- > [!IMPORTANT] - > While certificates with other parameters might work, they aren't supported by Defender for IoT. Additionally, wildcard SSL certificates, which are public key certificates that can be used on multiple subdomains such as *.contoso.com*, are insecure and aren't supported. - > Each appliance must use a unique CN. --- **Key file requirements**: Use either RSA 2048 bits or 4096 bits. Using a key length of 4096 bits will slow down the SSL handshake at the start of each connection, and increase the CPU usage during handshakes.--- (Optional) Create a certificate chain, which is a `.pem` file that contains the certificates of all the certificate authorities in the chain of trust that led to your certificate. Certificate chain files support bag attributes.--### Verify CRL server access --If your organization validates certificates, your OT sensors and on-premises management console must be able to access the CRL server defined by the certificate. By default, certificates access the CRL server URL via HTTP port 80. However, some organizational security policies block access to this port. --If your OT sensors and on-premises management consoles can't access your CRL server on port 80, you can use one of the following workarounds: --- **Define another URL and port in the certificate**:-- - The URL you define must be configured as `http: //` and not `https://` - - Make sure that the destination CRL server can listen on the port you define --- **Use a proxy server that can access the CRL on port 80**-- For more information, see [Forward OT alert information](how-to-forward-alert-information-to-partners.md). --If validation fails, communication between the relevant components is halted and a validation error is presented in the console. --## Create a certificate --Create either a CA-signed SSL/TLS certificate or a self-signed SSL/TLS certificate (not recommended). --### Create CA-signed SSL/TLS certificates --Use a certificate management platform, such as an automated PKI management platform, to create a certificate. Verify that the certificate meets [certificate file requirements](#verify-certificate-file-parameter-requirements), and then [test the certificate](#test-your-ssltls-certificates) file you created when you're done. --If you aren't carrying out certificate validation, remove the CRL URL reference in the certificate. 
For more information, see [certificate file requirements](#verify-certificate-file-parameter-requirements). --Consult a security, PKI, or other qualified certificate lead if you don't have an application that can automatically create certificates. --You can also convert existing certificate files if you don't want to create new ones. --### Create self-signed SSL/TLS certificates --Create self-signed SSL/TLS certificates by first [downloading a security certificate](#download-a-security-certificate) from the OT sensor or on-premises management console and then exporting it to the required file types. --> [!NOTE] -> While you can use a locally-generated and self-signed certificate, we do not recommend this option. --**Export as a certificate file:** --After downloading the security certificate, use a certificate management platform to create the following types of SSL/TLS certificate files: --| File type | Description | -||| -| **.crt – certificate container file** | A `.pem`, or `.der` file, with a different extension for support in Windows Explorer.| -| **.key – Private key file** | A key file is in the same format as a `.pem` file, with a different extension for support in Windows Explorer.| -| **.pem – certificate container file (optional)** | Optional. A text file with a Base64-encoding of the certificate text, and a plain-text header and footer to mark the beginning and end of the certificate. | --For example: --1. Open the downloaded certificate file and select the **Details** tab > **Copy to file** to run the **Certificate Export Wizard**. --1. In the **Certificate Export Wizard**, select **Next** > **DER encoded binary X.509 (.CER)** > and then select **Next** again. --1. In the **File to Export** screen, select **Browse**, choose a location to store the certificate, and then select **Next**. --1. Select **Finish** to export the certificate. --> [!NOTE] -> You may need to convert existing files types to supported types. --### Check your certificate against a sample --Use the following sample certificate to compare to the certificate you've created, making sure that the same fields exist in the same order. 
--``` Sample SSL certificate -Bag Attributes: <No Attributes> -subject=C = US, S = Illinois, L = Springfield, O = Contoso Ltd, OU= Contoso Labs, CN= sensor.contoso.com, E -= support@contoso.com -issuer C=US, S = Illinois, L = Springfield, O = Contoso Ltd, OU= Contoso Labs, CN= Cert-ssl-root-da2e22f7-24af-4398-be51- -e4e11f006383, E = support@contoso.com BEGIN CERTIFICATE---MIIESDCCAZCgAwIBAgIIEZK00815Dp4wDQYJKoZIhvcNAQELBQAwgaQxCzAJBgNV -BAYTAIVTMREwDwYDVQQIDAhJbGxpbm9pczEUMBIGA1UEBwwLU3ByaW5nZmllbGQx -FDASBgNVBAoMCONvbnRvc28gTHRKMRUWEwYDVQQLDAXDb250b3NvIExhYnMxGzAZ -BgNVBAMMEnNlbnNvci5jb250b3NvLmNvbTEIMCAGCSqGSIb3DQEJARYTc3VwcG9y -dEBjb250b3NvLmNvbTAeFw0yMDEyMTcxODQwMzhaFw0yMjEyMTcxODQwMzhaMIGK -MQswCQYDVQQGEwJVUzERMA8GA1UECAwISWxsaW5vaXMxFDASBgNVBAcMC1Nwcmlu -Z2ZpZWxkMRQwEgYDVQQKDAtDb250b3NvIEX0ZDEVMBMGA1UECwwMQ29udG9zbyBM -YWJzMRswGQYDVQQDDBJzZW5zb3luY29udG9zby5jb20xljAgBgkqhkiG9w0BCQEW -E3N1cHBvcnRAY29udG9zby5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK -AoIBAQDRGXBNJSGJTfP/K5ThK8vGOPzh/N8AjFtLvQiiSfkJ4cxU/6d1hNFEMRYG -GU+jY1Vknr0|A2nq7qPB1BVenW3 MwsuJZe Floo123rC5ekzZ7oe85Bww6+6eRbAT -WyqpvGVVpfcsloDznBzfp5UM9SVI5UEybllod31MRR/LQUEIKLWILHLW0eR5pcLW -pPLtOW7wsK60u+X3tqFo1AjzsNbXbEZ5pnVpCMqURKSNmxYpcrjnVCzyQA0C0eyq -GXePs9PL5DXfHy1x4WBFTd98X83 pmh/vyydFtA+F/imUKMJ8iuOEWUtuDsaVSX0X -kwv2+emz8CMDLsbWvUmo8Sg0OwfzAgMBAAGjfDB6MB0GA1UdDgQWBBQ27hu11E/w -21Nx3dwjp0keRPuTsTAfBgNVHSMEGDAWgBQ27hu1lE/w21Nx3dwjp0keRPUTSTAM -BgNVHRMEBTADAQH/MAsGA1UdDwQEAwIDqDAdBgNVHSUEFjAUBggrBgEFBQcDAgYI -KwYBBQUHAwEwDQYJKoZIhvcNAQELBQADggEBADLsn1ZXYsbGJLLzsGegYv7jmmLh -nfBFQqucORSQ8tqb2CHFME7LnAMfzFGpYYV0h1RAR+1ZL1DVtm+IKGHdU9GLnuyv -9x9hu7R4yBh3K99ILjX9H+KACvfDUehxR/ljvthoOZLalsqZIPnRD/ri/UtbpWtB -cfvmYleYA/zq3xdk4vfOI0YTOW11qjNuBIHh0d5S5sn+VhhjHL/s3MFaScWOQU3G -9ju6mQSo0R1F989aWd+44+8WhtOEjxBvr+17CLqHsmbCmqBI7qVnj5dHvkh0Bplw -zhJp150DfUzXY+2sV7Uqnel9aEU2Hlc/63EnaoSrxx6TEYYT/rPKSYL+++8= END CERTIFICATE---``` --### Test your SSL/TLS certificates --If you want to check the information within the certificate `.csr` file or private key file, use the following CLI commands: --- **Check a Certificate Signing Request (CSR)**: Run `openssl req -text -noout -verify -in CSR.csr`-- **Check a private key**: Run `openssl rsa -in privateKey.key -check`-- **Check a certificate**: Run `openssl x509 -in certificate.crt -text -noout`--If these tests fail, review [certificate file parameter requirements](#verify-certificate-file-parameter-requirements) to verify that your file parameters are accurate, or consult your certificate specialist. --## Troubleshoot --### Download a security certificate --1. After [installing your OT sensor software](ot-deploy/install-software-ot-sensor.md) or [on-premises management console](ot-deploy/install-software-on-premises-management-console.md), go to the sensor's or on-premises management console's IP address in a browser. --1. Select the :::image type="icon" source="media/how-to-deploy-certificates/warning-icon.png" border="false"::: **Not secure** alert in the address bar of your web browser, then select the **>** icon next to the warning message **"Your connection to this site isn't secure"**. For example: -- :::image type="content" source="media/how-to-deploy-certificates/connection-is-not-secure.png" alt-text="Screenshot of web page with a Not secure warning in the address bar." lightbox="media/how-to-deploy-certificates/connection-is-not-secure.png"::: --1. 
Select the :::image type="icon" source="media/how-to-deploy-certificates/show-certificate-icon.png" border="false"::: **Show certificate** icon to view the security certificate for this website. --1. In the **Certificate viewer** pane, select the **Details** tab, then select **Export** to save the file on your local machine. --### Import a sensor's locally signed certificate to your certificate store --After creating your locally signed certificate, import it to a trusted storage location. For example: --1. Open the security certificate file and, in the **General** tab, select **Install Certificate** to start the **Certificate Import Wizard**. --1. In **Store Location**, select **Local Machine**, then select **Next**. --1. If a **User Account Control** prompt appears, select **Yes** to allow the app to make changes to your device. --1. In the **Certificate Store** screen, select **Place all certificates in the following store**, then **Browse**, and then select the **Trusted Root Certification Authorities** store. When you're done, select **Next**. For example: -- :::image type="content" source="media/how-to-deploy-certificates/certificate-store-trusted-root.png" alt-text="Screenshot of the certificate store screen where you can browse to the trusted root folder." lightbox="media/how-to-deploy-certificates/certificate-store-trusted-root.png"::: --1. Select **Finish** to complete the import. --### Validate the certificate's common name --1. To view the certificate's common name, open the certificate file, select the **Details** tab, and then select the **Subject** field. -- The certificate's common name appears next to **CN**. --1. Sign in to your sensor console without a secure connection. In the **Your connection isn't private** warning screen, you might see a **NET::ERR_CERT_COMMON_NAME_INVALID** error message. --1. Select the error message to expand it, and then copy the string next to **Subject**. For example: -- :::image type="content" source="media/how-to-deploy-certificates/connection-is-not-private-subject.png" alt-text="Screenshot of the connection isn't private screen with the details expanded." lightbox="media/how-to-deploy-certificates/connection-is-not-private-subject.png"::: -- The subject string should match the **CN** string in the security certificate's details. --1. In your local file explorer, browse to the hosts file, such as at **This PC > Local Disk (C:) > Windows > System32 > drivers > etc**, and open the **hosts** file. --1. In the hosts file, add a line at the end of the document with the sensor's IP address and the SSL certificate's common name that you copied in the previous steps. When you're done, save the changes. For example: -- :::image type="content" source="media/how-to-deploy-certificates/hosts-file.png" alt-text="Screenshot of the hosts file." lightbox="media/how-to-deploy-certificates/hosts-file.png"::: --### Troubleshoot certificate upload errors --You can't upload certificates to your OT sensors or on-premises management consoles if the certificates aren't created properly or are invalid. Use the following table to determine what action to take if your certificate upload fails and an error message is shown: --| **Certificate validation error** | **Recommendation** | -|--|--| -| **Passphrase does not match to the key** | Make sure you have the correct passphrase. 
If the problem continues, try recreating the certificate using the correct passphrase. | -| **Cannot validate chain of trust. The provided Certificate and Root CA don't match.** | Make sure a `.pem` file correlates to the `.crt` file. <br> If the problem continues, try recreating the certificate using the correct chain of trust, as defined by the `.pem` file. | -| **This SSL certificate has expired and isn't considered valid.** | Create a new certificate with valid dates.| -|**This certificate has been revoked by the CRL and can't be trusted for a secure connection** | Create a new unrevoked certificate. | -|**The CRL (Certificate Revocation List) location is not reachable. Verify the URL can be accessed from this appliance** | Make sure that your network configuration allows the sensor or on-premises management console to reach the CRL server defined in the certificate. <br> For more information, see [CRL server access](#verify-crl-server-access). | -|**Certificate validation failed** | This indicates a general error in the appliance. <br> Contact [Microsoft Support](https://support.microsoft.com/supportforbusiness/productselection?sapId=82c8f35-1b8e-f274-ec11-c6efdd6dd099).| --## Next steps --For more information, see: --- [Identify required appliances](how-to-identify-required-appliances.md)-- [Manage individual sensors](how-to-manage-individual-sensors.md)-- [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md) |
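In addition to recreating certificates, you can often confirm the cause of the chain-of-trust and key-mismatch errors listed above locally with `openssl` before uploading again. The following commands are a minimal sketch; the file names `certificate.cer`, `certificate.crt`, `certificate.key`, and `root.pem` are placeholders for your own files, and the modulus comparison assumes an RSA key.

```bash
# Convert a DER-encoded .cer file (as exported by the wizard) to a PEM-format .crt file
openssl x509 -inform der -in certificate.cer -out certificate.crt

# Verify that the certificate chains up to the root CA defined in root.pem
openssl verify -CAfile root.pem certificate.crt

# Confirm that the certificate and private key belong together by comparing their modulus hashes
openssl x509 -noout -modulus -in certificate.crt | openssl md5
openssl rsa -noout -modulus -in certificate.key | openssl md5
```

If the two modulus hashes differ, the certificate and key don't match, and the upload fails regardless of the passphrase you supply.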
defender-for-iot | How To Enhance Port And Vlan Name Resolution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-enhance-port-and-vlan-name-resolution.md | - Title: Customize port and VLAN names on OT network sensors - Microsoft Defender for IoT -description: Learn how to customize port and VLAN names on Microsoft Defender for IoT OT network sensors. Previously updated : 01/12/2023----# Customize port and VLAN names on OT network sensors --Enrich device data shown in Defender for IoT by customizing port and VLAN names on your OT network sensors. --For example, you might want to assign a name to a non-reserved port that shows unusually high activity in order to call it out, or assign a name to a VLAN number to identify it quicker. --## Prerequisites --To customize port and VLAN names, you must be able to access the OT network sensor as an **Admin** user. --For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md). --## Customize names of detected ports --Defender for IoT automatically assigns names to most universally reserved ports, such as DHCP or HTTP. However, you might want to customize the name of a specific port to highlight it, such as when you're watching a port with unusually high detected activity. --Port names are shown in Defender for IoT when [viewing device groups from the OT sensor's device map](how-to-work-with-the-sensor-device-map.md), or when you create OT sensor reports that include port information. --**To customize a port name:** --1. Sign into your OT sensor as an **Admin** user. --1. Select **System settings** on the left and then, under **Network monitoring**, select **Port Naming**. --1. In the **Port naming** pane that appears, enter the port number you want to name, the port's protocol, and a meaningful name. Supported protocol values include: **TCP**, **UDP**, and **BOTH**. --1. Select **+ Add port** to customize an additional port, and **Save** when you're done. --## Customize a VLAN name --VLANs are either discovered automatically by the OT network sensor or added manually. Automatically discovered VLANs can't be edited or deleted, but manually added VLANs require a unique name. If a VLAN isn't explicitly named, the VLAN's number is shown instead. --VLAN's support is based on 802.1q (up to VLAN ID 4094). --VLAN names aren't synchronized between the OT network sensor and the on-premises management console. If you want to view customized VLAN names on the on-premises management console, [define the VLAN names](how-to-manage-the-on-premises-management-console.md#define-vlan-names) there as well. --**To configure VLAN names on an OT network sensor:** --1. Sign in to your OT sensor as an **Admin** user. --1. Select **System Settings** on the left and then, under **Network monitoring**, select **VLAN Naming**. --1. In the **VLAN naming** pane that appears, enter a VLAN ID and unique VLAN name. VLAN names can contain up to 50 ASCII characters. --1. Select **+ Add VLAN** to customize an additional VLAN, and **Save** when you're done. --1. **For Cisco switches**: Add the `monitor session 1 destination interface XX/XX encapsulation dot1q` command to the SPAN port configuration, where *XX/XX* is the name and number of the port. 
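The `monitor session` command in the last step is switch configuration rather than a sensor setting. The following sketch shows where it might sit in a Catalyst-style SPAN session; the session number and interface names are placeholders, and the exact syntax varies by switch model and OS version.

```
! Example SPAN session on a Cisco Catalyst-style switch (placeholder interfaces)
monitor session 1 source interface GigabitEthernet1/0/2 - 10 both
monitor session 1 destination interface GigabitEthernet1/0/48 encapsulation dot1q
```

The `encapsulation dot1q` keyword preserves the VLAN tags on the mirrored traffic, which is what lets the sensor associate traffic with the VLAN IDs you name in this procedure.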
--## Next steps --> [!div class="nextstepaction"] -> [Investigate detected devices from the OT sensor device inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md) --> [!div class="nextstepaction"] -> [Create sensor trends and statistics reports](how-to-create-trends-and-statistics-reports.md) --> [!div class="nextstepaction"] -> [Create sensor data mining queries](how-to-create-data-mining-queries.md) |
defender-for-iot | How To Forward Alert Information To Partners | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md | If your forwarding alert rules aren't working as expected, check the following d - **Certificate validation**. Forwarding rules for [Syslog CEF](#syslog-server-actions), [Microsoft Sentinel](integrate-overview.md#microsoft-sentinel), and [QRadar](tutorial-qradar.md) support encryption and certificate validation. - If your OT sensors or on-premises management console are configured to [validate certificates](how-to-deploy-certificates.md#verify-crl-server-access) and the certificate can't be verified, the alerts aren't forwarded. + If your OT sensors or on-premises management console are configured to [validate certificates](ot-deploy/create-ssl-certificates.md#verify-crl-server-access) and the certificate can't be verified, the alerts aren't forwarded. In these cases, the sensor or on-premises management console is the session's client and initiator. Certificates are typically received from the server or use asymmetric encryption, where a specific certificate is provided to set up the integration. |
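If forwarded alerts stop because certificate validation fails, one quick check is whether the CRL endpoint embedded in the partner server's certificate is reachable from the sensor's network segment. A minimal sketch, assuming the server certificate is saved locally as `server.crt`; the file name and CRL URL are placeholders:

```bash
# Print the CRL distribution point(s) embedded in the certificate
openssl x509 -in server.crt -noout -text | grep -A 4 "CRL Distribution"

# Test that the CRL URL responds from this network segment
curl -I http://crl.contoso.com/root.crl
```

If the CRL isn't reachable, fix the network path to the CRL server before expecting alerts to forward.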
defender-for-iot | How To Gain Insight Into Global Regional And Local Threats | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-gain-insight-into-global-regional-and-local-threats.md | - Title: Gain insight into global, regional, and local threats -description: Gain insight into global, regional, and local threats by using the site map in the on-premises management console. Previously updated : 01/01/2023----# Gain insight into global, regional, and local threats --The site map in the on-premises management console helps you achieve full security coverage by dividing your network into geographical and logical segments that reflect your business topology: --- **Geographical facility level**: A site reflects many devices grouped by geographical location and presented on the map. By default, Microsoft Defender for IoT provides you with a world map. You update the map to reflect your organizational or business structure. For example, use a map that reflects sites across a specific country, city, or industrial campus. When the site color changes on the map, it gives the SOC team an indication of critical system status in the facility.-- The map is interactive: you can open each site and drill down into its information. --- **Global logical layer**: A business unit divides your enterprise into logical segments according to specific industries. When you do this, your business topology is reflected on the map.-- For example, a global company that contains glass factories, plastic factories, and automobile factories can be managed as three different business units. A physical site located in Toronto includes three different glass production lines, a plastic production line, and a truck engine production line. So, this site has representatives of all three business units. --- **Geographical region level**: Create regions to divide a global enterprise into geographical regions. For example, the company that we described might use the regions North America, Western Europe, and Eastern Europe. North America has factories from all three business units. Western Europe has automobile factories and glass factories, and Eastern Europe has only plastic factories.--- **Local logical segment level**: A zone is a logical segment within a site that defines, for example, a functional area or production line. Working with zones lets you enforce security policies that are relevant to the zone definition. For example, a site that contains five production lines can be segmented into five zones.--- **Local view level**: A local view of a single sensor installation provides insight into the operational and security status of connected devices.--## Work with site map views --The on-premises management console provides an overall view of your industrial network in a context-related map. The general map view presents the global map of your organization with the geographical location of each site. ---### Color-coded map views --**Green**: The number of security events is below the threshold that Defender for IoT has defined for your system. No action is needed. --**Yellow**: The number of security events has reached the threshold that Defender for IoT has defined for your system. Consider investigating the events. --**Red**: The number of security events is beyond the threshold that Defender for IoT has defined for your system. Take immediate action. 
--### Risk-level map views --**Risk Assessment**: The Risk Assessment view displays information on site risks. Risk information helps you prioritize mitigation and build a road map to plan security improvements. --**Incident Response**: Get a centralized view of all unacknowledged alerts on each site across the enterprise. You can drill down and manage alerts detected in a specific site. ---**Malicious Activity**: If malware was detected, the site appears in red. This indicates that you should take immediate action. ---**Operational Alerts**: This map view for OT systems provides a better understanding of which OT system might experience operational incidents, such as PLC stops, firmware upload, and program upload. ---To choose a map view: --1. Select **Default View** from the map. -2. Select a view. ---## Update the site map image --Defender for IoT provides a default world map. You can change it to reflect your organization: a country map or a city map, for example. --To replace the map: --1. On the left pane, select **System Settings**. --2. Select the **Change Site Map** and upload the graphic file to replace the default map. --## Next step --[View alerts](how-to-view-alerts.md) |
defender-for-iot | How To Manage Individual Sensors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md | The following procedures describe how to deploy updated SSL/TLS certificates, su If an upload fails, contact your security or IT administrator. For more information, see [SSL/TLS certificate requirements for on-premises resources](best-practices/certificate-requirements.md) and [Create SSL/TLS certificates for OT appliances](ot-deploy/create-ssl-certificates.md). -1. In the **Validation for on-premises management console certificates** area, select **Required** if SSL/TLS certificate validation is required. Otherwise, select **None**. +1. In the **Validation of on-premises management console certificate** area, select **Mandatory** if SSL/TLS certificate validation is required. Otherwise, select **None**. - If you've selected **Required** and validation fails, communication between relevant components is halted, and a validation error is shown on the sensor. For more information, see [CRT file requirements](best-practices/certificate-requirements.md#crt-file-requirements). + If you've selected **Mandatory** and validation fails, communication between relevant components is halted, and a validation error is shown on the sensor. For more information, see [CRT file requirements](best-practices/certificate-requirements.md#crt-file-requirements). 1. Select **Save** to save your certificate settings. When you're done, use the following procedures to validate your certificate file 1. Select the **Confirm** option to confirm the warning. -1. In the **Validation for on-premises management console certificates** area, select **Required** if SSL/TLS certificate validation is required. Otherwise, select **None**. +1. In the **Validation of on-premises management console certificate** area, select **Mandatory** if SSL/TLS certificate validation is required. Otherwise, select **None**. If this option is toggled on and validation fails, communication between relevant components is halted, and a validation error is shown on the sensor. For more information, see [CRT file requirements](best-practices/certificate-requirements.md#crt-file-requirements). |
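Before requiring validation with the **Mandatory** option, it can help to confirm that the deployed certificate is within its validity period and issued by the CA you expect, since a validation failure halts communication between components. A minimal sketch with `openssl`; the file name is a placeholder:

```bash
# Show the subject, issuer, and validity window of the deployed certificate
openssl x509 -in certificate.crt -noout -subject -issuer -dates
```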
defender-for-iot | How To Set Up High Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-high-availability.md | Before you perform the procedures in this article, verify that you've met the fo - Make sure that the primary on-premises management console is fully [configured](how-to-manage-the-on-premises-management-console.md), including at least two [OT network sensors connected](ot-deploy/connect-sensors-to-management.md) and visible in the console UI, as well as the scheduled backups or VLAN settings. All settings are applied to the secondary appliance automatically after pairing. -- Make sure that your SSL/TLS certificates meet required criteria. For more information, see [Deploy OT appliance certificates](how-to-deploy-certificates.md).+- Make sure that your SSL/TLS certificates meet required criteria. For more information, see [SSL/TLS certificate requirements for on-premises resources](best-practices/certificate-requirements.md). - Make sure that your organizational security policy grants you access to the following services, on the primary and secondary on-premises management console. These services also allow the connection between the sensors and secondary on-premises management console: |
defender-for-iot | How To Set Up Your Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-your-network.md | - Title: Prepare your OT network for Microsoft Defender for IoT -description: Learn about solution architecture, network preparation, prerequisites, and other information needed to ensure that you successfully set up your network to work with Microsoft Defender for IoT appliances. Previously updated : 06/02/2022----# Prepare your OT network for Microsoft Defender for IoT --This article describes how to set up your OT network to work with Microsoft Defender for IoT components, including the OT network sensors, the Azure portal, and an optional on-premises management console. --OT network sensors use agentless, patented technology to discover, learn, and continuously monitor network devices for deep visibility into OT/ICS/IoT risks. Sensors carry out data collection, analysis, and alerting on-site, making them ideal for locations with low bandwidth or high latency. --This article is intended for personnel experienced in operating and managing OT and IoT networks, such as automation engineers, plant managers, OT network infrastructure service providers, cybersecurity teams, CISOs, and CIOs. --We recommend that you use this article together with our [pre-deployment checklist](pre-deployment-checklist.md). --For assistance or support, contact [Microsoft Support](https://support.microsoft.com/supportforbusiness/productselection?sapId=82c88f35-1b8e-f274-ec11-c6efdd6dd099). --## Prerequisites --Before performing the procedures in this article, make sure you understand your own network architecture and how you'll connect to Defender for IoT. For more information, see: --- [Microsoft Defender for IoT system architecture](architecture.md)-- [Sensor connection methods](architecture-connections.md)-- [Best practices for planning your OT network monitoring](best-practices/plan-network-monitoring.md)--## On-site deployment tasks --Perform the steps in this section before deploying Defender for IoT on your network. --Make sure to perform each step methodically, requesting the information and reviewing the data you receive. Prepare and configure your site and then validate your configuration. --### Collect site information --Record the following site information: --- Sensor management network information.--- Site network architecture.--- Physical environment.--- System integrations.--- Planned user credentials.--- Configuration workstation.--- TLS/SSL certificates (optional but recommended).--- SMTP authentication (optional). To use the SMTP server with authentication, prepare the credentials required for your server.--- DNS servers (optional). Prepare your DNS server's IP address and host name.--### Prepare a configuration workstation --**To prepare a Windows or Mac workstation**: --- Make sure that you can connect to the sensor management interface.--- Make sure that you have terminal software (like PuTTY) or a supported browser. Supported browsers include the latest versions of Microsoft Edge, Chrome, Firefox, or Safari (Mac only).-- For more information, see [recommended browsers for the Azure portal](../../azure-portal/azure-portal-supported-browsers-devices.md#recommended-browsers). --- Make sure the required firewall rules are open on the workstation. Verify that your organizational security policy allows access as required. 
For more information, see [Networking requirements](#networking-requirements).--### Set up certificates --After you've installed the Defender for IoT sensor or on-premises management console software, a local, self-signed certificate is generated and used to access the sensor web application. --The first time they sign in to Defender for IoT, administrator users are prompted to provide an SSL/TLS certificate. Optional certificate validation is enabled by default. --We recommend having your certificates ready before you start your deployment. For more information, see [Defender for IoT installation](how-to-install-software.md) and [About Certificates](how-to-deploy-certificates.md). --### Plan rack installation --**To plan your rack installation**: --1. Prepare a monitor and a keyboard for your appliance network settings. --1. Allocate the rack space for the appliance. --1. Have AC power available for the appliance. --1. Prepare the LAN cable for connecting the management port to the network switch. --1. Prepare the LAN cables for connecting switch SPAN (mirror) ports and network taps to the Defender for IoT appliance. --1. Configure, connect, and validate SPAN ports in the mirrored switches using one of the following methods: -- |Method |Description | - ||| - |[Switch SPAN port](traffic-mirroring/configure-mirror-span.md) | Mirror local traffic from interfaces on the switch to a different interface on the same switch. | - |[Remote SPAN (RSPAN)](traffic-mirroring/configure-mirror-rspan.md) | Mirror traffic from multiple, distributed source ports into a dedicated remote VLAN. | - |[Active or passive aggregation (TAP)](traffic-mirroring/configure-mirror-tap.md) | Mirror traffic by installing an active or passive aggregation terminal access point (TAP) inline to the network cable. | - |[ERSPAN](traffic-mirroring/configure-mirror-erspan.md) | Mirror traffic with ERSPAN encapsulation when you need to extend monitored traffic across Layer 3 domains, when using specific Cisco routers and switches. | - |[ESXi vSwitch](traffic-mirroring/configure-mirror-esxi.md) | Use *Promiscuous mode* in a virtual switch environment as a workaround for configuring a monitoring port. | - |[Hyper-V vSwitch](traffic-mirroring/configure-mirror-hyper-v.md) | Use *Promiscuous mode* in a virtual switch environment as a workaround for configuring a monitoring port. | -- > [!NOTE] - > SPAN and RSPAN are Cisco terminology. Other brands of switches have similar functionality but might use different terminology. - > --1. Connect the configured SPAN port to a computer running Wireshark, and verify that the port is configured correctly. --1. Open all the relevant firewall ports. --### Validate your network --After preparing your network, use the guidance in this section to validate whether you're ready to deploy Defender for IoT. --Try to capture a sample of recorded traffic (PCAP file) from the switch SPAN or mirror port. This sample helps you: --- Validate whether the switch is configured properly.--- Confirm whether the traffic that goes through the switch is relevant for monitoring (OT traffic).--- Identify the bandwidth and the estimated number of devices on this switch.--For example, you can record a sample PCAP file for a few minutes by connecting a laptop to an already configured SPAN port through the Wireshark application. --**To use Wireshark to validate your network**: --- Check that *Unicast packets* are present in the recorded traffic. Unicast traffic is sent from one address to another. 
If most of the traffic is ARP messages, then the switch setup is incorrect.--- Go to **Statistics** > **Protocol Hierarchy**. Verify that industrial OT protocols are present.--For example: ---## Networking requirements --Use the following tables to ensure that required firewalls are open on your workstation and verify that your organization security policy allows required access. --### User access to the sensor and management console --| Protocol | Transport | In/Out | Port | Used | Purpose | Source | Destination | -|--|--|--|--|--|--|--|--| -| SSH | TCP | In/Out | 22 | CLI | To access the CLI | Client | Sensor and on-premises management console | -| HTTPS | TCP | In/Out | 443 | To access the sensor, and on-premises management console web console | Access to Web console | Client | Sensor and on-premises management console | --### Sensor access to Azure portal --| Protocol | Transport | In/Out | Port | Purpose | Source | Destination | -|--|--|--|--|--|--|--| -| HTTPS | TCP | Out | 443 | Access to Azure | Sensor |OT network sensors connect to Azure to provide alert and device data and sensor health messages, access threat intelligence packages, and more. Connected Azure services include IoT Hub, Blob Storage, Event Hubs, and the Microsoft Download Center.<br><br>**For OT sensor versions 22.x**: Download the list from the **Sites and sensors** page in the Azure portal. Select an OT sensor with software versions 22.x or higher, or a site with one or more supported sensor versions. Then, select **More options > Download endpoint details**. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).<br><br>**For OT sensor versions 10.x**: `*.azure-devices.net`<br> `*.blob.core.windows.net`<br> `*.servicebus.windows.net`<br> `download.microsoft.com`| --### Sensor access to the on-premises management console --| Protocol | Transport | In/Out | Port | Used | Purpose | Source | Destination | -|--|--|--|--|--|--|--|--| -| NTP | UDP | In/Out | 123 | Time Sync | Connects the NTP to the on-premises management console | Sensor | On-premises management console | -| TLS/SSL | TCP | In/Out | 443 | Give the sensor access to the on-premises management console. | The connection between the sensor, and the on-premises management console | Sensor | On-premises management console | --### Other firewall rules for external services (optional) --Open these ports to allow extra services for Defender for IoT. --| Protocol | Transport | In/Out | Port | Used | Purpose | Source | Destination | -|--|--|--|--|--|--|--|--| -| SMTP | TCP | Out | 25 | Email | Used to open the customer's mail server, in order to send emails for alerts, and events | Sensor and On-premises management console | Email server | -| DNS | TCP/UDP | In/Out | 53 | DNS | The DNS server port | On-premises management console and Sensor | DNS server | -| HTTP | TCP | Out | 80 | The CRL download for certificate validation when uploading certificates. 
| Access to the CRL server | Sensor and on-premises management console | CRL server | -| [WMI](how-to-configure-windows-endpoint-monitoring.md) | TCP/UDP | Out | 135, 1025-65535 | Monitoring | Windows Endpoint Monitoring | Sensor | Relevant network element | -| [SNMP](how-to-set-up-snmp-mib-monitoring.md) | UDP | Out | 161 | Monitoring | Monitors the sensor's health | On-premises management console and Sensor | SNMP server | -| LDAP | TCP | In/Out | 389 | Active Directory | Allows Active Directory management of users that have access, to sign in to the system | On-premises management console and Sensor | LDAP server | -| Proxy | TCP/UDP | In/Out | 443 | Proxy | To connect the sensor to a proxy server | On-premises management console and Sensor | Proxy server | -| Syslog | UDP | Out | 514 | LEEF | The logs that are sent from the on-premises management console to Syslog server | On-premises management console and Sensor | Syslog server | -| LDAPS | TCP | In/Out | 636 | Active Directory | Allows Active Directory management of users that have access, to sign in to the system | On-premises management console and Sensor | LDAPS server | -| Tunneling | TCP | In | 9000 </br></br> In addition to port 443 </br></br> Allows access from the sensor, or end user, to the on-premises management console </br></br> Port 22 from the sensor to the on-premises management console | Monitoring | Tunneling | Endpoint, Sensor | On-premises management console | --## Choose a cloud connection method --If you're setting up OT sensors and connecting them to the cloud, understand supported cloud connection methods, and make sure to connect your sensors as needed. --For more information, see: --- [OT sensor cloud connection methods](architecture-connections.md)-- [Connect your OT sensors to the cloud](connect-sensors.md)--## Troubleshooting --This section provides troubleshooting for common issues when preparing your network for a Defender for IoT deployment. --### Can't connect by using a web interface --1. Verify that the computer you're trying to connect is on the same network as the appliance. --2. Verify that the GUI network is connected to the management port on the sensor. --3. Ping the appliance IP address. If there's no response to ping: -- 1. Connect a monitor and a keyboard to the appliance. -- 1. Use the **support** user* and password to sign in. -- 1. Use the command **network list** to see the current IP address. --4. If the network parameters are misconfigured, sign into the OT sensor as the **cyberx_host** user* to re-run the OT monitoring software configuration wizard. For example: -- ```bash - root@xsense:/# sudo dpkg-reconfigure iot-sensor - ``` -- The configuration wizard starts automatically. For more information, see [Install OT monitoring software](../how-to-install-software.md#install-ot-monitoring-software). --5. Restart the sensor machine and sign in with the **support** user*. Run the **network list** command to verify that the parameters were changed. --6. Try to ping and connect from the GUI again. --(*) For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users). --### Appliance isn't responding --1. Connect with a monitor and keyboard to the appliance, or use PuTTY to connect remotely to the CLI. --2. Use the *support* credentials to sign in. --3. Use the **system sanity** command and check that all processes are running. 
-- :::image type="content" source="media/how-to-set-up-your-network/system-sanity-command.png" alt-text="Screenshot of the system sanity command."::: --For any other issues, contact [Microsoft Support](https://support.microsoft.com/supportforbusiness/productselection?sapId=82c88f35-1b8e-f274-ec11-c6efdd6dd099). --## Next steps --For more information, see: --- [Predeployment checklist](pre-deployment-checklist.md)-- [Quickstart: Get started with Defender for IoT](getting-started.md)-- [Tutorial: Get started with Microsoft Defender for IoT for OT security](tutorial-onboarding.md)-- [Defender for IoT installation](how-to-install-software.md)-- [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md)-- [Microsoft Defender for IoT system architecture](architecture.md)-- [Sensor connection methods](architecture-connections.md) |
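If you prefer to validate the SPAN sample from a terminal instead of the Wireshark UI, the checks described in the validation section can be approximated with `tcpdump` and `tshark`. This is a minimal sketch; the capture interface `eth1` and file name are placeholders, and package names vary by distribution.

```bash
# Capture a short sample from the interface connected to the SPAN port
sudo tcpdump -i eth1 -c 10000 -w sample.pcap

# Protocol hierarchy: confirm that industrial OT protocols appear, not just ARP and broadcast traffic
tshark -r sample.pcap -q -z io,phs

# Rough check that unicast traffic is present (frames not addressed to the broadcast address)
tshark -r sample.pcap -Y "eth.dst != ff:ff:ff:ff:ff:ff" | head
```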
defender-for-iot | How To Troubleshoot The Sensor And On Premises Management Console | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-troubleshoot-the-sensor-and-on-premises-management-console.md | - Title: Troubleshoot the OT sensor and on-premises management console -description: Troubleshoot your OT sensor and on-premises management console to eliminate any problems you might be having. Previously updated : 06/15/2022---# Troubleshoot the sensor and on-premises management console --This article describes basic troubleshooting tools for the sensor and the on-premises management console. In addition to the items described here, you can check the health of your system in the following ways: --- **Alerts**: An alert is created when the sensor interface that monitors the traffic is down.-- **SNMP**: Sensor health is monitored through SNMP. Microsoft Defender for IoT responds to SNMP queries sent from an authorized monitoring server.-- **System notifications**: When a management console controls the sensor, you can forward alerts about failed sensor backups and disconnected sensors.--## Check system health --Check your system health from the sensor or on-premises management console. --**To access the system health tool**: --1. Sign in to the sensor or on-premises management console with the *support* user credentials. --1. Select **System Statistics** from the **System Settings** window. -- :::image type="icon" source="media/tutorial-install-components/system-statistics-icon.png" border="false"::: --1. System health data appears. Select an item on the left to view more details in the box. For example: -- :::image type="content" source="media/tutorial-install-components/system-health-check-screen.png" alt-text="Screenshot that shows the system health check."::: --System health checks include the following: --|Name |Description | -||| -|**Sanity** | | -|- Appliance | Runs the appliance sanity check. You can perform the same check by using the CLI command `system-sanity`. | -|- Version | Displays the appliance version. | -|- Network Properties | Displays the sensor network parameters. | -|**Redis** | | -|- Memory | Provides the overall picture of memory usage, such as how much memory is used and how much remains. | -|- Longest Key | Displays the longest keys that might cause extensive memory usage. | -|**System** | | -|- Core Log | Provides the last 500 rows of the core log, so that you can view the recent log rows without exporting the entire system log. | -|- Task Manager | Translates the tasks that appear in the table of processes to the following layers: <br><br> - Persistent layer (Redis)<br> - Cache layer (SQL) | -|- Network Statistics | Displays your network statistics. | -|- TOP | Shows the table of processes. It's a Linux command that provides a dynamic real-time view of the running system. | -|- Backup Memory Check | Provides the status of the backup memory, checking the following:<br><br> - The location of the backup folder<br> - The size of the backup folder<br> - The limitations of the backup folder<br> - When the last backup happened<br> - How much space there is for extra backup files | -|- ifconfig | Displays the parameters for the appliance's physical interfaces. | -|- CyberX nload | Displays network traffic and bandwidth by using six-second tests. | -|- Errors from Core, log | Displays errors from the core log file. 
| --### Check system health by using the CLI --Verify that the system is up and running prior to testing the system's sanity. --For more information, see [CLI command reference from OT network sensors](cli-ot-sensor.md). --**To test the system's sanity**: --1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the user *support*. --1. Enter `system sanity`. --1. Check that all the services are green (running). -- :::image type="content" source="media/tutorial-install-components/support-screen.png" alt-text="Screenshot that shows running services."::: --1. Verify that **System is UP! (prod)** appears at the bottom. --Verify that the correct version is used: --**To check the system's version**: --1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the user *support*. --1. Enter `system version`. --1. Check that the correct version appears. --Verify that all the input interfaces configured during the installation process are running: --**To validate the system's network status**: --1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the *support* user. --1. Enter `network list` (the equivalent of the Linux command `ifconfig`). --1. Validate that the required input interfaces appear. For example, if two quad Copper NICs are installed, there should be 10 interfaces in the list. -- :::image type="content" source="media/tutorial-install-components/interface-list-screen.png" alt-text="Screenshot that shows the list of interfaces."::: --Verify that you can access the console web GUI: --**To check that management has access to the UI**: --1. Connect a laptop with an Ethernet cable to the management port (**Gb1**). --1. Define the laptop NIC address to be in the same range as the appliance. -- :::image type="content" source="media/tutorial-install-components/access-to-ui.png" alt-text="Screenshot that shows management access to the UI." border="false"::: --1. Ping the appliance's IP address from the laptop to verify connectivity (default: 10.100.10.1). --1. Open the Chrome browser in the laptop and enter the appliance's IP address. --1. In the **Your connection is not private** window, select **Advanced** and proceed. --1. The test is successful when the Defender for IoT sign-in screen appears. -- :::image type="content" source="media/tutorial-install-components/defender-for-iot-sign-in-screen.png" alt-text="Screenshot that shows access to management console."::: --## Troubleshoot sensors ---### You can't connect by using a web interface --1. Verify that the computer that you're trying to connect is on the same network as the appliance. --1. Verify that the GUI network is connected to the management port. --1. Ping the appliance's IP address. If there's no ping: -- 1. Connect a monitor and a keyboard to the appliance. -- 1. Use the *support* user and password to sign in. -- 1. Use the command `network list` to see the current IP address. --1. If the network parameters are misconfigured, use the following procedure to change them: -- 1. Use the command `network edit-settings`. -- 1. To change the management network IP address, select **Y**. -- 1. To change the subnet mask, select **Y**. -- 1. To change the DNS, select **Y**. -- 1. To change the default gateway IP address, select **Y**. -- 1. For the input interface change (sensor only), select **N**. -- 1. To apply the settings, select **Y**. --1. After restart, connect with the *support* user credentials and use the `network list` command to verify that the parameters were changed. --1. 
Try to ping and connect from the GUI again. --### The appliance isn't responding --1. Connect a monitor and keyboard to the appliance, or use PuTTY to connect remotely to the CLI. --1. Use the *support* user credentials to sign in. --1. Use the `system sanity` command and check that all processes are running. For example: -- :::image type="content" source="media/tutorial-install-components/system-sanity-screen.png" alt-text="Screenshot that shows the system sanity command."::: --For any other issues, contact [Microsoft Support](https://support.microsoft.com/en-us/supportforbusiness/productselection?sapId=82c88f35-1b8e-f274-ec11-c6efdd6dd099). ---### Investigate password failure at initial sign-in --When signing in to a pre-configured sensor for the first time, you'll need to perform password recovery as follows: --1. On the Defender for IoT sign-in screen, select **Password recovery**. The **Password recovery** screen opens. --1. Select either **CyberX** or **Support**, and copy the unique identifier. --1. Navigate to the Azure portal and select **Sites and Sensors**. --1. Select the **More Actions** dropdown menu and select **Recover on-premises management console password**. -- :::image type="content" source="media/how-to-create-and-manage-users/recover-password.png" alt-text="Screenshot of the recover on-premises management console password option."::: --1. Enter the unique identifier that you received on the **Password recovery** screen and select **Recover**. The `password_recovery.zip` file is downloaded. Don't extract or modify the zip file. -- :::image type="content" source="media/how-to-create-and-manage-users/enter-identifier.png" alt-text="Screenshot of the Recover dialog box."::: --1. On the **Password recovery** screen, select **Upload**. The **Upload Password Recovery File** window opens. --1. Select **Browse** to locate your `password_recovery.zip` file, or drag the `password_recovery.zip` file to the window. --1. Select **Next**. Your user and system-generated password for your management console then appear. -- > [!NOTE] - > When you sign in to a sensor or on-premises management console for the first time, it's linked to your Azure subscription, which you'll need if you want to recover the password for the *cyberx* or *support* user. For more information, see the relevant procedure for [sensors](manage-users-sensor.md#recover-privileged-access-to-a-sensor) or an [on-premises management console](manage-users-on-premises-management-console.md#recover-privileged-access-to-an-on-premises-management-console). --### Investigate a lack of traffic --An indicator appears at the top of the console when the sensor recognizes that there's no traffic on one of the configured ports. This indicator is visible to all users. When this message appears, you can investigate where there's no traffic. Make sure the SPAN cable is connected and that there was no change in the SPAN architecture. ---### Check system performance --When a new sensor is deployed or a sensor is working slowly or not showing any alerts, you can check system performance. --1. In the Defender for IoT dashboard > **Overview**, make sure that `PPS > 0`. -1. In **Devices**, check that devices are being discovered. -1. In **Data Mining**, generate a report. -1. In the **Trends & Statistics** window, create a dashboard. -1. In **Alerts**, check that the alert was created. ---### Investigate a lack of expected alerts --If the **Alerts** window doesn't show an alert that you expected, verify the following: --1. 
Check if the same alert already appears in the **Alerts** window as a reaction to a different security instance. If yes, and this alert hasn't been handled yet, the sensor console doesn't show a new alert. -1. Make sure you didn't exclude this alert by using the **Alert Exclusion** rules in the management console. --### Investigate a dashboard that shows no data --When the dashboards in the **Trends & Statistics** window show no data, do the following: -1. [Check system performance](#check-system-performance). -1. Make sure the time and region settings are properly configured and not set to a future time. --### Investigate a device map that shows only broadcasting devices --When devices shown on the device map appear not to be connected to each other, something might be wrong with the SPAN port configuration. That is, you might be seeing only broadcasting devices and no unicast traffic. --1. Validate that you're seeing only broadcast traffic. To do this, in **Data Mining**, select **Create report**. In **Create new report**, specify the report fields. In **Choose Category**, choose **Select all**. -1. Save the report, and review it to see if only broadcast and multicast traffic (and no unicast traffic) appears. If so, ask your networking team to fix the SPAN port configuration so that you can see the unicast traffic as well. Alternatively, you can record a PCAP directly from the switch, or connect a laptop running Wireshark. --### Connect the sensor to NTP --You can configure a standalone sensor, or a management console and the sensors related to it, to connect to NTP. --To connect a standalone sensor to NTP: --- [See the CLI documentation](./references-work-with-defender-for-iot-cli-commands.md).--To connect a sensor controlled by the management console to NTP: --- The connection to NTP is configured on the management console. All the sensors that the management console controls get the NTP connection automatically.--### Investigate when devices aren't shown on the map, or you have multiple internet-related alerts --Sometimes ICS devices are configured with external IP addresses. These ICS devices are not shown on the map. Instead of the devices, an internet cloud appears on the map. The IP addresses of these devices are included in the cloud image. Another indication of the same problem is when multiple internet-related alerts appear. Fix the issue as follows: --1. Right-click the cloud icon on the device map and select **Export IP Addresses**. -1. Copy the public ranges that are actually private, and add them to the subnet list. -1. Generate a new data-mining report for internet connections. -1. In the data-mining report, enter the administrator mode and delete the IP addresses of your ICS devices. --### Clearing sensor data --If the sensor needs to be relocated or erased, you can clear all learned data from the sensor. --### Export logs from the sensor console for troubleshooting --For further troubleshooting, you may want to export logs to send to the support team, such as database or operating system logs. --**To export log data**: --1. In the sensor console, go to **System settings** > **Sensor management** > **Backup & restore** > **Backup**. --1. In the **Export Troubleshooting Information** dialog: -- 1. In the **File Name** field, enter a meaningful name for the exported log. The default filename uses the current date, such as **13:10-June-14-2022.tar.gz**. -- 1. Select the logs you would like to export. -- 1. Select **Export**. 
-- The file is exported and is linked from the **Archived Files** list at the bottom of the **Export Troubleshooting Information** dialog. - - For example: -- :::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/export-logs-sensor.png" alt-text="Screenshot of the export troubleshooting information dialog in the sensor console. " lightbox="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/export-logs-sensor.png"::: --1. Select the file link to download the exported log, and also select the :::image type="icon" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/eye-icon.png" border="false"::: button to view its one-time password. --1. To open the exported logs, forward the downloaded file and the one-time password to the support team. Exported logs can be opened only together with the Microsoft support team. -- To keep your logs secure, make sure to forward the password separately from the downloaded log. --> [!NOTE] -> Support ticket diagnostics can be downloaded from the sensor console and then uploaded directly to the support team in the Azure portal. --## Troubleshoot an on-premises management console --### Investigate a lack of expected alerts --If you don't see an expected alert on the on-premises **Alerts** page, do the following to troubleshoot: --- Verify whether the alert is already listed as a reaction to a different security instance. If it has, and that alert hasn't yet been handled, a new alert isn't shown elsewhere.--- Verify that the alert isn't being excluded by **Alert Exclusion** rules. For more information, see [Create alert exclusion rules on an on-premises management console](how-to-accelerate-alert-incident-response.md#create-alert-exclusion-rules-on-an-on-premises-management-console).--### Tweak the Quality of Service (QoS) --To save your network resources, you can limit the number of alerts sent to external systems (such as emails or SIEM) in one sync operation between an appliance and the on-premises management console. --The default is 50. This means that in one communication session between an appliance and the on-premises management console, there will be no more than 50 alerts to external systems. --To limit the number of alerts, use the `notifications.max_number_to_report` property available in `/var/cyberx/properties/management.properties`. No restart is needed after you change this property. --**To tweak the Quality of Service (QoS)**: --1. Sign in as a Defender for IoT user. --1. Verify the default values: -- ```bash - grep \"notifications\" /var/cyberx/properties/management.properties - ``` -- The following default values appear: -- ```bash - notifications.max_number_to_report=50 - notifications.max_time_to_report=10 (seconds) - ``` --1. Edit the default settings: -- ```bash - sudo nano /var/cyberx/properties/management.properties - ``` --1. Edit the settings of the following lines: -- ```bash - notifications.max_number_to_report=50 - notifications.max_time_to_report=10 (seconds) - ``` --1. Save the changes. No restart is required. --### Export logs from the on-premises management console for troubleshooting --For further troubleshooting, you may want to export logs to send to the support team, such as audit or database logs. --**To export log data**: --1. In the on-premises management console, select **System Settings > Export**. --1. In the **Export Troubleshooting Information** dialog: -- 1. 
In the **File Name** field, enter a meaningful name for the exported log. The default filename uses the current date, such as **13:10-June-14-2022.tar.gz**. -- 1. Select the logs you would like to export. -- 1. Select **Export**. -- The file is exported and is linked from the **Archived Files** list at the bottom of the **Export Troubleshooting Information** dialog. -- For example: -- :::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/export-logs-on-premises-management-console.png" alt-text="Screenshot of the Export Troubleshooting Information dialog in the on-premises management console." lightbox="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/export-logs-on-premises-management-console.png"::: --1. Select the file link to download the exported log, and also select the :::image type="icon" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/eye-icon.png" border="false"::: button to view its one-time password. --1. To open the exported logs, forward the downloaded file and the one-time password to the support team. Exported logs can be opened only together with the Microsoft support team. -- To keep your logs secure, make sure to forward the password separately from the downloaded log. --## Next steps --- [View alerts](how-to-view-alerts.md)--- [Set up SNMP MIB monitoring](how-to-set-up-snmp-mib-monitoring.md)--- [Track on-premises user activity](track-user-activity.md) |
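If you'd rather script the QoS change described above than edit the properties file in `nano`, a single `sed` substitution on the same file works as well. A minimal sketch, assuming the default property name and path from this article and an example limit of 25 alerts:

```bash
# Lower the number of alerts sent to external systems per sync; no restart is required
sudo sed -i 's/^notifications.max_number_to_report=.*/notifications.max_number_to_report=25/' /var/cyberx/properties/management.properties

# Confirm the new value
grep "notifications" /var/cyberx/properties/management.properties
```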
defender-for-iot | How To Work With The Sensor Device Map |