Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Azure Sentinel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/azure-sentinel.md | Title: Secure Azure AD B2C with Microsoft Sentinel + Title: Configure security analytics for Azure Active Directory B2C data with Microsoft Sentinel -description: In this tutorial, you use Microsoft Sentinel to perform security analytics for Azure Active Directory B2C data. +description: Use Microsoft Sentinel to perform security analytics for Azure Active Directory B2C data. -+ - Previously updated : 08/17/2021 Last updated : 03/06/2023 -#Customer intent: As an IT professional, I want to gather logs and audit data by using Microsoft Sentinel and Azure Monitor so that I can secure my applications that use Azure Active Directory B2C. +#Customer intent: As an IT professional, I want to gather logs and audit data using Microsoft Sentinel and Azure Monitor to secure applications that use Azure Active Directory B2C. # Tutorial: Configure security analytics for Azure Active Directory B2C data with Microsoft Sentinel -You can further secure your Azure Active Directory B2C (Azure AD B2C) environment by routing logs and audit information to Microsoft Sentinel. Microsoft Sentinel is a cloud-native SIEM (security information and event management) and SOAR (security orchestration, automation, and response) solution. Microsoft Sentinel provides alert detection, threat visibility, proactive hunting, and threat response for Azure AD B2C. +Increase the security of your Azure Active Directory B2C (Azure AD B2C) environment by routing logs and audit information to Microsoft Sentinel. Microsoft Sentinel is a scalable, cloud-native security information and event management (SIEM) and security orchestration, automation, and response (SOAR) solution. Use the solution for alert detection, threat visibility, proactive hunting, and threat response for Azure AD B2C. ++Learn more: -By using Microsoft Sentinel with Azure AD B2C, you can: +* [What is Microsoft Sentinel?](../sentinel/overview.md) +* [What is SOAR?](https://www.microsoft.com/security/business/security-101/what-is-soar) -- Detect previously undetected threats and minimize false positives by using Microsoft's analytics and threat intelligence.-- Investigate threats with AI. Hunt for suspicious activities at scale, and tap into years of cybersecurity-related work at Microsoft.-- Respond to incidents rapidly with built-in orchestration and automation of common tasks.-- Meet security and compliance requirements for your organization.+More uses for Microsoft Sentinel with Azure AD B2C: -In this tutorial, you'll learn how to: +* Detect previously undetected threats and minimize false positives with analytics and threat intelligence features +* Investigate threats with artificial intelligence (AI) + * Hunt for suspicious activities at scale, and benefit from the experience of years of cybersecurity work at Microsoft +* Respond to incidents rapidly with common task orchestration and automation +* Meet your organization's security and compliance requirements -> [!div class="checklist"] -> * Transfer Azure AD B2C logs to a Log Analytics workspace. -> * Enable Microsoft Sentinel in a Log Analytics workspace. -> * Create a sample rule in Microsoft Sentinel that will trigger an incident. -> * Configure an automated response. 
+In this tutorial, learn how to: ++* Transfer Azure AD B2C logs to a Log Analytics workspace +* Enable Microsoft Sentinel in a Log Analytics workspace +* Create a sample rule in Microsoft Sentinel to trigger an incident +* Configure an automated response ## Configure Azure AD B2C with Azure Monitor Log Analytics -To define where logs and metrics for a resource should be sent, enable **Diagnostic settings** in Azure AD within your Azure AD B2C tenant. Then, [configure Azure AD B2C to send logs to Azure Monitor](./azure-monitor.md). +To define where logs and metrics for a resource are sent, -## Deploy a Microsoft Sentinel instance +1. Enable **Diagnostic settings** in Azure AD, in your Azure AD B2C tenant. +2. Configure Azure AD B2C to send logs to Azure Monitor. -After you've configured your Azure AD B2C instance to send logs to Azure Monitor, you need to enable a Microsoft Sentinel instance. +Learn more: [Monitor Azure AD B2C with Azure Monitor](./azure-monitor.md). ->[!IMPORTANT] ->To enable Microsoft Sentinel, you need contributor permissions to the subscription in which the Microsoft Sentinel workspace resides. To use Microsoft Sentinel, you need either contributor or reader permissions on the resource group that the workspace belongs to. +## Deploy a Microsoft Sentinel instance -1. Go to the [Azure portal](https://portal.azure.com). Select the subscription where the Log Analytics workspace is created. +After you configure your Azure AD B2C instance to send logs to Azure Monitor, enable an instance of Microsoft Sentinel. -2. Search for and select **Microsoft Sentinel**. + >[!IMPORTANT] + >To enable Microsoft Sentinel, obtain Contributor permissions to the subscription in which the Microsoft Sentinel workspace resides. To use Microsoft Sentinel, use Contributor or Reader permissions on the resource group to which the workspace belongs. -  +1. Go to the [Azure portal](https://portal.azure.com). +2. Select the subscription where the Log Analytics workspace is created. +3. Search for and select **Microsoft Sentinel**. -3. Select **Add**. +  -4. Select the new workspace. +3. Select **Add**. +4. In the **search workspaces** field, select the new workspace. -  +  5. Select **Add Microsoft Sentinel**. ->[!NOTE] ->You can [run Microsoft Sentinel](../sentinel/quickstart-onboard.md) on more than one workspace, but the data is isolated to a single workspace. + >[!NOTE] + >It's possible to run Microsoft Sentinel on more than one workspace; however, data is isolated in a single workspace.<br /> See, [Quickstart: Onboard Microsoft Sentinel](../sentinel/quickstart-onboard.md) ## Create a Microsoft Sentinel rule -Now that you've enabled Microsoft Sentinel, get notified when something suspicious occurs in your Azure AD B2C tenant. +After you enable Microsoft Sentinel, get notified when something suspicious occurs in your Azure AD B2C tenant. -You can create [custom analytics rules](../sentinel/detect-threats-custom.md) to discover threats and anomalous behaviors in your environment. These rules search for specific events or sets of events and alert you when certain event thresholds or conditions are reached. Then they generate incidents for further investigation. +You can create custom analytics rules to discover threats and anomalous behaviors in your environment. These rules search for specific events, or event sets, and alert you when event thresholds or conditions are met. Then incidents are generated for investigation. 
->[!NOTE] ->Microsoft Sentinel provides built-in templates to help you create threat detection rules designed by Microsoft's team of security experts and analysts. Rules created from these templates automatically search across your data for any suspicious activity. There are no native Azure AD B2C connectors available at this time. For the example in this tutorial, we'll create our own rule. +See, [Create custom analytics rules to detect threats](../sentinel/detect-threats-custom.md) -In the following example, you receive a notification if someone tries to force access to your environment but isn't successful. It might mean a brute-force attack. You want to get notified for two or more unsuccessful logins within 60 seconds. + >[!NOTE] + >Microsoft Sentinel has templates to create threat detection rules that search your data for suspicious activity. For this tutorial, you create a rule. -1. From the left menu in Microsoft Sentinel, select **Analytics**. +### Notification rule for unsuccessful forced access -2. On the action bar at the top, select **+ Create** > **Scheduled query rule**. +Use the following steps to receive notification about two or more unsuccessful forced access attempts into your environment. An example is a brute-force attack. -  +1. In Microsoft Sentinel, from the left menu, select **Analytics**. +2. On the top bar, select **+ Create** > **Scheduled query rule**. -3. In the Analytics Rule wizard, go to the **General** tab and enter the following information: +  - | Field | Value | - |:--|:--| - |**Name** | Enter a name that's appropriate for Azure AD B2C unsuccessful logins. | - |**Description** | Enter a description that says the rule will notify on two or more unsuccessful logins within 60 seconds. | - | **Tactics** | Choose from the categories of attacks by which to classify the rule. These categories are based on the tactics of the [MITRE ATT&CK](https://attack.mitre.org/) framework.<BR>For our example, we'll choose **PreAttack**. <BR> MITRE ATT&CK is a globally accessible knowledge base of adversary tactics and techniques based on real-world observations. This knowledge base is used as a foundation for the development of specific threat models and methodologies. - | **Severity** | Select an appropriate severity level. | - | **Status** | When you create the rule, its status is **Enabled** by default. That status means the rule will run immediately after you finish creating it. If you don't want it to run immediately, select **Disabled**. The rule will then be added to your **Active rules** tab, and you can enable it from there when you need it.| +3. In the Analytics Rule wizard, go to **General**. +4. For **Name**, enter a name for unsuccessful logins. +5. For **Description**, indicate the rule notifies for two or more unsuccessful sign-ins within 60 seconds. +6. For **Tactics**, select a category. For example, select **PreAttack**. +7. For **Severity**, select a severity level. +8. **Status** is **Enabled** by default. To change a rule, go to the **Active rules** tab. -  +  -4. To define the rule query logic and configure settings, on the **Set rule logic** tab, write a query directly in the -**Rule query** box. +9. Select the **Set rule logic** tab. +10. Enter a query in the **Rule query** field. The query example organizes the sign-ins by `UserPrincipalName`. -  +  - This query will alert you when there are two or more unsuccessful logins within 60 seconds to your Azure AD B2C tenant. It will organize the logins by `UserPrincipalName`. +11. 
Go to **Query scheduling**. +12. For **Run query every**, enter **5** and **Minutes**. +13. For **Lookup data from the last**, enter **5** and **Minutes**. +14. For **Generate alert when number of query results**, select **Is greater than**, and **0**. +15. For **Event grouping**, select **Group all events into a single alert**. +16. For **Stop running query after alert is generated**, select **Off**. +17. Select **Next: Incident settings (Preview)**. -5. In the **Query scheduling** section, set the following parameters: +  -  +18. Go to the **Review and create** tab to review rule settings. +19. When the **Validation passed** banner appears, select **Create**. -6. Select **Next: Incident settings (Preview)**. You'll configure and add the automated response later. +  -7. Go to the **Review and create** tab to review all the settings for your new alert rule. When the **Validation passed** message appears, select **Create** to initialize your alert rule. +#### View a rule and related incidents -  +View the rule and the incidents it generates. Find your newly created custom rule of type **Scheduled** in the table under the **Active rules** tab on the main **Analytics** screen. -8. View the rule and the incidents that it generates. Find your newly created custom rule of type **Scheduled** in the table under the **Active rules** tab on the main **Analytics** screen. From this list, you can edit, enable, disable, or delete rules by using the corresponding buttons. +1. Go to the **Analytics** screen. +2. Select the **Active rules** tab. +3. In the table, under **Scheduled**, find the rule. -  +You can edit, enable, disable, or delete the rule. -9. View the results of your new rule for Azure AD B2C unsuccessful logins. Go to the **Incidents** page, where you can triage, investigate, and remediate the threats. +  - An incident can include multiple alerts. It's an aggregation of all the relevant evidence for a specific investigation. You can set properties such as severity and status at the incident level. +#### Triage, investigate, and remediate incidents - > [!NOTE] - > A key feature of Microsoft Sentinel is [incident investigation](../sentinel/investigate-cases.md). - -10. To begin an investigation, select a specific incident. +An incident can include multiple alerts, and is an aggregation of relevant evidence for an investigation. At the incident level, you can set properties such as Severity and Status. - On the right, you can see detailed information for the incident. This information includes severity, entities involved, the raw events that triggered the incident, and the incident's unique ID. +Learn more: [Investigate incidents with Microsoft Sentinel](../sentinel/investigate-cases.md). + +1. Go to the **Incidents** page. +2. Select an incident. +3. On the right, detailed incident information appears, including severity, entities, events, and the incident ID.  -11. Select **View full details** on the incident pane. Review the tabs that summarize the incident information and provide more details. +4. On the **Incidents** pane, select **View full details**. +5. Review tabs that summarize the incident. -  +  -12. Select **Evidence** > **Events** > **Link to Log Analytics**. The result displays the `UserPrincipalName` value of the identity that's trying to log in with the number of attempts. +6. Select **Evidence** > **Events** > **Link to Log Analytics**. +7. In the results, see the `UserPrincipalName` value of the identity attempting sign-in. 
-  +  ## Automated response -Microsoft Sentinel provides a [robust SOAR capability](../sentinel/automation-in-azure-sentinel.md). Automated actions, called a *playbook* in Microsoft Sentinel, can be attached to analytics rules to suit your requirements. +Microsoft Sentinel has security orchestration, automation, and response (SOAR) functions. Attach automated actions, or a playbook, to analytics rules. -In this example, we add an email notification for an incident that the rule creates. To accomplish this task, use an [existing playbook from the Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Incident-Email-Notification). After the playbook is configured, edit the existing rule and select the playbook on the **Automated response** tab. +See, [What is SOAR?](https://www.microsoft.com/security/business/security-101/what-is-soar) - +### Email notification for an incident -## Related information +For this task, use a playbook from the Microsoft Sentinel GitHub repository. -For more information about Microsoft Sentinel and Azure AD B2C, see: +1. Go to a configured playbook. +2. Edit the rule. +3. On the **Automated response** tab, select the playbook. ++Learn more: [Incident-Email-Notification](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Incident-Email-Notification) -- [Sample workbooks](https://github.com/azure-ad-b2c/siem#workbooks)+  ++## Resources ++For more information about Microsoft Sentinel and Azure AD B2C, see: -- [Microsoft Sentinel documentation](../sentinel/index.yml)+* [Azure AD B2C Reports & Alerts, Workbooks](https://github.com/azure-ad-b2c/siem#workbooks) +* [Microsoft Sentinel documentation](../sentinel/index.yml) -## Next steps +## Next step -> [!div class="nextstepaction"] -> [Handle false positives in Microsoft Sentinel](../sentinel/false-positives.md) +[Handle false positives in Microsoft Sentinel](../sentinel/false-positives.md) |
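The Sentinel row above describes the scheduled rule's settings but never shows the query behind it. As a rough, non-authoritative sketch, the same rule can be scripted with the Az.SecurityInsights PowerShell module; the resource group and workspace names below are placeholders, and the KQL assumes B2C sign-in events are routed to the `SigninLogs` table, where `ResultType` "0" indicates success.

```powershell
# Sketch only: requires the Az.Accounts and Az.SecurityInsights modules.
Connect-AzAccount

# Two or more failed sign-ins by the same identity inside a 60-second window.
$query = @"
SigninLogs
| where ResultType != "0"
| summarize FailureCount = count() by UserPrincipalName, bin(TimeGenerated, 60s)
| where FailureCount >= 2
"@

# Mirrors the row's settings: run every 5 minutes over the last 5 minutes,
# and alert when the result count is greater than 0.
New-AzSentinelAlertRule -ResourceGroupName "b2c-monitoring-rg" -WorkspaceName "b2c-workspace" `
    -Scheduled -Enabled `
    -DisplayName "Azure AD B2C unsuccessful sign-ins" `
    -Severity "Medium" `
    -Query $query `
    -QueryFrequency (New-TimeSpan -Minutes 5) `
    -QueryPeriod (New-TimeSpan -Minutes 5) `
    -TriggerOperator "GreaterThan" `
    -TriggerThreshold 0
```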
active-directory-b2c | Configure Authentication Sample Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-web-app.md | To create the web app registration, use the following steps: 1. Under **Name**, enter a name for the application (for example, *webapp1*). 1. Under **Supported account types**, select **Accounts in any identity provider or organizational directory (for authenticating users with user flows)**. 1. Under **Redirect URI**, select **Web** and then, in the URL box, enter `https://localhost:44316/signin-oidc`.-1. Under **Implicit grant and hybrid flows**, select the **ID tokens (used for implicit and hybrid flows)** checkbox. +1. Under **Authentication**, go to **Implicit grant and hybrid flows**, and select the **ID tokens (used for implicit and hybrid flows)** checkbox. 1. Under **Permissions**, select the **Grant admin consent to openid and offline access permissions** checkbox. 1. Select **Register**. 1. Select **Overview**. |
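For reference, a minimal sketch of the same registration done with Microsoft Graph PowerShell instead of the portal. The tenant name is a placeholder, and `AzureADandPersonalMicrosoftAccount` is assumed to be the `signInAudience` value behind the portal's user-flows account-type option.

```powershell
# Sketch only: requires the Microsoft.Graph.Applications module.
Connect-MgGraph -TenantId "contoso.onmicrosoft.com" -Scopes "Application.ReadWrite.All"

# Web redirect URI plus ID tokens for implicit and hybrid flows, as in the portal steps.
New-MgApplication -DisplayName "webapp1" `
    -SignInAudience "AzureADandPersonalMicrosoftAccount" `
    -Web @{
        RedirectUris          = @("https://localhost:44316/signin-oidc")
        ImplicitGrantSettings = @{ EnableIdTokenIssuance = $true }
    }
```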
active-directory-b2c | Partner Keyless | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-keyless.md | Title: Tutorial for configuring Keyless with Azure Active Directory B2C + Title: Tutorial to configure Keyless with Azure Active Directory B2C -description: Tutorial for configuring Keyless with Azure Active Directory B2C for passwordless authentication +description: Tutorial to configure Sift Keyless with Azure Active Directory B2C for passwordless authentication -+ - Previously updated : 09/20/2021 Last updated : 03/06/2023 # Tutorial: Configure Keyless with Azure Active Directory B2C -In this sample tutorial, we provide guidance on how to configure Azure Active Directory (AD) B2C with [Keyless](https://keyless.io/). With Azure AD B2C as an Identity provider, you can integrate Keyless with any of your customer applications to provide true passwordless authentication to your users. --Keyless's solution **Keyless Zero-Knowledge Biometric (ZKB™)** provides passwordless multifactor authentication that eliminates fraud, phishing, and credential reuse – all while enhancing customer experience and protecting their privacy. +Learn to configure Azure Active Directory B2C (Azure AD B2C) with the Sift Keyless passwordless solution. With Azure AD B2C as an identity provider (IdP), integrate Keyless with customer applications to provide passwordless authentication. The Keyless Zero-Knowledge Biometric (ZKB) is passwordless multi-factor authentication that helps eliminate fraud, phishing, and credential reuse, while enhancing the customer experience and protecting privacy. -## Pre-requisites +Go to keyless.io to learn about: -To get started, you'll need: +* [Sift Keyless](https://keyless.io/) +* [How Keyless uses zero-knowledge proofs to protect your biometric data](https://keyless.io/blog/post/how-keyless-uses-zero-knowledge-proofs-to-protect-your-biometric-data) -- An Azure subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).+## Prerequisites -- An [Azure AD B2C tenant](./tutorial-create-tenant.md). Tenant must be linked to your Azure subscription.--- A Keyless cloud tenant, get a free [trial account](https://keyless.io/go).+To get started, you'll need: -- The Keyless Authenticator app installed on your user's device.+* An Azure subscription + * If you don't have one, get an [Azure free account](https://azure.microsoft.com/free/) +* An [Azure AD B2C tenant](./tutorial-create-tenant.md) linked to the Azure subscription +* A Keyless cloud tenant + * Go to keyless.io to [Request a demo](https://keyless.io/go) +* The Keyless Authenticator app installed on a user device ## Scenario description The Keyless integration includes the following components: -- Azure AD B2C – The authorization server, responsible for verifying the user's credentials, also known as the identity provider.+* **Azure AD B2C** – authorization server that verifies user credentials. Also known as the IdP. +* **Web and mobile applications** – mobile or web applications to protect with Keyless and Azure AD B2C +* **The Keyless Authenticator mobile app** – Sift mobile app for authentication to the Azure AD B2C enabled applications -- Web and mobile applications – Your mobile or web applications that you choose to protect with Keyless and Azure AD B2C.+The following architecture diagram illustrates an implementation. 
-- The Keyless mobile app – The Keyless mobile app will be used for authentication to the Azure AD B2C enabled applications.+  -The following architecture diagram shows the implementation. +1. User arrives at a sign-in page. User selects sign-in/sign-up and enters the username. +2. The application sends user attributes to Azure AD B2C for identity verification. +3. Azure AD B2C sends user attributes to Keyless for authentication. +4. Keyless sends a push notification to the users' registered mobile device for authentication, a facial biometric scan. +5. The user responds to the push notification and is granted or denied access. - +## Add an IdP, configure the IdP, and create a user flow policy -|Step | Description | -|:--| :--| -| 1. | User arrives at a login page. Users select sign-in/sign-up and enters the username -| 2. | The application sends the user attributes to Azure AD B2C for identity verification. -| 3. | Azure AD B2C collects the user attributes and sends the attributes to Keyless to authenticate the user through the Keyless mobile app. -| 4. | Keyless sends a push notification to the registered user's mobile device for a privacy-preserving authentication in the form of a facial biometric scan. -| 5. | After the user responds to the push notification, the user is either granted or denied access to the customer application based on the verification results. --## Integrate with Azure AD B2C +Use the following sections to add an IdP, configure the IdP, and create a user flow policy. ### Add a new Identity provider -To add a new Identity provider, follow these steps: --1. Sign in to the **[Azure portal](https://portal.azure.com/#home)** as the global administrator of your Azure AD B2C tenant. -1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar. -1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**. -1. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**. -1. Navigate to **Dashboard** > **Azure Active Directory B2C** > **Identity providers** -1. Select **Identity providers**. -1. Select **Add**. --### Configure an Identity provider --To configure an identity provider, follow these steps: --1. Select **Identity provider type** > **OpenID Connect (Preview)** -1. Fill out the form to set up the Identity provider: -- |Property | Value | - |:--| :--| - | Name | Keyless | - | Metadata URL | Insert the URI of the hosted Keyless Authentication app, followed by the specific path such as 'https://keyless.auth/.well-known/openid-configuration' | - | Client Secret | The secret associated with the Keyless Authentication instance - not same as the one configured before. Insert a complex string of your choice. This secret will be used later in the Keyless Container configuration.| - | Client ID | The ID of the client. This ID will be used later in the Keyless Container configuration.| - | Scope | openid | - | Response type | id_token | - | Response mode | form_post| --1. Select **OK**. --1. Select **Map this identity provider's claims**. --1. Fill out the form to map the Identity provider: -- |Property | Value | - |:--| :--| - | UserID | From subscription | - | Display name | From subscription | - | Response mode | From subscription | --1. Select **Save** to complete the setup for your new Open ID Connect (OIDC) Identity provider. 
+To add a new Identity provider: ++1. Sign in to the [Azure portal](https://portal.azure.com/#home) as Global Administrator of the Azure AD B2C tenant. +2. Select **Directories + subscriptions**. +3. On the **Portal settings, Directories + subscriptions** page, in the **Directory name** list, find your Azure AD B2C directory. +4. Select **Switch**. +5. In the top-left corner of the Azure portal, select **All services**. +6. Search for and select **Azure AD B2C**. +7. Navigate to **Dashboard** > **Azure Active Directory B2C** > **Identity providers**. +8. Select **Identity providers**. +9. Select **Add**. ++### Configure an identity provider ++To configure an IdP: ++1. Select **Identity provider type** > **OpenID Connect (Preview)**. +2. For **Name**, select **Keyless**. +3. For **Metadata URL**, insert the hosted Keyless Authentication app URI, followed by the path, such as `https://keyless.auth/.well-known/openid-configuration`. +4. For **Client Secret**, select the secret associated with the Keyless Authentication instance. The secret is used later in Keyless Container configuration. +5. For **Client ID**, select the client ID. The Client ID is used later in Keyless Container configuration. +6. For **Scope**, select **openid**. +7. For **Response type**, select **id_token**. +8. For **Response mode**, select **form_post**. +9. Select **OK**. +10. Select **Map this identity provider's claims**. +11. For **UserID**, select **From subscription**. +12. For **Display name**, select **From subscription**. +13. For **Response mode**, select **From subscription**. +14. Select **Save**. ### Create a user flow policy -You should now see Keyless as a new OIDC Identity provider listed within your B2C identity providers. --1. In your Azure AD B2C tenant, under **Policies**, select **User flows**. --2. Select **New** user flow. --3. Select **Sign up and sign in**, select a **version**, and then select **Create**. --4. Enter a **Name** for your policy. --5. In the Identity providers section, select your newly created Keyless Identity Provider. --6. Set up the parameters of your User flow. Insert a name and select the Identity provider you've created. You can also add email address. In this case, Azure won't redirect the login procedure directly to Keyless instead it will show a screen where the user can choose the option they would like to use. --7. Leave the **Multi-factor Authentication** field as is. --8. Select **Enforce conditional access policies** --9. Under **User attributes and token claims**, select **Email Address** in the Collect attribute option. You can add all the attributes that Azure Active Directory can collect about the user alongside the claims that Azure AD B2C can return to the client application. --10. Select **Create**. --11. After a successful creation, select your new **User flow**. --12. On the left panel, select **Application Claims**. Under options, tick the **email** checkbox and select **Save**. ++Keyless appears as a new OpenID Connect (OIDC) IdP with B2C identity providers. ++1. Open the Azure AD B2C tenant. +2. Under **Policies**, select **User flows**. +3. Select **New** user flow. +4. Select **Sign up and sign in**. +5. Select a **version**. +6. Select **Create**. +7. Enter a **Name** for your policy. +8. In the Identity providers section, select the created Keyless Identity Provider. +9. Enter a name. +10. Select the IdP you created. +11. Add an email address. Azure won't redirect the sign-in to Keyless; a screen appears with a user option. +12. 
Leave the **Multi-factor Authentication** field. +13. Select **Enforce conditional access policies**. +14. Under **User attributes and token claims**, in the **Collect attribute** option, select **Email Address**. +15. Add user attributes Azure AD collects with claims Azure AD B2C returns to the client application. +16. Select **Create**. +17. Select the new **User flow**. +18. On the left panel, select **Application Claims**. +19. Under options, select the **email** checkbox. +20. Select **Save**. ## Test the user flow -1. Open the Azure AD B2C tenant and under Policies select Identity Experience Framework. --2. Select your previously created SignUpSignIn. --3. Select Run user flow and select the settings: -- a. Application: select the registered app (sample is JWT) -- b. Reply URL: select the redirect URL +1. Open the Azure AD B2C tenant. +2. Under **Policies**, select **Identity Experience Framework**. +3. Select the created SignUpSignIn. +4. Select **Run user flow**. +5. For **Application**, select the registered app (the example is JWT). +6. For **Reply URL**, select the redirect URL. +7. Select **Run user flow**. +8. Complete the sign-up flow and create an account. +9. After the user attribute is created, Keyless is called during the flow. - c. Select Run user flow. --4. Go through sign-up flow and create an account --5. Keyless will be called during the flow, after user attribute is created. If the flow is incomplete, check that user isn't saved in the directory. +If the flow is incomplete, confirm the user is or isn't saved in the directory. ## Next steps -For additional information, review the following articles: --- [Custom policies in Azure AD B2C](./custom-policy-overview.md)--- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)+* [Azure AD B2C custom policy overview](./custom-policy-overview.md) +* [Tutorial: Create user flows and custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy) |
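The portal steps in the Keyless row map onto the Microsoft Graph `openIdConnectIdentityProvider` resource, so the provider can also be created by script. A hedged sketch follows; the client values are placeholders, and the `sub`/`name` claim mappings are assumptions to adapt to what Keyless actually issues.

```powershell
# Sketch only: requires the Microsoft.Graph.Authentication module, signed in to the B2C tenant.
Connect-MgGraph -TenantId "contoso.onmicrosoft.com" -Scopes "IdentityProvider.ReadWrite.All"

$provider = @{
    "@odata.type" = "#microsoft.graph.openIdConnectIdentityProvider"
    displayName   = "Keyless"
    clientId      = "<keyless-client-id>"
    clientSecret  = "<keyless-client-secret>"
    scope         = "openid"
    metadataUrl   = "https://keyless.auth/.well-known/openid-configuration"
    responseType  = "id_token"
    responseMode  = "form_post"
    claimsMapping = @{ userId = "sub"; displayName = "name" }  # assumed claim names
}

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/identity/identityProviders" `
    -Body ($provider | ConvertTo-Json)
```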
active-directory-b2c | Partner Saviynt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-saviynt.md | Title: Tutorial for configuring Saviynt with Azure Active Directory B2C + Title: Tutorial to configure Saviynt with Azure Active Directory B2C -description: Tutorial to configure Azure Active Directory B2C with Saviynt for cross application integration to streamline IT modernization and promote better security, governance, and compliance. +description: Learn to configure Azure AD B2C with Saviynt for cross-application integration for better security, governance, and compliance. -+ - Previously updated : 09/20/2021 Last updated : 03/07/2023 -# Tutorial for configuring Saviynt with Azure Active Directory B2C --In this sample tutorial, we provide guidance on how to integrate Azure Active Directory (AD) B2C with [Saviynt](https://saviynt.com/integrations/azure-ad/for-b2c/). Saviynt's Security Manager platform provides the visibility, security, and governance today's businesses need, in a single unified platform. Saviynt incorporates application risk and governance, infrastructure management, privileged account management, and customer risk analysis. +# Tutorial to configure Saviynt with Azure Active Directory B2C -In this sample tutorial, you'll set up Saviynt to provide fine grained access control based delegated administration for Azure AD B2C users. Saviynt does the following checks to determine if a user is authorized to manage Azure AD B2C users. +Learn to integrate Azure Active Directory B2C (Azure AD B2C) with the Saviynt Security Manager platform, which has visibility, security, and governance. Saviynt incorporates application risk and governance, infrastructure management, privileged account management, and customer risk analysis. -- Feature level security to determine if a user can perform a specific operation. For example, Create user, Update user, Reset user password, and so on.+Learn more: [Saviynt for Azure AD B2C](https://saviynt.com/integrations/azure-ad/for-b2c/) -- Field level security to determine if a user can read/write a specific attribute of another user during user management operations. For example, help desk agent can only update phone number and all other attributes are read-only.+Use the following instructions to set up access control delegated administration for Azure AD B2C users. Saviynt determines if a user is authorized to manage Azure AD B2C users with: -- Data level security to determine if a user can perform a certain operation on a specific user. For example, help desk administrator for UK region can manage UK users only.+* Feature level security to determine if users can perform an operation + * For example, create user, update user, reset user password, and so on +* Field level security to determine if users can read/write user attributes during user management operations + * For example, a Help Desk agent can update a phone number; other attributes are read-only +* Data level security to determine if users can perform an operation on another user + * For example, a Help Desk administrator for the United Kingdom region manages UK users ## Prerequisites -To get started, you'll need: --- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).--- An [Azure AD B2C tenant](./tutorial-create-tenant.md). 
Tenant is linked to your Azure subscription.+ -- A Saviynt [subscription](https://saviynt.com/contact-us/)+* An Azure AD subscription + * If you don't have one, get an [Azure free account](https://azure.microsoft.com/free/) +* An [Azure AD B2C tenant](./tutorial-create-tenant.md) linked to your Azure subscription +* Go to saviynt.com [Contact Us](https://saviynt.com/contact-us/) to request a demo ## Scenario description The Saviynt integration includes the following components: -- [Azure AD B2C](https://azure.microsoft.com/services/active-directory/external-identities/b2c/) – The business-to-customer identity as a service that enables custom control of how your customers sign up, sign in, and manage their profiles.+* **Azure AD B2C** – identity as a service for custom control of customer sign-up, sign-in, and profile management + * See, [Azure AD B2C, Get started](https://azure.microsoft.com/services/active-directory/external-identities/b2c/) +* **Saviynt for Azure AD B2C** – identity governance for delegated administration of user life-cycle management and access governance + * See, [Saviynt for Azure AD B2C](https://saviynt.com/integrations/azure-ad/for-b2c/) +* **Microsoft Graph API** – interface for Saviynt to manage Azure AD B2C users and their access + * See, [Use the Microsoft Graph API](/graph/use-the-api) + -- [Saviynt](https://saviynt.com/integrations/azure-ad/for-b2c/) – The identity governance platform that provides fine grained delegated administration for user life-cycle management and access governance of Azure AD B2C users. +The following architecture diagram illustrates the implementation. -- [Microsoft Graph API](/graph/use-the-api) – This API provides the interfaces for Saviynt to manage the Azure AD B2C users and their access in Azure AD B2C.+  -The following architecture diagram shows the implementation. +1. A delegated administrator starts the Azure AD B2C user operation with Saviynt. +2. Saviynt verifies the delegated administrator can perform the operation. +3. Saviynt sends an authorization success or failure response. +4. Saviynt allows the delegated administrator to perform the operation. +5. Saviynt invokes Microsoft Graph API, with user attributes, to manage the user in Azure AD B2C. +6. Microsoft Graph API creates, updates, or deletes the user in Azure AD B2C. +7. Azure AD B2C sends a success or failure response. +8. Microsoft Graph API returns the response to Saviynt. - +## Create a Saviynt account and create delegated policies -|Step | Description | -|:--| :--| -| 1. | A delegated administrator starts a manage Azure AD B2C user operation through Saviynt. -| 2. | Saviynt verifies with its authorization engine if the delegated administrator can do the specific operation. -| 3. | Saviynt's authorization engine sends an authorization success/failure response. -| 4. | Saviynt allows the delegated administrator to do the required operation. -| 5. | Saviynt invokes Microsoft Graph API along with user attributes to manage the user in Azure AD B2C -| 6. | Microsoft Graph API will in turn create/update/delete the user in Azure AD B2C. -| 7. | Azure AD B2C will send a success/failure response. -| 8. | Microsoft Graph API will then return the response to Saviynt. --## Onboard with Saviynt --1. To create a Saviynt account, contact [Saviynt](https://saviynt.com/contact-us/) --2. Create delegated administration policies and assign users as delegated administrators with various roles. +1. Create a Saviynt account. 
To get started, go to saviynt.com [Contact Us](https://saviynt.com/contact-us/). +2. Create delegated administration policies. +3. Assign users the delegated administrator role. ## Configure Azure AD B2C with Saviynt -### Create an Azure AD Application for Saviynt --1. Sign in to the [Azure portal](https://portal.azure.com/#home). -1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar. -1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**. -1. In the Azure portal, search and select **Azure AD B2C**. -1. Select **App registrations** > **New registration**. -1. Enter a Name for the application. For example, Saviynt and select **Create**. -1. Go to **API Permissions** and select **+ Add a permission.** -1. The Request API permissions page appears. Select **Microsoft APIs** tab and select **Microsoft Graph** as commonly used Microsoft APIs. -1. Go to the next page, and select **Application permissions**. -1. Select **Directory**, and select **Directory.Read.All** and **Directory.ReadWrite.All** checkboxes. -1. Select **Add Permissions**. Review the permissions added. -1. Select **Grant admin consent for Default Directory** > **Save**. -1. Go to **Certificates and Secrets** and select **+ Add Client Secret**. Enter the client secret description, select the expiry option, and select **Add**. -1. The Secret key is generated and displayed in the Client secret section. You'll need to use it later. --1. Go to **Overview** and get the **Client ID** and **Tenant ID**. -1. Tenant ID, client ID, and client secret will be needed to complete the setup in Saviynt. --### Enable Saviynt to Delete users --The below steps explain how to enable Saviynt to perform user delete operations in Azure AD B2C. +Use the following instructions to create an application, delete users, and more. ->[!NOTE] ->[Evaluate the risk before granting admin roles access to a service principal.](../active-directory/develop/app-objects-and-service-principals.md) ### Create an Azure AD application for Saviynt -1. Install the latest version of MSOnline PowerShell Module on a Windows workstation/server. +For the following instructions, use the directory with the Azure AD B2C tenant. -2. Connect to AzureAD PowerShell module and execute the following commands: +1. Sign in to the [Azure portal](https://portal.azure.com/#home). +2. In the portal toolbar, select **Directories + subscriptions**. +3. On the **Portal settings, Directories + subscriptions** page, in the **Directory name** list, find your Azure AD B2C directory. +4. Select **Switch**. +5. In the Azure portal, search and select **Azure AD B2C**. +6. Select **App registrations** > **New registration**. +7. Enter an application name. For example, Saviynt. +8. Select **Create**. +9. Go to **API Permissions**. +10. Select **+ Add a permission.** +11. The Request API permissions page appears. +12. Select the **Microsoft APIs** tab. +13. Select **Microsoft Graph** from the commonly used Microsoft APIs. +14. Go to the next page. +15. Select **Application permissions**. +16. Select **Directory**. +17. Select the **Directory.Read.All** and **Directory.ReadWrite.All** checkboxes. +18. Select **Add Permissions**. +19. Review the permissions. +20. Select **Grant admin consent for Default Directory**. +21. Select **Save**. +22. Go to **Certificates and Secrets**. +23. Select **+ Add Client Secret**. +24. 
Enter the client secret description. +25. Select the expiry option. +26. Select **Add**. +27. The Secret Key appears in the Client Secret section. Save the Client Secret to use later. ++28. Go to **Overview**. +29. Copy the **Client ID** and **Tenant ID**. ++Save the Tenant ID, Client ID, and Client Secret to complete the setup. ++### Enable Saviynt to delete users ++Enable Saviynt to perform user delete operations in Azure AD B2C. ++Learn more: [Application and service principal objects in Azure AD](../active-directory/develop/app-objects-and-service-principals.md) ++1. Install the latest version of MSOnline PowerShell Module on a Windows workstation or server. ++For more information, see [Azure Active Directory V2 PowerShell Module](https://www.powershellgallery.com/packages/AzureAD/2.0.2.140) ++2. Connect to the MSOnline PowerShell module and execute the following commands: ```powershell Connect-msolservice #Enter Admin credentials of the Azure portal Add-MsolRoleMember -RoleName "Company Administrator" -RoleMemberType ServicePrin ## Test the solution -Browse to your Saviynt application tenant and test user life-cycle management and access governance use case. +Browse to your Saviynt application tenant and test user life-cycle management and access governance use cases. ## Next steps -For additional information, review the following articles: --- [Custom policies in Azure AD B2C](./custom-policy-overview.md)--- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)--- [Create a web API application](./add-web-api-application.md)+* [Azure AD B2C custom policy overview](./custom-policy-overview.md) +* [Tutorial: Create user flows and custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy) +* [Add a web API application to your Azure Active Directory B2C tenant](./add-web-api-application.md) |
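The app registration and client secret steps in the Saviynt row can also be scripted. A minimal sketch with Microsoft Graph PowerShell; the display name and six-month expiry are assumptions.

```powershell
# Sketch only: requires the Microsoft.Graph.Applications module.
Connect-MgGraph -Scopes "Application.ReadWrite.All"

# Look up the registration created earlier (display name is a placeholder).
$app = Get-MgApplication -Filter "displayName eq 'Saviynt'"

# Equivalent of the portal's Certificates and Secrets > Add Client Secret steps.
$secret = Add-MgApplicationPassword -ApplicationId $app.Id -PasswordCredential @{
    DisplayName = "Saviynt integration"
    EndDateTime = (Get-Date).AddMonths(6)
}

# The secret value is returned only once; store it securely for the Saviynt setup.
$secret.SecretText
```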
active-directory-domain-services | Concepts Custom Attributes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/concepts-custom-attributes.md | + + Title: Create and manage custom attributes for Azure AD Domain Services | Microsoft Docs
+description: Learn how to create and manage custom attributes in an Azure AD DS managed domain.
+++++ms.assetid: 1a14637e-b3d0-4fd9-ba7a-576b8df62ff2
++++ Last updated : 03/07/2023++++# Custom attributes for Azure Active Directory Domain Services ++For various reasons, companies often can't modify code for legacy apps. For example, apps may use a custom attribute, such as a custom employee ID, and rely on that attribute for LDAP operations. ++Azure AD supports adding custom data to resources using [extensions](/graph/extensibility-overview). Azure Active Directory Domain Services (Azure AD DS) can synchronize the following types of extensions from Azure AD, so you can also use apps that depend on custom attributes with Azure AD DS: ++- [onPremisesExtensionAttributes](/graph/extensibility-overview?tabs=http#extension-attributes) are a set of 15 attributes that can store extended user string attributes.
+- [Directory extensions](/graph/extensibility-overview?tabs=http#directory-azure-ad-extensions) allow the schema extension of specific directory objects, such as users and groups, with strongly typed attributes through registration with an application in the tenant. ++Both types of extensions can be configured by using Azure AD Connect for users who are managed on-premises, or MSGraph APIs for cloud-only users. ++>[!Note]
+>The following types of extensions aren't supported for synchronization:
+>- Custom Security Attributes in Azure AD (Preview)
+>- MSGraph Schema Extensions
+>- MSGraph Open Extensions
+++## Requirements ++The minimum SKU supported for custom attributes is the Enterprise SKU. If you use Standard, you need to [upgrade](change-sku.md) the managed domain to Enterprise or Premium. For more information, see [Azure Active Directory Domain Pricing](https://azure.microsoft.com/pricing/details/active-directory-ds/). ++## How Custom Attributes work ++After you create a managed domain, click **Custom Attributes (Preview)** under **Settings** to enable attribute synchronization. Click **Save** to confirm the change. +++## Enable predefined attribute synchronization ++Click **OnPremisesExtensionAttributes** to synchronize the attributes extensionAttribute1-15, also known as [Exchange custom attributes](/graph/api/resources/onpremisesextensionattributes). ++## Synchronize Azure AD directory extension attributes ++These are the extended user or group attributes defined in your Azure AD tenant. ++Select **+ Add** to choose which custom attributes to synchronize. The list shows the available extension properties in your tenant. You can filter the list by using the search bar. ++++If you don't see the directory extension you are looking for, enter the extension's associated application appId and click **Search** to load only that application's defined extension properties. This search helps when multiple applications define many extensions in your tenant. ++>[!NOTE]
+>If you would like to see directory extensions synchronized by Azure AD Connect, click **Enterprise App** and look for the Application ID of the **Tenant Schema Extension App**. 
For more information, see [Azure AD Connect sync: Directory extensions](../active-directory/hybrid/how-to-connect-sync-feature-directory-extensions.md#configuration-changes-in-azure-ad-made-by-the-wizard). ++Click **Select**, and then **Save** to confirm the change. +++Azure AD DS backfills all synchronized users and groups with the onboarded custom attribute values. The custom attribute values gradually populate for objects that contain the directory extension in Azure AD. During the backfill synchronization process, incremental changes in Azure AD are paused, and the sync time depends on the size of the tenant. ++To check the backfilling status, click **Azure AD DS Health** and verify the **Synchronization with Azure AD** monitor has an updated timestamp within an hour since onboarding. Once updated, the backfill is complete. ++## Next steps ++To configure onPremisesExtensionAttributes or directory extensions for cloud-only users in Azure AD, see [Custom data options in Microsoft Graph](/graph/extensibility-overview?tabs=http#custom-data-options-in-microsoft-graph). ++To sync onPremisesExtensionAttributes or directory extensions from on-premises to Azure AD, [configure Azure AD Connect](../active-directory/hybrid/how-to-connect-sync-feature-directory-extensions.md). |
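The custom attributes row notes that onPremisesExtensionAttributes can be set through Microsoft Graph for cloud-only users. Here is a hedged sketch with the Microsoft Graph PowerShell SDK; the UPN and employee ID are hypothetical, and for synced users these attributes are mastered on-premises through Azure AD Connect instead.

```powershell
# Sketch only: requires the Microsoft.Graph.Users module; applies to cloud-only accounts.
Connect-MgGraph -Scopes "User.ReadWrite.All"

# Set one of the 15 extension attributes that Azure AD DS can synchronize.
Update-MgUser -UserId "user@contoso.com" -OnPremisesExtensionAttributes @{
    ExtensionAttribute1 = "EMP-00123"  # hypothetical custom employee ID
}
```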
active-directory | Use Scim To Provision Users And Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md | Use the general guidelines when implementing a SCIM endpoint to ensure compatibi * Don't require a case-sensitive match on structural elements in SCIM, in particular **PATCH** `op` operation values, as defined in [section 3.5.2](https://tools.ietf.org/html/rfc7644#section-3.5.2). Azure AD emits the values of `op` as **Add**, **Replace**, and **Remove**. * Microsoft Azure AD makes requests to fetch a random user and group to ensure that the endpoint and the credentials are valid. It's also done as a part of the **Test Connection** flow in the [Azure portal](https://portal.azure.com). * Support HTTPS on your SCIM endpoint.-* Custom complex and multivalued attributes are supported but Azure AD doesn't have many complex data structures to pull data from in these cases. Simple paired name/value type complex attributes can be mapped to easily, but flowing data to complex attributes with three or more subattributes aren't well supported at this time. +* Custom complex and multivalued attributes are supported but Azure AD doesn't have many complex data structures to pull data from in these cases. Name/value attributes can be mapped to easily, but flowing data to complex attributes with three or more sub-attributes isn't supported. * The "type" subattribute values of multivalued complex attributes must be unique. For example, there can't be two different email addresses with the "work" subtype. * The header for all the responses should be of content-Type: application/scim+json TLS 1.2 Cipher Suites minimum bar: ### IP Ranges -The Azure AD provisioning service currently operates under the IP Ranges for AzureActiveDirectory as listed [here](https://www.microsoft.com/download/details.aspx?id=56519&WT.mc_id=rss_alldownloads_all). You can add the IP ranges listed under the AzureActiveDirectory tag to allow traffic from the Azure AD provisioning service into your application. You'll need to review the IP range list carefully for computed addresses. An address such as '40.126.25.32' could be represented in the IP range list as '40.126.0.0/18'. You can also programmatically retrieve the IP range list using the following [API](/rest/api/virtualnetwork/servicetags/list). +The Azure AD provisioning service currently operates under the IP Ranges for AzureActiveDirectory as listed [here](https://www.microsoft.com/download/details.aspx?id=56519&WT.mc_id=rss_alldownloads_all). You can add the IP ranges listed under the AzureActiveDirectory tag to allow traffic from the Azure AD provisioning service into your application. You need to review the IP range list carefully for computed addresses. An address such as '40.126.25.32' could be represented in the IP range list as '40.126.0.0/18'. You can also programmatically retrieve the IP range list using the following [API](/rest/api/virtualnetwork/servicetags/list). Azure AD also supports an agent based solution to provide connectivity to applications in private networks (on-premises, hosted in Azure, hosted in AWS, etc.). Customers can deploy a lightweight agent, which provides connectivity to Azure AD without opening any inbound ports, on a server in their private network. Learn more [here](./on-premises-scim-provisioning.md). |
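As a short sketch of the programmatic service-tag lookup the SCIM row mentions, using the Az.Network module; the location parameter scopes the API call rather than filtering the returned tags.

```powershell
# Sketch only: requires the Az.Network module.
$tags = Get-AzNetworkServiceTag -Location "westus2"

# Pull the address prefixes published under the AzureActiveDirectory tag.
$aad = $tags.Values | Where-Object { $_.Name -eq "AzureActiveDirectory" }
$aad.Properties.AddressPrefixes
```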
active-directory | Concept Password Ban Bad Combined Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-password-ban-bad-combined-policy.md | -As the combined check for password policy and banned passwords gets rolled out to tenants, Azure AD and Office 365 admin center users may see differences when they create, change, or reset their passwords. This topic explains details about the password policy criteria checked by Azure AD. +This topic explains details about the password policy criteria checked by Azure AD. ## Azure AD password policies The following Azure AD password policy requirements apply for all passwords that | Characters not allowed | Unicode characters | | Password length |Passwords require<br>- A minimum of eight characters<br>- A maximum of 256 characters</li> | | Password complexity |Passwords require three out of four of the following categories:<br>- Uppercase characters<br>- Lowercase characters<br>- Numbers <br>- Symbols<br> Note: Password complexity check isn't required for Education tenants. |-| Password not recently used | When a user changes or resets their password, the new password can't be the same as the current or recently used passwords. | +| Password not recently used | When a user changes their password, the new password can't be the same as the current or recently used passwords. | | Password isn't banned by [Azure AD Password Protection](concept-password-ban-bad.md) | The password can't be on the global list of banned passwords for Azure AD Password Protection, or on the customizable list of banned passwords specific to your organization. | ## Password expiration policies -Password expiration policies are unchanged but they're included in this topic for completeness. A *global administrator* or *user administrator* can use the [Microsoft Azure AD Module for Windows PowerShell](/powershell/module/Azuread/) to set user passwords not to expire. +Password expiration policies are unchanged but they're included in this topic for completeness. A *Global Administrator* or *User Administrator* can use the [Microsoft Azure AD Module for Windows PowerShell](/powershell/module/Azuread/) to set user passwords not to expire. > [!NOTE] > By default, only passwords for user accounts that aren't synchronized through Azure AD Connect can be configured to not expire. For more information about directory synchronization, see [Connect AD with Azure AD](../hybrid/how-to-connect-password-hash-synchronization.md#password-expiration-policy). |
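A minimal sketch of the never-expires setting described in the row above, using the Azure AD PowerShell module the article links to; the UPN is a placeholder.

```powershell
# Sketch only: requires the AzureAD module.
Connect-AzureAD

# Set the user's password to never expire, then verify the applied policy.
Set-AzureADUser -ObjectId "user@contoso.com" -PasswordPolicies "DisablePasswordExpiration"
Get-AzureADUser -ObjectId "user@contoso.com" | Select-Object UserPrincipalName, PasswordPolicies
```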
active-directory | Concept Registration Mfa Sspr Combined | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-registration-mfa-sspr-combined.md | Combined registration supports the authentication methods and actions in the fol > [!NOTE] > <b>Alternate phone</b> can only be registered in *manage mode* on the [Security info](https://mysignins.microsoft.com/security-info) page and requires Voice calls to be enabled in the Authentication methods policy. <br /> > <b>Office phone</b> can only be registered in *Interrupt mode* if the users *Business phone* property has been set. Office phone can be added by users in *Managed mode from the [Security info](https://mysignins.microsoft.com/security-info)* without this requirement. <br />-> <b>App passwords</b> are available only to users who have been enforced for per-user MFA. App passwords are not available to users who are enabled for Azure AD Multi-Factor Authentication by a Conditional Access policy. <br /> +> <b>App passwords</b> are available only to users who have been enforced for per-user MFA. App passwords aren't available to users who are enabled for Azure AD Multi-Factor Authentication by a Conditional Access policy. <br /> > <b>FIDO2 security keys</b> can only be added in *manage mode* on the [Security info](https://mysignins.microsoft.com/security-info) page. Users can set one of the following options as the default multifactor authentication method. Users can set one of the following options as the default multifactor authentica - Text message >[!NOTE]->Virtual phone numbers are not supported for Voice calls or SMS messages. +>Virtual phone numbers aren't supported for Voice calls or SMS messages. -Third party authenticator apps do not provide push notification. As we continue to add more authentication methods to Azure AD, those methods become available in combined registration. +Third-party authenticator apps don't provide push notifications. As we continue to add more authentication methods to Azure AD, those methods become available in combined registration. ## Combined registration modes Combined registration adheres to both multifactor authentication and SSPR polici The following are sample scenarios where users might be prompted to register or refresh their security info: -- *Multifactor Authentication registration enforced through Identity Protection:* Users are asked to register during sign-in. They register multifactor authentication methods and SSPR methods (if the user is enabled for SSPR).-- *Multifactor Authentication registration enforced through per-user multifactor authentication:* Users are asked to register during sign-in. They register multifactor authentication methods and SSPR methods (if the user is enabled for SSPR).-- *Multifactor Authentication registration enforced through Conditional Access or other policies:* Users are asked to register when they use a resource that requires multifactor authentication. They register multifactor authentication methods and SSPR methods (if the user is enabled for SSPR).+- *Multifactor authentication registration enforced through Identity Protection:* Users are asked to register during sign-in. They register multifactor authentication methods and SSPR methods (if the user is enabled for SSPR). +- *Multifactor authentication registration enforced through per-user multifactor authentication:* Users are asked to register during sign-in. 
They register multifactor authentication methods and SSPR methods (if the user is enabled for SSPR). +- *Multifactor authentication registration enforced through Conditional Access or other policies:* Users are asked to register when they use a resource that requires multifactor authentication. They register multifactor authentication methods and SSPR methods (if the user is enabled for SSPR). - *SSPR registration enforced:* Users are asked to register during sign-in. They register only SSPR methods. - *SSPR refresh enforced:* Users are required to review their security info at an interval set by the admin. Users are shown their info and can confirm the current info or make changes if needed. -When registration is enforced, users are shown the minimum number of methods needed to be compliant with both multifactor authentication and SSPR policies, from most to least secure. Users going through combined registration where both MFA and SSPR registration is enforced and the SSPR policy requires two methods will first be required to register an MFA method as the first method and can select another MFA or SSPR specific method as the second registered method (e.g. email, security questions etc.) +When registration is enforced, users are shown the minimum number of methods needed to be compliant with both multifactor authentication and SSPR policies, from most to least secure. Users going through combined registration where both MFA and SSPR registration are enforced and the SSPR policy requires two methods will first be required to register an MFA method as the first method and can select another MFA or SSPR specific method as the second registered method (for example, email or security questions). Consider the following example scenario: -- A user is enabled for SSPR. The SSPR policy requires two methods to reset and has enabled Authenticator app, email, and phone.+- A user is enabled for SSPR. The SSPR policy requires two methods to reset and has enabled Microsoft Authenticator app, email, and phone. - When the user chooses to register, two methods are required:- - The user is shown Authenticator app and phone by default. + - The user is shown Microsoft Authenticator app and phone by default. - The user can choose to register email instead of Authenticator app or phone. +When they set up Microsoft Authenticator, the user can click **I want to set up a different method** to register other authentication methods. The list of available methods is determined by the Authentication methods policy for the tenant.  ++ The following flowchart describes which methods are shown to a user when interrupted to register during sign-in:  If you have both multifactor authentication and SSPR enabled, we recommend that you enforce multifactor authentication registration. -If the SSPR policy requires users to review their security info at regular intervals, users are interrupted during sign-in and shown all their registered methods. They can confirm the current info if it's up to date, or they can make changes if they need to. Users must perform multi-factor authentication when accessing this page. +If the SSPR policy requires users to review their security info at regular intervals, users are interrupted during sign-in and shown all their registered methods. They can confirm the current info if it's up to date, or they can make changes if they need to. Users must perform multifactor authentication to access this page. ### Manage mode A user has not set up all required security info and goes to the Azure portal. 
A ### Set up security info from My Account -An admin has not enforced registration. +An admin hasn't enforced registration. A user who hasn't yet set up all required security info goes to [https://myaccount.microsoft.com](https://myaccount.microsoft.com). The user selects **Security info** in the left pane. From there, the user chooses to add a method, selects any of the methods available, and follows the steps to set up that method. When finished, the user sees the method that was set up on the Security info page. |
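Because the methods offered during combined registration come from the tenant's Authentication methods policy, it can help to inspect that policy directly, as referenced in the entry above. A minimal Microsoft Graph sketch, assuming the caller has the `Policy.Read.All` permission:

```http
GET https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy
```

Each entry in the response's `authenticationMethodConfigurations` collection reports whether a method such as `MicrosoftAuthenticator`, `Sms`, or `Fido2` is enabled for the tenant.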
active-directory | Usage Analytics Active Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-active-resources.md | The **Analytics** dashboard in Permissions Management collects detailed informat - **Users**: Tracks assigned permissions and usage of various identities. - **Groups**: Tracks assigned permissions and usage of the group and the group members.-- **Active Resources**: Tracks active resources (used in the last 90 days).+- **Active Resources**: Tracks resources that identities have performed actions on (in the last 90 days). - **Active Tasks**: Tracks active tasks (performed in the last 90 days). - **Access Keys**: Tracks the permission usage of access keys for a given user. - **Serverless Functions**: Tracks assigned permissions and usage of the serverless functions. |
active-directory | Msal B2c Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-b2c-overview.md | -By using Azure AD B2C as an identity management service, you can customize and control how your customers sign up, sign in, and manage their profiles when they use your applications. +By using Azure AD B2C as an identity management service, you can customize and control how your customers sign up, sign in, and manage their profiles when they use your applications. Azure AD B2C also enables you to brand and customize the UI that your application displays during the authentication process. For more information, see: [Working with Azure AD B2C](https://github.com/AzureA Follow the tutorial on how to: - [Sign in users with Azure AD B2C in a single-page application](../../active-directory-b2c/configure-authentication-sample-spa-app.md)-- [Call an Azure AD B2C protected web API](../../active-directory-b2c/enable-authentication-web-api.md)+- [Call an Azure AD B2C protected web API](../../active-directory-b2c/enable-authentication-web-api.md) |
active-directory | Tutorial V2 Windows Uwp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-windows-uwp.md | This section shows how to use the Microsoft Authentication Library to get a toke ```csharp using Microsoft.Identity.Client; using Microsoft.Graph;+ using Microsoft.Graph.Models; using System.Diagnostics; using System.Threading.Tasks; using System.Net.Http.Headers; This section shows how to use the Microsoft Authentication Library to get a toke GraphServiceClient graphClient = await SignInAndInitializeGraphServiceClient(scopes); // Call the /me endpoint of Graph- User graphUser = await graphClient.Me.Request().GetAsync(); + User graphUser = await graphClient.Me.GetAsync(); // Go back to the UI thread to make changes to the UI await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () => Eventually, the `AcquireTokenSilent` method fails. Reasons for failure include a ### Instantiate the Microsoft Graph Service Client by obtaining the token from the SignInUserAndGetTokenUsingMSAL method +In the project, create a new file named *TokenProvider.cs*: right-click on the project, select **Add** > **New Item** > **Class**. ++Add the following code to the newly created file: ++```csharp +using Microsoft.Kiota.Abstractions.Authentication; +using System; +using System.Collections.Generic; +using System.Threading; +using System.Threading.Tasks; ++namespace UWP_app_MSGraph { + public class TokenProvider : IAccessTokenProvider { + private Func<string[], Task<string>> getTokenDelegate; + private string[] scopes; ++ public TokenProvider(Func<string[], Task<string>> getTokenDelegate, string[] scopes) { + this.getTokenDelegate = getTokenDelegate; + this.scopes = scopes; + } ++ public Task<string> GetAuthorizationTokenAsync(Uri uri, Dictionary<string, object> additionalAuthenticationContext = default, + CancellationToken cancellationToken = default) { + return getTokenDelegate(scopes); + } ++ public AllowedHostsValidator AllowedHostsValidator { get; } + } +} +``` ++> [!TIP] +> After pasting the code, make sure that the namespace in the *TokenProvider.cs* file matches the namespace of your project. This will allow you to more easily reference the `TokenProvider` class in your project. ++The `TokenProvider` class defines a custom access token provider that executes the specified delegate method to get and return an access token. + Add the following new method to *MainPage.xaml.cs*: ```csharp Add the following new method to *MainPage.xaml.cs*: /// <returns>GraphServiceClient</returns> private async static Task<GraphServiceClient> SignInAndInitializeGraphServiceClient(string[] scopes) {- GraphServiceClient graphClient = new GraphServiceClient(MSGraphURL, - new DelegateAuthenticationProvider(async (requestMessage) => - { - requestMessage.Headers.Authorization = new AuthenticationHeaderValue("bearer", await SignInUserAndGetTokenUsingMSAL(scopes)); - })); + var tokenProvider = new TokenProvider(SignInUserAndGetTokenUsingMSAL, scopes); + var authProvider = new BaseBearerTokenAuthenticationProvider(tokenProvider); + var graphClient = new GraphServiceClient(authProvider, MSGraphURL); return await Task.FromResult(graphClient); } ``` +In this method, you're using the custom access token provider `TokenProvider` to connect the `SignInUserAndGetTokenUsingMSAL` method to the Microsoft Graph .NET SDK and create an authenticated client. A small hardening sketch for the provider's `AllowedHostsValidator` property follows this entry. 
++To use the `BaseBearerTokenAuthenticationProvider`, in the *MainPage.xaml.cs* file, add the following reference: ++```cs +using Microsoft.Kiota.Abstractions.Authentication; +``` + #### More information on making a REST call against a protected API In this sample application, the `GetGraphServiceClient` method instantiates `GraphServiceClient` by using an access token. Then, `GraphServiceClient` is used to get the user's profile information from the **me** endpoint. |
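In the `TokenProvider` sketch above, the `AllowedHostsValidator` property is left uninitialized. If you want the provider to declare which hosts it may issue tokens for, you can initialize it in the constructor. A hedged hardening sketch, assuming Kiota's `AllowedHostsValidator` constructor that accepts a list of valid hosts:

```csharp
// Hardening sketch (assumption: AllowedHostsValidator(IEnumerable<string>)
// from Microsoft.Kiota.Abstractions.Authentication is available).
public TokenProvider(Func<string[], Task<string>> getTokenDelegate, string[] scopes) {
    this.getTokenDelegate = getTokenDelegate;
    this.scopes = scopes;
    // Only attach tokens to requests bound for Microsoft Graph.
    AllowedHostsValidator = new AllowedHostsValidator(new[] { "graph.microsoft.com" });
}
```

This doesn't change the tutorial's behavior; it only scopes where a bearer token can be sent if the token provider is reused elsewhere.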
active-directory | V2 Admin Consent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-admin-consent.md | |
active-directory | Workload Identity Federation Create Trust User Assigned Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust-user-assigned-managed-identity.md | Title: Create a trust relationship between a user-assigned managed identity and an external identity provider description: Set up a trust relationship between a user-assigned managed identity in Azure AD and an external identity provider. This allows a software workload outside of Azure to access Azure AD protected resources without using secrets or certificates. -+ Previously updated : 01/19/2023- Last updated : 03/06/2023+ zone_pivot_groups: identity-wif-mi-methods To learn more about supported regions, time to propagate federated credential up - If you're unfamiliar with managed identities for Azure resources, check out the [overview section](../managed-identities-azure-resources/overview.md). Be sure to review the [difference between a system-assigned and user-assigned managed identity](../managed-identities-azure-resources/overview.md#managed-identity-types). - If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue. - Get the information for your external IdP and software workload, which you need in the following steps.-- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.+- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) or [Owner](../../role-based-access-control/built-in-roles.md#owner) role assignment. - [Create a user-assigned managed identity](../managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity) - Find the object ID of the user-assigned managed identity, which you need in the following steps. To delete a specific federated identity credential, select the **Delete** icon f - If you're unfamiliar with managed identities for Azure resources, check out the [overview section](../managed-identities-azure-resources/overview.md). Be sure to review the [difference between a system-assigned and user-assigned managed identity](../managed-identities-azure-resources/overview.md#managed-identity-types). - If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue. - Get the information for your external IdP and software workload, which you need in the following steps.-- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.+- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) or [Owner](../../role-based-access-control/built-in-roles.md#owner) role assignment. 
- [Create a user-assigned managed identity](../managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azcli#create-a-user-assigned-managed-identity-1) - Find the object ID of the user-assigned managed identity, which you need in the following steps. az identity federated-credential delete --name $ficId --identity-name $uaId --re - If you're unfamiliar with managed identities for Azure resources, check out the [overview section](../managed-identities-azure-resources/overview.md). Be sure to review the [difference between a system-assigned and user-assigned managed identity](../managed-identities-azure-resources/overview.md#managed-identity-types). - If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue. - Get the information for your external IdP and software workload, which you need in the following steps.-- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.+- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) or [Owner](../../role-based-access-control/built-in-roles.md#owner) role assignment. - To run the example scripts, you have two options: - Use [Azure Cloud Shell](../../cloud-shell/overview.md), which you can open by using the **Try It** button in the upper-right corner of code blocks. - Run scripts locally with Azure PowerShell, as described in the next section. Remove-AzFederatedIdentityCredentials -ResourceGroupName azure-rg-test -Identity - If you're unfamiliar with managed identities for Azure resources, check out the [overview section](../managed-identities-azure-resources/overview.md). Be sure to review the [difference between a system-assigned and user-assigned managed identity](../managed-identities-azure-resources/overview.md#managed-identity-types). - If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue. - Get the information for your external IdP and software workload, which you need in the following steps.-- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.+- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) or [Owner](../../role-based-access-control/built-in-roles.md#owner) role assignment. - [Create a user-assigned managed identity](../managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-arm#create-a-user-assigned-managed-identity-3) - Find the object ID of the user-assigned managed identity, which you need in the following steps. Make sure that any kind of automation creates federated identity credentials und - If you're unfamiliar with managed identities for Azure resources, check out the [overview section](../managed-identities-azure-resources/overview.md). 
Be sure to review the [difference between a system-assigned and user-assigned managed identity](../managed-identities-azure-resources/overview.md#managed-identity-types). - If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue. - Get the information for your external IdP and software workload, which you need in the following steps.-- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.+- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) or [Owner](../../role-based-access-control/built-in-roles.md#owner) role assignment. - You can run all the commands in this article either in the cloud or locally: - To run in the cloud, use [Azure Cloud Shell](../../cloud-shell/overview.md). - To run locally, install [curl](https://curl.haxx.se/download.html) and the [Azure CLI](/cli/azure/install-azure-cli). |
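The Azure CLI pivot above shows the delete command; creation is symmetric. A hedged sketch of `az identity federated-credential create`, using placeholder values for a GitHub Actions trust (substitute your external IdP's issuer and subject):

```azurecli
az identity federated-credential create \
    --name myFicName \
    --identity-name myUserAssignedIdentity \
    --resource-group myResourceGroup \
    --issuer "https://token.actions.githubusercontent.com" \
    --subject "repo:my-org/my-repo:ref:refs/heads/main" \
    --audiences "api://AzureADTokenExchange"
```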
active-directory | Cross Tenant Synchronization Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-overview.md | With cross-tenant synchronization, you can do the following: ## Teams and Microsoft 365 -Users created by cross-tenant synchronization will have the same experience when accessing Microsoft Teams and other Microsoft 365 services as B2B collaboration users created through a manual invitation. The [userType](../external-identities/user-properties.md) property on the B2B user, whether guest or member, does change the end user experience. Over time, the member userType will be used by the various Microsoft 365 services to provide differentiated end user experiences for users in a multi-tenant organization. +Users created by cross-tenant synchronization will have the same experience when accessing Microsoft Teams and other Microsoft 365 services as B2B collaboration users created through a manual invitation. The [userType](../external-identities/user-properties.md) property on the B2B user, whether guest or member, does not change the end user experience. Over time, the member userType will be used by the various Microsoft 365 services to provide differentiated end user experiences for users in a multi-tenant organization. ## Properties |
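To check how a synchronized user landed in the target tenant, you can read the `userType` property directly. A minimal Microsoft Graph sketch, assuming the caller has `User.Read.All` and a placeholder user ID:

```http
GET https://graph.microsoft.com/v1.0/users/{user-id}?$select=displayName,userType
```

A user created as a member by cross-tenant synchronization returns `"userType": "Member"`; a classic invited B2B guest returns `"Guest"`.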
active-directory | Howto Use Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-recommendations.md | + + Title: How to use Azure Active Directory recommendations | Microsoft Docs +description: Learn how to use Azure Active Directory recommendations. +++++++ Last updated : 03/06/2023+++++# How to: Use Azure AD recommendations ++The Azure Active Directory (Azure AD) recommendations feature provides you with personalized insights with actionable guidance to: ++- Help you identify opportunities to implement best practices for Azure AD-related features. +- Improve the state of your Azure AD tenant. +- Optimize the configurations for your scenarios. ++This article covers how to work with Azure AD recommendations. Each Azure AD recommendation contains similar details such as a description, the value of addressing the recommendation, and the steps to address the recommendation. Microsoft Graph API guidance is also provided in this article. ++## Role requirements ++There are different role requirements for viewing or updating a recommendation. Use the least-privileged role for the type of access needed. ++| Azure AD role | Access type | +|- |- | +| Reports Reader | Read-only | +| Security Reader | Read-only | +| Global Reader | Read-only | +| Cloud apps Administrator | Update and read | +| Apps Administrator | Update and read | +| Security Operator | Update and read | +| Security Administrator | Update and read | ++Some recommendations may require a P2 or other license. For more information, see [Recommendation availability and license requirements](overview-recommendations.md#recommendation-availability-and-license-requirements). ++## How to read a recommendation ++To view the details of a recommendation: ++1. Sign in to Azure using the appropriate least-privilege role. +1. Go to **Azure AD** > **Recommendations** and select a recommendation from the list. ++  ++Each recommendation provides the same set of details that explain what the recommendation is, why it's important, and how to fix it. ++ ++- The **Status** of a recommendation can be updated manually or automatically by the system. If all resources are addressed according to the action plan, the status automatically changes to *Completed* the next time the recommendations service runs. The recommendation service runs every 24-48 hours, depending on the recommendation. ++- The **Priority** of a recommendation could be low, medium, or high. These values are determined by several factors, such as security implications, health concerns, or potential breaking changes. ++ - **High**: Must do. Not acting will result in severe security implications or potential downtime. + - **Medium**: Should do. No severe risk if action isn't taken. + - **Low**: Might do. No security risks or health concerns if action isn't taken. ++- The **Impacted resource type** for a recommendation could be applications, users, or your full tenant. This detail gives you an idea of what type of resources you need to address. If the impacted resource is at the tenant level, you may need to make a global change. ++ ++- The **Status description** tells you the date the recommendation status changed and if it was changed by the system or a user. ++- The recommendation's **Value** is an explanation of why completing the recommendation will benefit you, and the value of the associated feature. ++- The **Action plan** provides step-by-step instructions to implement a recommendation. 
The Action plan may include links to relevant documentation or direct you to other pages in the Azure AD portal. ++- The **Impacted resources** table contains a list of resources identified by the recommendation. The resource's name, ID, date it was first detected, and status are provided. The resource could be an application or resource service principal, for example. ++## How to update a recommendation ++To update the status of a recommendation or a related resource, sign in to Azure using a least-privileged role for updating a recommendation. ++1. Go to **Azure AD** > **Recommendations**. ++1. Select a recommendation from the list to view the details, status, and action plan. ++1. Follow the **Action plan**. ++1. If applicable, *right-click on the status* of a resource in a recommendation, select **Mark as**, then select a status. ++ - The status for the resource appears as regular text, but you can right-click on the status to open the menu. + - You can set each resource to a different status as needed. + +  ++1. The recommendation service automatically marks the recommendation as complete, but if you need to manually change the status of a recommendation, select **Mark as** from the top of the page and select a status. ++  ++ - Mark a recommendation as **Dismissed** if you think the recommendation is irrelevant or the data is wrong. + - Azure AD asks for a reason why you dismissed the recommendation so we can improve the service. + - Mark a recommendation as **Postponed** if you want to address the recommendation at a later time. + - The recommendation becomes **Active** when the selected date occurs. + - You can reactivate a completed or postponed recommendation to keep it top of mind and reassess the resources. + - Recommendations change to **Completed** if all impacted resources have been addressed. + - If the service identifies an active resource for a completed recommendation the next time the service runs, the recommendation will automatically change back to **Active**. + - Completing a recommendation is the only action collected in the audit log. To view these logs, go to **Azure AD** > **Audit logs** and filter the service to "Azure AD recommendations." ++Continue to monitor the recommendations in your tenant for changes. ++### How to use Microsoft Graph with Azure Active Directory recommendations ++Azure Active Directory recommendations can be viewed and managed using Microsoft Graph on the `/beta` endpoint. You can view recommendations along with their impacted resources, postpone a recommendation for later, and more. ++To get started, follow these instructions to work with recommendations using Microsoft Graph in Graph Explorer. The example uses the "Migrate apps from Active Directory Federation Services (AD FS) to Azure AD" recommendation. ++1. Sign in to [Graph Explorer](https://aka.ms/ge). +1. Select **GET** as the HTTP method from the dropdown. +1. Set the API version to **beta**. +1. Add the following query to retrieve recommendations, then select the **Run query** button. ++ ```http + GET https://graph.microsoft.com/beta/directory/recommendations + ``` ++1. To view the details of a specific `recommendationType`, use the following API. This example retrieves the detail of the "Migrate apps from AD FS to Azure AD" recommendation. ++ ```http + GET https://graph.microsoft.com/beta/directory/recommendations?$filter=recommendationType eq 'adfsAppsMigration' + ``` ++1. To view the impacted resources for a specific recommendation, expand the `impactedResources` relationship. 
++ ```http + GET https://graph.microsoft.com/beta/directory/recommendations?$filter=recommendationType eq 'adfsAppsMigration'&$expand=impactedResources + ``` ++For more information, see the [Microsoft Graph documentation for recommendations](/graph/api/resources/recommendations-api-overview). ++## Next steps ++- [Review the Azure AD recommendations overview](overview-recommendations.md) +- [Learn about Service Health notifications](overview-service-health-notifications.md) |
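Beyond reading recommendations, the beta API also exposes status actions. A hedged sketch of postponing a recommendation; the `postpone` action and its `postponeUntilDateTime` parameter are taken from the beta recommendations API, so verify the exact contract in the linked Graph documentation:

```http
POST https://graph.microsoft.com/beta/directory/recommendations/{recommendationId}/postpone
Content-Type: application/json

{
  "postponeUntilDateTime": "2023-06-30T00:00:00Z"
}
```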
active-directory | Overview Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-recommendations.md | This article gives you an overview of how you can use Azure AD recommendations. The Azure AD recommendations feature is the Azure AD specific implementation of [Azure Advisor](../../advisor/advisor-overview.md), which is a personalized cloud consultant that helps you follow best practices to optimize your Azure deployments. Azure Advisor analyzes your resource configuration and usage data to recommend solutions that can help you improve the cost effectiveness, performance, reliability, and security of your Azure resources. -*Azure AD recommendations* use similar data to support you with the roll-out and management of Microsoft's best practices for Azure AD tenants to keep your tenant in a secure and healthy state. Azure AD recommendations provide a holistic view into your tenant's security, health, and usage. - -## How it works --On a daily basis, Azure AD analyzes the configuration of your tenant. During this analysis, Azure AD compares the data of a recommendation with the actual configuration of your tenant. If a recommendation is flagged as applicable to your tenant, the recommendation appears in the **Recommendations** section of the Azure AD Overview area. The recommendations are listed in order of priority so you can quickly determine where to focus first. --Each recommendation contains a description, a summary of the value of addressing the recommendation, and a step-by-step action plan. If applicable, impacted resources associated with the recommendation are listed, so you can resolve each affected area. If a recommendation doesn't have any associated resources, the impacted resource type is *Tenant level*. so your step-by-step action plan impacts the entire tenant and not just a specific resource. -- --## Recommendation details --Each recommendation provides the same set of details that explain what the recommendation is, why it's important, and how to fix it. --The **Status** of a recommendation can be updated manually or automatically by the system. If all resources are addressed according to the action plan, the status automatically changes to *Completed* the next time the recommendations service runs. The recommendation service runs every 24-48 hours, depending on the recommendation. -- --The **Priority** of a recommendation could be low, medium, or high. These values are determined by several factors, such as security implications, health concerns, or potential breaking changes. -- --- **High**: Must do. Not acting will result in severe security implications or potential downtime.-- **Medium**: Should do. No severe risk if action isn't taken.-- **Low**: Might do. No security risks or health concerns if action isn't taken.--The **Impacted resources** for a recommendation could be things like applications or users. This detail gives you an idea of what type of resources you need to address. The impacted resource could also be at the tenant level, so you may need to make a global change. --The **Status description** tells you the date the recommendation status changed and if it was changed by the system or a user. --The recommendation's **Value** is an explanation of why completing the recommendation will benefit you, and the value of the associated feature. --The **Action plan** provides step-by-step instructions to implement a recommendation. 
May include links to relevant documentation or direct you to other pages in the Azure AD portal. --## Roles and licenses --The following roles provide *read-only* access to recommendations: --- Reports Reader-- Security Reader-- Global Reader--The following roles provide *update and read-only* access to recommendations: --- Global Administrator-- Security Administrator-- Security Operator-- Cloud apps Administrator-- Apps Administrator+*Azure AD recommendations* use similar data to support you with the roll-out and management of Microsoft's best practices for Azure AD tenants to keep your tenant in a secure and healthy state. The Azure AD recommendations feature provides a holistic view into your tenant's security, health, and usage. -The Azure AD recommendations feature is automatically enabled. If you'd like to disable this feature, go to **Azure AD** > **Preview features**. Locate the **Recommendations** feature, and change the **State**. --Azure AD only displays the recommendations that apply to your tenant, so you may not see all supported recommendations listed. Currently, all recommendations are available in all tenants, regardless of the license type. --### Recommendations available for all Azure AD tenants --The recommendations listed in the following table are available to all Azure AD tenants. The table provides the impacted resources and links to available documentation. --| Recommendation | Impacted resources | Availability | -|- |- |- | -| [Convert per-user MFA to Conditional Access MFA](recommendation-turn-off-per-user-mfa.md) | Users | Generally available | -| [Migrate applications from AD FS to Azure AD](recommendation-migrate-apps-from-adfs-to-azure-ad.md) | Users | Generally available | -| [Migrate to Microsoft Authenticator](recommendation-migrate-to-authenticator.md) | Users | Preview | -| [Minimize MFA prompts from known devices](recommendation-migrate-apps-from-adfs-to-azure-ad.md) | Users | Generally available | --## How to use Azure AD recommendations --1. Go to **Azure AD** > **Recommendations**. --1. Select a recommendation from the list to view the details, status, and action plan. --  --1. Follow the **Action plan**. --1. If applicable, *right-click on the status* of a resource in a recommendation, select **Mark as**, then select a status. -- - The status for the resource appears as regular text, but you can right-click on the status to open the menu. - - You can set each resource to a different status as needed. - -  --1. The recommendation service automatically marks the recommendation as complete, but if you need to manually change the status of a recommendation, select **Mark as** from the top of the page and select a status. --  -- - Mark a recommendation as **Dismissed** if you think the recommendation is irrelevant or the data is wrong. - - Azure AD asks for a reason why you dismissed the recommendation so we can improve the service. - - Mark a recommendation as **Postponed** if you want to address the recommendation at a later time. - - The recommendation becomes **Active** when the selected date occurs. - - You can reactivate a completed or postponed recommendation to keep it top of mind and reassess the resources. - - Recommendations change to **Completed** if all impacted resources have been addressed. - - If the service identifies an active resource for a completed recommendation the next time the service runs, the recommendation will automatically change back to **Active**. 
- - Completing a recommendation is the only action collected in the audit log. To view these logs, go to **Azure AD** > **Audit logs** and filter the service to "Azure AD recommendations." --Continue to monitor the recommendations in your tenant for changes. --### Use Microsoft Graph with Azure Active Directory recommendations --Azure Active Directory recommendations can be viewed and managed using Microsoft Graph on the `/beta` endpoint. You can view recommendations along with their impacted resources, mark a recommendation as completed by a user, postpone a recommendation for later, and more. --To get started, follow these instructions to work with recommendations using Microsoft Graph in Graph Explorer. The example uses the Migrate apps from Active Directory Federated Services (ADFS) to Azure AD recommendation. +## How it works -1. Sign in to [Graph Explorer](https://aka.ms/ge). -1. Select **GET** as the HTTP method from the dropdown. -1. Set the API version to **beta**. -1. Add the following query to retrieve recommendations, then select the **Run query** button. +On a daily basis, Azure AD analyzes the configuration of your tenant. During this analysis, Azure AD compares the data of a recommendation with the actual configuration of your tenant. If a recommendation is flagged as applicable to your tenant, the recommendation appears in the **Recommendations** section of the Azure AD Overview area. The recommendations are listed in order of priority so you can quickly determine where to focus first. - ```http - GET https://graph.microsoft.com/beta/directory/recommendations - ``` + -1. To view the details of a specific `recommendationType`, use the following API. This example retrieves the detail of the "Migrate apps from AD FS to Azure AD" recommendation. +Each recommendation contains a description, a summary of the value of addressing the recommendation, and a step-by-step action plan. If applicable, impacted resources associated with the recommendation are listed, so you can resolve each affected area. If a recommendation doesn't have any associated resources, the impacted resource type is *Tenant level*, so your step-by-step action plan impacts the entire tenant and not just a specific resource. - ```http - GET https://graph.microsoft.com/beta/directory/recommendations?$filter=recommendationType eq 'adfsAppsMigration' - ``` +## Recommendation availability and license requirements -1. To view the impacted resources for a specific recommendation, expand the `impactedResources` relationship. +The recommendations listed in the following table are currently available in public preview or general availability. The license requirements for recommendations in public preview are subject to change. The table provides the impacted resources and links to available documentation. 
- ```http - GET https://graph.microsoft.com/beta/directory/recommendations?$filter=recommendationType eq 'adfsAppsMigration'&$expand=impactedResources - ``` +| Recommendation | Impacted resources | Required license | Availability | +|- |- |- |- | +| [Convert per-user MFA to Conditional Access MFA](recommendation-turn-off-per-user-mfa.md) | Users | All licenses | Generally available | +| [Migrate applications from AD FS to Azure AD](recommendation-migrate-apps-from-adfs-to-azure-ad.md) | Applications | All licenses | Generally available | +| [Migrate to Microsoft Authenticator](recommendation-migrate-to-authenticator.md) | Users | All licenses | Preview | +| [Minimize MFA prompts from known devices](recommendation-mfa-from-known-devices.md) | Users | All licenses | Generally available | +| [Remove unused applications](recommendation-remove-unused-apps.md) | Applications | Azure AD Premium P2 | Preview | +| [Remove unused credentials from applications](recommendation-remove-unused-credential-from-apps.md) | Applications | Azure AD Premium P2 | Preview | +| [Renew expiring application credentials](recommendation-renew-expiring-application-credential.md) | Applications | Azure AD Premium P2 | Preview | +| [Renew expiring service principal credentials](recommendation-renew-expiring-service-principal-credential.md) | Applications | Azure AD Premium P2 | Preview | -For more information, see the [Microsoft Graph documentation for recommendations](/graph/api/resources/recommendations-api-overview). +Azure AD only displays the recommendations that apply to your tenant, so you may not see all supported recommendations listed. ## Next steps -* [Learn more about Microsoft Graph](/graph/overview) -* [Get started with Azure AD reports](overview-reports.md) -* [Learn about Azure AD monitoring](overview-monitoring.md) +* [Learn how to use Azure AD recommendations](howto-use-recommendations.md) +* [Explore the details of the "Turn off per-user MFA" recommendation](recommendation-turn-off-per-user-mfa.md) |
active-directory | Recommendation Mfa From Known Devices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-mfa-from-known-devices.md | This recommendation improves your user's productivity and minimizes the sign-in ## Next steps -- [What is Azure Active Directory recommendations](overview-recommendations.md)--- [Azure AD reports overview](overview-reports.md)+- [Review the Azure AD recommendations overview](overview-recommendations.md) +- [Learn how to use Azure AD recommendations](howto-use-recommendations.md) +- [Explore the Microsoft Graph API properties for recommendations](/graph/api/resources/recommendation) |
active-directory | Recommendation Migrate Apps From Adfs To Azure Ad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-apps-from-adfs-to-azure-ad.md | Using Azure AD gives you granular per-application access controls to secure acce ## Next steps -* [What is Azure Active Directory recommendations](overview-recommendations.md) -* [Azure AD reports overview](overview-reports.md) -* [Learn more about Microsoft Graph](/graph/overview) -* [Explore the Microsoft Graph API properties for recommendations](/graph/api/resources/recommendation) +- [Review the Azure AD recommendations overview](overview-recommendations.md) +- [Learn how to use Azure AD recommendations](howto-use-recommendations.md) +- [Explore the Microsoft Graph API properties for recommendations](/graph/api/resources/recommendation) |
active-directory | Recommendation Migrate To Authenticator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-to-authenticator.md | -## Logic - This recommendation appears if Azure AD detects that your tenant has users authenticating using SMS or voice instead of the Microsoft Authenticator app in the past week. + + ## Value Push notifications through the Microsoft Authenticator app provide the least intrusive MFA experience for users. This method is the most reliable and secure option because it relies on a data connection rather than telephony. The Microsoft Authenticator app is available for Android and iOS. Microsoft Auth ## Next steps -* [Learn more about Microsoft Graph](/graph/overview) -* [Explore the Microsoft Graph API properties for recommendations](/graph/api/resources/recommendation) -* [Azure AD reports overview](overview-reports.md) +- [Review the Azure AD recommendations overview](overview-recommendations.md) +- [Learn how to use Azure AD recommendations](howto-use-recommendations.md) +- [Explore the Microsoft Graph API properties for recommendations](/graph/api/resources/recommendation) |
active-directory | Recommendation Remove Unused Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-remove-unused-apps.md | + + Title: Azure Active Directory recommendation - Remove unused apps (preview) | Microsoft Docs +description: Learn why you should remove unused apps. +++++++ Last updated : 03/07/2023+++++# Azure AD recommendation: Remove unused applications (preview) +[Azure AD recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices. ++This article covers the recommendation to investigate unused applications. This recommendation is called `UnusedApps` in the recommendations API in Microsoft Graph; a query sketch follows this entry. ++## Description ++This recommendation shows up if your tenant has applications that haven't been used in more than 30 days, meaning they haven't been issued any tokens in that period. Applications or service principals that were added but never used show up as unused apps, which will also trigger this recommendation. ++## Value ++Removing unused applications improves the security posture and promotes good application hygiene. It reduces the risk of application compromise by someone discovering an unused application and misusing it to get tokens. Depending on the permissions granted to the application and the resources that it exposes, an application compromise could expose sensitive data in an organization. ++## Action plan ++Applications that the recommendation identified appear in the list of **Impacted resources** at the bottom of the recommendation. ++1. Take note of the application name and ID that the recommendation identified. +1. Go to **Azure AD** > **App registrations** and locate the application that was surfaced as part of this recommendation. +1. Determine if the identified application is needed. + - If the application is no longer needed, remove it from your tenant. + - If the application is needed, we suggest you take appropriate steps to ensure the application is used at least once every 30 days. ++## Known limitations ++Take note of the following common scenarios or known limitations of the "Remove unused applications" recommendation. ++* The time frame for application usage that triggers this recommendation can't be customized. ++* The following apps won't show up as a part of this recommendation, but are currently under review for future enhancements: + - Microsoft-owned applications + - Password single sign-on + - Linked single sign-on + - App proxy + - Add-in apps ++* This recommendation currently surfaces applications that were created within the past 30 days *and* show as unused. Updates to the recommendation to filter out recently created apps so that they can complete a full cycle are in progress. ++## Next steps ++- [Review the Azure AD recommendations overview](overview-recommendations.md) +- [Learn how to use Azure AD recommendations](howto-use-recommendations.md) +- [Explore the Microsoft Graph API properties for recommendations](/graph/api/resources/recommendation) |
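To pull this recommendation and its flagged applications programmatically, the filter pattern from the how-to article applies. A sketch, assuming the Graph filter value is the camelCase form `unusedApps` (the article names the type `UnusedApps`):

```http
GET https://graph.microsoft.com/beta/directory/recommendations?$filter=recommendationType eq 'unusedApps'&$expand=impactedResources
```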
active-directory | Recommendation Remove Unused Credential From Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-remove-unused-credential-from-apps.md | + + Title: Azure Active Directory recommendation - Remove unused credentials from apps (preview) | Microsoft Docs +description: Learn why you should remove unused credentials from apps. +++++++ Last updated : 03/07/2023+++++# Azure AD recommendation: Remove unused credentials from apps (preview) +[Azure AD recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices. ++This article covers the recommendation to remove unused credentials from apps. This recommendation is called `UnusedAppCreds` in the recommendations API in Microsoft Graph. ++## Description ++Application credentials can include certificates and other types of secrets that need to be registered with that application. These credentials are used to prove the identity of the application. Only credentials actively in use by an application should remain registered with the application. ++This recommendation shows up if your tenant has application credentials that haven't been used in more than 30 days. ++## Value ++An application credential is used to get a token that grants access to a resource or another service. If an application credential is compromised, it could be used to access sensitive resources or allow a bad actor to move laterally, depending on the access granted to the application. ++Removing credentials not actively used by applications improves your security posture and promotes good application hygiene. It reduces the risk of application compromise by shrinking the attack surface: a credential that no longer exists can't be discovered and misused. ++## Action plan ++Applications that the recommendation identified appear in the list of **Impacted resources** at the bottom of the recommendation. ++1. Take note of the application name and ID that the recommendation identified. ++1. Go to **Azure AD** > **App registrations** and locate the application that was surfaced as part of this recommendation. +1. Navigate to the **Certificates & Secrets** section of the app registration. +1. Locate the unused credential and remove it. A Graph sketch for removing a secret programmatically follows this entry. ++## Next steps ++- [Review the Azure AD recommendations overview](overview-recommendations.md) +- [Learn how to use Azure AD recommendations](howto-use-recommendations.md) +- [Explore the Microsoft Graph API properties for recommendations](/graph/api/resources/recommendations-api-overview) +- [Learn about app and service principal objects in Azure AD](../develop/app-objects-and-service-principals.md) |
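If you prefer to remove an unused secret programmatically, Microsoft Graph exposes a `removePassword` action on applications. A minimal sketch with a placeholder object ID and key ID (certificates use the `removeKey` action instead, which additionally requires a proof-of-possession token):

```http
POST https://graph.microsoft.com/v1.0/applications/{application-object-id}/removePassword
Content-Type: application/json

{
  "keyId": "00000000-0000-0000-0000-000000000000"
}
```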
active-directory | Recommendation Renew Expiring Application Credential | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-renew-expiring-application-credential.md | + + Title: Azure Active Directory recommendation - Renew expiring application credentials (preview) | Microsoft Docs +description: Learn why you should renew expiring application credentials. +++++++ Last updated : 03/07/2023+++++# Azure AD recommendation: Renew expiring application credentials (preview) +[Azure AD recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices. ++This article covers the recommendation to renew expiring application credentials. This recommendation is called `applicationCredentialExpiry` in the recommendations API in Microsoft Graph. ++## Description ++Application credentials can include certificates and other types of secrets that need to be registered with that application. These credentials are used to prove the identity of the application. ++This recommendation shows up if your tenant has application credentials that will expire soon. ++## Value ++Renewing application credentials before they expire ensures the application continues to function and reduces the possibility of downtime due to an expired credential. ++## Action plan ++Applications that the recommendation identified appear in the list of **Impacted resources** at the bottom of the recommendation. ++1. Take note of the application name and ID that the recommendation identified. +1. Navigate to **Azure AD** > **App registrations** and locate the application for which the credential needs to be rotated. +1. Navigate to the **Certificates & Secrets** section of the app registration. +1. Pick the credential type that you want to rotate, go to either the **Certificates** or **Client secrets** tab, and follow the prompts (a Graph sketch for adding a replacement secret follows this entry). +1. Once the certificate or secret is successfully added, update the service code to ensure it works with the new credential and doesn't negatively affect customers. +1. Use the Azure AD sign-in logs to validate that the Key ID of the credential matches the one that was recently added. +1. After validating the new credential, navigate back to **Azure AD** > **App registrations** > **Certificates and Secrets** for the app and remove the old credential. + +## Known limitations ++- Currently in the list of **Impacted resources**, only the app name and resource ID are shown. The key ID for the credential that needs to be rotated isn't shown. To find the credential's key ID, go to **Azure AD** > **App registrations** > **Certificates and Secrets** for the application. ++- An **Impacted resource** with credentials that expired recently will be marked as **Complete**. If that resource has more than one credential expiring soon, the status of the resource will be **Active**. ++## Next steps ++- [Review the Azure AD recommendations overview](overview-recommendations.md) +- [Learn how to use Azure AD recommendations](howto-use-recommendations.md) +- [Explore the Microsoft Graph API properties for recommendations](/graph/api/resources/recommendation) +- [Learn about app and service principal objects in Azure AD](../develop/app-objects-and-service-principals.md) |
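Adding a replacement secret can also be done through Microsoft Graph with the `addPassword` action. A minimal sketch, using a placeholder application object ID and display name:

```http
POST https://graph.microsoft.com/v1.0/applications/{application-object-id}/addPassword
Content-Type: application/json

{
  "passwordCredential": {
    "displayName": "rotation-2023-03"
  }
}
```

The response contains the generated `secretText`, which is returned only once, so store it immediately.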
active-directory | Recommendation Renew Expiring Service Principal Credential | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-renew-expiring-service-principal-credential.md | + + Title: Azure Active Directory recommendation - Renew expiring service principal credentials (preview) | Microsoft Docs +description: Learn why you should renew expiring service principal credentials. +++++++ Last updated : 03/07/2023+++++# Azure AD recommendation: Renew expiring service principal credentials (preview) ++[Azure AD recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices. ++This article covers the recommendation to renew expiring service principal credentials. This recommendation is called `servicePrincipalKeyExpiry` in the recommendations API in Microsoft Graph. ++## Description ++An Azure Active Directory (Azure AD) service principal is the local representation of an application object in a single tenant or directory. The service principal defines who can access an application and what resources the application can access. Authentication of service principals is often completed using certificate credentials, which have a lifespan. If the credentials expire, the application won't be able to authenticate with your tenant. ++This recommendation shows up if your tenant has service principals with credentials that will expire soon. ++## Value ++Renewing the service principal credential(s) before expiration ensures the application continues to function and reduces the possibility of downtime due to an expired credential. ++## Action plan ++1. Select the name of the application from the list of **Impacted resources** to go directly to the **Enterprise applications - Single sign-on** page for the selected application. ++ a. Alternatively, go to **Azure AD** > **Enterprise applications**. The status of the service principal appears in the **Certificate Expiry Status** column. + + b. Use the search box at the top of the list to find the application that was listed in the recommendation. + + c. Select the service principal with the credential that needs to be rotated, then select **Single sign-on** from the side menu. ++1. Edit the **SAML signing certificate** section and follow the prompts to add a new certificate. + +  ++1. After adding the certificate, change its properties to make the certificate active, which makes the other certificate inactive. +1. Once the certificate is successfully added and activated, update the service code to ensure it works with the new credential and doesn't negatively affect customers. +1. Use the Azure AD sign-in logs to validate that the Key ID of the certificate matches the one that was recently uploaded. + - Go to **Azure AD Sign-in logs** > **Service principal sign-ins**. + - Open the details for a related sign-in and check that the **Client credential type** is "Client secret" and the **Credential key ID** matches your credential. +1. After validating the new credential, navigate back to the **Single sign-on** area for the app and remove the old credential. ++### Use Microsoft Graph to renew expiring service principal credentials ++You can use Microsoft Graph to renew expiring service credentials programmatically. 
To get started, see [How to use Microsoft Graph with Azure AD recommendations](howto-use-recommendations.md#how-to-use-microsoft-graph-with-azure-active-directory-recommendations). ++When renewing service principal credentials using Microsoft Graph, you need to run a query to get the password credentials on a service principal, add a new password credential, then remove the old credentials. A concrete sketch of the add and remove calls follows this entry. ++1. Run the following query in Microsoft Graph to get the password credentials on a service principal: ++ ```http + GET https://graph.microsoft.com/v1.0/servicePrincipals/{id}?$select=passwordCredentials + ``` + - Replace `{id}` with the service principal ID. ++1. Add a new password credential. + - Use the Microsoft Graph Service Principal API service action `addPassword`. + - [servicePrincipal: addPassword MS Graph API documentation](/graph/api/serviceprincipal-addpassword?view=graph-rest-beta&preserve-view=true) ++1. Remove the old/original credentials. + - Use the Microsoft Graph Service Principal API service action `removePassword`. + - [servicePrincipal: removePassword MS Graph API documentation](/graph/api/serviceprincipal-removepassword?view=graph-rest-beta&preserve-view=true) ++## Known limitations ++- This recommendation identifies service principal credentials that are about to expire, so if they do expire, the recommendation doesn't distinguish between a credential that expired on its own and one that was addressed by the user. ++- Service principal credentials that expire before the recommendation is completed will be marked complete by the system. ++- The recommendation currently doesn't display the password (secret) credential of the service principal when you select an **Impacted resource** from the list. ++- The **ID** shown in the list of **Impacted resources** is for the application, not the service principal. ++## Next steps ++- [Review the Azure AD recommendations overview](overview-recommendations.md) +- [Learn how to use Azure AD recommendations](howto-use-recommendations.md) +- [Explore the Microsoft Graph API properties for recommendations](/graph/api/resources/recommendation) +- [Learn about securing service principals](../fundamentals/service-accounts-principal.md) |
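Putting the Graph steps above together, the add call looks like the following sketch (beta endpoint to match the linked docs; the ID is a placeholder):

```http
POST https://graph.microsoft.com/beta/servicePrincipals/{id}/addPassword
Content-Type: application/json

{
  "passwordCredential": {
    "displayName": "rotated-credential"
  }
}
```

The response contains the generated `secretText`, which is returned only once. After validating the new credential, call `removePassword` the same way with a body of `{ "keyId": "<old-key-id>" }`.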
active-directory | Recommendation Turn Off Per User Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-turn-off-per-user-mfa.md | After all users have been migrated to CA MFA accounts, the recommendation status ## Next steps -* [Learn about requiring MFA for all users using Conditional Access](../conditional-access/howto-conditional-access-policy-all-users-mfa.md) -* [View the MFA CA policy tutorial](../authentication/tutorial-enable-azure-mfa.md) -* [Learn more about Microsoft Graph](/graph/overview) -* [Explore the Microsoft Graph API properties for recommendations](/graph/api/resources/recommendation) +- [Review the Azure AD recommendations overview](overview-recommendations.md) +- [Learn how to use Azure AD recommendations](howto-use-recommendations.md) +- [Explore the Microsoft Graph API properties for recommendations](/graph/api/resources/recommendation) +- [Learn about requiring MFA for all users using Conditional Access](../conditional-access/howto-conditional-access-policy-all-users-mfa.md) +- [View the MFA CA policy tutorial](../authentication/tutorial-enable-azure-mfa.md) |
active-directory | Administrative Units | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/administrative-units.md | A central administrator could: Here are some of the constraints for administrative units. - Administrative units can't be nested.-- Administrative unit-scoped user account administrators can't create or delete users. - Administrative units are currently not available in [Azure AD Identity Governance](../governance/identity-governance-overview.md). ## Groups |
active-directory | Ardoq Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ardoq-provisioning-tutorial.md | The scenario outlined in this tutorial assumes that you already have the followi 1. Determine what data to [map between Azure AD and Ardoq](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure Ardoq to support provisioning with Azure AD+* Provisioning is gated by a feature toggle in Ardoq. If you intend to configure SSO or have already done so, Ardoq will automatically recognize that Azure AD is in use, and the provisioning feature will be automatically enabled. -1. Log in to [Ardoq](https://aad.ardoq.com/). +* If you don't intend to use the provisioning features of Azure AD along with SSO, please reach out to Ardoq customer support and they'll manually enable support for provisioning. ++Before we proceed, we need to obtain a *Tenant Url* and a *Secret Token* to configure secure communication between Azure AD and Ardoq. +++++1. Log in to the Ardoq admin console. 1. In the left menu, click on the profile logo and navigate to **Organization Settings->Manage Organization->Manage SCIM Token**. 1. Click on **Generate new**. 1. Copy and save the **Token**. This value will be entered in the **Secret Token** field in the Provisioning tab of your Ardoq application in the Azure portal. -1. And `https://aad.ardoq.com/api/scim/v2` will be entered in the **Tenant Url** field in the Provisioning tab of your Ardoq application in the Azure portal. +1. To create your *tenant URL*, use the template `https://<YOUR-SUBDOMAIN>.ardoq.com/api/scim/v2`, replacing the placeholder text `<YOUR-SUBDOMAIN>`. This value will be entered in the **Tenant Url** field in the Provisioning tab of your Ardoq application in the Azure portal. ++ >[!NOTE] + >`<YOUR-SUBDOMAIN>` is the subdomain your organization has chosen to access Ardoq. This is the same URL segment you use when you access the Ardoq app. For example, if your organization accesses Ardoq at `https://acme.ardoq.com` you'd fill in `acme`. If you're in the US and access Ardoq at `https://piedpiper.us.ardoq.com` then you'd fill in `piedpiper.us`. ## Step 3. Add Ardoq from the Azure AD application gallery |
active-directory | Ms Confluence Jira Plugin Adminguide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ms-confluence-jira-plugin-adminguide.md | The following image shows the configuration screen in both Jira and Confluence: | 1.0.20 | Bug Fixes: | Jira Core and Software: | | | JIRA SAML SSO add-on redirects to incorrect URL from mobile browser. | 7.0.0 to 9.5.0 | | | The mark log section after enabling the JIRA plugin. | |-| | The last login date for a user doesn't update when user signs in via SSO | | +| | The last login date for a user doesn't update when a user signs in via SSO. | | | | | | | 1.0.19 | New Feature: | Jira Core and Software: | | | Application Proxy Support - A checkbox on the configure plugin screen toggles App Proxy mode, making the Reply URL editable so that it can point to the proxy server URL | 6.0 to 9.3.1 | |
aks | Azure Files Csi | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md | Filesystem ## Use a persistent volume with private Azure Files storage (private endpoint) -If your Azure Files resources are protected with a private endpoint, you must create your own storage class that's customized with the following parameters: +If your Azure Files resources are protected with a private endpoint, you must create your own storage class. Make sure that you've [configured your DNS settings to resolve the private endpoint IP address to the FQDN of the connection string][azure-private-endpoint-dns]. Customize the storage class with the following parameters: * `resourceGroup`: The resource group where the storage account is deployed. * `storageAccount`: The storage account name.-* `server`: The FQDN of the storage account's private endpoint (for example, `<storage account name>.privatelink.file.core.windows.net`). +* `server`: The FQDN of the storage account's private endpoint. Create a file named `private-azure-file-sc.yaml`, and then paste the following example manifest in the file. Replace the values for `<resourceGroup>` and `<storageAccountName>`. allowVolumeExpansion: true parameters: resourceGroup: <resourceGroup> storageAccount: <storageAccountName>- server: <storageAccountName>.privatelink.file.core.windows.net + server: <storageAccountName>.file.core.windows.net reclaimPolicy: Delete volumeBindingMode: Immediate mountOptions: mountOptions: - actimeo=30 # reduce latency for metadata-heavy workload ``` -Create the storage class by using the [kubectl apply][kubectl-apply] command: +Create the storage class by using the `kubectl apply` command: ```console kubectl apply -f private-azure-file-sc.yaml The output of the commands resembles the following example: [access-tiers-overview]: ../storage/blobs/access-tiers-overview.md [tag-resources]: ../azure-resource-manager/management/tag-resources.md [statically-provision-a-volume]: azure-csi-files-storage-provision.md#statically-provision-a-volume+[azure-private-endpoint-dns]: ../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration |
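For illustration, once a storage class like the one above is applied, a persistent volume claim can request an Azure Files share through it. The following is a minimal sketch; the `storageClassName` value is an assumption and must match the `metadata.name` of the storage class in your `private-azure-file-sc.yaml`, which isn't shown in this excerpt.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: private-azurefile-pvc
spec:
  accessModes:
    - ReadWriteMany                         # Azure Files supports shared access from multiple pods
  storageClassName: private-azurefile-csi   # assumed name; match your storage class manifest
  resources:
    requests:
      storage: 100Gi
```

Applying the claim with `kubectl apply -f` provisions the share through the private endpoint, so traffic to the file share never needs a public storage endpoint.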
aks | Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/best-practices.md | Title: Best practices for Azure Kubernetes Service (AKS) description: Collection of the cluster operator and developer best practices to build and manage applications in Azure Kubernetes Service (AKS) Previously updated : 03/09/2021 Last updated : 03/07/2023 |
aks | Csi Migrate In Tree Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-migrate-in-tree-volumes.md | For more about storage best practices, see [Best practices for storage and backu <!-- LINKS - internal --> [install-azure-cli]: /cli/azure/install-azure-cli-[aks-rbac-cluster-admin-role]: manage-azure-rbac.md#create-role-assignments-for-users-to-access-cluster +[aks-rbac-cluster-admin-role]: manage-azure-rbac.md#create-role-assignments-for-users-to-access-the-cluster [azure-resource-locks]: ../azure-resource-manager/management/lock-resources.md [csi-driver-overview]: csi-storage-drivers.md [aks-storage-backups-best-practices]: operator-best-practices-storage.md |
aks | Dapr Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-settings.md | az k8s-extension upgrade --cluster-type managedClusters \ ## Meet network requirements -The Dapr extension for AKS and Arc for Kubernetes requires outbound URLs on `https://:443` to function. In addition to the `https://mcr.microsoft.com/daprio` URL for pulling Dapr artifacts, verify you've included the [outbound URLs required for AKS or Arc for Kubernetes](../azure-arc/kubernetes/quickstart-connect-cluster.md#meet-network-requirements). +The Dapr extension for AKS and Arc for Kubernetes requires outbound URLs on `https://:443` to function. In addition to the `https://mcr.microsoft.com/daprio` URL for pulling Dapr artifacts, verify you've included the [outbound URLs required for AKS or Arc for Kubernetes](../azure-arc/kubernetes/network-requirements.md). ## Next Steps |
aks | Deploy Marketplace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-marketplace.md | Included among these solutions are Kubernetes application-based container offers [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] -> [!NOTE] -> This feature is currently supported only in the following regions: -> -> - West Central US -> - West Europe -> - East US +## Limitations ++This feature is currently supported only in the following regions: ++- West Central US +- West Europe +- East US ++Kubernetes application-based container offers cannot be deployed on AKS for Azure Stack HCI or AKS Edge Essentials. ## Register resource providers |
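As a hedged sketch of the registration step that the heading above introduces: the provider namespaces below are assumptions drawn from the article's context (AKS plus cluster extensions for Marketplace Kubernetes apps), not a verbatim excerpt.

```azurecli-interactive
# Register the resource providers the deployment flow depends on
# (namespaces assumed: AKS and cluster extensions)
az provider register --namespace Microsoft.ContainerService --wait
az provider register --namespace Microsoft.KubernetesConfiguration --wait
```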
aks | Manage Azure Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-azure-rbac.md | Title: Manage Azure RBAC in Kubernetes From Azure + Title: Use Azure RBAC for Kubernetes Authorization -description: Learn how to use Azure RBAC for Kubernetes Authorization with Azure Kubernetes Service (AKS). +description: Learn how to use Azure role-based access control (Azure RBAC) for Kubernetes Authorization with Azure Kubernetes Service (AKS). Previously updated : 02/09/2021 Last updated : 03/02/2023 #Customer intent: As a cluster operator or developer, I want to learn how to leverage Azure RBAC permissions to authorize actions within the AKS cluster. -# Use Azure RBAC for Kubernetes Authorization +# Use Azure role-based access control for Kubernetes Authorization -Today you can already leverage [integrated authentication between Azure Active Directory (Azure AD) and AKS](managed-aad.md). When enabled, this integration allows customers to use Azure AD users, groups, or service principals as subjects in Kubernetes RBAC, see more [here](azure-ad-rbac.md). -This feature frees you from having to separately manage user identities and credentials for Kubernetes. However, you still have to set up and manage Azure RBAC and Kubernetes RBAC separately. For more details on authentication and authorization with RBAC on AKS, see [here](concepts-identity.md). +When you leverage [integrated authentication between Azure Active Directory (Azure AD) and AKS](managed-aad.md), you can use Azure AD users, groups, or service principals as subjects in [Kubernetes role-based access control (Kubernetes RBAC)][kubernetes-rbac]. This feature frees you from having to separately manage user identities and credentials for Kubernetes. However, you still have to set up and manage Azure RBAC and Kubernetes RBAC separately. -This document covers a new approach that allows for the unified management and access control across Azure Resources, AKS, and Kubernetes resources. +This article covers how to use Azure RBAC for Kubernetes Authorization, which allows for the unified management and access control across Azure resources, AKS, and Kubernetes resources. For more information, see [Azure RBAC for Kubernetes Authorization][azure-rbac-kubernetes-rbac]. ## Before you begin -The ability to manage RBAC for Kubernetes resources from Azure gives you the choice to manage RBAC for the cluster resources either using Azure or native Kubernetes mechanisms. When enabled, Azure AD principals will be validated exclusively by Azure RBAC while regular Kubernetes users and service accounts are exclusively validated by Kubernetes RBAC. For more details on authentication and authorization with RBAC on AKS, see [here](concepts-identity.md#azure-rbac-for-kubernetes-authorization). +* You need the Azure CLI version 2.24.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. +* You need `kubectl`, with a minimum version of [1.18.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1183). +* You need managed Azure AD integration enabled on your cluster before you can add Azure RBAC for Kubernetes authorization. If you need to enable managed Azure AD integration, see [Use Azure AD in AKS](managed-aad.md). +* If you have CRDs and are making custom role definitions, the only way to cover CRDs today is to use `Microsoft.ContainerService/managedClusters/*/read`. 
For the remaining objects, you can use the specific API groups, such as `Microsoft.ContainerService/apps/deployments/read`. +* New role assignments can take up to five minutes to propagate and be updated by the authorization server. +* This article requires that the Azure AD tenant configured for authentication is the same as the tenant for the subscription that holds your AKS cluster. -### Prerequisites +## Create a new AKS cluster with managed Azure AD integration and Azure RBAC for Kubernetes Authorization -- Ensure you have the Azure CLI version 2.24.0 or later-- Ensure you have installed [kubectl v1.18.3+][az-aks-install-cli].--### Limitations --- Requires [Managed Azure AD integration](managed-aad.md).-- Use [kubectl v1.18.3+][az-aks-install-cli].-- If you have CRDs and are making custom role definitions, the only way to cover CRDs today is to provide `Microsoft.ContainerService/managedClusters/*/read`. AKS is working on providing more granular permissions for CRDs. For the remaining objects you can use the specific API Groups, for example: `Microsoft.ContainerService/apps/deployments/read`.-- New role assignments can take up to 5min to propagate and be updated by the authorization server.-- Requires the Azure AD tenant configured for authentication to be the same as the tenant for the subscription that holds the AKS cluster. --## Create a new cluster using Azure RBAC and managed Azure AD integration --Create an AKS cluster by using the following CLI commands. --Create an Azure resource group: +Create an Azure resource group using the [`az group create`][az-group-create] command. ```azurecli-interactive-# Create an Azure resource group az group create --name myResourceGroup --location westus2 ``` -Create the AKS cluster with managed Azure AD integration and Azure RBAC for Kubernetes Authorization. +Create an AKS cluster with managed Azure AD integration and Azure RBAC for Kubernetes Authorization using the [`az aks create`][az-aks-create] command. ```azurecli-interactive-# Create an AKS-managed Azure AD cluster -az aks create -g MyResourceGroup -n MyManagedCluster --enable-aad --enable-azure-rbac +az aks create -g myResourceGroup -n myManagedCluster --enable-aad --enable-azure-rbac ``` -A successful creation of a cluster with Azure AD integration and Azure RBAC for Kubernetes Authorization has the following section in the response body: +The output looks similar to the following example: ```json "AADProfile": { A successful creation of a cluster with Azure AD integration and Azure RBAC for "serverAppId": null, "serverAppSecret": null, "tenantId": "****-****-****-****-****"- } +} ``` -## Integrate Azure RBAC into an existing cluster +## Enable Azure RBAC on an existing AKS cluster -> [!NOTE] -> To use Azure RBAC for Kubernetes Authorization, Azure Active Directory integration must be enabled on your cluster. For more, see [Azure Active Directory integration][managed-aad]. --To add Azure RBAC for Kubernetes Authorization into an existing AKS cluster, use the [az aks update][az-aks-update] command with the flag `enable-azure-rbac`. +Add Azure RBAC for Kubernetes Authorization into an existing AKS cluster using the [`az aks update`][az-aks-update] command with the `enable-azure-rbac` flag. ```azurecli-interactive az aks update -g myResourceGroup -n myAKSCluster --enable-azure-rbac ```-To remove Azure RBAC for Kubernetes Authorization from an existing AKS cluster, use the [az aks update][az-aks-update] command with the flag `disable-azure-rbac`. 
++## Disable Azure RBAC for Kubernetes Authorization from an AKS cluster ++Remove Azure RBAC for Kubernetes Authorization from an existing AKS cluster using the [`az aks update`][az-aks-update] command with the `disable-azure-rbac` flag. ```azurecli-interactive az aks update -g myResourceGroup -n myAKSCluster --disable-azure-rbac ``` -## Create role assignments for users to access cluster --AKS provides the following four built-in roles: +## Create role assignments for users to access the cluster +AKS provides the following built-in roles: | Role | Description | |-|--|-| Azure Kubernetes Service RBAC Reader | Allows read-only access to see most objects in a namespace. It doesn't allow viewing roles or role bindings. This role doesn't allow viewing `Secrets`, since reading the contents of Secrets enables access to ServiceAccount credentials in the namespace, which would allow API access as any ServiceAccount in the namespace (a form of privilege escalation) | -| Azure Kubernetes Service RBAC Writer | Allows read/write access to most objects in a namespace. This role doesn't allow viewing or modifying roles or role bindings. However, this role allows accessing `Secrets` and running Pods as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace. | +| Azure Kubernetes Service RBAC Reader | Allows read-only access to see most objects in a namespace. It doesn't allow viewing roles or role bindings. This role doesn't allow viewing `Secrets`, since reading the contents of Secrets enables access to ServiceAccount credentials in the namespace, which would allow API access as any ServiceAccount in the namespace (a form of privilege escalation). | +| Azure Kubernetes Service RBAC Writer | Allows read/write access to most objects in a namespace. This role doesn't allow viewing or modifying roles or role bindings. However, this role allows accessing `Secrets` and running Pods as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace. | | Azure Kubernetes Service RBAC Admin | Allows admin access, intended to be granted within a namespace. Allows read/write access to most resources in a namespace (or cluster scope), including the ability to create roles and role bindings within the namespace. This role doesn't allow write access to resource quota or to the namespace itself. | | Azure Kubernetes Service RBAC Cluster Admin | Allows super-user access to perform any action on any resource. It gives full control over every resource in the cluster and in all namespaces. | +Role assignments scoped to the **entire AKS cluster** can be done either on the Access Control (IAM) blade of the cluster resource in the Azure portal or by using the following Azure CLI commands: -Roles assignments scoped to the **entire AKS cluster** can be done either on the Access Control (IAM) blade of the cluster resource on Azure portal or by using Azure CLI commands as shown below: +Get your AKS resource ID using the [`az aks show`][az-aks-show] command. 
```azurecli-# Get your AKS Resource ID -AKS_ID=$(az aks show -g MyResourceGroup -n MyManagedCluster --query id -o tsv) +AKS_ID=$(az aks show -g myResourceGroup -n myManagedCluster --query id -o tsv) ``` -```azurecli-interactive -az role assignment create --role "Azure Kubernetes Service RBAC Admin" --assignee <AAD-ENTITY-ID> --scope $AKS_ID -``` --where `<AAD-ENTITY-ID>` could be a username (for example, user@contoso.com) or even the ClientID of a service principal. --You can also create role assignments scoped to a specific **namespace** within the cluster: +Create a role assignment using the [`az role assignment create`][az-role-assignment-create] command. `<AAD-ENTITY-ID>` can be a username or the client ID of a service principal. ```azurecli-interactive-az role assignment create --role "Azure Kubernetes Service RBAC Reader" --assignee <AAD-ENTITY-ID> --scope $AKS_ID/namespaces/<namespace-name> +az role assignment create --role "Azure Kubernetes Service RBAC Admin" --assignee <AAD-ENTITY-ID> --scope $AKS_ID ``` -Today, role assignments scoped to namespaces need to be configured via Azure CLI. ---### Create custom roles definitions --Optionally you may choose to create your own role definition and then assign as above. +> [!NOTE] +> You can create the *Azure Kubernetes Service RBAC Reader* and *Azure Kubernetes Service RBAC Writer* role assignments scoped to a specific namespace within the cluster using the [`az role assignment create`][az-role-assignment-create] command and setting the scope to the desired namespace. +> +> ```azurecli-interactive +> az role assignment create --role "Azure Kubernetes Service RBAC Reader" --assignee <AAD-ENTITY-ID> --scope $AKS_ID/namespaces/<namespace-name> +> ``` -Below is an example of a role definition that allows a user to only read deployments and nothing else. You can check the full list of possible actions [here](../role-based-access-control/resource-provider-operations.md#microsoftcontainerservice). +## Create custom role definitions +The following example custom role definition allows a user to only read deployments and nothing else. For the full list of possible actions, see [Microsoft.ContainerService operations](../role-based-access-control/resource-provider-operations.md#microsoftcontainerservice). -Copy the below json into a file called `deploy-view.json`. +To create your own custom role definitions, copy the following file, replacing `<YOUR SUBSCRIPTION ID>` with your own subscription ID, and then save it as `deploy-view.json`. ```json { Copy the below json into a file called `deploy-view.json`. } ``` -Replace `<YOUR SUBSCRIPTION ID>` by the ID from your subscription, which you can get by running: --```azurecli-interactive -az account show --query id -o tsv --``` --Now we can create the role definition by running the below command from the folder where you saved `deploy-view.json`: +Create the role definition using the [`az role definition create`][az-role-definition-create] command, setting the `--role-definition` to the `deploy-view.json` file you created in the previous step. ```azurecli-interactive az role definition create --role-definition @deploy-view.json ``` -Now that you have your role definition, you can assign it to a user or other identity by running: +Assign the role definition to a user or other identity using the [`az role assignment create`][az-role-assignment-create] command. 
```azurecli-interactive az role assignment create --role "AKS Deployment Reader" --assignee <AAD-ENTITY-ID> --scope $AKS_ID az role assignment create --role "AKS Deployment Reader" --assignee <AAD-ENTITY- ## Use Azure RBAC for Kubernetes Authorization with `kubectl` -> [!NOTE] -> Ensure you have the latest kubectl by running the below command: -> -> ```azurecli-interactive -> az aks install-cli -> ``` -> -> You might need to run it with `sudo` privileges. --Now that you have assigned your desired role and permissions. You can start calling the Kubernetes API, for example, from `kubectl`. --For this purpose, let's first get the cluster's kubeconfig using the below command: +Make sure you have the [Azure Kubernetes Service Cluster User](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-user-role) built-in role, and then get the kubeconfig of your AKS cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. ```azurecli-interactive-az aks get-credentials -g MyResourceGroup -n MyManagedCluster +az aks get-credentials -g myResourceGroup -n myManagedCluster ``` -> [!IMPORTANT] -> You'll need the [Azure Kubernetes Service Cluster User](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-user-role) built-in role to perform the step above. --Now, you can use kubectl to, for example, list the nodes in the cluster. The first time you run it you'll need to sign in, and subsequent commands will use the respective access token. +Now, you can use `kubectl` to manage your cluster. For example, you can list the nodes in your cluster using `kubectl get nodes`. The first time you run it, you'll need to sign in, as shown in the following example: ```azurecli-interactive kubectl get nodes aks-nodepool1-93451573-vmss000001 Ready agent 3h6m v1.15.11 aks-nodepool1-93451573-vmss000002 Ready agent 3h6m v1.15.11 ``` - ## Use Azure RBAC for Kubernetes Authorization with `kubelogin` -To unblock additional scenarios like non-interactive logins, older `kubectl` versions or leveraging SSO across multiple clusters without the need to sign in to new cluster, granted that your token is still valid, AKS created an exec plugin called [`kubelogin`](https://github.com/Azure/kubelogin). +AKS created the [`kubelogin`](https://github.com/Azure/kubelogin) plugin to help unblock additional scenarios, such as non-interactive logins, older `kubectl` versions, or leveraging SSO across multiple clusters without the need to sign in to a new cluster. -You can use it by running: +You can use the `kubelogin` plugin by running the following command: ```bash export KUBECONFIG=/path/to/kubeconfig kubelogin convert-kubeconfig-``` +``` -The first time, you'll have to sign in interactively like with regular kubectl, but afterwards you'll no longer need to, even for new Azure AD clusters (as long as your token is still valid). +Similar to `kubectl`, you need to log in the first time you run it, as shown in the following example: ```bash kubectl get nodes aks-nodepool1-93451573-vmss000001 Ready agent 3h6m v1.15.11 aks-nodepool1-93451573-vmss000002 Ready agent 3h6m v1.15.11 ``` -## Clean up +## Clean up resources -### Clean Role assignment +### Delete role assignment ```azurecli-interactive+# List role assignments az role assignment list --scope $AKS_ID --query [].id -o tsv-``` -Copy the ID or IDs from all the assignments you did and then. 
--```azurecli-interactive +# Delete role assignments az role assignment delete --ids <LIST OF ASSIGNMENT IDS> ``` -### Clean up role definition +### Delete role definition ```azurecli-interactive az role definition delete -n "AKS Deployment Reader" ``` -### Delete cluster and resource group +### Delete resource group and AKS cluster ```azurecli-interactive-az group delete -n MyResourceGroup +az group delete -n myResourceGroup ``` ## Next steps -- Read more about AKS Authentication, Authorization, Kubernetes RBAC, and Azure RBAC [here](concepts-identity.md).-- Read more about Azure RBAC [here](../role-based-access-control/overview.md).-- Read more about the all the actions you can use to granularly define custom Azure roles for Kubernetes authorization [here](../role-based-access-control/resource-provider-operations.md#microsoftcontainerservice).+To learn more about AKS authentication, authorization, Kubernetes RBAC, and Azure RBAC, see: +* [Access and identity options for AKS](concepts-identity.md) +* [What is Azure RBAC?](../role-based-access-control/overview.md) +* [Microsoft.ContainerService operations](../role-based-access-control/resource-provider-operations.md#microsoftcontainerservice) <!-- LINKS - Internal --> [aks-support-policies]: support-policies.md az group delete -n MyResourceGroup [az-feature-list]: /cli/azure/feature#az_feature_list [az-feature-register]: /cli/azure/feature#az_feature_register [az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli+[az-aks-create]: /cli/azure/aks#az_aks_create +[az-aks-show]: /cli/azure/aks#az_aks_show +[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create [az-provider-register]: /cli/azure/provider#az_provider_register+[az-group-create]: /cli/azure/group#az_group_create [az-aks-update]: /cli/azure/aks#az_aks_update [managed-aad]: ./managed-aad.md+[install-azure-cli]: /cli/azure/install-azure-cli +[az-role-definition-create]: /cli/azure/role/definition#az_role_definition_create +[az-aks-get-credentials]: /cli/azure/aks#az_aks_get-credentials +[kubernetes-rbac]: concepts-identity.md#azure-rbac-for-kubernetes-authorization +[azure-rbac-kubernetes-rbac]: concepts-identity.md#azure-rbac-for-kubernetes-authorization |
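The body of `deploy-view.json` doesn't survive in the excerpt above. As a sketch, a custom role definition for this scenario could look like the following, using the standard Azure custom role JSON shape; the `Description` text and the full `managedClusters` action path are assumptions (the excerpt abbreviates the action as `Microsoft.ContainerService/apps/deployments/read`).

```json
{
    "Name": "AKS Deployment Reader",
    "Description": "Lets you view all deployments in a cluster or namespace.",
    "Actions": [],
    "NotActions": [],
    "DataActions": [
        "Microsoft.ContainerService/managedClusters/apps/deployments/read"
    ],
    "NotDataActions": [],
    "AssignableScopes": [
        "/subscriptions/<YOUR SUBSCRIPTION ID>"
    ]
}
```

Kubernetes object reads are data plane operations, which is why the action sits under `DataActions` rather than `Actions` in this sketch.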
api-management | Api Management Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-features.md | Each API Management [pricing tier](https://aka.ms/apimpricing) offers a distinct | Direct management API | No | Yes | Yes | Yes | Yes | | Azure Monitor logs and metrics | No | Yes | Yes | Yes | Yes | | Static IP | No | Yes | Yes | Yes | Yes |-| [Pass-through WebSocket APIs](websocket-api.md) | No | Yes | Yes | Yes | Yes | -| [Pass-through GraphQL APIs](graphql-apis-overview.md) | Yes | Yes | Yes | Yes | Yes | -| [Synthetic GraphQL APIs](graphql-apis-overview.md) | Yes | Yes | Yes | Yes | Yes | +| [WebSocket APIs](websocket-api.md) | No | Yes | Yes | Yes | Yes | +| [GraphQL APIs](graphql-api.md)<sup>5</sup> | Yes | Yes | Yes | Yes | Yes | +| [Synthetic GraphQL APIs (preview)](graphql-schema-resolve-api.md) | No | Yes | Yes | Yes | Yes | <sup>1</sup> Enables the use of Azure AD (and Azure AD B2C) as an identity provider for user sign in on the developer portal.<br/> <sup>2</sup> Including related functionality such as users, groups, issues, applications, and email templates and notifications.<br/> <sup>3</sup> See [Gateway overview](api-management-gateways-overview.md#feature-comparison-managed-versus-self-hosted-gateways) for a feature comparison of managed versus self-hosted gateways. In the Developer tier self-hosted gateways are limited to a single gateway node. <br/>-<sup>4</sup> The following policies aren't available in the Consumption tier: rate limit by key and quota by key. +<sup>4</sup> The following policies aren't available in the Consumption tier: rate limit by key and quota by key. <br/> +<sup>5</sup> GraphQL subscriptions aren't supported in the Consumption tier. |
api-management | Api Management Gateways Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md | The following table compares features available in the managed gateway versus th | [Function App](import-function-app-as-api.md) | ✔️ | ✔️ | ✔️ | | [Container App](import-container-app-with-oas.md) | ✔️ | ✔️ | ✔️ | | [Service Fabric](../service-fabric/service-fabric-api-management-overview.md) | Developer, Premium | ❌ | ❌ |-| [Pass-through GraphQL](graphql-apis-overview.md) | ✔️ | ✔️ | ❌ | -| [Synthetic GraphQL](graphql-apis-overview.md)| ✔️ | ✔️ | ❌ | -| [Pass-through WebSocket](websocket-api.md) | ✔️ | ❌ | ❌ | +| [Passthrough GraphQL](graphql-api.md) | ✔️ | ✔️<sup>1</sup> | ❌ | +| [Synthetic GraphQL](graphql-schema-resolve-api.md) | ✔️ | ❌ | ❌ | +| [Passthrough WebSocket](websocket-api.md) | ✔️ | ❌ | ❌ | ++<sup>1</sup> GraphQL subscriptions aren't supported in the Consumption tier. ### Policies Managed and self-hosted gateways support all available [policies](api-management | Policy | Managed (Dedicated) | Managed (Consumption) | Self-hosted<sup>1</sup> | | | -- | -- | - | | [Dapr integration](api-management-policies.md#dapr-integration-policies) | ❌ | ❌ | ✔️ |-| [GraphQL resolvers](api-management-policies.md#graphql-resolver-policies) and [GraphQL validation](api-management-policies.md#validation-policies)| ✔️ | ✔️ | ❌ | -| [Get authorization context](get-authorization-context-policy.md) | ✔️ | ✔️ | ❌ | +| [Get authorization context](get-authorization-context-policy.md) | ✔️ | ❌ | ❌ | | [Quota and rate limit](api-management-policies.md#access-restriction-policies) | ✔️ | ✔️<sup>2</sup> | ✔️<sup>3</sup>+| [Set GraphQL resolver](set-graphql-resolver-policy.md) | ✔️ | ❌ | ❌ | <sup>1</sup> Configured policies that aren't supported by the self-hosted gateway are skipped during policy execution.<br/> <sup>2</sup> The rate limit by key and quota by key policies aren't available in the Consumption tier.<br/> |
api-management | Api Management Howto Developer Portal Customize | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-developer-portal-customize.md | Although you don't need to adjust any styles, you may consider adjusting particu You can control which portal content appears to different users, based on their identity. For example, you might want to display certain pages only to users who have access to a specific product or API. Or, make a section of a page appear only for certain [groups of users](api-management-howto-create-groups.md). The developer portal has built-in controls for these needs. > [!NOTE]-> * These controls are being released during December 2022. It may take several weeks for your API Management service to receive the update. -> * Visibility and access controls are supported only in the managed developer portal. They are not supported in the [self-hosted portal](developer-portal-self-host.md). +> Visibility and access controls are supported only in the managed developer portal. They are not supported in the [self-hosted portal](developer-portal-self-host.md). * When you add or edit a page, select the **Access** tab to control the users or groups that can access the page |
api-management | Api Management Howto Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-policies.md | When configuring a policy, you must first select the scope at which the policy a For more information, see [Set or edit policies](set-edit-policies.md#use-base-element-to-set-policy-evaluation-order). -### GraphQL resolver policies --In API Management, a [GraphQL resolver](configure-graphql-resolver.md) is configured using policies scoped to a specific operation type and field in a [GraphQL schema](graphql-apis-overview.md#resolvers). --* Currently, API Management supports GraphQL resolvers that specify HTTP data sources. Configure a single [`http-data-source`](http-data-source-policy.md) policy with elements to specify a request to (and optionally response from) an HTTP data source. -* You can't include a resolver policy in policy definitions at other scopes such as API, product, or all APIs. It also doesn't inherit policies configured at other scopes. -* The gateway evaluates a resolver-scoped policy *after* any configured `inbound` and `backend` policies in the policy execution pipeline. --For more information, see [Configure a GraphQL resolver](configure-graphql-resolver.md). - ## Examples ### Apply policies specified at different scopes |
api-management | Api Management Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md | More information about policies: - [Send message to Pub/Sub topic](publish-to-dapr-policy.md): Uses Dapr runtime to publish a message to a Publish/Subscribe topic. To learn more about Publish/Subscribe messaging in Dapr, see the description in this [README](https://github.com/dapr/docs/blob/master/README.md) file. - [Trigger output binding](invoke-dapr-binding-policy.md): Uses Dapr runtime to invoke an external system via output binding. To learn more about bindings in Dapr, see the description in this [README](https://github.com/dapr/docs/blob/master/README.md) file. -## GraphQL resolver policies -- [HTTP data source for resolver](http-data-source-policy.md) - Configures the HTTP request and optionally the HTTP response to resolve data for an object type and field in a GraphQL schema.-- [Publish event to GraphQL subscription](publish-event-policy.md) - Publishes an event to one or more subscriptions specified in a GraphQL API schema. Used in the `http-response` element of the `http-data-source` policy+## GraphQL API policies +- [Validate GraphQL request](validate-graphql-request-policy.md) - Validates and authorizes a request to a GraphQL API. +- [Set GraphQL resolver](set-graphql-resolver-policy.md) - Retrieves or sets data for a GraphQL field in an object type specified in a GraphQL schema. ## Transformation policies - [Convert JSON to XML](json-to-xml-policy.md) - Converts request or response body from JSON to XML. More information about policies: ## Validation policies - [Validate content](validate-content-policy.md) - Validates the size or content of a request or response body against one or more API schemas. The supported schema formats are JSON and XML.-- [Validate GraphQL request](validate-graphql-request-policy.md) - Validates and authorizes a request to a GraphQL API. - [Validate parameters](validate-parameters-policy.md) - Validates the request header, query, or path parameters against the API schema. - [Validate headers](validate-headers-policy.md) - Validates the response headers against the API schema. - [Validate status code](validate-status-code-policy.md) - Validates the HTTP status codes in |
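To make the GraphQL validation policy listed above concrete, a minimal sketch of `validate-graphql-request` might look like the following; the size and depth limits and the `/Query/users` path are illustrative assumptions rather than values from the article.

```xml
<validate-graphql-request error-variable-name="validationErrors" max-size="102400" max-depth="10">
    <authorize>
        <!-- Illustrative rule: reject any request that selects this field -->
        <rule path="/Query/users" action="reject" />
    </authorize>
</validate-graphql-request>
```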
api-management | Api Management Policy Expressions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policy-expressions.md | The `context` variable is implicitly available in every policy [expression](api- |Context Variable|Allowed methods, properties, and parameter values| |-|-|-|`context`|[`Api`](#ref-context-api): [`IApi`](#ref-iapi)<br /><br /> [`Deployment`](#ref-context-deployment)<br /><br /> Elapsed: `TimeSpan` - time interval between the value of `Timestamp` and current time<br /><br /> [`GraphQL`](#ref-context-graphql)<br /><br />[`LastError`](#ref-context-lasterror)<br /><br /> [`Operation`](#ref-context-operation)<br /><br /> [`Request`](#ref-context-request)<br /><br /> `RequestId`: `Guid` - unique request identifier<br /><br /> [`Response`](#ref-context-response)<br /><br /> [`Subscription`](#ref-context-subscription)<br /><br /> `Timestamp`: `DateTime` - point in time when request was received<br /><br /> `Tracing`: `bool` - indicates if tracing is on or off <br /><br /> [User](#ref-context-user)<br /><br /> [`Variables`](#ref-context-variables): `IReadOnlyDictionary<string, object>`<br /><br /> `void Trace(message: string)`| +|`context`|[`Api`](#ref-context-api): [`IApi`](#ref-iapi)<br /><br /> [`Deployment`](#ref-context-deployment)<br /><br /> Elapsed: `TimeSpan` - time interval between the value of `Timestamp` and current time<br /><br /> [`LastError`](#ref-context-lasterror)<br /><br /> [`Operation`](#ref-context-operation)<br /><br /> [`Product`](#ref-context-product)<br /><br /> [`Request`](#ref-context-request)<br /><br /> `RequestId`: `Guid` - unique request identifier<br /><br /> [`Response`](#ref-context-response)<br /><br /> [`Subscription`](#ref-context-subscription)<br /><br /> `Timestamp`: `DateTime` - point in time when request was received<br /><br /> `Tracing`: `bool` - indicates if tracing is on or off <br /><br /> [User](#ref-context-user)<br /><br /> [`Variables`](#ref-context-variables): `IReadOnlyDictionary<string, object>`<br /><br /> `void Trace(message: string)`| |<a id="ref-context-api"></a>`context.Api`|`Id`: `string`<br /><br /> `IsCurrentRevision`: `bool`<br /><br /> `Name`: `string`<br /><br /> `Path`: `string`<br /><br /> `Revision`: `string`<br /><br /> `ServiceUrl`: [`IUrl`](#ref-iurl)<br /><br /> `Version`: `string` | |<a id="ref-context-deployment"></a>`context.Deployment`|[`Gateway`](#ref-context-gateway)<br /><br /> `GatewayId`: `string` (returns 'managed' for managed gateways)<br /><br /> `Region`: `string`<br /><br /> `ServiceId`: `string`<br /><br /> `ServiceName`: `string`<br /><br /> `Certificates`: `IReadOnlyDictionary<string, X509Certificate2>`| |<a id="ref-context-gateway"></a>`context.Deployment.Gateway`|`Id`: `string` (returns 'managed' for managed gateways)<br /><br /> `InstanceId`: `string` (returns 'managed' for managed gateways)<br /><br /> `IsManaged`: `bool`|-|<a id="ref-context-graphql"></a>`context.GraphQL`|`GraphQLArguments`: `IGraphQLDataObject`<br /><br /> `Parent`: `IGraphQLDataObject`<br/><br/>[Examples](configure-graphql-resolver.md#graphql-context)| |<a id="ref-context-lasterror"></a>`context.LastError`|`Source`: `string`<br /><br /> `Reason`: `string`<br /><br /> `Message`: `string`<br /><br /> `Scope`: `string`<br /><br /> `Section`: `string`<br /><br /> `Path`: `string`<br /><br /> `PolicyId`: `string`<br /><br /> For more information about `context.LastError`, see [Error handling](api-management-error-handling-policies.md).| |<a 
id="ref-context-operation"></a>`context.Operation`|`Id`: `string`<br /><br /> `Method`: `string`<br /><br /> `Name`: `string`<br /><br /> `UrlTemplate`: `string`| |<a id="ref-context-product"></a>`context.Product`|`Apis`: `IEnumerable<`[`IApi`](#ref-iapi)`>`<br /><br /> `ApprovalRequired`: `bool`<br /><br /> `Groups`: `IEnumerable<`[`IGroup`](#ref-igroup)`>`<br /><br /> `Id`: `string`<br /><br /> `Name`: `string`<br /><br /> `State`: `enum ProductState {NotPublished, Published}`<br /><br /> `SubscriptionLimit`: `int?`<br /><br /> `SubscriptionRequired`: `bool`| The `context` variable is implicitly available in every policy [expression](api- |<a id="ref-context-subscription"></a>`context.Subscription`|`CreatedDate`: `DateTime`<br /><br /> `EndDate`: `DateTime?`<br /><br /> `Id`: `string`<br /><br /> `Key`: `string`<br /><br /> `Name`: `string`<br /><br /> `PrimaryKey`: `string`<br /><br /> `SecondaryKey`: `string`<br /><br /> `StartDate`: `DateTime?`| |<a id="ref-context-user"></a>`context.User`|`Email`: `string`<br /><br /> `FirstName`: `string`<br /><br /> `Groups`: `IEnumerable<`[`IGroup`](#ref-igroup)`>`<br /><br /> `Id`: `string`<br /><br /> `Identities`: `IEnumerable<`[`IUserIdentity`](#ref-iuseridentity)`>`<br /><br /> `LastName`: `string`<br /><br /> `Note`: `string`<br /><br /> `RegistrationDate`: `DateTime`| |<a id="ref-iapi"></a>`IApi`|`Id`: `string`<br /><br /> `Name`: `string`<br /><br /> `Path`: `string`<br /><br /> `Protocols`: `IEnumerable<string>`<br /><br /> `ServiceUrl`: [`IUrl`](#ref-iurl)<br /><br /> `SubscriptionKeyParameterNames`: [`ISubscriptionKeyParameterNames`](#ref-isubscriptionkeyparameternames)|-|<a id="ref-igraphqldataobject"></a>`IGraphQLDataObject`|TBD<br /><br />| |<a id="ref-igroup"></a>`IGroup`|`Id`: `string`<br /><br /> `Name`: `string`| |<a id="ref-imessagebody"></a>`IMessageBody`|`As<T>(bool preserveContent = false): Where T: string, byte[], JObject, JToken, JArray, XNode, XElement, XDocument` <br /><br /> - The `context.Request.Body.As<T>` and `context.Response.Body.As<T>` methods read a request or response message body in specified type `T`. <br/><br/> - Or - <br/><br/>`AsFormUrlEncodedContent(bool preserveContent = false)` <br/></br>- The `context.Request.Body.AsFormUrlEncodedContent()` and `context.Response.Body.AsFormUrlEncodedContent()` methods read URL-encoded form data in a request or response message body and return an `IDictionary<string, IList<string>` object. The decoded object supports `IDictionary` operations and the following expressions: `ToQueryString()`, `JsonConvert.SerializeObject()`, `ToFormUrlEncodedContent().` <br/><br/> By default, the `As<T>` and `AsFormUrlEncodedContent()` methods:<br /><ul><li>Use the original message body stream.</li><li>Render it unavailable after it returns.</li></ul> <br />To avoid that and have the method operate on a copy of the body stream, set the `preserveContent` parameter to `true`, as shown in examples for the [set-body](set-body-policy.md#examples) policy.| |<a id="ref-iprivateendpointconnection"></a>`IPrivateEndpointConnection`|`Name`: `string`<br /><br /> `GroupId`: `string`<br /><br /> `MemberName`: `string`<br /><br />For more information, see the [REST API](/rest/api/apimanagement/current-ga/private-endpoint-connection/list-private-link-resources).| |
api-management | Configure Graphql Resolver | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/configure-graphql-resolver.md | - Title: Configure GraphQL resolver in Azure API Management -description: Configure a GraphQL resolver in Azure AI Management for a field in an object type specified in a GraphQL schema ----- Previously updated : 02/22/2023----# Configure a GraphQL resolver --Configure a resolver to retrieve or set data for a GraphQL field in an object type specified in a GraphQL schema. The schema must be imported to API Management. Currently, API Management supports resolvers that use HTTP-based data sources (REST or SOAP APIs). --* A resolver is a resource containing a policy definition that's invoked only when a matching object type and field is executed. -* Each resolver resolves data for a single field. To resolve data for multiple fields, configure a separate resolver for each. -* Resolver-scoped policies are evaluated *after* any `inbound` and `backend` policies in the policy execution pipeline. They don't inherit policies from other scopes. For more information, see [Policies in API Management](api-management-howto-policies.md). ---> [!IMPORTANT] -> * If you use the preview `set-graphql-resolver` policy in policy definitions, you should migrate to the managed resolvers described in this article. -> * After you configure a managed resolver for a GraphQL field, the gateway will skip the `set-graphql-resolver` policy in any policy definitions. You can't combine use of managed resolvers and the `set-graphql-resolver` policy in your API Management instance. --## Prerequisites --- An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md).-- Import a [pass-through](graphql-api.md) or [synthetic](graphql-schema-resolve-api.md) GraphQL API.--## Create a resolver --1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance. --1. In the left menu, select **APIs** and then the name of your GraphQL API. -1. On the **Design** tab, review the schema for a field in an object type where you want to configure a resolver. - 1. Select a field, and then in the left margin, hover the pointer. - 1. Select **+ Add Resolver**. -- :::image type="content" source="media/configure-graphql-resolver/add-resolver.png" alt-text="Screenshot of adding a resolver from a field in GraphQL schema in the portal."::: -1. On the **Create Resolver** page, update the **Name** property if you want to, optionally enter a **Description**, and confirm or update the **Type** and **Field** selections. -1. In the **Resolver policy** editor, update the [`http-data-source`](http-data-source-policy.md) policy with child elements for your scenario. - 1. Update the required `http-request` element with policies to transform the GraphQL operation to an HTTP request. - 1. Optionally add an `http-response` element, and add child policies to transform the HTTP response of the resolver. If the `http-response` element isn't specified, the response is returned as a raw string. - 1. Select **Create**. - - :::image type="content" source="media/configure-graphql-resolver/configure-resolver-policy.png" alt-text="Screenshot of resolver policy editor in the portal." lightbox="media/configure-graphql-resolver/configure-resolver-policy.png"::: --1. The resolver is attached to the field. Go to the **Resolvers** tab to list and manage the resolvers configured for the API. 
-- :::image type="content" source="media/configure-graphql-resolver/list-resolvers.png" alt-text="Screenshot of the resolvers list for GraphQL API in the portal." lightbox="media/configure-graphql-resolver/list-resolvers.png"::: -- > [!TIP] - > The **Linked** column indicates whether or not the resolver is configured for a field that's currently in the GraphQL schema. If a resolver isn't linked, it can't be invoked. ----## GraphQL context --* The context for the HTTP request and HTTP response (if specified) differs from the context for the original gateway API request: - * `context.GraphQL` properties are set to the arguments (`Arguments`) and parent object (`Parent`) for the current resolver execution. - * The HTTP request context contains arguments that are passed in the GraphQL query as its body. - * The HTTP response context is the response from the independent HTTP call made by the resolver, not the context for the complete response for the gateway request. -The `context` variable that is passed through the request and response pipeline is augmented with the GraphQL context when used with a GraphQL resolver. --### context.GraphQL.parent --The `context.ParentResult` is set to the parent object for the current resolver execution. Consider the following partial schema: --``` graphql -type Comment { - id: ID! - owner: string! - content: string! -} --type Blog { - id: ID! - Title: string! - content: string! - comments: [Comment]! - comment(id: ID!): Comment -} --type Query { - getBlog(): [Blog]! - getBlog(id: ID!): Blog -} -``` --Also, consider a GraphQL query for all the information for a specific blog: --``` graphql -query { - getBlog(id: 1) { - title - content - comments { - id - owner - content - } - } -} -``` --If you set a resolver for the `comments` field in the `Blog` type, you'll want to understand which blog ID to use. You can get the ID of the blog using `context.GraphQL.Parent["id"]` as shown in the following resolver: --``` xml -<http-data-source> - <http-request> - <set-method>GET</set-method> - <set-url>@($"https://data.contoso.com/api/blog/{context.GraphQL.Parent["id"]}") - }</set-url> - </http-request> -</http-data-source> -``` --### context.GraphQL.Arguments --The arguments for a parameterized GraphQL query are added to `context.GraphQL.Arguments`. For example, consider the following two queries: --``` graphql -query($id: Int) { - getComment(id: $id) { - content - } -} --query { - getComment(id: 2) { - content - } -} -``` --These queries are two ways of calling the `getComment` resolver. GraphQL sends the following JSON payload: --``` json -{ - "query": "query($id: Int) { getComment(id: $id) { content } }", - "variables": { "id": 2 } -} --{ - "query": "query { getComment(id: 2) { content } }" -} -``` --You can define the resolver as follows: --``` xml -<http-data-source> - <http-request> - <set-method>GET</set-method> - <set-url>@($"https://data.contoso.com/api/comment/{context.GraphQL.Arguments["id"]}")</set-url> - </http-request> -</http-data-source> -``` --## Next steps --For more resolver examples, see: ---* [GraphQL resolver policies](api-management-policies.md#graphql-resolver-policies) --* [Samples APIs for Azure API Management](https://github.com/Azure-Samples/api-management-sample-apis) |
api-management | Graphql Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-api.md | Title: Add a GraphQL API to Azure API Management | Microsoft Docs + Title: Import a GraphQL API to Azure API Management | Microsoft Docs description: Learn how to add an existing GraphQL service as an API in Azure API Management using the Azure portal, Azure CLI, or Azure PowerShell. Manage the API and enable queries to pass through to the GraphQL endpoint. Previously updated : 02/24/2023 Last updated : 10/27/2022 -> * Add a pass-through GraphQL API to your API Management instance. +> * Learn more about the benefits of using GraphQL APIs. +> * Add a GraphQL API to your API Management instance. > * Test your GraphQL API.+> * Learn the limitations of your GraphQL API in API Management. If you want to import a GraphQL schema and set up field resolvers using REST or SOAP API endpoints, see [Import a GraphQL schema and set up field resolvers](graphql-schema-resolve-api.md). If you want to import a GraphQL schema and set up field resolvers using REST or 1. In the dialog box, select **Full** and complete the required form fields. - :::image type="content" source="media/graphql-api/create-from-graphql-endpoint.png" alt-text="Screenshot of fields for creating a GraphQL API."::: + :::image type="content" source="media/graphql-api/create-from-graphql-schema.png" alt-text="Screenshot of fields for creating a GraphQL API."::: | Field | Description | |-|-| | **Display name** | The name by which your GraphQL API will be displayed. | | **Name** | Raw name of the GraphQL API. Automatically populates as you type the display name. |- | **GraphQL type** | Select **Pass-through GraphQL** to import from an existing GraphQL API endpoint. | - | **GraphQL API endpoint** | The base URL with your GraphQL API endpoint name. <br /> For example: *`https://example.com/your-GraphQL-name`*. You can also use a common "swapi" GraphQL endpoint such as `https://swapi-graphql.azure-api.net/graphql` as a demo. | + | **GraphQL API endpoint** | The base URL with your GraphQL API endpoint name. <br /> For example: *`https://example.com/your-GraphQL-name`*. You can also use a common "Star Wars" GraphQL endpoint such as `https://swapi-graphql.azure-api.net/graphql` as a demo. | | **Upload schema** | Optionally select to browse and upload your schema file to replace the schema retrieved from the GraphQL endpoint (if available). | | **Description** | Add a description of your API. |- | **URL scheme** | Make a selection based on your GraphQL endpoint. Select one of the options that includes a WebSocket scheme (**WS** or **WSS**) if your GraphQL API includes the subscription type. Default selection: *HTTP(S)*. | + | **URL scheme** | Select **HTTP**, **HTTPS**, or **Both**. Default selection: *Both*. | | **API URL suffix**| Add a URL suffix to identify this specific API in this API Management instance. It has to be unique in this API Management instance. | | **Base URL** | Uneditable field displaying your API base URL | | **Tags** | Associate your GraphQL API with new or existing tags. | | **Products** | Associate your GraphQL API with a product to publish it. |+ | **Gateways** | Associate your GraphQL API with existing gateways. Default gateway selection: *Managed*. | | **Version this API?** | Select to apply a versioning scheme to your GraphQL API. | 1. Select **Create**.-1. After the API is created, browse or modify the schema on the **Design** tab. +1. 
After the API is created, browse the schema on the **Design** tab, in the **Frontend** section. :::image type="content" source="media/graphql-api/explore-schema.png" alt-text="Screenshot of exploring the GraphQL schema in the portal."::: #### [Azure CLI](#tab/cli) After importing the API, if needed, you can update the settings by using the [Se [!INCLUDE [api-management-graphql-test.md](../../includes/api-management-graphql-test.md)] -### Test a subscription -If your GraphQL API supports a subscription, you can test it in the test console. --1. Ensure that your API allows a WebSocket URL scheme (**WS** or **WSS**) that's appropriate for your API. You can enable this setting on the **Settings** tab. -1. Set up a subscription query in the query editor, and then select **Connect** to establish a WebSocket connection to the backend service. -- :::image type="content" source="media/graphql-api/test-graphql-subscription.png" alt-text="Screenshot of a subscription query in the query editor."::: -1. Review connection details in the **Subscription** pane. -- :::image type="content" source="media/graphql-api/graphql-websocket-connection.png" alt-text="Screenshot of Websocket connection in the portal."::: - -1. Subscribed events appear in the **Subscription** pane. The WebSocket connection is maintained until you disconnect it or you connect to a new WebSocket subscription. -- :::image type="content" source="media/graphql-api/graphql-subscription-event.png" alt-text="Screenshot of GraphQL subscription events in the portal."::: --## Secure your GraphQL API --Secure your GraphQL API by applying both existing [access control policies](api-management-policies.md#access-restriction-policies) and a [GraphQL validation policy](validate-graphql-request-policy.md) to protect against GraphQL-specific attacks. - [!INCLUDE [api-management-define-api-topics.md](../../includes/api-management-define-api-topics.md)] ## Next steps |
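For a quick smoke test after import, a query along these lines can be run in the test console. This sketch assumes the public "Star Wars" demo endpoint mentioned above and its published schema; field names will differ for your own backend.

```graphql
# Fetch a small slice of data from the demo endpoint
query {
  allFilms {
    films {
      title
      releaseDate
    }
  }
}
```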
api-management | Graphql Apis Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-apis-overview.md | - Title: Support for GraphQL APIs - Azure API Management -description: Learn about GraphQL and how Azure API Management helps you manage GraphQL APIs. ----- Previously updated : 02/26/2023----# Overview of GraphQL APIs in Azure API Management --You can use API Management to manage GraphQL APIs - APIs based on the GraphQL query language. GraphQL provides a complete and understandable description of the data in an API, giving clients the power to efficiently retrieve exactly the data they need. [Learn more about GraphQL](https://graphql.org/learn/) --API Management helps you import, manage, protect, test, publish, and monitor GraphQL APIs. You can choose one of two API models: ---|Pass-through GraphQL |Synthetic GraphQL | -||| -| ▪️ Pass-through API to existing GraphQL service endpoint<br><br/>▪️ Support for GraphQL queries, mutations, and subscriptions | ▪️ API based on a custom GraphQL schema<br></br>▪️ Support for GraphQL queries, mutations, and subscriptions<br/><br/>▪️ Configure custom resolvers, for example, to HTTP data sources<br/><br/>▪️ Develop GraphQL schemas and GraphQL-based clients while consuming data from legacy APIs | --## Availability --* GraphQL APIs are supported in all API Management service tiers -* Pass-through and synthetic GraphQL APIs currently aren't supported in a self-hosted gateway -* GraphQL subscription support in synthetic GraphQL APIs is currently in preview --## What is GraphQL? --GraphQL is an open-source, industry-standard query language for APIs. Unlike REST-style APIs designed around actions over resources, GraphQL APIs support a broader set of use cases and focus on data types, schemas, and queries. --The GraphQL specification explicitly solves common issues experienced by client web apps that rely on REST APIs: --* It can take a large number of requests to fulfill the data needs for a single page -* REST APIs often return more data than needed the page being rendered -* The client app needs to poll to get new information --Using a GraphQL API, the client app can specify the data they need to render a page in a query document that is sent as a single request to a GraphQL service. A client app can also subscribe to data updates pushed from the GraphQL service in real time. --## Schema and operation types --In API Management, add a GraphQL API from a GraphQL schema, either retrieved from a backend GraphQL API endpoint or uploaded by you. A GraphQL schema describes: --* Data object types and fields that clients can request from a GraphQL API -* Operation types allowed on the data, such as queries --For example, a basic GraphQL schema for user data and a query for all users might look like: --``` -type Query { - users: [User] -} --type User { - id: String! - name: String! -} -``` --API Management supports the following operation types in GraphQL schemas. For more information about these operation types, see the [GraphQL specification](https://spec.graphql.org/October2021/#sec-Subscription-Operation-Definitions). 
--* **Query** - Fetches data, similar to a `GET` operation in REST -* **Mutation** - Modifies server-side data, similar to a `PUT` or `PATCH` operation in REST -* **Subscription** - Enables notifying subscribed clients in real time about changes to data on the GraphQL service -- For example, when data is modified via a GraphQL mutation, subscribed clients could be automatically notified about the change. --> [!IMPORTANT] -> API Management supports subscriptions implemented using the [graphql-ws](https://github.com/enisdenjo/graphql-ws) WebSocket protocol. Queries and mutations aren't supported over WebSocket. -> --## Resolvers --*Resolvers* take care of mapping the GraphQL schema to backend data, producing the data for each field in an object type. The data source could be an API, a database, or another service. For example, a resolver function would be responsible for returning data for the `users` query in the preceding example. --In API Management, you can create a *custom resolver* to map a field in an object type to a backend data source. You configure resolvers for fields in synthetic GraphQL API schemas, but you can also configure them to override the default field resolvers used by pass-through GraphQL APIs. --API Management currently supports HTTP-based resolvers to return the data for fields in a GraphQL schema. To use an HTTP-based resolver, configure a [`http-data-source`](http-data-source-policy.md) policy that transforms the API request (and optionally the response) into an HTTP request/response. --For example, a resolver for the preceding `users` query might map to a `GET` operation in a backend REST API: --```xml -<http-data-source> - <http-request> - <set-method>GET</set-method> - <set-url>https://myapi.contoso.com/api/users</set-url> - </http-request> -</http-data-source> -``` --For more information, see [Configure a GraphQL resolver](configure-graphql-resolver.md). --## Manage GraphQL APIs --* Secure GraphQL APIs by applying both existing access control policies and a [GraphQL validation policy](validate-graphql-request-policy.md) to secure and protect against GraphQL-specific attacks. -* Explore the GraphQL schema and run test queries against the GraphQL APIs in the Azure and developer portals. ---## Next steps --- [Import a GraphQL API](graphql-api.md)-- [Import a GraphQL schema and set up field resolvers](graphql-schema-resolve-api.md) |
api-management | Graphql Schema Resolve Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-schema-resolve-api.md | Title: Add a synthetic GraphQL API to Azure API Management | Microsoft Docs + Title: Import GraphQL schema and set up field resolvers | Microsoft Docs -description: Add a synthetic GraphQL API by importing a GraphQL schema to API Management and configuring field resolvers that use HTTP-based data sources. +description: Import a GraphQL schema to API Management and configure a policy to resolve a GraphQL query using an HTTP-based data source. Previously updated : 02/21/2023 Last updated : 05/17/2022 -# Add a synthetic GraphQL API and set up field resolvers +# Import a GraphQL schema and set up field resolvers [!INCLUDE [api-management-graphql-intro.md](../../includes/api-management-graphql-intro.md)] + In this article, you'll: > [!div class="checklist"] > * Import a GraphQL schema to your API Management instance-> * Set up a resolver for a GraphQL query using an existing HTTP endpoint +> * Set up a resolver for a GraphQL query using an existing HTTP endpoint > * Test your GraphQL API If you want to expose an existing GraphQL endpoint as an API, see [Import a GraphQL API](graphql-api.md). If you want to expose an existing GraphQL endpoint as an API, see [Import a Grap ## Add a GraphQL schema 1. From the side navigation menu, under the **APIs** section, select **APIs**.-1. Under **Define a new API**, select the **GraphQL** icon. +1. Under **Define a new API**, select the **Synthetic GraphQL** icon. - :::image type="content" source="media/graphql-api/import-graphql-api.png" alt-text="Screenshot of selecting GraphQL icon from list of APIs."::: + :::image type="content" source="media/graphql-schema-resolve-api/import-graphql-api.png" alt-text="Screenshot of selecting Synthetic GraphQL icon from list of APIs."::: 1. In the dialog box, select **Full** and complete the required form fields. :::image type="content" source="media/graphql-schema-resolve-api/create-from-graphql-schema.png" alt-text="Screenshot of fields for creating a GraphQL API."::: - | Field | Description | |-|-| | **Display name** | The name by which your GraphQL API will be displayed. | | **Name** | Raw name of the GraphQL API. Automatically populates as you type the display name. |- | **GraphQL type** | Select **Synthetic GraphQL** to import from a GraphQL schema file. | - | **Fallback GraphQL endpoint** | Optionally enter a URL with a GraphQL API endpoint name. API Management passes GraphQL queries to this endpoint when a custom resolver isn't set for a field. | - | **Description** | Add a description of your API. | - | **URL scheme** | Make a selection based on your GraphQL endpoint. Select one of the options that includes a WebSocket scheme (**WS** or **WSS**) if your GraphQL API includes the subscription type. Default selection: *HTTP(S)*. | + | **Fallback GraphQL endpoint** | For this scenario, optionally enter a URL with a GraphQL API endpoint name. API Management passes GraphQL queries to this endpoint when a custom resolver isn't set for a field. | + | **Upload schema file** | Select to browse and upload a valid GraphQL schema file with the `.graphql` extension. | + | **Description** | Add a description of your API. | + | **URL scheme** | Select **HTTP**, **HTTPS**, or **Both**. Default selection: *Both*. | | **API URL suffix**| Add a URL suffix to identify this specific API in this API Management instance. 
It has to be unique in this API Management instance. | | **Base URL** | Uneditable field displaying your API base URL | | **Tags** | Associate your GraphQL API with new or existing tags. | | **Products** | Associate your GraphQL API with a product to publish it. |+ | **Gateways** | Associate your GraphQL API with existing gateways. Default gateway selection: *Managed*. | | **Version this API?** | Select to apply a versioning scheme to your GraphQL API. |- 1. Select **Create**. -1. After the API is created, browse or modify the schema on the **Design** tab. +1. After the API is created, browse the schema on the **Design** tab, in the **Frontend** section. ## Configure resolver -Configure a resolver to map a field in the schema to an existing HTTP endpoint. --<!-- Add link to resolver how-to article for details --> +Configure the [set-graphql-resolver](set-graphql-resolver-policy.md) policy to map a field in the schema to an existing HTTP endpoint. Suppose you imported the following basic GraphQL schema and wanted to set up a resolver for the *users* query. type User { ``` 1. From the side navigation menu, under the **APIs** section, select **APIs** > your GraphQL API.-1. On the **Design** tab, review the schema for a field in an object type where you want to configure a resolver. - 1. Select a field, and then in the left margin, hover the pointer. - 1. Select **+ Add Resolver** -- :::image type="content" source="media/graphql-schema-resolve-api/add-resolver.png" alt-text="Screenshot of adding a GraphQL resolver in the portal."::: --1. On the **Create Resolver** page, update the **Name** property if you want to, optionally enter a **Description**, and confirm or update the **Type** and **Field** selections. +1. On the **Design** tab of your GraphQL API, select **All operations**. +1. In the **Backend** processing section, select **+ Add policy**. +1. Configure the `set-graphql-resolver` policy to resolve the *users* query using an HTTP data source. -1. In the **Resolver policy** editor, update the `<http-data-source>` element with child elements for your scenario. For example, the following resolver retrieves the *users* field by using a `GET` call on an existing HTTP data source. + For example, the following `set-graphql-resolver` policy retrieves the *users* field by using a `GET` call on an existing HTTP data source. - ```xml+ <set-graphql-resolver parent-type="Query" field="users"> <http-data-source> <http-request> <set-method>GET</set-method> <set-url>https://myapi.contoso.com/users</set-url> </http-request> </http-data-source>+ </set-graphql-resolver> ```-- :::image type="content" source="media/graphql-schema-resolve-api/configure-resolver-policy.png" alt-text="Screenshot of configuring resolver policy in the portal."::: -1. Select **Create**. -1. To resolve data for another field in the schema, repeat the preceding steps to create a resolver. +1. To resolve data for other fields in the schema, repeat the preceding step. +1. Select **Save**. [!INCLUDE [api-management-graphql-test.md](../../includes/api-management-graphql-test.md)] -## Secure your GraphQL API --Secure your GraphQL API by applying both existing [access control policies](api-management-policies.md#access-restriction-policies) and a [GraphQL validation policy](validate-graphql-request-policy.md) to protect against GraphQL-specific attacks. -- [!INCLUDE [api-management-define-api-topics.md](../../includes/api-management-define-api-topics.md)] ## Next steps |
api-management | Http Data Source Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/http-data-source-policy.md | - Title: Azure API Management policy reference - http-data-source | Microsoft Docs -description: Reference for the http-data-source resolver policy available for use in Azure API Management. Provides policy usage, settings, and examples. ----- Previously updated : 02/23/2023----# HTTP data source for a resolver --The `http-data-source` resolver policy configures the HTTP request and optionally the HTTP response to resolve data for an object type and field in a GraphQL schema. The schema must be imported to API Management. ---## Policy statement --```xml -<http-data-source> - <http-request> - <set-method>...set-method policy configuration...</set-method> - <set-url>URL</set-url> - <set-header>...set-header policy configuration...</set-header> - <set-body>...set-body policy configuration...</set-body> - <authentication-certificate>...authentication-certificate policy configuration...</authentication-certificate> - </http-request> - <http-response> - <xml-to-json>...xml-to-json policy configuration...</xml-to-json> - <find-and-replace>...find-and-replace policy configuration...</find-and-replace> - <set-body>...set-body policy configuration...</set-body> - <publish-event>...publish-event policy configuration...</publish-event> - </http-response> -</http-data-source> -``` --## Elements --|Name|Description|Required| -|-|--|--| -| http-request | Specifies a URL and child policies to configure the resolver's HTTP request. Each child element can be specified at most once. | Yes | -| http-response | Optionally specifies child policies to configure the resolver's HTTP response. If not specified, the response is returned as a raw string. Each child element can be specified at most once. | No | --### http-request elements --> [!NOTE] -> Each child element may be specified at most once. Specify elements in the order listed. ---|Element|Description|Required| -|-|--|--| -| [set-method](set-method-policy.md) | Sets the method of the resolver's HTTP request. | Yes | -| set-url | Sets the URL of the resolver's HTTP request. | Yes | -| [set-header](set-header-policy.md) | Sets a header in the resolver's HTTP request. | No | -| [set-body](set-body-policy.md) | Sets the body in the resolver's HTTP request. | No | -| [authentication-certificate](authentication-certificate-policy.md) | Authenticates using a client certificate in the resolver's HTTP request. | No | --### http-response elements --> [!NOTE] -> Each child element may be specified at most once. Specify elements in the order listed. --|Name|Description|Required| -|-|--|--| -| [xml-to-json](xml-to-json-policy.md) | Transforms the resolver's HTTP response from XML to JSON. | No | -| [find-and-replace](find-and-replace-policy.md) | Finds a substring in the resolver's HTTP response and replaces it with a different substring. | No | -| [set-body](set-body-policy.md) | Sets the body in the resolver's HTTP response. | No | -| [publish-event](publish-event-policy.md) | Publishes an event to one or more subscriptions specified in the GraphQL API schema. | No | --## Usage --- [**Policy scopes:**](./api-management-howto-policies.md#scopes) GraphQL resolver-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption--### Usage notes --* This policy is invoked only when resolving a single field in a matching GraphQL query, mutation, or subscription. 
--## Examples --### Resolver for GraphQL query --The following example resolves a query by making an HTTP `GET` call to a backend data source. --#### Example schema --``` -type Query { - users: [User] -} --type User { - id: String! - name: String! -} -``` --#### Example policy --```xml -<http-data-source> - <http-request> - <set-method>GET</set-method> - <set-url>https://data.contoso.com/get/users</set-url> - </http-request> -</http-data-source> -``` --### Resolver for a GraphQL query that returns a list, using a Liquid template --The following example uses a Liquid template, supported for use in the [set-body](set-body-policy.md) policy, to return a list in the HTTP response to a query. It also renames the `username` field in the response from the REST API to `name` in the GraphQL response. --#### Example schema --``` -type Query { - users: [User] -} --type User { - id: String! - name: String! -} -``` --#### Example policy --```xml -<http-data-source> - <http-request> - <set-method>GET</set-method> - <set-url>https://data.contoso.com/users</set-url> - </http-request> - <http-response> - <set-body template="liquid"> - [ - {% JSONArrayFor elem in body %} - { - "name": "{{elem.username}}" - } - {% endJSONArrayFor %} - ] - </set-body> - </http-response> -</http-data-source> -``` --### Resolver for GraphQL mutation --The following example resolves a mutation that inserts data by making a `POST` request to an HTTP data source. The policy expression in the `set-body` policy of the HTTP request modifies a `name` argument that is passed in the GraphQL query as its body. The body that is sent will look like the following JSON: --``` json -{ - "name": "the-provided-name" -} -``` --#### Example schema --``` -type Query { - users: [User] -} --type Mutation { - makeUser(name: String!): User -} --type User { - id: String! - name: String! -} -``` --#### Example policy --```xml -<http-data-source> - <http-request> - <set-method>POST</set-method> - <set-url> https://data.contoso.com/user/create </set-url> - <set-header name="Content-Type" exists-action="override"> - <value>application/json</value> - </set-header> - <set-body>@{ - var args = context.Request.Body.As<JObject>(true)["arguments"]; - JObject jsonObject = new JObject(); - jsonObject.Add("name", args["name"]); - return jsonObject.ToString(); - }</set-body> - </http-request> -</http-data-source> -``` --## Related policies --* [GraphQL resolver policies](api-management-policies.md#graphql-resolver-policies) - |
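The `http-response` child policies listed earlier can also be combined in a single resolver. The following hedged sketch (the backend URL is illustrative) converts an XML payload from the backend into JSON before it's used to populate the GraphQL field:

```xml
<http-data-source>
    <http-request>
        <set-method>GET</set-method>
        <set-url>https://data.contoso.com/users.xml</set-url>
    </http-request>
    <http-response>
        <!-- Convert the backend's XML payload to JSON so it can resolve the GraphQL field. -->
        <xml-to-json kind="direct" apply="always" consider-accept-header="false" />
    </http-response>
</http-data-source>
```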
api-management | Publish Event Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/publish-event-policy.md | - Title: Azure API Management policy reference - publish-event | Microsoft Docs -description: Reference for the publish-event policy available for use in Azure API Management. Provides policy usage, settings, and examples. ----- Previously updated : 02/23/2023----# Publish event to GraphQL subscription --The `publish-event` policy publishes an event to one or more subscriptions specified in a GraphQL API schema. Configure the policy using an [http-data-source](http-data-source-policy.md) GraphQL resolver for a related field in the schema for another operation type such as a mutation. At runtime, the event is published to connected GraphQL clients. Learn more about [GraphQL APIs in API Management](graphql-apis-overview.md). ---<!--Link to resolver configuration article --> --## Policy statement --```xml -<http-data-source> - <http-request> - [...] - </http-request> - <http-response> - [...] - <publish-event> - <targets> - <graphql-subscription id="subscription field" /> - </targets> - </publish-event> - </http-response> -</http-data-source> -``` --## Elements --|Name|Description|Required| -|-|--|--| -| targets | One or more subscriptions in the GraphQL schema, specified in `graphql-subscription` subelements, to which the event is published. | Yes | ---## Usage --- [**Policy sections:**](./api-management-howto-policies.md#sections) `http-response` element in `http-data-source` resolver-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) GraphQL resolver only-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption--### Usage notes --* This policy is invoked only when a related GraphQL query or mutation is executed. --## Example --The following example policy definition is configured in a resolver for the `createUser` mutation. It publishes an event to the `onUserCreated` subscription. --### Example schema --``` -type User { - id: Int! - name: String! -} ---type Mutation { - createUser(id: Int!, name: String!): User -} --type Subscription { - onUserCreated: User! -} -``` --### Example policy --```xml -<http-data-source> - <http-request> - <set-method>POST</set-method> - <set-url>https://contoso.com/api/user</set-url> - <set-body template="liquid">{ "id" : {{body.arguments.id}}, "name" : "{{body.arguments.name}}"}</set-body> - </http-request> - <http-response> - <publish-event> - <targets> - <graphql-subscription id="onUserCreated" /> - </targets> - </publish-event> - </http-response> -</http-data-source> -``` --## Related policies --* [GraphQL resolver policies](api-management-policies.md#graphql-resolver-policies) - |
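On the client side, a subscriber connected over the graphql-ws protocol registers for these events with an ordinary GraphQL subscription operation against the same schema, for example:

```
subscription {
  onUserCreated {
    id
    name
  }
}
```

Each time the `createUser` mutation runs and the policy publishes an event, connected subscribers receive the new user's `id` and `name`.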
api-management | Set Graphql Resolver Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-graphql-resolver-policy.md | Title: Azure API Management policy reference - set-graphql-resolver | Microsoft Docs -description: Reference for the set-graphql-resolver policy in Azure API Management. Provides policy usage, settings, and examples. This policy is retired. +description: Reference for the set-graphql-resolver policy available for use in Azure API Management. Provides policy usage, settings, and examples. - Previously updated : 02/09/2023+ Last updated : 12/07/2022 -# Set GraphQL resolver (retired) --> [!IMPORTANT] -> * The `set-graphql-resolver` policy is retired. Customers using the `set-graphql-resolver` policy must migrate to the [managed resolvers](configure-graphql-resolver.md) for GraphQL APIs, which provide enhanced functionality. -> * After you configure a managed resolver for a GraphQL field, the gateway skips the `set-graphql-resolver` policy in any policy definitions. You can't combine use of managed resolvers and the `set-graphql-resolver` policy in your API Management instance. -+# Set GraphQL resolver The `set-graphql-resolver` policy retrieves or sets data for a GraphQL field in an object type specified in a GraphQL schema. The schema must be imported to API Management. Currently the data must be resolved using an HTTP-based data source (REST or SOAP API). + [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)] + ## Policy statement ```xml The `set-graphql-resolver` policy retrieves or sets data for a GraphQL field in * This policy is invoked only when a matching GraphQL query is executed. * The policy resolves data for a single field. To resolve data for multiple fields, configure multiple occurrences of this policy in a policy definition. + ## GraphQL context * The context for the HTTP request and HTTP response (if specified) differs from the context for the original gateway API request: |
app-service | Configure Connect To Azure Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-connect-to-azure-storage.md | +> [!NOTE] +> When using VNET integration on your web app, the mounted drive will use an RFC 1918 IP address and not an IP address from your VNET. +> ::: zone pivot="code-windows" This guide shows how to mount Azure Storage Files as a network share in Windows code (non-container) in App Service. Only [Azure Files Shares](../storage/files/storage-how-to-use-files-portal.md) and [Premium Files Shares](../storage/files/storage-how-to-create-file-share.md) are supported. The benefits of custom-mounted storage include: |
automation | Add User Assigned Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/add-user-assigned-identity.md | If you don't have an Azure subscription, create a [free account](https://azure.m - An Azure Automation account. For instructions, see [Create an Azure Automation account](./quickstarts/create-azure-automation-account-portal.md). -- The user-assigned managed identity and the target Azure resources that your runbook manages using that identity can be in different Azure subscriptions.+- The [user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) and the target Azure resources that your runbook manages using that identity can be in different Azure subscriptions. - The latest version of Azure Account modules. Currently this is 2.2.8. (See [Az.Accounts](https://www.powershellgallery.com/packages/Az.Accounts/) for details about this version.) print(response.text) - If you need to disable a managed identity, see [Disable your Azure Automation account managed identity](disable-managed-identity-for-automation.md). -- For an overview of Azure Automation account security, see [Automation account authentication overview](automation-security-overview.md).+- For an overview of Azure Automation account security, see [Automation account authentication overview](automation-security-overview.md). |
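In a PowerShell runbook, authenticating with the user-assigned managed identity typically looks like the following sketch; the client ID and resource group name are placeholders, not values from this article:

```powershell
# Authenticate the runbook session using the user-assigned managed identity.
# <client-id> is the application (client) ID of the user-assigned identity.
Connect-AzAccount -Identity -AccountId "<client-id>" | Out-Null

# Subsequent Az cmdlets run in the identity's security context, for example:
Get-AzVM -ResourceGroupName "<resource-group>" | Select-Object Name, ProvisioningState
```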
automation | Automation Solution Vm Management Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management-config.md | Title: Configure Azure Automation Start/Stop VMs during off-hours description: This article tells how to configure the Start/Stop VMs during off-hours feature to support different use cases or scenarios. Previously updated : 02/28/2023 Last updated : 03/07/2023 -> Start/Stop VM during off-hours version 1 is unavailable in the marketplace now as it will retire by 30 September 2023. We recommend you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until 30 September 2023. The details of the announcement will be shared on March 31st 2023. +> Start/Stop VM during off-hours version 1 is unavailable in the marketplace now as it will retire by 30 September 2023. We recommend you start using [version 2](../azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until 30 September 2023. The details of the announcement will be shared on March 31st 2023. This article describes how to configure the [Start/Stop VMs during off-hours](automation-solution-vm-management.md) feature to support the described scenarios. You can also learn how to: |
azure-arc | Cluster Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md | Before you begin, review the [conceptual overview of the cluster connect feature - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md). - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version. -- Enable the below endpoints for outbound access in addition to the ones mentioned under [connecting a Kubernetes cluster to Azure Arc](quickstart-connect-cluster.md#meet-network-requirements):+- In addition to meeting the [network requirements for Arc-enabled Kubernetes](network-requirements.md), enable these endpoints for outbound access: | Endpoint | Port | |-|-| Before you begin, review the [conceptual overview of the cluster connect feature - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md). - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version. -- Enable the below endpoints for outbound access in addition to the ones mentioned under [connecting a Kubernetes cluster to Azure Arc](quickstart-connect-cluster.md#meet-network-requirements):+- In addition to meeting the [network requirements for Arc-enabled Kubernetes](network-requirements.md), enable these endpoints for outbound access: | Endpoint | Port | |-|-| |
azure-arc | Conceptual Agent Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-agent-overview.md | Azure Arc agents are deployed on Kubernetes clusters when you [connect them to A ## Deploy agents to your cluster -Most on-premises datacenters enforce strict network rules that prevent inbound communication on the network boundary firewall. Azure Arc-enabled Kubernetes works with these restrictions by not requiring inbound ports on the firewall. Azure Arc agents require outbound communication to a [set list of network endpoints](quickstart-connect-cluster.md#meet-network-requirements). +Most on-premises datacenters enforce strict network rules that prevent inbound communication on the network boundary firewall. Azure Arc-enabled Kubernetes works with these restrictions by not requiring inbound ports on the firewall. Azure Arc agents require outbound communication to a [set list of network endpoints](network-requirements.md). :::image type="content" source="media/architectural-overview.png" alt-text="Diagram showing an architectural overview of the Azure Arc-enabled Kubernetes agents." lightbox="media/architectural-overview.png"::: |
azure-arc | Diagnose Connection Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/diagnose-connection-issues.md | Be sure that the Microsoft.Kubernetes, Microsoft.KubernetesConfiguration, and Mi ### Are all network requirements met? -Review the [network requirements](quickstart-connect-cluster.md#meet-network-requirements) and ensure that no required endpoints are blocked. +Review the [network requirements](network-requirements.md) and ensure that no required endpoints are blocked. ### Are all pods in the `azure-arc` namespace running? az connectedk8s connect --name <cluster-name> --resource-group <resource-group> ### Is the proxy server able to reach required network endpoints? -Review the [network requirements](quickstart-connect-cluster.md#meet-network-requirements) and ensure that no required endpoints are blocked. +Review the [network requirements](network-requirements.md) and ensure that no required endpoints are blocked. ### Is the proxy server only using HTTP? |
azure-arc | Network Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/network-requirements.md | + + Title: Azure Arc-enabled Kubernetes network requirements +description: Learn about the networking requirements to connect Kubernetes clusters to Azure Arc. Last updated : 03/07/2023+++++# Azure Arc-enabled Kubernetes network requirements ++This topic describes the networking requirements for connecting a Kubernetes cluster to Azure Arc and supporting various Arc-enabled Kubernetes scenarios. ++## Details ++++## Additional endpoints ++Depending on your scenario, you may need connectivity to other URLs, such as those used by the Azure portal, management tools, or other Azure services. In particular, review these lists to ensure that you allow connectivity to any necessary endpoints: ++- [Azure portal URLs](../../azure-portal/azure-portal-safelist-urls.md) +- [Azure CLI endpoints for proxy bypass](/cli/azure/azure-cli-endpoints) ++For a complete list of network requirements for Azure Arc features and Azure Arc-enabled services, see [Azure Arc network requirements (Consolidated)](../network-requirements-consolidated.md). ++## Next steps ++- Use our [quickstart](quickstart-connect-cluster.md) to connect your cluster. +- Review [frequently asked questions](faq.md) about Arc-enabled Kubernetes. |
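A quick way to validate outbound access from a cluster node is to probe one of the required endpoints. A minimal sketch (substitute each endpoint from the requirements list, and route through your proxy if you use one):

```bash
# Probe outbound HTTPS reachability to a required Azure Arc endpoint.
# Repeat for each endpoint in the network requirements list.
curl -sSv --max-time 10 https://management.azure.com -o /dev/null
```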
azure-arc | Plan At Scale Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/plan-at-scale-deployment.md | The purpose of this article is to ensure you're prepared for a successful deploy - Create a Kubernetes cluster using Docker for [Mac](https://docs.docker.com/docker-for-mac/#kubernetes) or [Windows](https://docs.docker.com/docker-for-windows/#kubernetes) - Self-managed Kubernetes cluster using [Cluster API](https://cluster-api.sigs.k8s.io/user/quick-start.html) -* Your machines have connectivity from your on-premises network or other cloud environment to resources in Azure, either directly or through a proxy server. More details can be found under [network prerequisites](quickstart-connect-cluster.md#meet-network-requirements). +* Your machines have connectivity from your on-premises network or other cloud environment to resources in Azure, either directly or through a proxy server. More details can be found under [network prerequisites](network-requirements.md). * A `kubeconfig` file pointing to the cluster you want to connect to Azure Arc. * 'Read' and 'Write' permissions for the user or service principal creating the Azure Arc-enabled Kubernetes resource type of `Microsoft.Kubernetes/connectedClusters`. |
azure-arc | Quickstart Connect Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md | Title: "Quickstart: Connect an existing Kubernetes cluster to Azure Arc" description: In this quickstart, you learn how to connect an Azure Arc-enabled Kubernetes cluster. Previously updated : 02/03/2023 Last updated : 03/07/2023 ms.devlang: azurecli For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable ## Prerequisites +In addition to the prerequisites below, be sure to meet all [network requirements for Azure Arc-enabled Kubernetes](network-requirements.md). + ### [Azure CLI](#tab/azure-cli) * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable * [Kubernetes in Docker (KIND)](https://kind.sigs.k8s.io/) * Create a Kubernetes cluster using Docker for [Mac](https://docs.docker.com/docker-for-mac/#kubernetes) or [Windows](https://docs.docker.com/docker-for-windows/#kubernetes) * Self-managed Kubernetes cluster using [Cluster API](https://cluster-api.sigs.k8s.io/user/quick-start.html)- * If you want to connect an OpenShift cluster to Azure Arc, you need to execute the following command just once on your cluster before running `New-AzConnectedKubernetes`: -- ```bash - oc adm policy add-scc-to-user privileged system:serviceaccount:azure-arc:azure-arc-kube-aad-proxy-sa - ``` >[!NOTE] > The cluster needs to have at least one node of operating system and architecture type `linux/amd64`. Clusters with only `linux/arm64` nodes aren't yet supported. For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable -## Meet network requirements ----For a complete list of network requirements for Azure Arc features and Azure Arc-enabled services, see [Azure Arc network requirements (Consolidated)](../network-requirements-consolidated.md). - ## Create a resource group Run the following command: |
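For example, with the Azure CLI (the group name and region here are illustrative):

```azurecli
az group create --name AzureArcTest --location EastUS --output table
```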
azure-arc | Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md | For more information, see [Debugging DNS Resolution](https://kubernetes.io/docs/ ### Outbound network connectivity issues -Issues with outbound network connectivity from the cluster may arise for different reasons. First make sure all of the [network requirements](quickstart-connect-cluster.md#meet-network-requirements) have been met. +Issues with outbound network connectivity from the cluster may arise for different reasons. First make sure all of the [network requirements](network-requirements.md) have been met. If you encounter this issue, and your cluster is behind an outbound proxy server, make sure you have passed proxy parameters during the onboarding of your cluster and that the proxy is configured correctly. For more information, see [Connect using an outbound proxy server](quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server). ### Unable to retrieve MSI certificate -Problems retrieving the MSI certificate are usually due to network issues. Check to make sure all of the [network requirements](quickstart-connect-cluster.md#meet-network-requirements) have been met, then try again. +Problems retrieving the MSI certificate are usually due to network issues. Check to make sure all of the [network requirements](network-requirements.md) have been met, then try again. ### Azure CLI is unable to download Helm chart for Azure Arc agents To resolve this issue, try the following steps. name: azure-identity-certificate ``` - To resolve this issue, try deleting the Arc deployment by running the `az connectedk8s delete` command and reinstalling it. If the issue continues to happen, it could be an issue with your proxy settings. In that case, [try connecting your cluster to Azure Arc via a proxy](./quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server) to connect your cluster to Arc via a proxy. Please also verify if all the [network prerequisites](quickstart-connect-cluster.md#meet-network-requirements) have been met. + To resolve this issue, try deleting the Arc deployment by running the `az connectedk8s delete` command and reinstalling it. If the issue continues to happen, it could be an issue with your proxy settings. In that case, [try connecting your cluster to Azure Arc via a proxy](./quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server). Please also verify that all the [network prerequisites](network-requirements.md) have been met. 4. If the `clusterconnect-agent` and the `config-agent` pods are running, but the `kube-aad-proxy` pod is missing, check your pod security policies. This pod uses the `azure-arc-kube-aad-proxy-sa` service account, which doesn't have admin permissions but requires the permission to mount host path. az connectedk8s proxy -n AzureArcTest -g AzureArcTest Hybrid connection for the target resource does not exist. Agent might not have started successfully. ``` -Be sure to use the `connectedk8s` Azure CLI extension with version >= 1.2.0, then [connect your cluster again](quickstart-connect-cluster.md) to Azure Arc. Also, verify that you've met all the [network prerequisites](quickstart-connect-cluster.md#meet-network-requirements) needed for Arc-enabled Kubernetes. +Be sure to use the `connectedk8s` Azure CLI extension with version >= 1.2.0, then [connect your cluster again](quickstart-connect-cluster.md) to Azure Arc. 
Also, verify that you've met all the [network prerequisites](network-requirements.md) needed for Arc-enabled Kubernetes. If your cluster is behind an outbound proxy or firewall, verify that websocket connections are enabled for `*.servicebus.windows.net`, which is required specifically for the [Cluster Connect](cluster-connect.md) feature. |
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/overview.md | Title: Overview of the Azure Connected System Center Virtual Machine Manager (preview) description: This article provides a detailed overview of the Azure Arc-enabled System Center Virtual Machine Manager (preview). Previously updated : 01/27/2023 Last updated : 03/07/2023 ms. |
azure-maps | About Azure Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md | Azure Maps is a collection of geospatial services and SDKs that use fresh mappin Additionally, Azure Maps services are available through the Web SDK and the Android SDK. These tools help developers quickly develop and scale solutions that integrate location information into Azure solutions. -You can sign up for a free [Azure Maps account](https://azure.microsoft.com/services/azure-maps/) and start developing. +You can sign up for a free [Azure Maps account] and start developing. The following video explains Azure Maps in depth: Azure Maps consists of the following services that can provide geographic contex ### Data service -Data is imperative for maps. Use the Data service to upload and store geospatial data for use with spatial operations or image composition. Bringing customer data closer to the Azure Maps service will reduce latency, increase productivity, and create new scenarios in your applications. For details on this service, see the [Data service documentation](/rest/api/maps/data-v2). +Data is imperative for maps. Use the Data service to upload and store geospatial data for use with spatial operations or image composition. Bringing customer data closer to the Azure Maps service will reduce latency, increase productivity, and create new scenarios in your applications. For details on this service, see [Data service]. ### Geolocation service For more details, read the [Geolocation service documentation](/rest/api/maps/ge ### Render service -[Render service V2](/rest/api/maps/render-v2) introduces a new version of the [Get Map Tile V2 API](/rest/api/maps/render-v2/get-map-tile) that supports using Azure Maps tiles not only in the Azure Maps SDKs but other map controls as well. It includes raster and vector tile formats, 256x256 or 512x512 (where applicable) tile sizes and numerous map types such as road, weather, contour, or map tiles created using Azure Maps Creator. For a complete list, see [TilesetID](/rest/api/maps/render-v2/get-map-tile#tilesetid) in the REST API documentation. It's recommended that you use Render service V2 instead of Render service V1. You're required to display the appropriate copyright attribution on the map anytime you use the Azure Maps Render service V2, either as basemaps or layers, in any third-party map control. For more information, see [How to use the Get Map Attribution API](how-to-show-attribution.md). +[Render service V2](/rest/api/maps/render-v2) introduces a new version of the [Get Map Tile V2 API](/rest/api/maps/render-v2/get-map-tile) that supports using Azure Maps tiles not only in the Azure Maps SDKs but also in other map controls. It includes raster and vector tile formats, 256x256 or 512x512 (where applicable) tile sizes, and numerous map types such as road, weather, contour, or map tiles created using Azure Maps Creator. For a complete list, see [TilesetID] in the REST API documentation. It's recommended that you use Render service V2 instead of Render service V1. You're required to display the appropriate copyright attribution on the map anytime you use the Azure Maps Render service V2, either as basemaps or layers, in any third-party map control. For more information, see [How to use the Get Map Attribution API](how-to-show-attribution.md). 
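For instance, a road tile can be requested directly from the Render service V2 along these lines; the api-version and tile coordinates shown are illustrative, so check the Get Map Tile V2 reference for current values:

```http
GET https://atlas.microsoft.com/map/tile?api-version=2.1&tilesetId=microsoft.base.road&zoom=10&x=163&y=355&subscription-key={Your-Azure-Maps-Subscription-key}
```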
:::image type="content" source="./media/about-azure-maps/intro_map.png" border="false" alt-text="Example of a map from the Render service V2"::: Maps Creator service is a suite of web services that developers can use to creat Maps Creator provides the following -* [Dataset service][Dataset service]. Use the Dataset service to create a dataset from a converted drawing package data. For information about Drawing package requirements, see Drawing package requirements. +* [Dataset service]. Use the Dataset service to create a dataset from converted drawing package data. For information about drawing package requirements, see Drawing package requirements. -* [Conversion service][Conversion service]. Use the Conversion service to convert a DWG design file into drawing package data for indoor maps. +* [Conversion service]. Use the Conversion service to convert a DWG design file into drawing package data for indoor maps. -* [Tileset service][Tileset]. Use the Tileset service to create a vector-based representation of a dataset. Applications can use a tileset to present a visual tile-based view of the dataset. +* [Tileset service]. Use the Tileset service to create a vector-based representation of a dataset. Applications can use a tileset to present a visual tile-based view of the dataset. -* [Custom styling service][Custom styling] (preview). Use the [style service][style] or [visual style editor][style editor] to customize the visual elements of an indoor map. +* [Custom styling service] (preview). Use the [style service] or [visual style editor] to customize the visual elements of an indoor map. -* [Feature State service][FeatureState]. Use the Feature State service to support dynamic map styling. Dynamic map styling allows applications to reflect real-time events on spaces provided by IoT systems. +* [Feature State service]. Use the Feature State service to support dynamic map styling. Dynamic map styling allows applications to reflect real-time events on spaces provided by IoT systems. -* [WFS service][WFS]. Use the WFS service to query your indoor map data. The WFS service follows the [Open Geospatial Consortium API](https://docs.opengeospatial.org/is/17-069r3/17-069r3.html) standards for querying a single dataset. +* [WFS service]. Use the WFS service to query your indoor map data. The WFS service follows the [Open Geospatial Consortium API] standards for querying a single dataset. -* [Wayfinding service][wayfinding-preview] (preview). Use the [wayfinding API][wayfind] to generate a path between two points within a facility. Use the [routeset API][routeset] to create the data that the wayfinding service needs to generate paths. +* [Wayfinding service] (preview). Use the [wayfinding API] to generate a path between two points within a facility. Use the [routeset API] to create the data that the wayfinding service needs to generate paths. ### Elevation service For more information, see the [Get started with Azure Maps Power BI visual](powe ## Usage -To access Azure Maps services, go to the [Azure portal](https://portal.azure.com) and create an Azure Maps account. +To access Azure Maps services, go to the [Azure portal] and create an Azure Maps account. Azure Maps uses a key-based authentication scheme. When you create your account, two keys are generated. To authenticate for Azure Maps services, you can use either key. 
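As an example of key-based authentication, a minimal Search service request passes one of the account keys as the `subscription-key` parameter (the key and query here are placeholders):

```http
GET https://atlas.microsoft.com/search/address/json?api-version=1.0&query=400%20Broad%20St%2C%20Seattle&subscription-key={Your-Azure-Maps-Subscription-key}
```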
Try a sample app that showcases Azure Maps: Stay up to date on Azure Maps: -[Azure Maps blog](https://azure.microsoft.com/blog/topics/azure-maps/) +[Azure Maps blog] +[Data service]: /rest/api/maps/data-v2 [Dataset service]: creator-indoor-maps.md#datasets [Conversion service]: creator-indoor-maps.md#convert-a-drawing-package-[Tileset]: creator-indoor-maps.md#tilesets -[Custom styling]: creator-indoor-maps.md#custom-styling-preview -[style]: /rest/api/maps/v20220901preview/style -[style editor]: https://azure.github.io/Azure-Maps-Style-Editor -[FeatureState]: creator-indoor-maps.md#feature-statesets -[WFS]: creator-indoor-maps.md#web-feature-service-api -[wayfinding-preview]: creator-indoor-maps.md#wayfinding-preview -[wayfind]: /rest/api/maps/v20220901preview/wayfinding -[routeset]: /rest/api/maps/v20220901preview/routeset +[Tileset service]: creator-indoor-maps.md#tilesets +[Custom styling service]: creator-indoor-maps.md#custom-styling-preview +[style service]: /rest/api/maps/v20220901preview/style +[visual style editor]: https://azure.github.io/Azure-Maps-Style-Editor +[Feature State service]: creator-indoor-maps.md#feature-statesets +[WFS service]: creator-indoor-maps.md#web-feature-service-api +[Wayfinding service]: creator-indoor-maps.md#wayfinding-preview +[wayfinding API]: /rest/api/maps/v20220901preview/wayfinding +[routeset API]: /rest/api/maps/v20220901preview/routeset +[Open Geospatial Consortium API]: https://docs.opengeospatial.org/is/17-069r3/17-069r3.html +[Azure portal]: https://portal.azure.com +[Azure Maps account]: https://azure.microsoft.com/services/azure-maps/ +[TilesetID]: /rest/api/maps/render-v2/get-map-tile#tilesetid +[Azure Maps blog]: https://azure.microsoft.com/blog/topics/azure-maps/ |
azure-maps | Add Tile Layer Map Ios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-tile-layer-map-ios.md | A Tile layer loads in tiles from a server. These images can be pre-rendered and 

* X, Y, Zoom notation - Based on the zoom level, x is the column and y is the row position of the tile in the tile grid. * Quadkey notation - Combines x, y, and zoom information into a single string value that is a unique identifier for a tile.-* Bounding Box - Bounding box coordinates can be used to specify an image in the format `{west},{south},{east},{north}`, which is commonly used by [web-mapping Services (WMS)](https://www.opengeospatial.org/standards/wms). +* Bounding Box - Bounding box coordinates can be used to specify an image in the format `{west},{south},{east},{north}`, which is commonly used by [web-mapping services (WMS)](https://www.opengeospatial.org/standards/wms). 

> [!TIP] > A TileLayer is a great way to visualize large data sets on the map. Not only can a tile layer be generated from an image, but vector data can also be rendered as a tile layer. By rendering vector data as a tile layer, the map control only needs to load the tiles, which can be much smaller in file size than the vector data they represent. This technique is used by many who need to render millions of rows of data on the map. |
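Quadkey notation can be derived from X, Y, Zoom notation by interleaving the bits of x and y, one base-4 digit per zoom level. A sketch of the standard conversion in Swift (the helper name is illustrative, not part of the SDK):

```swift
// Convert tile x/y/zoom coordinates into a quadkey string.
// One base-4 digit is emitted per zoom level, from most to least significant.
func tileToQuadkey(x: Int, y: Int, zoom: Int) -> String {
    var quadkey = ""
    for i in stride(from: zoom, through: 1, by: -1) {
        var digit = 0
        let mask = 1 << (i - 1)
        if x & mask != 0 { digit += 1 }
        if y & mask != 0 { digit += 2 }
        quadkey.append(String(digit))
    }
    return quadkey
}

// Example: tile (3, 5) at zoom 3 yields quadkey "213".
```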
azure-maps | Azure Maps Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-authentication.md | For general information about authenticating with Azure AD, see [Authentication ## Managed identities for Azure resources and Azure Maps -[Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) provide Azure services with an automatically managed application based security principal that can authenticate with Azure AD. With Azure role-based access control (Azure RBAC), the managed identity security principal can be authorized to access Azure Maps services. Some examples of managed identities include: Azure App Service, Azure Functions, and Azure Virtual Machines. For a list of managed identities, see [Services that support managed identities for Azure resources](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md). For more information on managed identities, see [Manage authentication in Azure Maps](./how-to-manage-authentication.md). +[Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) provide Azure services with an automatically managed application based security principal that can authenticate with Azure AD. With Azure role-based access control (Azure RBAC), the managed identity security principal can be authorized to access Azure Maps services. Some examples of managed identities include: Azure App Service, Azure Functions, and Azure Virtual Machines. For a list of managed identities, see [Azure services that can use managed identities to access other services](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md). For more information on managed identities, see [Manage authentication in Azure Maps](./how-to-manage-authentication.md). ### Configure application Azure AD authentication The following role definition types exist to support application scenarios. Some Azure Maps services may require elevated privileges to perform write or delete actions on Azure Maps REST APIs. Azure Maps Data Contributor role is required for services, which provide write or delete actions. The following table describes what services Azure Maps Data Contributor is applicable when using write or delete actions. When only read actions are required, the Azure Maps Data Reader role can be used in place of the Azure Maps Data Contributor role. -| Azure Maps Service | Azure Maps Role Definition | +| Azure Maps service | Azure Maps Role Definition | | : | :-- |-| [Data](/rest/api/maps/data) | Azure Maps Data Contributor | -| [Creator](/rest/api/maps-creator/) | Azure Maps Data Contributor | -| [Spatial](/rest/api/maps/spatial) | Azure Maps Data Contributor | +| [Data](/rest/api/maps/data) | Azure Maps Data Contributor | +| [Creator](/rest/api/maps-creator/) | Azure Maps Data Contributor | +| [Spatial](/rest/api/maps/spatial) | Azure Maps Data Contributor | | Batch [Search](/rest/api/maps/search) and [Route](/rest/api/maps/route) | Azure Maps Data Contributor | For information about viewing your Azure RBAC settings, see [How to configure Azure RBAC for Azure Maps](./how-to-manage-authentication.md). 
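Granting a security principal one of these roles is a standard role assignment. A sketch using the Azure CLI, with placeholder IDs scoped to a single Azure Maps account:

```azurecli
az role assignment create \
  --assignee "<principal-object-id>" \
  --role "Azure Maps Data Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Maps/accounts/<account-name>"
```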
Assigning a role assignment to a resource group can enable access to multiple Az ## Disable local authentication -Azure Maps accounts support the standard Azure property in the [Azure Maps Management REST API](/rest/api/maps-management/) for `Microsoft.Maps/accounts` called `disableLocalAuth`. When `true`, all authentication to the Azure Maps data-plane REST API is disabled, except [Azure AD authentication](./azure-maps-authentication.md#azure-ad-authentication). This is configured using Azure Policy to control distribution and management of shared keys and SAS tokens. For more information, see [What is Azure Policy?](../governance/policy/overview.md). +Azure Maps accounts support the standard Azure property in the [Management API](/rest/api/maps-management/) for `Microsoft.Maps/accounts` called `disableLocalAuth`. When `true`, all authentication to the Azure Maps data-plane REST API is disabled, except [Azure AD authentication](./azure-maps-authentication.md#azure-ad-authentication). This is configured using Azure Policy to control distribution and management of shared keys and SAS tokens. For more information, see [What is Azure Policy?](../governance/policy/overview.md). Disabling local authentication doesn't take effect immediately. Allow a few minutes for the service to block future authentication requests. To re-enable local authentication, set the property to `false` and after a few minutes local authentication will resume. Consider the application topology where the endpoint `https://us.atlas.microsoft As described in [Azure Maps rate limits](./azure-maps-qps-rate-limits.md), individual service offerings have varying rate limits that are enforced as an aggregate of the account. -Consider the case of **Search Service - Non-Batch Reverse**, with its limit of 250 queries per second (QPS) for the following tables. Each table represents estimated total successful transactions from example usage. +Consider the case of **Search service - Non-Batch Reverse**, with its limit of 250 queries per second (QPS) for the following tables. Each table represents estimated total successful transactions from example usage. -The first table shows one token that has a maximum request per second of 500, and then actual usage of the application was 500 request per second for a duration of 60 seconds. **Search Service - Non-Batch Reverse** has a rate limit of 250, meaning of the total 30,000 requests made in the 60 seconds; 15,000 of those requests will be billable transactions. The remaining requests will result in status code `429 (TooManyRequests)`. +The first table shows one token that has a maximum request per second of 500, and then actual usage of the application was 500 request per second for a duration of 60 seconds. **Search service - Non-Batch Reverse** has a rate limit of 250, meaning of the total 30,000 requests made in the 60 seconds; 15,000 of those requests will be billable transactions. The remaining requests will result in status code `429 (TooManyRequests)`. | Name | Approximate Maximum Rate Per Second | Actual Rate Per Second | Duration of sustained rate in seconds | Approximate total successful transactions | | :- | :- | : | : | :- | |
azure-maps | Azure Maps Qps Rate Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-qps-rate-limits.md | Azure Maps does not have any maximum daily limits on the number of requests that Below are the QPS usage limits for each Azure Maps service by Pricing Tier. -| Azure Maps Service | QPS Limit: Gen 2 Pricing Tier | QPS Limit: Gen 1 S1 Pricing Tier | QPS Limit: Gen 1 S0 Pricing Tier | +| Azure Maps service | QPS Limit: Gen 2 Pricing Tier | QPS Limit: Gen 1 S1 Pricing Tier | QPS Limit: Gen 1 S0 Pricing Tier | | -- | :--: | :: | :: |-| Copyright Service | 10 | 10 | 10 | +| Copyright service | 10 | 10 | 10 | | Creator - Alias, TilesetDetails | 10 | Not Available | Not Available | | Creator - Conversion, Dataset, Feature State, WFS | 50 | Not Available | Not Available |-| Data Service | 50 | 50 | Not Available | -| Elevation Service | 50 | 50 | Not Available | -| Geolocation Service | 50 | 50 | 50 | -| Render Service - Contour tiles, Digital Elevation Model (DEM) tiles and Customer tiles | 50 | 50 | Not Available | -| Render Service - Traffic tiles and Static maps | 50 | 50 | 50 | -| Render Service - Road tiles | 500 | 500 | 50 | -| Render Service - Satellite tiles | 250 | 250 | Not Available | -| Render Service - Weather tiles | 100 | 100 | 50 | -| Route Service - Batch | 10 | 10 | Not Available | -| Route Service - Non-Batch | 50 | 50 | 50 | -| Search Service - Batch | 10 | 10 | Not Available | -| Search Service - Non-Batch | 500 | 500 | 50 | -| Search Service - Non-Batch Reverse | 250 | 250 | 50 | -| Spatial Service | 50 | 50 | Not Available | -| Timezone Service | 50 | 50 | 50 | -| Traffic Service | 50 | 50 | 50 | -| Weather Service | 50 | 50 | 50 | +| Data service | 50 | 50 | Not Available | +| Elevation service | 50 | 50 | Not Available | +| Geolocation service | 50 | 50 | 50 | +| Render service - Contour tiles, Digital Elevation Model (DEM) tiles and Customer tiles | 50 | 50 | Not Available | +| Render service - Traffic tiles and Static maps | 50 | 50 | 50 | +| Render service - Road tiles | 500 | 500 | 50 | +| Render service - Satellite tiles | 250 | 250 | Not Available | +| Render service - Weather tiles | 100 | 100 | 50 | +| Route service - Batch | 10 | 10 | Not Available | +| Route service - Non-Batch | 50 | 50 | 50 | +| Search service - Batch | 10 | 10 | Not Available | +| Search service - Non-Batch | 500 | 500 | 50 | +| Search service - Non-Batch Reverse | 250 | 250 | 50 | +| Spatial service | 50 | 50 | Not Available | +| Timezone service | 50 | 50 | 50 | +| Traffic service | 50 | 50 | 50 | +| Weather service | 50 | 50 | 50 | When QPS limits are reached, an HTTP 429 error will be returned. If you are using the Gen 2 or Gen 1 S1 pricing tiers, you can create an Azure Maps *Technical* Support Request in the [Azure portal](https://portal.azure.com/) to increase a specific QPS limit if needed. QPS limits for the Gen 1 S0 pricing tier cannot be increased. |
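Because throttled calls return HTTP 429, client code should back off and retry rather than fail outright. A minimal sketch in Python; the `Retry-After` handling is an assumption, so adapt it to whatever headers the service actually returns:

```python
import time
import requests

def get_with_backoff(url, params, max_retries=5):
    """GET an Azure Maps endpoint, backing off while throttled (HTTP 429)."""
    delay = 1.0
    response = None
    for _ in range(max_retries):
        response = requests.get(url, params=params)
        if response.status_code != 429:
            return response
        # Prefer the service-supplied Retry-After header when present (assumption).
        delay = float(response.headers.get("Retry-After", delay))
        time.sleep(delay)
        delay *= 2  # Exponential backoff between attempts.
    return response
```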
azure-maps | Creator Facility Ontology | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-facility-ontology.md | Facility ontology defines how Azure Maps Creator internally stores facility data :::zone pivot="facility-ontology-v1" -The Facility 1.0 contains revisions for the Facility feature class definitions for [Azure Maps Services](https://aka.ms/AzureMaps). +The Facility 1.0 contains revisions for the Facility feature class definitions for [Azure Maps services](https://aka.ms/AzureMaps). :::zone-end :::zone pivot="facility-ontology-v2" -The Facility 2.0 contains revisions for the Facility feature class definitions for [Azure Maps Services](https://aka.ms/AzureMaps). +The Facility 2.0 contains revisions for the Facility feature class definitions for [Azure Maps services](https://aka.ms/AzureMaps). :::zone-end |
azure-maps | Creator Indoor Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-indoor-maps.md | Title: Work with indoor maps in Azure Maps Creator + Title: Work with indoor maps in Azure Maps Creator + description: This article introduces concepts that apply to Azure Maps Creator services The following diagram illustrates the entire workflow. ## Create Azure Maps Creator -To use Creator services, an Azure Maps Creator resource must be created and associated to an Azure Maps account with the Gen 2 pricing tier. For information about how to create an Azure Maps Creator resource in Azure, see [Manage Azure Maps Creator](how-to-manage-creator.md). +To use Creator services, an Azure Maps Creator resource must be created and associated to an Azure Maps account with the Gen 2 pricing tier. For information about how to create an Azure Maps Creator resource in Azure, see [Manage Azure Maps Creator]. > [!TIP]-> For pricing information see the *Creator* section in [Azure Maps pricing](https://aka.ms/CreatorPricing). +> For pricing information see the *Creator* section in [Azure Maps pricing]. ## Creator authentication Creator inherits Azure Maps Access Control (IAM) settings. All API calls for data access must be sent with authentication and authorization rules. -Creator usage data is incorporated in your Azure Maps usage charts and activity log. For more information, see [Manage authentication in Azure Maps](./how-to-manage-authentication.md). +Creator usage data is incorporated in your Azure Maps usage charts and activity log. For more information, see [Manage authentication in Azure Maps]. >[!Important] >We recommend using: >-> - Azure Active Directory (Azure AD) in all solutions that are built with an Azure Maps account using Creator services. For more information about Azure AD, see [Azure AD authentication](azure-maps-authentication.md#azure-ad-authentication). +> - Azure Active Directory (Azure AD) in all solutions that are built with an Azure Maps account using Creator services. For more information about Azure AD, see [Azure AD authentication]. >->- Role-based access control settings. Using these settings, map makers can act as the Azure Maps Data Contributor role, and Creator map data users can act as the Azure Maps Data Reader role. For more information, see [Authorization with role-based access control](azure-maps-authentication.md#authorization-with-role-based-access-control). +>- Role-based access control settings. Using these settings, map makers can act as the Azure Maps Data Contributor role, and Creator map data users can act as the Azure Maps Data Reader role. For more information, see [Authorization with role-based access control]. ## Creator data item types Creator services create, store, and use various data types that are defined and ## Upload a drawing package -Creator collects indoor map data by converting an uploaded drawing package. The drawing package represents a constructed or remodeled facility. For information about drawing package requirements, see [Drawing package requirements](drawing-requirements.md). +Creator collects indoor map data by converting an uploaded drawing package. The drawing package represents a constructed or remodeled facility. For information about drawing package requirements, see [Drawing package requirements]. -Use the [Azure Maps Data Upload API](/rest/api/maps/data-v2/update) to upload a drawing package. After the Drawing packing is uploaded, the Data Upload API returns a user data identifier (`udid`). 
The `udid` can then be used to convert the uploaded package into indoor map data. +Use [Data Upload] to upload a drawing package. After the drawing package is uploaded, the Data Upload API returns a user data identifier (`udid`). The `udid` can then be used to convert the uploaded package into indoor map data. ## Convert a drawing package -The [Azure Maps Conversion service](/rest/api/maps/v2/conversion) converts an uploaded drawing package into indoor map data. The Conversion service also validates the package. Validation issues are classified into two types: +The [Conversion service](/rest/api/maps/v2/conversion) converts an uploaded drawing package into indoor map data. The Conversion service also validates the package. Validation issues are classified into two types: -- Errors: If any errors are detected, the conversion process fails. When an error occurs, the Conversion service provides a link to the [Azure Maps Drawing Error Visualizer](drawing-error-visualizer.md) stand-alone web application. You can use the Drawing Error Visualizer to inspect [drawing package warnings and errors](drawing-conversion-error-codes.md) that occurred during the conversion process. After you fix the errors, you can attempt to upload and convert the package.+- Errors: If any errors are detected, the conversion process fails. When an error occurs, the Conversion service provides a link to the [Azure Maps Drawing Error Visualizer] stand-alone web application. You can use the Drawing Error Visualizer to inspect [Drawing package warnings and errors] that occurred during the conversion process. After you fix the errors, you can attempt to upload and convert the package. - Warnings: If any warnings are detected, the conversion succeeds. However, we recommend that you review and resolve all warnings. A warning means that part of the conversion was ignored or automatically fixed. Failing to resolve the warnings could result in errors in later processes.-For more information, see [Drawing package warnings and errors](drawing-conversion-error-codes.md). +For more information, see [Drawing package warnings and errors]. ## Create indoor map data Azure Maps Creator provides the following services that support map creation: -- [Dataset service](/rest/api/maps/v2/dataset).-- [Tileset service](/rest/api/maps/v2/tileset).+- [Dataset service]. +- [Tileset service]. Use the Tileset service to create a vector-based representation of a dataset. Applications can use a tileset to present a visual tile-based view of the dataset.-- [Custom styling service](#custom-styling-preview). Use the [style service][style] or [visual style editor][style editor] to customize the visual elements of an indoor map.-- [Feature State service](/rest/api/maps/v2/feature-state). Use the Feature State service to support dynamic map styling. Applications can use dynamic map styling to reflect real-time events on spaces provided by the IoT system.
+- [Wayfinding service](#wayfinding-preview). Use the [wayfinding] API to generate a path between two points within a facility. Use the [routeset] API to create the data that the wayfinding service needs to generate paths. ### Datasets -A dataset is a collection of indoor map features. The indoor map features represent facilities that are defined in a converted drawing package. After you create a dataset with the [Dataset service](/rest/api/maps/v2/dataset), you can create any number of [tilesets](#tilesets) or [feature statesets](#feature-statesets). +A dataset is a collection of indoor map features. The indoor map features represent facilities that are defined in a converted drawing package. After you create a dataset with the [Dataset service], you can create any number of [tilesets](#tilesets) or [feature statesets](#feature-statesets). -At any time, developers can use the [Dataset service](/rest/api/maps/v2/dataset) to add or remove facilities to an existing dataset. For more information about how to update an existing dataset using the API, see the append options in [Dataset service](/rest/api/maps/v2/dataset). For an example of how to update a dataset, see [Data maintenance](#data-maintenance). +At any time, developers can use the [Dataset service] to add facilities to, or remove them from, an existing dataset. For more information about how to update an existing dataset using the API, see the append options in [Dataset service]. For an example of how to update a dataset, see [Data maintenance](#data-maintenance). ### Tilesets -A tileset is a collection of vector data that represents a set of uniform grid tiles. Developers can use the [Tileset service](/rest/api/maps/v2/tileset) to create tilesets from a dataset. +A tileset is a collection of vector data that represents a set of uniform grid tiles. Developers can use the [Tileset service] to create tilesets from a dataset. To reflect different content stages, you can create multiple tilesets from the same dataset. For example, you can make one tileset with furniture and equipment, and another tileset without furniture and equipment. You might choose to generate one tileset with the most recent data updates, and another tileset without the most recent data updates. -In addition to the vector data, the tileset provides metadata for map rendering optimization. For example, tileset metadata contains a minimum and maximum zoom level for the tileset. The metadata also provides a bounding box that defines the geographic extent of the tileset. An application can use a bounding box to programmatically set the correct center point. For more information about tileset metadata, see [Tileset List API](/rest/api/maps/v2/tileset/list). +In addition to the vector data, the tileset provides metadata for map rendering optimization. For example, tileset metadata contains a minimum and maximum zoom level for the tileset. The metadata also provides a bounding box that defines the geographic extent of the tileset. An application can use a bounding box to programmatically set the correct center point. For more information about tileset metadata, see [Tileset List]. After a tileset is created, it can be retrieved by the [Render V2 service](#render-v2-get-map-tile-api). If a tileset becomes outdated and is no longer useful, you can delete the tilese ### Custom styling (preview) -A style defines the visual appearance of a map. It defines what data to draw, the order to draw it in, and how to style the data when drawing it.
Azure Maps Creator styles support the MapLibre standard for [style layers][style layers] and [sprites][sprites]. +A style defines the visual appearance of a map. It defines what data to draw, the order to draw it in, and how to style the data when drawing it. Azure Maps Creator styles support the MapLibre standard for [style layers] and [sprites]. -When you convert a drawing package after uploading it to your Azure Maps account, default styles are applied to the elements of your map. The custom styling service enables you to customize the visual appearance of your map. You can do this by manually editing the style JSON and importing it into your Azure Maps account using the [Style - Create][create-style] HTTP request, however the recommended approach is to use the [visual style editor][style editor]. For more information, see [Create custom styles for indoor maps](how-to-create-custom-styles.md). +When you convert a drawing package after uploading it to your Azure Maps account, default styles are applied to the elements of your map. The custom styling service enables you to customize the visual appearance of your map. You can do this by manually editing the style JSON and importing it into your Azure Maps account using the [Style - Create] HTTP request; however, the recommended approach is to use the [visual style editor]. For more information, see [Create custom styles for indoor maps]. Example layer in the style.json file: Example layer in the style.json file: #### Map configuration -The map configuration is an array of configurations. Each configuration consists of a [basemap][basemap] and one or more layers, each layer consisting of a [style][style] + [tileset][tileset] tuple. +The map configuration is an array of configurations. Each configuration consists of a [basemap] and one or more layers, each layer consisting of a [style] + [tileset] tuple. -The map configuration is used when you [Instantiate the Indoor Manager][instantiate-indoor-manager] of a Map object when developing applications in Azure Maps. It's referenced using the `mapConfigurationId` or `alias`. Map configurations are immutable. When making changes to an existing map configuration, a new map configuration will be created, resulting in a different `mapConfingurationId`. Anytime you create a map configuration using an alias already used by an existing map configuration, it will always point to the new map configuration. +The map configuration is used when you [Instantiate the Indoor Manager] of a Map object when developing applications in Azure Maps. It's referenced using the `mapConfigurationId` or `alias`. Map configurations are immutable. When making changes to an existing map configuration, a new map configuration will be created, resulting in a different `mapConfigurationId`. Any time you create a map configuration using an alias already used by an existing map configuration, it will always point to the new map configuration. The following JSON is an example of a default map configuration. See the table below for a description of each element of the file: The following JSON is an example of a default map configuration. See the table b #### Additional information - For more information about how to modify styles using the style editor, see [Create custom styles for indoor maps][style-how-to].-- For more information on style Rest API, see [style][style] in the Maps Creator Rest API reference.+- For more information on the style REST API, see [style] in the Maps Creator REST API reference.
- For more information on the map configuration REST API, see [Creator - map configuration REST API][map-config-api]. ### Feature statesets Feature statesets are collections of dynamic properties (*states*) that are assigned to dataset features, such as rooms or equipment. An example of a *state* can be temperature or occupancy. Each *state* is a key/value pair that contains the name of the property, the value, and the timestamp of the last update. -You can use the [Feature State service](/rest/api/maps/v2/feature-state/create-stateset) to create and manage a feature stateset for a dataset. The stateset is defined by one or more *states*. Each feature, such as a room, can have one *state* attached to it. +You can use the [Feature State service] to create and manage a feature stateset for a dataset. The stateset is defined by one or more *states*. Each feature, such as a room, can have one *state* attached to it. The value of each *state* in a stateset can be updated or retrieved by IoT devices or other applications. For example, using the [Feature State Update API](/rest/api/maps/v2/feature-state/update-states), devices measuring space occupancy can systematically post the state change of a room. An application can use a feature stateset to dynamically render features in a fa ### Wayfinding (preview) -The [Wayfinding service][wayfind] enables you to provide your customers with the shortest path between two points within a facility. Once you've imported your indoor map data and created your dataset, you can use that to create a [routeset][routeset]. The routeset provides the data required to generate paths between two points. The wayfinding service takes into account things such as the minimum width of openings and may optionally exclude elevators or stairs when navigating between levels as a result. +The [Wayfinding service] enables you to provide your customers with the shortest path between two points within a facility. Once you've imported your indoor map data and created your dataset, you can use that to create a [routeset]. The routeset provides the data required to generate paths between two points. The wayfinding service takes into account factors such as the minimum width of openings, and can optionally exclude elevators or stairs when navigating between levels. -Creator wayfinding is powered by [Havok][havok]. +Creator wayfinding is powered by [Havok]. #### Wayfinding paths -When a [wayfinding path][wayfinding path] is successfully generated, it finds the shortest path between two points in the specified facility. Each floor in the journey is represented as a separate leg, as are any stairs or elevators used to move between floors. +When a [wayfinding path] is successfully generated, it finds the shortest path between two points in the specified facility. Each floor in the journey is represented as a separate leg, as are any stairs or elevators used to move between floors. For example, the first leg of the path might be from the origin to the elevator on that floor. The next leg will be the elevator, and then the final leg will be the path from the elevator to the destination. The estimated travel time is also calculated and returned in the HTTP response JSON.
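A wayfinding request of this kind isn't shown here. As a rough sketch only, assuming the `us.atlas.microsoft.com` geographical endpoint and the `2022-09-01-preview` API version used by the other preview Creator services, and with illustrative (not authoritative) query parameter names to verify against the [wayfinding] reference:

```http
GET https://us.atlas.microsoft.com/wayfinding/path?api-version=2022-09-01-preview&routesetId={routesetId}&facilityId={facilityId}&fromPoint={longitude,latitude}&fromLevel={levelOrdinal}&toPoint={longitude,latitude}&toLevel={levelOrdinal}&subscription-key={Your-Azure-Maps-Subscription-key}
```

The response JSON is expected to carry the legs described above, along with the estimated travel time.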
For wayfinding to work, the facility data must contain a [structure][structures] If the selected origin and destination are on different floors, the wayfinding service determines what [vertical penetration][verticalPenetration] objects such as stairs or elevators, are available as possible pathways for navigating vertically between levels. By default, the option that results in the shortest path will be used. -The Wayfinding service includes stairs or elevators in a path based on the value of the vertical penetration's `direction` property. For more information on the direction property, see [verticalPenetration][verticalPenetration] in the Facility Ontology article. See the `avoidFeatures` and `minWidth` properties in the [wayfinding][wayfind] API documentation to learn about other factors that can affect the path selection between floor levels. +The Wayfinding service includes stairs or elevators in a path based on the value of the vertical penetration's `direction` property. For more information on the direction property, see [verticalPenetration][verticalPenetration] in the Facility Ontology article. See the `avoidFeatures` and `minWidth` properties in the [wayfinding] API documentation to learn about other factors that can affect the path selection between floor levels. -For more information, see the [Indoor maps wayfinding service](how-to-creator-wayfinding.md) how-to article. +For more information, see the [Indoor maps wayfinding service] how-to article. ## Using indoor maps The Azure Maps [Render V2-Get Map Tile API](/rest/api/maps/render-v2/get-map-til Applications can use the Render V2-Get Map Tile API to request tilesets. The tilesets can then be integrated into a map control or SDK. For an example of a map control that uses the Render V2 service, see [Indoor Maps Module](#indoor-maps-module). -### Web Feature Service API +### Web Feature service API -You can use the [Web Feature Service (WFS) API](/rest/api/maps/v2/wfs) to query datasets. WFS follows the [Open Geospatial Consortium API Features](https://docs.opengeospatial.org/DRAFTS/17-069r4.html). You can use the WFS API to query features within the dataset itself. For example, you can use WFS to find all mid-size meeting rooms of a specific facility and floor level. +You can use the [Web Feature service] (WFS) to query datasets. WFS follows the [Open Geospatial Consortium API Features]. You can use the WFS API to query features within the dataset itself. For example, you can use WFS to find all mid-size meeting rooms of a specific facility and floor level. ### Alias API Creator services such as Conversion, Dataset, Tileset and Feature State return a ### Indoor Maps module -The [Azure Maps Web SDK](./index.yml) includes the Indoor Maps module. This module offers extended functionalities to the Azure Maps *Map Control* library. The Indoor Maps module renders indoor maps created in Creator. It integrates widgets such as *floor picker* that help users to visualize the different floors. +The [Azure Maps Web SDK] includes the Indoor Maps module. This module offers extended functionalities to the Azure Maps *Map Control* library. The Indoor Maps module renders indoor maps created in Creator. It integrates widgets such as *floor picker* that help users to visualize the different floors. -You can use the Indoor Maps module to create web applications that integrate indoor map data with other [Azure Maps services](./index.yml). 
The most common application setups include adding knowledge from other maps - such as road, imagery, weather, and transit - to indoor maps. +You can use the Indoor Maps module to create web applications that integrate indoor map data with other [Azure Maps services]. The most common application setups add data from other maps, such as road, imagery, weather, and transit, to indoor maps. -The Indoor Maps module also supports dynamic map styling. For a step-by-step walkthrough to implement feature stateset dynamic styling in an application, see [Use the Indoor Map module](how-to-use-indoor-module.md). +The Indoor Maps module also supports dynamic map styling. For a step-by-step walkthrough to implement feature stateset dynamic styling in an application, see [Use the Indoor Map module]. ### Azure Maps integration -As you begin to develop solutions for indoor maps, you can discover ways to integrate existing Azure Maps capabilities. For example, you can implement asset tracking or safety scenarios by using the [Azure Maps Geofence API](/rest/api/maps/spatial/postgeofence) with Creator indoor maps. For example, you can use the Geofence API to determine whether a worker enters or leaves specific indoor areas. For more information about how to connect Azure Maps with IoT telemetry, see [this IoT spatial analytics tutorial](tutorial-iot-hub-maps.md). +As you begin to develop solutions for indoor maps, you can discover ways to integrate existing Azure Maps capabilities. For example, you can implement asset tracking or safety scenarios by using the [Geofence service] with Creator indoor maps. For example, you can use the Geofence API to determine whether a worker enters or leaves specific indoor areas. For more information about how to connect Azure Maps with IoT telemetry, see [Tutorial: Implement IoT spatial analytics by using Azure Maps]. ### Data maintenance You can use the Azure Maps Creator List, Update, and Delete API to list, update, and delete your datasets, tilesets, and feature statesets. >[!NOTE]->When you review a list of items to determine whether to delete them, consider the impact of that deletion on all dependent API or applications. For example, if you delete a tileset that's being used by an application by means of the [Render V2-Get Map Tile API](/rest/api/maps/render-v2/get-map-tile), the application fails to render that tileset. +>When you review a list of items to determine whether to delete them, consider the impact of that deletion on all dependent APIs or applications. For example, if you delete a tileset that's being used by an application by means of the [Render V2-Get Map Tile API], the application fails to render that tileset. ### Example: Updating a dataset The following example shows how to update a dataset, create a new tileset, and delete an old tileset: 1. Follow steps in the [Upload a drawing package](#upload-a-drawing-package) and [Convert a drawing package](#convert-a-drawing-package) sections to upload and convert the new drawing package.-2. Use the [Dataset Create API](/rest/api/maps/v2/dataset/create) to append the converted data to the existing dataset. -3. Use the [Tileset Create API](/rest/api/maps/v2/tileset/create) to generate a new tileset out of the updated dataset. +2. Use [Dataset Create] to append the converted data to the existing dataset. +3. Use [Tileset Create] to generate a new tileset out of the updated dataset. 4. Save the new **tilesetId** for the next step. 5.
To enable the visualization of the updated campus dataset, update the tileset identifier in your application. If the old tileset is no longer used, you can delete it. The following example shows how to update a dataset, create a new tileset, and d > [Tutorial: Creating a Creator indoor map](tutorial-creator-indoor-maps.md) > [!div class="nextstepaction"]-> [Create custom styles for indoor maps](how-to-create-custom-styles.md) +> [Create custom styles for indoor maps] ++[Azure Maps pricing]: https://aka.ms/CreatorPricing +[Manage authentication in Azure Maps]: how-to-manage-authentication.md +[Azure AD authentication]: azure-maps-authentication.md#azure-ad-authentication +[Authorization with role-based access control]: azure-maps-authentication.md#authorization-with-role-based-access-control +[Drawing package requirements]: drawing-requirements.md +[Data Upload]: /rest/api/maps/data-v2/update [style layers]: https://docs.mapbox.com/mapbox-gl-js/style-spec/layers/#layout [sprites]: https://docs.mapbox.com/help/glossary/sprite/-[create-style]: /rest/api/maps/v20220901preview/style/create +[Style - Create]: /rest/api/maps/v20220901preview/style/create [basemap]: supported-map-styles.md+[Manage Azure Maps Creator]: how-to-manage-creator.md +[Drawing package warnings and errors]: drawing-conversion-error-codes.md +[Azure Maps Drawing Error Visualizer]: drawing-error-visualizer.md +[Create custom styles for indoor maps]: how-to-create-custom-styles.md [style]: /rest/api/maps/v20220901preview/style [tileset]: /rest/api/maps/v20220901preview/tileset+[Dataset service]: /rest/api/maps/v2/dataset +[Dataset Create]: /rest/api/maps/v2/dataset/create +[Tileset service]: /rest/api/maps/v2/tileset +[Tileset Create]: /rest/api/maps/v2/tileset/create +[Tileset List]: /rest/api/maps/v2/tileset/list +[Feature State service]: /rest/api/maps/v2/feature-state [routeset]: /rest/api/maps/v20220901preview/routeset-[wayfind]: /rest/api/maps/v20220901preview/wayfinding +[wayfinding]: /rest/api/maps/v20220901preview/wayfinding +[wayfinding service]: /rest/api/maps/v20220901preview/wayfinding [wayfinding path]: /rest/api/maps/v20220901preview/wayfinding/path [style-picker-control]: choose-map-style.md#add-the-style-picker-control [style-how-to]: how-to-create-custom-styles.md [map-config-api]: /rest/api/maps/v20220901preview/map-configuration-[instantiate-indoor-manager]: how-to-use-indoor-module.md#instantiate-the-indoor-manager -[style editor]: https://azure.github.io/Azure-Maps-Style-Editor +[Instantiate the Indoor Manager]: how-to-use-indoor-module.md#instantiate-the-indoor-manager +[visual style editor]: https://azure.github.io/Azure-Maps-Style-Editor [verticalPenetration]: creator-facility-ontology.md?pivots=facility-ontology-v2#verticalpenetration+[Indoor maps wayfinding service]: how-to-creator-wayfinding.md +[Open Geospatial Consortium API Features]: https://docs.opengeospatial.org/DRAFTS/17-069r4.html +[Web Feature service]: /rest/api/maps/v2/wfs +[Azure Maps Web SDK]: how-to-use-map-control.md +[Use the Indoor Map module]: how-to-use-indoor-module.md +[Azure Maps services]: index.yml [structures]: creator-facility-ontology.md?pivots=facility-ontology-v2#structure+[Render V2-Get Map Tile API]: /rest/api/maps/render-v2/get-map-tile +[Tutorial: Implement IoT spatial analytics by using Azure Maps]: tutorial-iot-hub-maps.md +[Geofence service]: /rest/api/maps/spatial/postgeofence [havok]: https://www.havok.com/ |
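As a hedged sketch of the HTTP calls behind steps 2 and 3 of the preceding *Example: Updating a dataset* (assuming the `us.atlas.microsoft.com` geographical endpoint and the v2 `api-version=2.0` implied by the [Dataset Create] and [Tileset Create] references):

```http
POST https://us.atlas.microsoft.com/datasets?api-version=2.0&conversionId={conversionId}&datasetId={datasetId}&subscription-key={Your-Azure-Maps-Subscription-key}

POST https://us.atlas.microsoft.com/tilesets?api-version=2.0&datasetId={datasetId}&subscription-key={Your-Azure-Maps-Subscription-key}
```

Both are long-running operations; poll the operation location each response returns until the new `tilesetId` is available, then swap that ID into your application.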
azure-maps | Drawing Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-requirements.md | -You can convert uploaded drawing packages into map data by using the [Azure Maps Conversion service](/rest/api/maps/v2/conversion). This article describes the drawing package requirements for the Conversion API. To view a sample package, you can download the sample [Drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples). +You can convert uploaded drawing packages into map data by using the Azure Maps [Conversion service]. This article describes the drawing package requirements for the Conversion API. To view a sample package, you can download the sample [Drawing package]. -For a guide on how to prepare your drawing package, see [Conversion Drawing Package Guide](drawing-package-guide.md). +For a guide on how to prepare your drawing package, see [Conversion Drawing Package Guide]. ## Prerequisites The drawing package includes drawings saved in DWG format, which is the native f You can choose any CAD software to produce the drawings in the drawing package. -The [Azure Maps Conversion service](/rest/api/maps/v2/conversion) converts the drawing package into map data. The Conversion service works with the AutoCAD DWG file format `AC1032`. +The [Conversion service] converts the drawing package into map data. The Conversion service works with the AutoCAD DWG file format `AC1032`. ## Glossary of terms A drawing package is a .zip archive that contains the following files: - DWG files in AutoCAD DWG file format. - A _manifest.json_ file that describes the DWG files in the drawing package. -The drawing package must be zipped into a single archive file, with the .zip extension. The DWG files can be organized in any way inside the package, but the manifest file must live at the root directory of the zipped package. The next sections detail the requirements for the DWG files, manifest file, and the content of these files. To view a sample package, you can download the [sample drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples). +The drawing package must be zipped into a single archive file, with the .zip extension. The DWG files can be organized in any way inside the package, but the manifest file must live at the root directory of the zipped package. The next sections detail the requirements for the DWG files, manifest file, and the content of these files. To view a sample package, you can download the [sample drawing package]. ## DWG file conversion process -The [Azure Maps Conversion service](/rest/api/maps/v2/conversion) does the following on each DWG file: +The [Conversion service] does the following on each DWG file: - Extracts feature classes: - Levels Each DWG layer must adhere to the following rules: - A layer must exclusively contain features of a single class. For example, units and walls can't be in the same layer. - A single class of features can be represented by multiple layers.-- Self-intersecting polygons are permitted, but are automatically repaired. When they repaired, the [Azure Maps Conversion service](/rest/api/maps/v2/conversion) raises a warning. It's advisable to manually inspect the repaired results, because they might not match the expected results.+- Self-intersecting polygons are permitted, but are automatically repaired. When they are repaired, the [Conversion service] raises a warning.
It's advisable to manually inspect the repaired results, because they might not match the expected results. - Each layer has a supported list of entity types. Any other entity types in a layer will be ignored. For example, text entities aren't supported on the wall layer. -The table below outlines the supported entity types and converted map features for each layer. If a layer contains unsupported entity types, then the [Azure Maps Conversion service](/rest/api/maps/v2/conversion) ignores those entities. +The table below outlines the supported entity types and converted map features for each layer. If a layer contains unsupported entity types, then the [Conversion service] ignores those entities. | Layer | Entity types | Converted Features | | :-- | :-| :- No matter how many entity drawings are in the exterior layer, the [resulting fac If the layer contains multiple overlapping PolyLines, the PolyLines are dissolved into a single Level feature. Conversely, if the layer contains multiple non-overlapping PolyLines, the resulting Level feature has a multi-polygonal representation. -You can see an example of the Exterior layer as the outline layer in the [sample drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples). +You can see an example of the Exterior layer as the outline layer in the [sample drawing package]. ### Unit layer The Units layer should adhere to the following requirements: Name a unit by creating a text object in the UnitLabel layer, and then place the object inside the bounds of the unit. For more information, see the [UnitLabel layer](#unitlabel-layer). -You can see an example of the Units layer in the [sample drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples). +You can see an example of the Units layer in the [sample drawing package]. ### Wall layer The DWG file for each level can contain a layer that defines the physical extent - Walls must be drawn as Polygon, PolyLine (closed), Circle, or Ellipse (closed). - The wall layer or layers should only contain geometry that's interpreted as building structure. -You can see an example of the Walls layer in the [sample drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples). +You can see an example of the Walls layer in the [sample drawing package]. ### Door layer The DWG file for each level can contain a Zone layer that defines the physical e Name a zone by creating a text object in the ZoneLabel layer, and placing the text object inside the bounds of the zone. For more information, see [ZoneLabel layer](#zonelabel-layer). -You can see an example of the Zone layer in the [sample drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples). +You can see an example of the Zone layer in the [sample drawing package]. ### UnitLabel layer The DWG file for each level can contain a UnitLabel layer. The UnitLabel layer a - Unit labels must fall entirely inside the bounds of their unit. - Units must not contain multiple text entities in the UnitLabel layer. -You can see an example of the UnitLabel layer in the [sample drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples). +You can see an example of the UnitLabel layer in the [sample drawing package]. ### ZoneLabel layer The DWG file for each level can contain a ZoneLabel layer. This layer adds a nam - Zone labels must fall inside the bounds of their zone. - Zones must not contain multiple text entities in the ZoneLabel layer.
-You can see an example of the ZoneLabel layer in the [sample drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples). +You can see an example of the ZoneLabel layer in the [sample drawing package]. ## Manifest file requirements -The zip folder must contain a manifest file at the root level of the directory, and the file must be named **manifest.json**. It describes the DWG files to allow the [Azure Maps Conversion service](/rest/api/maps/v2/conversion) to parse their content. Only the files identified by the manifest are ingested. Files that are in the zip folder, but aren't properly listed in the manifest, are ignored. +The zip folder must contain a manifest file at the root level of the directory, and the file must be named **manifest.json**. It describes the DWG files to allow the [Conversion service] to parse their content. Only the files identified by the manifest are ingested. Files that are in the zip folder, but aren't properly listed in the manifest, are ignored. The file paths in the `buildingLevels` object of the manifest file must be relative to the root of the zip folder. The DWG file name must exactly match the name of the facility level. For example, a DWG file for the "Basement" level is "Basement.dwg." A DWG file for level 2 is named "level_2.dwg." Use an underscore if your level name has a space. -Although there are requirements when you use the manifest objects, not all objects are required. The following table shows the required and optional objects for version 1.1 of the [Azure Maps Conversion service](/rest/api/maps/v2/conversion). +Although there are requirements when you use the manifest objects, not all objects are required. The following table shows the required and optional objects for version 1.1 of the [Conversion service]. >[!NOTE] > Unless otherwise specified, all properties with a string property type allow for one thousand characters. The next sections detail the requirements for each object. |`locality` |string | false | Name of a city, town, area, neighborhood, or region.| |`adminDivisions`|JSON array of strings | false| An array containing address designations. For example: (Country, State) Use ISO 3166 country codes and ISO 3166-2 state/territory codes. | |`postalCode`|string| false | The mail sorting code. |-|`hoursOfOperation` |string|false| Adheres to the [OSM Opening Hours](https://wiki.openstreetmap.org/wiki/Key:opening_hours/specification) format. | +|`hoursOfOperation` |string|false| Adheres to the [OSM Opening Hours] format. | |`phone` |string| false | Phone number associated with the building. | |`website` |string| false | Website associated with the building. | |`nonPublic`|bool| false | Flag specifying if the building is open to the public. | The `zoneProperties` object contains a JSON array of zone properties. ### Sample drawing package manifest -Below is the manifest file for the sample drawing package. Go to the [Sample drawing package for Azure Maps Creator](https://github.com/Azure-Samples/am-creator-indoor-data-examples) on GitHub to download the entire package. +Below is the manifest file for the sample drawing package. Go to the [Sample drawing package] for Azure Maps Creator on GitHub to download the entire package.
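The full sample manifest isn't reproduced here. As a minimal, hypothetical sketch of the pieces described above (a version string, a `buildingLevels` object with file paths relative to the zip root, and DWG file names that match the level names), with property names and values that are illustrative rather than authoritative:

```json
{
  "version": "1.1",
  "directoryInfo": {
    "name": "Contoso Building"
  },
  "buildingLevels": {
    "levels": [
      { "levelName": "Basement", "ordinal": 0, "filename": "./Basement.dwg" },
      { "levelName": "level 2", "ordinal": 1, "filename": "./level_2.dwg" }
    ]
  }
}
```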
#### Manifest file Learn more by reading: > [!div class="nextstepaction"] > [Creator for indoor maps](creator-indoor-maps.md)++[Conversion service]: /rest/api/maps/v2/conversion +[Drawing package]: https://github.com/Azure-Samples/am-creator-indoor-data-examples +[Conversion Drawing Package Guide]: drawing-package-guide.md +[sample drawing package]: https://github.com/Azure-Samples/am-creator-indoor-data-examples +[OSM Opening Hours]: https://wiki.openstreetmap.org/wiki/Key:opening_hours/specification |
azure-maps | Drawing Tools Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-tools-events.md | Learn how to use additional features of the drawing tools module: > [!div class="nextstepaction"] > [Interaction types and keyboard shortcuts](drawing-tools-interactions-keyboard-shortcuts.md) -Learn more about the Services module: +Learn more about the services module: > [!div class="nextstepaction"] > [Services module](how-to-use-services-module.md) |
azure-maps | Geocoding Coverage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geocoding-coverage.md | -The Azure Maps [Search service](/rest/api/maps/search) supports geocoding, which means that your API request can have search terms, like an address or the name of a place, and returns the result as latitude and longitude coordinates. For example, the Azure Maps [Get Search Address API](/rest/api/maps/search/getsearchaddress) receives queries that contain location information, and returns results as latitude and longitude coordinates. +The [Search service] supports geocoding, which means that your API request can contain search terms, like an address or the name of a place, and the service returns the result as latitude and longitude coordinates. For example, [Get Search Address] receives queries that contain location information, and returns results as latitude and longitude coordinates. -However, the Azure Maps [Search service](/rest/api/maps/search) doesn't have the same level of information and accuracy for all regions and countries. Use this article to determine what kind of locations you can reliably search for in each region. +However, the [Search service] doesn't have the same level of information and accuracy for all regions and countries. Use this article to determine what kind of locations you can reliably search for in each region. The ability to geocode in a country/region is dependent upon the road data coverage and geocoding precision of the geocoding service. The following categorizations are used to specify the level of geocoding support in each country/region. The ability to geocode in a country/region is dependent upon the road data cover Learn more about Azure Maps geocoding: > [!div class="nextstepaction"] > [Azure Maps Search service](/rest/api/maps/search)++[Search service]: /rest/api/maps/search +[Get Search Address]: /rest/api/maps/search/getsearchaddress |
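For illustration, a free-form geocoding request against Get Search Address takes roughly this shape (the address is a made-up sample and must be URL-encoded; verify the parameters in the [Search service] reference):

```http
GET https://atlas.microsoft.com/search/address/json?api-version=1.0&query=15127%20NE%2024th%20Street%2C%20Redmond%2C%20WA&subscription-key={Your-Azure-Maps-Subscription-key}
```

Each result in the response includes a position with the latitude and longitude coordinates described above.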
azure-maps | Geofence Geojson | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geofence-geojson.md | -The Azure Maps [GET Geofence](/rest/api/maps/spatial/getgeofence) and [POST Geofence](/rest/api/maps/spatial/postgeofence) APIs allow you to retrieve proximity of a coordinate relative to a provided geofence or set of fences. This article details how to prepare the geofence data that can be used in the Azure Maps GET and POST API. +The Azure Maps [GET Geofence] and [POST Geofence] APIs allow you to retrieve the proximity of a coordinate relative to a provided geofence or set of fences. This article details how to prepare the geofence data that can be used in the Azure Maps GET and POST API. -The data for geofence or set of geofences is represented by `Feature` Object and `FeatureCollection` Object in `GeoJSON` format, which is defined in [rfc7946](https://tools.ietf.org/html/rfc7946). In Addition to it: +The data for a geofence or set of geofences is represented by `Feature` Object and `FeatureCollection` Object in `GeoJSON` format, which is defined in [rfc7946]. In addition: * The GeoJSON Object type can be a `Feature` Object or a `FeatureCollection` Object. * The Geometry Object type can be a `Point`, `MultiPoint`, `LineString`, `MultiLineString`, `Polygon`, `MultiPolygon`, and `GeometryCollection`. The data for geofence or set of geofences is represented by `Feature` Object and | recurrenceType | string | false | The recurrence type of the period. The value can be `Daily`, `Weekly`, `Monthly`, or `Yearly`. Default value is `Daily`.| | businessDayOnly | Boolean | false | Indicate whether the data is only valid during business days. Default value is `false`.| - * All coordinate values are represented as [longitude, latitude] defined in `WGS84`. * For each Feature, which contains `MultiPoint`, `MultiLineString`, `MultiPolygon` , or `GeometryCollection`, the properties are applied to all the elements. For example: all the points in `MultiPoint` will use the same radius to form a multiple-circle geofence.-* In point-circle scenario, a circle geometry can be represented using a `Point` geometry object with properties elaborated in [Extending GeoJSON geometries](./extend-geojson.md). +* In the point-circle scenario, a circle geometry can be represented using a `Point` geometry object with properties elaborated in [Extending GeoJSON geometries]. Following is a sample request body for a geofence represented as a circle geofence geometry in `GeoJSON` using a center point and a radius. The valid period of the geofence data starts from 2018-10-22, 9AM to 5PM, repeated every day except for the weekend. `expiredTime` indicates this geofence data will be considered expired if `userTime` in the request is later than `2019-01-01`. Following is a sample request body for a geofen } } }-``` +``` ++[GET Geofence]: /rest/api/maps/spatial/getgeofence +[POST Geofence]: /rest/api/maps/spatial/postgeofence +[rfc7946]: https://tools.ietf.org/html/rfc7946 +[Extending GeoJSON geometries]: extend-geojson.md |
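Because the sample request body above is mostly elided, the following is a hedged reconstruction of the shape a single circle geofence takes: a `Point` geometry plus the circle properties from [Extending GeoJSON geometries] and the validity properties in the table above. The coordinates and the exact nesting of the validity fields are illustrative:

```json
{
  "type": "Feature",
  "geometry": {
    "type": "Point",
    "coordinates": [-122.126986, 47.639754]
  },
  "properties": {
    "geometryId": "1",
    "subType": "Circle",
    "radius": 500,
    "validityTime": {
      "expiredTime": "2019-01-01T00:00:00",
      "validityPeriod": [
        {
          "startTime": "2018-10-22T09:00:00",
          "endTime": "2018-10-22T17:00:00",
          "recurrenceType": "Daily",
          "businessDayOnly": true
        }
      ]
    }
  }
}
```

Setting `businessDayOnly` to `true` matches the described behavior of repeating every day except the weekend.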
azure-maps | How To Create Custom Styles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-custom-styles.md | When you create an indoor map using Azure Maps Creator, default styles are appli ## Prerequisites -- Understanding of [Creator concepts](creator-indoor-maps.md).-- An Azure Maps Creator [tileset][tileset]. If you have never used Azure Maps Creator to create an indoor map, you might find the [Use Creator to create indoor maps][tutorial] tutorial helpful.+- Understanding of [Creator concepts]. +- An Azure Maps Creator [tileset]. If you have never used Azure Maps Creator to create an indoor map, you might find the [Use Creator to create indoor maps] tutorial helpful. ## Create custom styles using Creator's visual editor -While it's possible to modify your indoor maps styles using [Creators Rest API][creator api], Creator also offers a [visual style editor][style editor] to create custom styles that doesn't require coding. This article will focus exclusively on creating custom styles using this style editor. +While it's possible to modify your indoor map styles using the [Creators Rest API], Creator also offers a [visual style editor][style editor] for creating custom styles without coding. This article focuses exclusively on creating custom styles using this style editor. ### Open style When an indoor map is created in your Azure Maps Creator service, default styles are automatically created for you. In order to customize the styling elements of your indoor map, you'll need to open that default style. -Open the [Creator Style Editor][style editor] and select the **Open** toolbar button. +Open the [style editor] and select the **Open** toolbar button. :::image type="content" source="./media/creator-indoor-maps/style-editor/open-menu.png" alt-text="A screenshot of the open menu in the visual style editor."::: The **Open Style** dialog box opens. -Enter your [subscription key][subscription key] in the **Enter your Azure Maps subscription key** field. +Enter your [subscription key] in the **Enter your Azure Maps subscription key** field. Next, select the geography associated with your subscription key in the drop-down list. Select the **Get map configuration list** button to get a list of every map conf :::image type="content" source="./media/creator-indoor-maps/style-editor/select-the-map-configuration.png" alt-text="A screenshot of the open style dialog box in the visual style editor with the Select map configuration drop-down list highlighted."::: > [!NOTE]-> If the map configuration was created as part of a custom style and has a user provided alias, that alias will appear in the map configuration drop-down list, otherwise the `mapConfigurationId` will appear. +> If the map configuration was created as part of a custom style and has a user-provided alias, that alias will appear in the map configuration drop-down list; otherwise, the `mapConfigurationId` will appear.
The default map configuration ID for any given tileset can be found by using the [tileset get] HTTP request and passing in the tileset ID: > > ```http > https://{geography}.atlas.microsoft.com/tilesets/{tilesetId}?api-version=2022-09-01-preview Once you've selected the desired style, select the **Load selected style** button. | # | Description | |||-| 1 | Your Azure Maps account [subscription key][subscription key] | +| 1 | Your Azure Maps account [subscription key] | | 2 | Select the geography of the Azure Maps account. | | 3 | A list of map configuration aliases. If a given map configuration has no alias, the `mapConfigurationId` will be shown instead. | | 4 | This value is created from a combination of the style and tileset. If the style has an alias, it will be shown; if not, the `styleId` will be shown. The `tilesetId` will always be shown for the tileset value. | Some important things to know about aliases: 1. Can be used to reference the underlying object, whether a style or map configuration, in place of that object's ID. This is especially important since neither the style nor the map configuration can be updated: every time changes are saved, a new ID is generated, but the alias can remain the same, making references less error prone after multiple modifications. > [!WARNING]-> Duplicate aliases are not allowed. If the alias of an existing style or map configuration is used, the style or map configuration that alias points to will be overwritten and the existing style or map configuration will be deleted and references to that ID will result in errors. See [map configuration](creator-indoor-maps.md#map-configuration) in the concepts article for more information. +> Duplicate aliases are not allowed. If the alias of an existing style or map configuration is used, the style or map configuration that alias points to will be overwritten and the existing style or map configuration will be deleted and references to that ID will result in errors. See [map configuration] in the concepts article for more information. Once you have entered values into each required field, select the **Upload map configuration** button to save the style and map configuration data to your Creator resource. > [!TIP]-> Make a note of the map configuration `alias` value, it will be required when you [Instantiate the Indoor Manager][Instantiate the Indoor Manager] of a Map object when developing applications in Azure Maps. +> Make a note of the map configuration `alias` value; it will be required when you [Instantiate the Indoor Manager] of a Map object when developing applications in Azure Maps. ## Custom categories -Azure Maps Creator has defined a [list of categories][categories]. When you create your [manifest][manifest], you associate each unit in your facility to one of these categories in the [unitProperties][unitProperties] object. +Azure Maps Creator has defined a list of [categories]. When you create your [manifest], you associate each unit in your facility with one of these categories in the [unitProperties] object. There may be times when you want to create a new category. For example, you may want the ability to apply different styling attributes to all rooms with special accommodations for people with disabilities, like a phone room with phones that have screens showing what the caller is saying for those with hearing impairments.
Now when you select that unit in the map, the pop-up menu will have the new laye > [!div class="nextstepaction"] > [Use the Azure Maps Indoor Maps module](how-to-use-indoor-module.md) +[Creator concepts]: creator-indoor-maps.md [tileset]: /rest/api/maps/v20220901preview/tileset [tileset get]: /rest/api/maps/v20220901preview/tileset/get-[tutorial]: tutorial-creator-indoor-maps.md -[creator api]: /rest/api/maps-creator/ +[Use Creator to create indoor maps]: tutorial-creator-indoor-maps.md +[Creators Rest API]: /rest/api/maps-creator/ [style editor]: https://azure.github.io/Azure-Maps-Style-Editor [subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account [manifest]: drawing-requirements.md#manifest-file-requirements [unitProperties]: drawing-requirements.md#unitproperties [categories]: https://atlas.microsoft.com/sdk/javascript/indoor/0.1/categories.json [Instantiate the Indoor Manager]: how-to-use-indoor-module.md#instantiate-the-indoor-manager+[map configuration]: creator-indoor-maps.md#map-configuration |
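Spelled out as a complete request, the tileset get call from the note above might look like the following sketch, which assumes `us` as the geography and adds the `api-version` parameter name and a subscription key:

```http
GET https://us.atlas.microsoft.com/tilesets/{tilesetId}?api-version=2022-09-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}
```

The tileset metadata in the response is expected to include the ID of the default map configuration; check the exact field name in the [tileset get] reference.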
azure-maps | How To Create Data Registries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-data-registries.md | -The [data registry] service enables you to register data content in an Azure Storage Account with your Azure Maps account. An example of data might include a collection of Geofences used in the Azure Maps Geofencing Service. Another example is ZIP files containing drawing packages (DWG) or GeoJSON files that Azure Maps Creator uses to create or update indoor maps. +The [data registry] service enables you to register data content in an Azure Storage Account with your Azure Maps account. An example of data might include a collection of geofences used in the Azure Maps Geofencing service. Another example is ZIP files containing drawing packages (DWG) or GeoJSON files that Azure Maps Creator uses to create or update indoor maps. ## Prerequisites The [data registry] service enables you to register data content in an Azure Sto >[!IMPORTANT] >-> - This article uses the `us.atlas.microsoft.com` geographical URL. If your account wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services](how-to-manage-creator.md#access-to-creator-services). +> - This article uses the `us.atlas.microsoft.com` geographical URL. If your account wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator services](how-to-manage-creator.md#access-to-creator-services). > - In the URL examples in this article you will need to replace: > - `{Azure-Maps-Subscription-key}` with your Azure Maps [subscription key]. > - `{udid}` with the user data ID of your data registry. For more information, see [The user data ID](#the-user-data-id). The data returned when running the list request is similar to the data provided ## Get content from a data registry -Once you've uploaded one or more files to an Azure storage account, created and Azure Maps datastore to link to those files, then registered them using the [register][register or replace] API, you can access the data contained in the files. +Once you've uploaded one or more files to an Azure storage account, created an Azure Maps datastore to link to those files, and then registered them using the [register] API, you can access the data contained in the files. Use the `udid` to get the content of a file registered in an Azure Maps account: When you register a file in Azure Maps using the data registry API, an MD5 hash [data registry]: /rest/api/maps/data-registry [list]: /rest/api/maps/data-registry/list-[Register Or Replace]: /rest/api/maps/data-registry/register-or-replace +[Register]: /rest/api/maps/data-registry/register-or-replace [Get operation]: /rest/api/maps/data-registry/get-operation [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account |
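As a sketch of the get-content request (the path segment and API version are assumptions to verify against the [data registry] reference):

```http
GET https://us.atlas.microsoft.com/dataRegistries/{udid}/content?api-version={api-version}&subscription-key={Azure-Maps-Subscription-key}
```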
azure-maps | How To Creator Wayfinding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-wayfinding.md | -The Azure Maps Creator [wayfinding service][wayfinding service] allows you to navigate from place to place anywhere within your indoor map. The service utilizes stairs and elevators to navigate between floors and provides guidance to help you navigate around physical obstructions. This article describes how to generate a path from a starting point to a destination point in a sample indoor map. +The Azure Maps Creator [wayfinding service] allows you to navigate from place to place anywhere within your indoor map. The service utilizes stairs and elevators to navigate between floors and provides guidance to help you navigate around physical obstructions. This article describes how to generate a path from a starting point to a destination point in a sample indoor map. ## Prerequisites -- Understanding of [Creator concepts](creator-indoor-maps.md).-- An Azure Maps Creator [dataset][dataset] and [tileset][tileset]. If you have never used Azure Maps Creator to create an indoor map, you might find the [Use Creator to create indoor maps](tutorial-creator-indoor-maps.md) tutorial helpful.+- Understanding of [Creator concepts]. +- An Azure Maps Creator [dataset] and [tileset]. If you have never used Azure Maps Creator to create an indoor map, you might find the [Use Creator to create indoor maps] tutorial helpful. >[!IMPORTANT] >-> - This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services][how to manage access to creator services]. +> - This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator services]. > - In the URL examples in this article you will need to: > - Replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.-> - Replace `{datasetId`} with your `datasetId`. For more information, see the [Check the dataset creation status][check dataset creation status] section of the *Use Creator to create indoor maps* tutorial. +> - Replace `{datasetId}` with your `datasetId`. For more information, see the [Check the dataset creation status] section of the *Use Creator to create indoor maps* tutorial. ## Create a routeset -A [routeset][routeset] is a collection of indoor map data that is used by the wayfinding service. +A [routeset] is a collection of indoor map data that is used by the wayfinding service. A routeset is created from a dataset, but is independent of that dataset. This means that if the dataset is deleted, the routeset continues to exist. The `facilityId`, a property of the routeset, is a required parameter when searc ## Get a wayfinding path -In this section, youΓÇÖll use the [wayfinding API][wayfinding API] to generate a path from the routeset you created in the previous section. The wayfinding API requires a query that contains start and end points in an indoor map, along with floor level ordinal numbers. For more information about Creator wayfinding, see [wayfinding][wayfinding] in the concepts article. +In this section, you'll use the [wayfinding API] to generate a path from the routeset you created in the previous section.
The wayfinding API requires a query that contains start and end points in an indoor map, along with floor level ordinal numbers. For more information about Creator wayfinding, see [wayfinding] in the concepts article. To create a wayfinding query: The wayfinding service calculates the path through specific intervening points. <!-- TODO: ## Implement the wayfinding service in your map (Refer to sample app once completed) --> +[Creator concepts]: creator-indoor-maps.md [dataset]: creator-indoor-maps.md#datasets [tileset]: creator-indoor-maps.md#tilesets [routeset]: /rest/api/maps/v20220901preview/routeset [wayfinding]: creator-indoor-maps.md#wayfinding-preview [wayfinding API]: /rest/api/maps/v20220901preview/wayfinding-[how to manage access to creator services]: how-to-manage-creator.md#access-to-creator-services -[check dataset creation status]: tutorial-creator-indoor-maps.md#check-the-dataset-creation-status +[Access to Creator services]: how-to-manage-creator.md#access-to-creator-services +[Check the dataset creation status]: tutorial-creator-indoor-maps.md#check-the-dataset-creation-status [wayfinding service]: creator-indoor-maps.md#wayfinding-preview+[Use Creator to create indoor maps]: tutorial-creator-indoor-maps.md |
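A routeset creation call presumably follows the same pattern as the other `2022-09-01-preview` endpoints in this article; the path and query parameter below are illustrative and should be checked against the [routeset] reference:

```http
POST https://us.atlas.microsoft.com/routesets?api-version=2022-09-01-preview&datasetId={datasetId}&subscription-key={Your-Azure-Maps-Subscription-key}
```

The routeset ID returned by this long-running operation is what the wayfinding query described above consumes.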
azure-maps | How To Creator Wfs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-wfs.md | After the response returns, copy the feature `id` for one of the `unit` features } ``` +## Next steps + > [!div class="nextstepaction"] > [How to create a feature stateset] [datasets]: /rest/api/maps/v2/dataset [WFS API]: /rest/api/maps/v2/wfs- [Web Feature Service (WFS)]: /rest/api/maps/v2/wfs +[Web Feature Service (WFS)]: /rest/api/maps/v2/wfs [Tutorial: Use Creator to create indoor maps]: tutorial-creator-indoor-maps.md [Check dataset creation status]: tutorial-creator-indoor-maps.md#check-the-dataset-creation-status [Access to Creator Services]: how-to-manage-creator.md#access-to-creator-services |
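For illustration, a WFS query for the `unit` collection of a dataset commonly takes the following shape (assuming the v2 `api-version=2.0` implied by the [WFS API] reference; verify the exact path there):

```http
GET https://us.atlas.microsoft.com/wfs/datasets/{datasetId}/collections/unit/items?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key}
```

Each returned feature carries the `id` referred to above.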
azure-maps | How To Dataset Geojson | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dataset-geojson.md | -Azure Maps Creator enables users to import their indoor map data in GeoJSON format with [Facility Ontology 2.0][Facility Ontology], which can then be used to create a [dataset]. +Azure Maps Creator enables users to import their indoor map data in GeoJSON format with [Facility Ontology 2.0], which can then be used to create a [dataset]. > [!NOTE] > This article explains how to create a dataset from a GeoJSON package. For information on additional steps required to complete an indoor map, see [Next steps](#next-steps). ## Prerequisites -- Basic understanding of [Creator for indoor maps].-- Basic understanding of [Facility Ontology 2.0][Facility Ontology].-- [Azure Maps account].-- [Azure Maps Creator resource][Creator resource].-- [Subscription key].-- Zip package containing all required GeoJSON files. If you don't have GeoJSON- files, you can download the [Contoso building sample]. +- Basic understanding of [Creator for indoor maps] +- Basic understanding of [Facility Ontology 2.0] +- An [Azure Maps account] +- An Azure Maps [Creator resource]. +- A [Subscription key]. +- Zip package containing all required GeoJSON files. If you don't have GeoJSON files, you can download the [Contoso building sample]. >[!IMPORTANT] >-> - This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services](how-to-manage-creator.md#access-to-creator-services). +> - This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator services]. > - In the URL examples in this article you will need to replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. ## Create dataset using the GeoJSON package To create a dataset: 1. Enter the following URL to the dataset service. The request should look like the following URL (replace {udid} with the `udid` obtained in [Check the GeoJSON package upload status](#check-the-geojson-package-upload-status) section): -<!--1. Enter the following URL to the [Dataset service][Dataset Create 2022-09-01-preview]. The request should look like the following URL (replace {udid} with the `udid` obtained in [Check the GeoJSON package upload status](#check-the-geojson-package-upload-status) section):--> ```http https://us.atlas.microsoft.com/datasets?api-version=2022-09-01-preview&udid={udid}&subscription-key={Your-Azure-Maps-Subscription-key} ``` One thing to consider when adding to an existing dataset is how the feature IDs If your original dataset was created from a GeoJSON source and you wish to add another facility created from a drawing package, you can append it to your existing dataset by referencing its `conversionId`, as demonstrated by this HTTP POST request: -```http +```http https://us.atlas.microsoft.com/datasets?api-version=2022-09-01-preview&conversionId={conversionId}&outputOntology=facility-2.0&datasetId={datasetId} ``` | Identifier | Description | |--|-|-| conversionId | The ID returned when converting your drawing package.
For more information, see [Convert a drawing package][conversion]. | +| conversionId | The ID returned when converting your drawing package. For more information, see [Convert a drawing package]. | | datasetId | The dataset ID returned when creating the original dataset from a GeoJSON package). | ## Geojson zip package requirements The GeoJSON zip package consists of one or more [RFC 7946] compliant GeoJSON files, one for each feature class, all in the root directory (subdirectories aren't supported), compressed with standard Zip compression and named using the `.ZIP` extension. -Each feature class file must match its definition in the [Facility ontology 2.0][Facility ontology] and each feature must have a globally unique identifier. +Each feature class file must match its definition in the [Facility Ontology 2.0] and each feature must have a globally unique identifier. Feature IDs can only contain alpha-numeric (a-z, A-Z, 0-9), hyphen (-), dot (.) and underscore (_) characters. Feature IDs can only contain alpha-numeric (a-z, A-Z, 0-9), hyphen (-), dot (.) > [!div class="nextstepaction"] > [Create a tileset] +[Data Upload API]: /rest/api/maps/data-v2/upload +[Creator Long-Running Operation API V2]: creator-long-running-operation-v2.md +[Access to Creator services]: how-to-manage-creator.md#access-to-creator-services + [Contoso building sample]: https://github.com/Azure-Samples/am-creator-indoor-data-examples [units]: creator-facility-ontology.md?pivots=facility-ontology-v2#unit [structures]: creator-facility-ontology.md?pivots=facility-ontology-v2#structure Feature IDs can only contain alpha-numeric (a-z, A-Z, 0-9), hyphen (-), dot (.) [line]: creator-facility-ontology.md?pivots=facility-ontology-v2#lineelement [point]: creator-facility-ontology.md?pivots=facility-ontology-v2#pointelement -[conversion]: tutorial-creator-indoor-maps.md#convert-a-drawing-package +[Convert a drawing package]: tutorial-creator-indoor-maps.md#convert-a-drawing-package [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account [Creator resource]: how-to-manage-creator.md [Subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account-[Facility Ontology]: creator-facility-ontology.md?pivots=facility-ontology-v2 +[Facility Ontology 2.0]: creator-facility-ontology.md?pivots=facility-ontology-v2 [RFC 7946]: https://www.rfc-editor.org/rfc/rfc7946.html [dataset]: creator-indoor-maps.md#datasets [Dataset Create 2022-09-01-preview]: /rest/api/maps/v20220901preview/dataset/create [Dataset Create]: /rest/api/maps/v2/dataset/create [Visual Studio]: https://visualstudio.microsoft.com/downloads/-[Data Upload API]: /rest/api/maps/data-v2/upload -[Creator Long-Running Operation API V2]: creator-long-running-operation-v2.md [Creator for indoor maps]: creator-indoor-maps.md [Create a tileset]: tutorial-creator-indoor-maps.md#create-a-tileset |
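The create-dataset call shown above is easy to script. The following is a minimal Node.js sketch (not from the article) that issues the same POST; it assumes Node.js 18+ for the global `fetch`, a `MAPS_SUBSCRIPTION_KEY` environment variable, and a placeholder `udid` from the upload-status check:

```JavaScript
// Minimal sketch: create a dataset from an uploaded GeoJSON package.
// Assumes Node.js 18+ (global fetch); udid and key values are placeholders.
const udid = "{udid}"; // returned by the upload-status check
const key = process.env.MAPS_SUBSCRIPTION_KEY;

async function createDataset() {
  const url = `https://us.atlas.microsoft.com/datasets?api-version=2022-09-01-preview&udid=${udid}&subscription-key=${key}`;
  const res = await fetch(url, { method: "POST" });
  // A 202 response carries an Operation-Location header to poll for status.
  console.log(res.status, res.headers.get("operation-location"));
}

createDataset().catch(console.error);
```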
azure-maps | How To Dev Guide Csharp Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-csharp-sdk.md | dotnet add package Azure.Maps.Geolocation --prerelease #### Azure Maps services -| Service Name  | NuGet package  | Samples  | +| Service name  | NuGet package  | Samples  | ||-|--| | [Search][search readme] | [Azure.Maps.Search][search package] | [search samples][search sample] | | [Routing][routing readme] | [Azure.Maps.Routing][routing package] | [routing samples][routing sample] | |
azure-maps | How To Dev Guide Java Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-java-sdk.md | New-Item demo.java ### Azure Maps services -| Service Name  | Maven package  | Samples  | +| Service name  | Maven package  | Samples  | ||-|--| | [Search][java search readme] | [azure-maps-search][java search package] | [search samples][java search sample] | | [Routing][java routing readme] | [azure-maps-routing][java routing package] | [routing samples][java routing sample] | |
azure-maps | How To Dev Guide Js Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-js-sdk.md | -The Azure Maps JavaScript/TypeScript REST SDK (JavaScript SDK) supports searching using the [Azure Maps search Rest API][search], like searching for an address, fuzzy searching for a point of interest (POI), and searching by coordinates. This article will help you get started building location-aware applications that incorporate the power of Azure Maps. +The Azure Maps JavaScript/TypeScript REST SDK (JavaScript SDK) supports searching using the Azure Maps [Search service], like searching for an address, fuzzy searching for a point of interest (POI), and searching by coordinates. This article will help you get started building location-aware applications that incorporate the power of Azure Maps. > [!NOTE]-> Azure Maps JavaScript SDK supports the LTS version of Node.js. For more information, see [Node.js Release Working Group][Node.js Release]. +> Azure Maps JavaScript SDK supports the LTS version of Node.js. For more information, see [Node.js Release Working Group]. ## Prerequisites -- [Azure Maps account][Azure Maps account].-- [Subscription key][Subscription key] or other form of [authentication][authentication].-- [Node.js][Node.js].+- [Azure Maps account]. +- [Subscription key] or other form of [authentication]. +- [Node.js]. > [!TIP] > You can create an Azure Maps account programmatically. Here's an example using the Azure CLI: mapsDemo ### Azure Maps services -| Service Name  | npm packages | Samples  | ||-|--| | [Search][search readme] | [@azure-rest/maps-search][search package] | [search samples][search sample] | | [Route][js route readme] | [@azure-rest/maps-route][js route package] | [route samples][js route sample] | You'll need a `credential` object for authentication when creating the `MapsSear ### Using an Azure AD credential -You can authenticate with Azure AD using the [Azure Identity library][Identity library]. To use the [DefaultAzureCredential][defaultazurecredential] provider, you'll need to install the `@azure/identity` package: +You can authenticate with Azure AD using the [Azure Identity library]. To use the [DefaultAzureCredential] provider, you'll need to install the `@azure/identity` package: ```powershell npm install @azure/identity ``` -You'll need to register the new Azure AD application and grant access to Azure Maps by assigning the required role to your service principal. For more information, see [Host a daemon on non-Azure resources][Host daemon]. During this process you'll get an Application (client) ID, a Directory (tenant) ID, and a client secret. Copy these values and store them in a secure place. You'll need them in the following steps. +You'll need to register the new Azure AD application and grant access to Azure Maps by assigning the required role to your service principal. For more information, see [Host a daemon on non-Azure resources]. During this process you'll get an Application (client) ID, a Directory (tenant) ID, and a client secret. Copy these values and store them in a secure place. You'll need them in the following steps. 
Set the values of the Application (client) ID, Directory (tenant) ID, and client secret of your Azure AD application, and the map resource’s client ID as environment variables: Set the values of the Application (client) ID, Directory (tenant) ID, and client | AZURE_TENANT_ID | Directory (tenant) ID in your registered application | | MAPS_CLIENT_ID | The client ID in your Azure Maps account | -You can use a `.env` file for these variables. You'll need to install the [dotenv][dotenv] package: +You can use a `.env` file for these variables. You'll need to install the [dotenv] package: ```powershell npm install dotenv You can authenticate with your Azure Maps subscription key. Your subscription ke :::image type="content" source="./media/rest-sdk-dev-guides/subscription-key.png" alt-text="A screenshot showing the subscription key in the Authentication section of an Azure Maps account." lightbox="./media/rest-sdk-dev-guides/subscription-key.png"::: -You need to pass the subscription key to the `AzureKeyCredential` class provided by the [Azure Core Authentication Package][core auth package]. For security reasons, it's better to specify the key as an environment variable than to include it in your source code. +You need to pass the subscription key to the `AzureKeyCredential` class provided by the [Azure Core Authentication Package]. For security reasons, it's better to specify the key as an environment variable than to include it in your source code. -You can accomplish this by using a `.env` file to store the subscription key variable. You'll need to install the [dotenv][dotenv] package to retrieve the value: +You can accomplish this by using a `.env` file to store the subscription key variable. You'll need to install the [dotenv] package to retrieve the value: ```powershell npm install dotenv main().catch((err) => { ``` -The code snippet above shows how to use the `MapsSearch` method from the Azure Maps Search client library to create a `client` object with your Azure credentials. You can use either your Azure Maps subscription key or the [Azure AD credential](#using-an-azure-ad-credential) from the previous section. The `path` parameter specifies the API endpoint, which is "/search/fuzzy/{format}" in this case. The `get` method sends an HTTP GET request with the query parameters, such as `query`, `coordinates`, and `countryFilter`. The query searches for Starbucks locations near Seattle in the US. The SDK returns the results as a [FuzzySearchResult][FuzzySearchResult] object and writes them to the console. For more details, see the [FuzzySearchRequest][FuzzySearchRequest] documentation. +The code snippet above shows how to use the `MapsSearch` method from the Azure Maps Search client library to create a `client` object with your Azure credentials. You can use either your Azure Maps subscription key or the [Azure AD credential](#using-an-azure-ad-credential) from the previous section. The `path` parameter specifies the API endpoint, which is "/search/fuzzy/{format}" in this case. The `get` method sends an HTTP GET request with the query parameters, such as `query`, `coordinates`, and `countryFilter`. The query searches for Starbucks locations near Seattle in the US. The SDK returns the results as a [FuzzySearchResult] object and writes them to the console. For more details, see the [FuzzySearchRequest] documentation. Run `search.js` with Node.js: node search.js ## Search an Address -The [searchAddress][searchAddress] query can be used to get the coordinates of an address. 
Modify the `search.js` from the sample as follows: +The [searchAddress] query can be used to get the coordinates of an address. Modify the `search.js` from the sample as follows: ```JavaScript const MapsSearch = require("@azure-rest/maps-search").default; main().catch(console.error); [JS-SDK]: /javascript/api/@azure-rest/maps-search -[defaultazurecredential]: https://github.com/Azure/azure-sdk-for-js/tree/@azure/maps-search_1.0.0-beta.1/sdk/identity/identity#defaultazurecredential +[DefaultAzureCredential]: https://github.com/Azure/azure-sdk-for-js/tree/@azure/maps-search_1.0.0-beta.1/sdk/identity/identity#defaultazurecredential [searchAddress]: /javascript/api/@azure-rest/maps-search/searchaddress main().catch(console.error); [FuzzySearchResult]: /javascript/api/@azure-rest/maps-search/searchfuzzysearch200response -[search]: /rest/api/maps/search -[Node.js Release]: https://github.com/nodejs/release#release-schedule +[Search service]: /rest/api/maps/search +[Node.js Release Working Group]: https://github.com/nodejs/release#release-schedule [Node.js]: https://nodejs.org/en/download/ [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account [Subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account [authentication]: azure-maps-authentication.md-[Identity library]: /javascript/api/overview/azure/identity-readme -[core auth package]: /javascript/api/@azure/core-auth/ +[Azure Identity library]: /javascript/api/overview/azure/identity-readme +[Azure Core Authentication Package]: /javascript/api/@azure/core-auth/ -[Host daemon]: ./how-to-secure-daemon-app.md#host-a-daemon-on-non-azure-resources +[Host a daemon on non-Azure resources]: ./how-to-secure-daemon-app.md#host-a-daemon-on-non-azure-resources [dotenv]: https://github.com/motdotla/dotenv#readme [search package]: https://www.npmjs.com/package/@azure-rest/maps-search |
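To make the `path`/`get` pattern described above concrete, here's a condensed, hypothetical sketch of an address search with the REST client; the subscription key in a `.env` file, the query address, and the logged field are illustrative assumptions rather than the article's own sample:

```JavaScript
// Minimal sketch: geocode one address with @azure-rest/maps-search.
// Assumes a subscription key stored in .env as MAPS_SUBSCRIPTION_KEY.
const MapsSearch = require("@azure-rest/maps-search").default;
const { AzureKeyCredential } = require("@azure/core-auth");
require("dotenv").config();

const client = MapsSearch(new AzureKeyCredential(process.env.MAPS_SUBSCRIPTION_KEY));

async function main() {
  // Same path/get pattern as the fuzzy search example, against /search/address.
  const response = await client.path("/search/address/{format}", "json").get({
    queryParameters: { query: "15127 NE 24th Street, Redmond, WA" },
  });
  console.log(response.body.results?.[0]?.position);
}

main().catch(console.error);
```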
azure-maps | How To Dev Guide Py Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-py-sdk.md | The Azure Maps Python SDK can be integrated with Python applications and librari ## Prerequisites -- [Azure Maps account][Azure Maps account].-- [Subscription key][Subscription key] or other form of [authentication][authentication].-- Python on 3.7 or later. It's recommended to use the [latest release][python latest release]. For more information, see [Azure SDK for Python version support policy][support policy].+- [Azure Maps account]. +- [Subscription key] or other form of [authentication]. +- Python 3.7 or later. It's recommended to use the [latest release]. For more information, see [Azure SDK for Python version support policy]. > [!TIP] > You can create an Azure Maps account programmatically. Here's an example using the Azure CLI: pip install azure-maps-search --pre ### Azure Maps services -Azure Maps Python SDK supports Python version 3.7 or later. For more information on future Python versions, see [Azure SDK for Python version support policy][support policy]. +Azure Maps Python SDK supports Python version 3.7 or later. For more information on future Python versions, see [Azure SDK for Python version support policy]. -| Service Name  | PyPi package  | Samples  | +| Service name  | PyPi package  | Samples  | |-|-|--| | [Search][py search readme] | [azure-maps-search][py search package] | [search samples][py search sample] | | [Route][py route readme] | [azure-maps-route][py route package] | [route samples][py route sample] | Azure Maps Python SDK supports Python version 3.7 or later. For more information You'll need a `credential` object for authentication when creating the `MapsSearchClient` object used to access the Azure Maps search APIs. You can use either an Azure Active Directory (Azure AD) credential or an Azure subscription key to authenticate. For more information on authentication, see [Authentication with Azure Maps][authentication]. > [!TIP]-> The`MapsSearchClient` is the primary interface for developers using the Azure Maps search library. See [Azure Maps Search package client library][Search package client library] to learn more about the search methods available. +> The `MapsSearchClient` is the primary interface for developers using the Azure Maps search library. See [Azure Maps Search package client library] to learn more about the search methods available. ### Using an Azure AD credential -You can authenticate with Azure AD using the [Azure Identity package][Azure Identity package]. To use the [DefaultAzureCredential][defaultazurecredential] provider, you'll need to install the Azure Identity client package: +You can authenticate with Azure AD using the [Azure Identity package]. To use the [DefaultAzureCredential] provider, you'll need to install the Azure Identity client package: ```powershell pip install azure-identity ``` -You'll need to register the new Azure AD application and grant access to Azure Maps by assigning the required role to your service principal. 
For more information, see [Host a daemon on non-Azure resources]. During this process you'll get an Application (client) ID, a Directory (tenant) ID, and a client secret. Copy these values and store them in a secure place. You'll need them in the following steps. -Next you'll need to specify the Azure Maps account you intend to use by specifying the maps’ client ID. The Azure Maps account client ID can be found in the Authentication sections of the Azure Maps account. For more information, see [View authentication details][View authentication details]. +Next, you'll need to specify the Azure Maps account you intend to use by specifying the maps’ client ID. The Azure Maps account client ID can be found in the Authentication section of the Azure Maps account. For more information, see [View authentication details]. Set the values of the Application (client) ID, Directory (tenant) ID, and client secret of your Azure AD application, and the map resource’s client ID as environment variables: if __name__ == '__main__': ## Additional information -The [Azure Maps Search Package client library for Python][Search package client library] in the *Azure SDK for Python Preview* documentation. +For more information, see the [Azure Maps Search package client library] in the *Azure SDK for Python Preview* documentation. <!-- --> [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account [Subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account [authentication]: azure-maps-authentication.md -[Search package client library]: /python/api/overview/azure/maps-search-readme?view=azure-python-preview -[python latest release]: https://www.python.org/downloads/ +[Azure Maps Search package client library]: /python/api/overview/azure/maps-search-readme?view=azure-python-preview +[latest release]: https://www.python.org/downloads/ <!-- Python SDK Developers Guide -->-[support policy]: https://github.com/Azure/azure-sdk-for-python/wiki/Azure-SDKs-Python-version-support-policy +[Azure SDK for Python version support policy]: https://github.com/Azure/azure-sdk-for-python/wiki/Azure-SDKs-Python-version-support-policy [py search package]: https://pypi.org/project/azure-maps-search [py search readme]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/maps/azure-maps-search/README.md [py search sample]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/maps/azure-maps-search/samples The [Azure Maps Search Package client library for Python][Search package client <!-- Authentication --> [Azure Identity package]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/identity/azure-identity-[defaultazurecredential]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/identity/azure-identity#defaultazurecredential -[Host daemon]: ./how-to-secure-daemon-app.md#host-a-daemon-on-non-azure-resources +[DefaultAzureCredential]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/identity/azure-identity#defaultazurecredential +[Host a daemon on non-Azure resources]: ./how-to-secure-daemon-app.md#host-a-daemon-on-non-azure-resources [View authentication details]: how-to-manage-authentication.md#view-authentication-details |
azure-maps | How To Render Custom Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-render-custom-data.md | -This article describes how to use the [static image service](/rest/api/maps/render/getmapimage) with image composition functionality. Image composition functionality supports the retrieval of static raster tile that contains custom data. +This article describes how to use the [static image service] with image composition functionality. Image composition functionality supports the retrieval of static raster tiles that contain custom data. The following are examples of custom data: The following are examples of custom data: - Labels - Geometry overlays -> [!Tip] +> [!TIP] > To show a simple map on a web page, it's often more cost effective to use the Azure Maps Web SDK, rather than to use the static image service. The web SDK uses map tiles; and unless the user pans and zooms the map, they will often generate only a fraction of a transaction per map load. The Azure Maps web SDK has options for disabling panning and zooming. Also, the Azure Maps web SDK provides a richer set of data visualization options than a static map web service does. ## Prerequisites -1. [Make an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account) -2. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key. +- [Azure Maps account] +- [Subscription key] -This article uses the [Postman](https://www.postman.com/) application, but you may use a different API development environment. +This article uses the [Postman] application, but you may use a different API development environment. -We'll use the Azure Maps [Data Service APIs](/rest/api/maps/data) to store and render overlays. +We'll use the Azure Maps [Data service] to store and render overlays. ## Render pushpins with labels and a custom image -> [!Note] +> [!NOTE] > The procedure in this section requires an Azure Maps account in the Gen 1 or Gen 2 pricing tier. The Azure Maps account Gen 1 Standard S0 tier supports only a single instance of the `pins` parameter. It allows you to render up to five pushpins, specified in the URL request, with a custom image. To get a static image with custom pins and labels: 3. Enter a **Request name** for the request, such as *GET Static Image*. - 4. Select the **GET** HTTP method. - 5. Enter the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key): ```HTTP To get a static image with custom pins and labels: ## Upload pins and path data -> [!Note] +> [!NOTE] > The procedure in this section requires an Azure Maps account Gen 1 (S1) or Gen 2 pricing tier. In this section, we'll upload path and pin data to Azure Maps data storage. To upload pins and path data: ``` >[!TIP]->To obtain your own path and pin location information, use the [Data Upload API](/rest/api/maps/data-v2/upload). +>To obtain your own path and pin location information, use [Data Upload]. ### Check pins and path data upload status To render the uploaded pins and path data on the map: 4. Select the **GET** HTTP method. -5. Enter the following URL to the [Render Service](/rest/api/maps/render/get-map-image) (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key and `udid` with the `udid` of the uploaded data): +5. 
Enter the following URL to the [Render service] (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key and `udid` with the `udid` of the uploaded data): ```HTTP https://atlas.microsoft.com/map/static/png?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&layer=basic&style=main&zoom=12&center=-73.96682739257812%2C40.78119135317995&pins=default|la-35+50|ls12|lc003C62|co9B2F15||'Times Square'-73.98516297340393 40.758781646381024|'Central Park'-73.96682739257812 40.78119135317995&path=lc0000FF|fc0000FF|lw3|la0.80|fa0.30||udid-{udId} To render the uploaded pins and path data on the map: ## Render a polygon with color and opacity -> [!Note] +> [!NOTE] > The procedure in this section requires an Azure Maps account Gen 1 (S1) or Gen 2 pricing tier. -You can modify the appearance of a polygon by using style modifiers with the [path parameter](/rest/api/maps/render/getmapimage#uri-parameters). +You can modify the appearance of a polygon by using style modifiers with the [path parameter]. To render a polygon with color and opacity: To render a polygon with color and opacity: 4. Select the **GET** HTTP method. -5. Enter the following URL to the [Render Service](/rest/api/maps/render/get-map-image) (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key): +5. Enter the following URL to the [Render service] (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key): ```HTTP https://atlas.microsoft.com/map/static/png?api-version=1.0&style=main&layer=basic&sku=S1&zoom=14&height=500&Width=500&center=-74.040701, 40.698666&path=lc0000FF|fc0000FF|lw3|la0.80|fa0.50||-74.03995513916016 40.70090237454063|-74.04082417488098 40.70028420372218|-74.04113531112671 40.70049568385827|-74.04298067092896 40.69899904076542|-74.04271245002747 40.69879568992435|-74.04367804527283 40.6980961582905|-74.04364585876465 40.698055487620714|-74.04368877410889 40.698022951066996|-74.04168248176573 40.696444909137|-74.03901100158691 40.69837271818651|-74.03824925422668 40.69837271818651|-74.03809905052185 40.69903971085914|-74.03771281242369 40.699340668780984|-74.03940796852112 40.70058515602143|-74.03948307037354 40.70052821920425|-74.03995513916016 40.70090237454063 To render a polygon with color and opacity: ## Render a circle and pushpins with custom labels -> [!Note] +> [!NOTE] > The procedure in this section requires an Azure Maps account Gen 1 (S1) or Gen 2 pricing tier. -You can modify the appearance of the pins by adding style modifiers. For example, to make pushpins and their labels larger or smaller, use the `sc` "scale style" modifier. This modifier takes a value that's greater than zero. A value of 1 is the standard scale. Values larger than 1 will make the pins larger, and values smaller than 1 will make them smaller. For more information about style modifiers, see [static image service path parameters](/rest/api/maps/render/getmapimage#uri-parameters). +You can modify the appearance of the pins by adding style modifiers. For example, to make pushpins and their labels larger or smaller, use the `sc` "scale style" modifier. This modifier takes a value that's greater than zero. A value of 1 is the standard scale. Values larger than 1 will make the pins larger, and values smaller than 1 will make them smaller. For more information about style modifiers, see [static image service path parameters]. To render a circle and pushpins with custom labels: To render a circle and pushpins with custom labels: 4. 
Select the **GET** HTTP method. -5. Enter the following URL to the [Render Service](/rest/api/maps/render/get-map-image) (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key): +5. Enter the following URL to the [Render service] (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key): ```HTTP https://atlas.microsoft.com/map/static/png?api-version=1.0&style=main&layer=basic&zoom=14&height=700&Width=700&center=-122.13230609893799,47.64599069048016&path=lcFF0000|lw2|la0.60|ra1000||-122.13230609893799 47.64599069048016&pins=default|la15+50|al0.66|lc003C62|co002D62||'Microsoft Corporate Headquarters'-122.14131832122801 47.64690503939462|'Microsoft Visitor Center'-122.136828 47.642224|'Microsoft Conference Center'-122.12552547454833 47.642940335653996|'Microsoft The Commons'-122.13687658309935 47.64452336193245&subscription-key={Your-Azure-Maps-Subscription-key} Similarly, you can change, add, and remove other style modifiers. ## Next steps -* Explore the [Azure Maps Get Map Image API](/rest/api/maps/render/getmapimage) documentation. -* To learn more about Azure Maps Data service, see the [service documentation](/rest/api/maps/data). +- Explore the [Azure Maps Get Map Image API] documentation. +- To learn more about Azure Maps Data service, see the [service documentation]. ++[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account +[Subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account +[Postman]: https://www.postman.com/ +[Data service]: /rest/api/maps/data +[static image service]: /rest/api/maps/render/getmapimage +[Data Upload]: /rest/api/maps/data-v2/upload +[Render service]: /rest/api/maps/render/get-map-image +[path parameter]: /rest/api/maps/render/getmapimage#uri-parameters +[Azure Maps Get Map Image API]: /rest/api/maps/render/getmapimage +[service documentation]: /rest/api/maps/data +[static image service path parameters]: /rest/api/maps/render/getmapimage#uri-parameters |
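For reference, the request pattern above can also be reproduced outside Postman. The following is a minimal Node.js sketch (an assumption, not from the article) that saves a static map with two labeled pushpins; the coordinates, labels, and pin style string are illustrative, borrowing the syntax from the article's examples:

```JavaScript
// Minimal sketch: download a static map image with two labeled pushpins.
// Assumes Node.js 18+ (global fetch) and a MAPS_SUBSCRIPTION_KEY variable.
const fs = require("node:fs/promises");

async function getStaticImage() {
  const params = new URLSearchParams({
    "api-version": "1.0",
    layer: "basic",
    style: "main",
    zoom: "12",
    center: "-73.98,40.77",
    // Pin syntax follows the article's examples: style modifiers, then
    // ||-separated labeled locations ('Label'longitude latitude).
    pins: "default|la15+50||'Stop 1'-73.985 40.758|'Stop 2'-73.967 40.781",
    "subscription-key": process.env.MAPS_SUBSCRIPTION_KEY,
  });
  const res = await fetch(`https://atlas.microsoft.com/map/static/png?${params}`);
  if (!res.ok) throw new Error(`Render request failed: ${res.status}`);
  await fs.writeFile("map.png", Buffer.from(await res.arrayBuffer()));
}

getStaticImage().catch(console.error);
```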
azure-maps | How To Request Weather Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-request-weather-data.md | In this example, you'll use the [Get Minute Forecast API](/rest/api/maps/weather ## Next steps > [!div class="nextstepaction"]-> [Azure Maps Weather services concepts](./weather-services-concepts.md) +> [Weather services in Azure Maps](./weather-services-concepts.md) > [!div class="nextstepaction"]-> [Azure Maps Weather services REST API](/rest/api/maps/weather) +> [Azure Maps Weather services](/rest/api/maps/weather) |
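To complement the article's Postman steps, a Get Minute Forecast request can also be scripted. This sketch is an assumption rather than the article's own code; the coordinate, `interval`, and `api-version` values are illustrative:

```JavaScript
// Minimal sketch: request a minute-by-minute forecast for one coordinate.
// Assumes Node.js 18+ (global fetch) and a MAPS_SUBSCRIPTION_KEY variable.
async function getMinuteForecast() {
  const url =
    "https://atlas.microsoft.com/weather/forecast/minute/json" +
    "?api-version=1.1&query=47.632,-122.138&interval=15" +
    `&subscription-key=${process.env.MAPS_SUBSCRIPTION_KEY}`;
  const res = await fetch(url);
  const body = await res.json();
  // A short precipitation summary plus per-interval details come back.
  console.log(body.summary?.longPhrase);
}

getMinuteForecast().catch(console.error);
```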
azure-maps | How To Search For Address | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-search-for-address.md | -The [Azure Maps Search Service](/rest/api/maps/search) is a set of RESTful APIs designed to help developers search addresses, places, and business listings by name, category, and other geographic information. In addition to supporting traditional geocoding, services can also reverse geocode addresses and cross streets based on latitudes and longitudes. Latitude and longitude values returned by the search can be used as parameters in other Azure Maps services, such as [Route](/rest/api/maps/route) and [Weather](/rest/api/maps/weather) services. +The [Search service] is a set of RESTful APIs designed to help developers search addresses, places, and business listings by name, category, and other geographic information. In addition to supporting traditional geocoding, services can also reverse geocode addresses and cross streets based on latitudes and longitudes. Latitude and longitude values returned by the search can be used as parameters in other Azure Maps services, such as [Route] and [Weather] services. In this article, you'll learn how to: -* Request latitude and longitude coordinates for an address (geocode address location) by using the [Search Address API](/rest/api/maps/search/getsearchaddress). -* Search for an address or Point of Interest (POI) using the [Fuzzy Search API](/rest/api/maps/search/getsearchfuzzy). -* Make a [Reverse Address Search](/rest/api/maps/search/getsearchaddressreverse) to translate coordinate location to street address. -* Translate coordinate location into a human understandable cross street by using [Search Address Reverse Cross Street API](/rest/api/maps/search/getsearchaddressreversecrossstreet). Most often, this is needed in tracking applications that receive a GPS feed from a device or asset, and wish to know where the coordinate is located. +* Request latitude and longitude coordinates for an address (geocode address location) by using [Search Address]. +* Search for an address or Point of Interest (POI) using [Fuzzy Search]. +* Use [Reverse Address Search] to translate coordinate location to street address. +* Translate coordinate location into a human understandable cross street using [Search Address Reverse Cross Street]. Most often, this is needed in tracking applications that receive a GPS feed from a device or asset, and wish to know where the coordinate is located. ## Prerequisites -1. [Make an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account) -2. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key. +1. An [Azure Maps account] +2. A [subscription key] -This tutorial uses the [Postman](https://www.postman.com/) application, but you may choose a different API development environment. +This tutorial uses the [Postman] application, but you may choose a different API development environment. ## Request latitude and longitude for an address (geocoding) -In this example, we'll use the Azure Maps [Get Search Address API](/rest/api/maps/search/getsearchaddress) to convert an address into latitude and longitude coordinates. This process is also called *geocoding*. In addition to returning the coordinates, the response will also return detailed address properties such as street, postal code, municipality, and country/region information. 
+In this example, we'll use [Get Search Address] to convert an address into latitude and longitude coordinates. This process is also called *geocoding*. In addition to returning the coordinates, the response will also return detailed address properties such as street, postal code, municipality, and country/region information. >[!TIP]->If you have a set of addresses to geocode, you can use the [Post Search Address Batch API](/rest/api/maps/search/postsearchaddressbatch) to send a batch of queries in a single API call. +>If you have a set of addresses to geocode, you can use [Post Search Address Batch] to send a batch of queries in a single request. 1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. In this example, we'll use the Azure Maps [Get Search Address API](/rest/api/map 6. Click the **Send** button. You can now see that the response includes responses from multiple countries. To geobias results to the relevant area for your users, always add as many location details as possible to the request. -## Using Fuzzy Search API +## Fuzzy Search -The Azure Maps [Fuzzy Search API](/rest/api/maps/search/getsearchfuzzy) supports standard single line and free-form searches. We recommend that you use the Azure Maps Search Fuzzy API when you don't know your user input type for a search request. The query input can be a full or partial address. It can also be a Point of Interest (POI) token, like a name of POI, POI category or name of brand. Furthermore, to improve the relevance of your search results, the query results can be constrained by a coordinate location and radius, or by defining a bounding box. +[Fuzzy Search] supports standard single line and free-form searches. We recommend that you use the Azure Maps Search Fuzzy API when you don't know your user input type for a search request. The query input can be a full or partial address. It can also be a Point of Interest (POI) token, like a name of POI, POI category or name of brand. Furthermore, to improve the relevance of your search results, the query results can be constrained by a coordinate location and radius, or by defining a bounding box. >[!TIP]->Most Search queries default to maxFuzzyLevel=1 to gain performance and reduce unusual results. You can adjust fuzziness levels by using the `maxFuzzyLevel` or `minFuzzyLevel` parameters. For more information on `maxFuzzyLevel` and a complete list of all optional parameters, see [Fuzzy Search URI Parameters](/rest/api/maps/search/getsearchfuzzy#uri-parameters) +>Most Search queries default to maxFuzzyLevel=1 to gain performance and reduce unusual results. You can adjust fuzziness levels by using the `maxFuzzyLevel` or `minFuzzyLevel` parameters. For more information on `maxFuzzyLevel` and a complete list of all optional parameters, see [Fuzzy Search URI Parameters]. ### Search for an address using Fuzzy Search In this example, we'll use Fuzzy Search to search the entire world for `pizza`. Then, we'll show you how to search over the scope of a specific country. Finally, we'll show you how to use a coordinate location and radius to scope a search over a specific area, and limit the number of returned results. >[!IMPORTANT]->To geobias results to the relevant area for your users, always add as many location details as possible. To learn more, see [Best Practices for Search](how-to-use-best-practices-for-search.md#geobiased-search-results). 
+>To geobias results to the relevant area for your users, always add as many location details as possible. To learn more, see [Best Practices for Search]. 1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. In this example, we'll use Fuzzy Search to search the entire world for `pizza`. ``` >[!NOTE]- >The _json_ attribute in the URL path determines the response format. This article uses json for ease of use and readability. To find other supported response formats, see the `format` parameter definition in the [URI Parameter reference documentation](/rest/api/maps/search/getsearchfuzzy#uri-parameters). + >The _json_ attribute in the URL path determines the response format. This article uses json for ease of use and readability. To find other supported response formats, see the `format` parameter definition in the [URI Parameter reference] documentation. 3. Click **Send** and review the response body. - The ambiguous query string for "pizza" returned 10 [point of interest result](/rest/api/maps/search/getsearchpoi#searchpoiresponse) (POI) in both the "pizza" and "restaurant" categories. Each result includes details such as street address, latitude and longitude values, view port, and entry points for the location. The results are now varied for this query, and are not tied to any reference location. + The ambiguous query string for "pizza" returned 10 [point of interest result] (POI) entries in both the "pizza" and "restaurant" categories. Each result includes details such as street address, latitude and longitude values, view port, and entry points for the location. The results are now varied for this query, and are not tied to any reference location. - In the next step, we'll use the `countrySet` parameter to specify only the countries/regions for which your application needs coverage. For a complete list of supported countries/regions, see [Search Coverage](./geocoding-coverage.md). + In the next step, we'll use the `countrySet` parameter to specify only the countries/regions for which your application needs coverage. For a complete list of supported countries/regions, see [Search Coverage]. 4. The default behavior is to search the entire world, potentially returning unnecessary results. Next, we'll search for pizza only in the United States. Add the `countrySet` key to the **Params** section, and set its value to `US`. Setting the `countrySet` key to `US` will bound the results to the United States. In this example, we'll use Fuzzy Search to search the entire world for `pizza`. ## Search for a street address using Reverse Address Search -The Azure Maps [Get Search Address Reverse API](/rest/api/maps/search/getsearchaddressreverse) translates coordinates into human readable street addresses. This API is often used for applications that consume GPS feeds and want to discover addresses at specific coordinate points. +[Get Search Address Reverse] translates coordinates into human readable street addresses. This API is often used for applications that consume GPS feeds and want to discover addresses at specific coordinate points. >[!IMPORTANT]->To geobias results to the relevant area for your users, always add as many location details as possible. 
To learn more, see [Best Practices for Search]. >[!TIP]->If you have a set of coordinate locations to reverse geocode, you can use [Post Search Address Reverse Batch API](/rest/api/maps/search/postsearchaddressreversebatch) to send a batch of queries in a single API call. +>If you have a set of coordinate locations to reverse geocode, you can use [Post Search Address Reverse Batch] to send a batch of queries in a single request. -In this example, we'll be making reverse searches using a few of the optional parameters that are available. For the full list of optional parameters, see [Reverse Search Parameters](/rest/api/maps/search/getsearchaddressreverse#uri-parameters). +In this example, we'll be making reverse searches using a few of the optional parameters that are available. For the full list of optional parameters, see [Reverse Search Parameters]. 1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. In this example, we'll be making reverse searches using a few of the optional pa |--||| | number | 1 |The response may include the side of the street (Left/Right) and also an offset position for the number.| | returnSpeedLimit | true | Returns the speed limit at the address.|- | returnRoadUse | true | Returns road use types at the address. For all possible road use types, see [Road Use Types](/rest/api/maps/search/getsearchaddressreverse#uri-parameters).| - | returnMatchType | true| Returns the type of match. For all possible values, see [Reverse Address Search Results](/rest/api/maps/search/getsearchaddressreverse#searchaddressreverseresult) + | returnRoadUse | true | Returns road use types at the address. For all possible road use types, see [Road Use Types].| + | returnMatchType | true| Returns the type of match. For all possible values, see [Reverse Address Search Results]. :::image type="content" source="./media/how-to-search-for-address/search-reverse.png" alt-text="Search reverse."::: 5. Click **Send**, and review the response body. -6. Next, we'll add the `entityType` key, and set its value to `Municipality`. The `entityType` key will override the `returnMatchType` key in the previous step. We'll also need to remove `returnSpeedLimit` and `returnRoadUse` since we're requesting information about the municipality. For all possible entity types, see [Entity Types](/rest/api/maps/search/getsearchaddressreverse#entitytype). +6. Next, we'll add the `entityType` key, and set its value to `Municipality`. The `entityType` key will override the `returnMatchType` key in the previous step. We'll also need to remove `returnSpeedLimit` and `returnRoadUse` since we're requesting information about the municipality. For all possible entity types, see [Entity Types]. :::image type="content" source="./media/how-to-search-for-address/search-reverse-entity-type.png" alt-text="Search reverse entityType."::: -7. Click **Send**. Compare the results to the results returned in step 5. Because the requested entity type is now `municipality`, the response does not include street address information. Also, the returned `geometryId` can be used to request boundary polygon through Azure Maps Get [Search Polygon API](/rest/api/maps/search/getsearchpolygon). +7. Click **Send**. Compare the results to the results returned in step 5. Because the requested entity type is now `municipality`, the response does not include street address information. 
Also, the returned `geometryId` can be used to request a boundary polygon through the Azure Maps [Search Polygon API]. >[!TIP]->To get more information on these parameters, as well as to learn about others, see the [Reverse Search Parameters section](/rest/api/maps/search/getsearchaddressreverse#uri-parameters). +>To get more information on these parameters, as well as to learn about others, see [Reverse Search Parameters]. ## Search for cross street using Reverse Address Cross Street Search In this example, we'll search for a cross street based on the coordinates of an ## Next steps > [!div class="nextstepaction"]-> [Azure Maps Search Service REST API](/rest/api/maps/search) +> [Azure Maps Search service](/rest/api/maps/search) > [!div class="nextstepaction"]-> [Azure Maps Search Service Best Practices](how-to-use-best-practices-for-search.md) +> [Best practices for Azure Maps Search service](how-to-use-best-practices-for-search.md) ++[Search service]: /rest/api/maps/search +[Route]: /rest/api/maps/route +[Weather]: /rest/api/maps/weather +[Search Address]: /rest/api/maps/search/getsearchaddress +[Fuzzy Search]: /rest/api/maps/search/getsearchfuzzy +[Reverse Address Search]: /rest/api/maps/search/getsearchaddressreverse +[Search Address Reverse Cross Street]: /rest/api/maps/search/getsearchaddressreversecrossstreet +[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account +[subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account +[Postman]: https://www.postman.com/ +[Get Search Address]: /rest/api/maps/search/getsearchaddress +[Post Search Address Batch]: /rest/api/maps/search/postsearchaddressbatch +[Fuzzy Search URI Parameters]: /rest/api/maps/search/getsearchfuzzy#uri-parameters +[Get Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse +[point of interest result]: /rest/api/maps/search/getsearchpoi#searchpoiresponse +[Post Search Address Reverse Batch]: /rest/api/maps/search/postsearchaddressreversebatch +[Reverse Search Parameters]: /rest/api/maps/search/getsearchaddressreverse#uri-parameters +[Best Practices for Search]: how-to-use-best-practices-for-search.md#geobiased-search-results +[Road Use Types]: /rest/api/maps/search/getsearchaddressreverse#uri-parameters +[Reverse Address Search Results]: /rest/api/maps/search/getsearchaddressreverse#searchaddressreverseresult +[URI Parameter reference]: /rest/api/maps/search/getsearchfuzzy#uri-parameters +[Search Coverage]: geocoding-coverage.md +[Search Polygon API]: /rest/api/maps/search/getsearchpolygon +[Entity Types]: /rest/api/maps/search/getsearchaddressreverse#entitytype |
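As a worked example of the reverse search described above, the following Node.js sketch (an illustration, not from the article) resolves one coordinate to a street address; it assumes Node.js 18+ for the global `fetch`, a `MAPS_SUBSCRIPTION_KEY` environment variable, and an illustrative coordinate:

```JavaScript
// Minimal sketch: reverse geocode a coordinate into a street address.
// Assumes Node.js 18+ (global fetch); the query coordinate is illustrative.
async function reverseGeocode() {
  const url =
    "https://atlas.microsoft.com/search/address/reverse/json" +
    "?api-version=1.0&query=47.591180,-122.332700&number=1" +
    `&subscription-key=${process.env.MAPS_SUBSCRIPTION_KEY}`;
  const res = await fetch(url);
  const body = await res.json();
  // The first match carries the human-readable address.
  console.log(body.addresses?.[0]?.address?.freeformAddress);
}

reverseGeocode().catch(console.error);
```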
azure-maps | How To Secure Sas App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-sas-app.md | az rest --method GET --url 'https://us.atlas.microsoft.com/search/address/json?a You can run requests to Azure Maps APIs from most clients, like C#, Java, or JavaScript. [Postman](https://learning.postman.com/docs/sending-requests/generate-code-snippets) converts an API request into a basic client code snippet in almost any programming language or framework you choose. You can use this generated code snippet in your front-end applications. -The following small JavaScript code example shows how you could use your SAS token with the JavaScript [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#supplying_request_options) to get and return Azure Maps information. The example uses the Azure Maps [Get Search Address](/rest/api/maps/search/get-search-address) API version 1.0. Supply your own value for `<your SAS token>`. +The following small JavaScript code example shows how you could use your SAS token with the JavaScript [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#supplying_request_options) to get and return Azure Maps information. The example uses [Get Search Address](/rest/api/maps/search/get-search-address) API version 1.0. Supply your own value for `<your SAS token>`. For this sample to work, make sure to run it from within the same origin as the `allowedOrigins` for the API call. For example, if you provide `https://contoso.com` as the `allowedOrigins` in the API call, the HTML page that hosts the JavaScript script should be `https://contoso.com`. |
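The diff above references a JavaScript Fetch example without reproducing it. Here's a minimal sketch of what such a call could look like, assuming the `jwt-sas` authorization scheme Azure Maps uses for SAS tokens; the token value and the query address are placeholders, and the page must be served from one of the token's `allowedOrigins`:

```JavaScript
// Minimal sketch: call Get Search Address (v1.0) with a SAS token instead
// of a subscription key. The token and query values are placeholders.
const sasToken = "<your SAS token>";

async function searchAddress(query) {
  const url =
    "https://atlas.microsoft.com/search/address/json" +
    `?api-version=1.0&query=${encodeURIComponent(query)}`;
  const res = await fetch(url, {
    // Assumed scheme: SAS tokens are sent as "jwt-sas" authorization.
    headers: { Authorization: `jwt-sas ${sasToken}` },
  });
  return res.json();
}

searchAddress("400 Broad St, Seattle, WA").then(console.log).catch(console.error);
```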
azure-maps | How To Show Attribution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-show-attribution.md | -When using the [Azure Maps Render service V2](/rest/api/maps/render-v2), either as a basemap or layer, you're required to display the appropriate data provider copyright attribution on the map. This information should be displayed in the lower right-hand corner of the map. +When using the [Azure Maps Render service V2], either as a basemap or layer, you're required to display the appropriate data provider copyright attribution on the map. This information should be displayed in the lower right-hand corner of the map. :::image type="content" source="./media/how-to-show-attribution/attribution-road.png" border="false" alt-text="The above image is an example of a map from the Render service V2 showing the copyright attribution when using the road style"::: The above image is an example of a map from the Render service V2, displaying th ## The Get Map Attribution API -The [Get Map Attribution API](/rest/api/maps/render-v2/get-map-attribution) enables you to request map copyright attribution information so that you can display in on the map within your applications. +The [Get Map Attribution API] enables you to request map copyright attribution information so that you can display it on the map within your applications. ### When to use the Get Map Attribution API The map copyright attribution information must be displayed on the map in any applications that use the Render V2 API, including web and mobile applications. -The attribution is automatically displayed and updated on the map When using any of the Azure Maps SDKs. This includes the [Web SDK](how-to-use-map-control.md), [Android SDK](how-to-use-android-map-control-library.md) and the [iOS SDK](how-to-use-ios-map-control-library.md). +The attribution is automatically displayed and updated on the map when using any of the Azure Maps SDKs. This includes the [Web SDK], [Android SDK] and the [iOS SDK]. When using map tiles from the Render service in a third-party map, you must display and update the copyright attribution information on the map. You'll need the following information to run the `attribution` command: | -- | | -- | | api-version | string | Version number of Azure Maps API. Current version is 2.1 | | bounds | array | A string that represents the rectangular area of a bounding box. The bounds parameter is defined by the four bounding box coordinates. The first 2 are the WGS84 longitude and latitude defining the southwest corner and the last 2 are the WGS84 longitude and latitude defining the northeast corner. The string is presented in the following format: [SouthwestCorner_Longitude, SouthwestCorner_Latitude, NortheastCorner_Longitude, NortheastCorner_Latitude]. |-| tilesetId | TilesetID | A tileset is a collection of raster or vector data broken up into a uniform grid of square tiles at preset zoom levels. Every tileset has a tilesetId to use when making requests. The tilesetId for tilesets created using Azure Maps Creator are generated through the [Tileset Create API](/rest/api/maps/v2/tileset/create). There are ready-to-use tilesets supplied by Azure Maps, such as `microsoft.base.road`, `microsoft.base.hybrid` and `microsoft.weather.radar.main`, a complete list can be found the [Get Map Attribution](/rest/api/maps/render-v2/get-map-attribution#tilesetid) REST API documentation. | -| zoom | integer | Zoom level for the selected tile. The valid range depends on the tile, see the [TilesetID](/rest/api/maps/render-v2/get-map-attribution#tilesetid) table for valid values for a specific tileset. For more information, see the [Zoom levels and tile grid](zoom-levels-and-tile-grid.md) article. | -| subscription-key | string | One of the Azure Maps keys provided from an Azure Map Account. For more information, see the [Authentication with Azure Maps](azure-maps-authentication.md) article. | +| tilesetId | TilesetID | A tileset is a collection of raster or vector data broken up into a uniform grid of square tiles at preset zoom levels. Every tileset has a tilesetId to use when making requests. The tilesetId for tilesets created using Azure Maps Creator are generated through the [Tileset Create API]. There are ready-to-use tilesets supplied by Azure Maps, such as `microsoft.base.road`, `microsoft.base.hybrid` and `microsoft.weather.radar.main`; a complete list can be found in the [Get Map Attribution] REST API documentation. | +| zoom | integer | Zoom level for the selected tile. The valid range depends on the tile; see the [TilesetID] table for valid values for a specific tileset. For more information, see the [Zoom levels and tile grid] article. | +| subscription-key | string | One of the Azure Maps keys provided from an Azure Maps account. For more information, see the [Authentication with Azure Maps] article. | Run the following GET request to get the corresponding copyright attribution to display on the map: https://atlas.microsoft.com/map/attribution?subscription-key={Your-Azure-Maps-Su ## Additional information -* For more information, see the [Azure Maps Render service V2](/rest/api/maps/render-v2) documentation. +* For more information, see the [Azure Maps Render service V2] documentation. ++[Azure Maps Render service V2]: /rest/api/maps/render-v2 +[Get Map Attribution API]: /rest/api/maps/render-v2/get-map-attribution +[Web SDK]: how-to-use-map-control.md +[Android SDK]: how-to-use-android-map-control-library.md +[iOS SDK]: how-to-use-ios-map-control-library.md +[Tileset Create API]: /rest/api/maps/v2/tileset/create +[Get Map Attribution]: /rest/api/maps/render-v2/get-map-attribution#tilesetid +[TilesetID]: /rest/api/maps/render-v2/get-map-attribution#tilesetid +[Zoom levels and tile grid]: zoom-levels-and-tile-grid.md +[Authentication with Azure Maps]: azure-maps-authentication.md |
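Putting the parameter table above into practice, here's a minimal Node.js sketch of the attribution request; the tileset, zoom level, Seattle-area bounds, and the `copyrights` field read from the response are illustrative assumptions:

```JavaScript
// Minimal sketch: request copyright attribution for a tileset and bounds.
// Assumes Node.js 18+ (global fetch) and a MAPS_SUBSCRIPTION_KEY variable.
async function getAttribution() {
  const params = new URLSearchParams({
    "api-version": "2.1",
    tilesetId: "microsoft.base.road",
    zoom: "6",
    // [SW longitude, SW latitude, NE longitude, NE latitude]
    bounds: "-122.414162,47.579490,-122.247157,47.668372",
    "subscription-key": process.env.MAPS_SUBSCRIPTION_KEY,
  });
  const res = await fetch(`https://atlas.microsoft.com/map/attribution?${params}`);
  const body = await res.json();
  console.log(body.copyrights); // attribution strings to show on the map
}

getAttribution().catch(console.error);
```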
azure-maps | How To Use Best Practices For Routing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-best-practices-for-routing.md | Title: Best practices for Azure Maps Route Service in Microsoft Azure Maps -description: Learn how to route vehicles by using Route Service from Microsoft Azure Maps. + Title: Best practices for Azure Maps Route service in Microsoft Azure Maps +description: Learn how to route vehicles by using Route service from Microsoft Azure Maps. Last updated 10/28/2021-The Route Directions and Route Matrix APIs in Azure Maps [Route Service](/rest/api/maps/route) can be used to calculate the estimated arrival times (ETAs) for each requested route. Route APIs consider factors such as real-time traffic information and historic traffic data, like the typical road speeds on the requested day of the week and time of day. The APIs return the shortest or fastest routes available to multiple destinations at a time in sequence or in optimized order, based on time or distance. Users can also request specialized routes and details for walkers, bicyclists, and commercial vehicles like trucks. In this article, we'll share the best practices to call Azure Maps [Route Service](/rest/api/maps/route), and you'll learn how-to: +The Route Directions and Route Matrix APIs in Azure Maps [Route service] can be used to calculate the estimated arrival times (ETAs) for each requested route. Route APIs consider factors such as real-time traffic information and historic traffic data, like the typical road speeds on the requested day of the week and time of day. The APIs return the shortest or fastest routes available to multiple destinations at a time in sequence or in optimized order, based on time or distance. Users can also request specialized routes and details for walkers, bicyclists, and commercial vehicles like trucks. In this article, we'll share the best practices to call Azure Maps [Route service], and you'll learn how to: - * Choose between the Route Directions APIs and the Matrix Routing API - * Request historic and predicted travel times, based on real-time and historical traffic data - * Request route details, like time and distance, for the entire route and each leg of the route - * Request route for a commercial vehicle, like a truck - * Request traffic information along a route, like jams and toll information - * Request a route that consists of one or more stops (waypoints) - * Optimize a route of one or more stops to obtain the best order to visit each stop (waypoint) - * Optimize alternative routes using supporting points. For example, offer alternative routes that pass an electric vehicle charging station. - * Use the [Route Service](/rest/api/maps/route) with the Azure Maps Web SDK +* Choose between the Route Directions APIs and the Matrix Routing API +* Request historic and predicted travel times, based on real-time and historical traffic data +* Request route details, like time and distance, for the entire route and each leg of the route +* Request route for a commercial vehicle, like a truck +* Request traffic information along a route, like jams and toll information +* Request a route that consists of one or more stops (waypoints) +* Optimize a route of one or more stops to obtain the best order to visit each stop (waypoint) +* Optimize alternative routes using supporting points. For example, offer alternative routes that pass an electric vehicle charging station. 
+* Use the [Route service] with the Azure Maps Web SDK ## Prerequisites -1. [Make an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account) -2. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key. +1. An [Azure Maps account] +2. A [subscription key] -For more information about the coverage of the Route Service, see the [Routing Coverage](routing-coverage.md). +For more information about the coverage of the Route service, see the [Routing Coverage]. -This article uses the [Postman app](https://www.postman.com/downloads/) to build REST calls, but you can choose any API development environment. +This article uses the [Postman] application to build REST calls, but you can choose any API development environment. ## Choose between Route Directions and Matrix Routing https://atlas.microsoft.com/route/directions/json?subscription-key={Your-Azure-M The response below is for a truck carrying a class 9 hazardous material, which is less dangerous than a class 1 hazardous material. When you expand the `guidance` element to read the directions, you'll notice that the directions aren't the same. There are more route instructions for the truck carrying class 1 hazardous material. --  -- ## Request traffic information along a route With the Azure Maps Route Direction APIs, developers can request details for each section type by including the `sectionType` parameter in the request. For example, you can request the speed information for each traffic jam segment. Refer to the [list of values for the sectionType key](/rest/api/maps/route/getroutedirections#sectiontype) to learn about the various details that you can request. The response contains the sections that are suitable for traffic along the given  -This option can be used to color the sections when rendering the map, as in the image below: +This option can be used to color the sections when rendering the map, as in the image below:  This option can be used to color the sections when rendering the map, as in the Azure Maps currently provides two forms of route optimizations: -* Optimizations based on the requested route type, without changing the order of waypoints. You can find the [supported route types here](/rest/api/maps/route/postroutedirections#routetype) +* Optimizations based on the requested route type, without changing the order of waypoints. For more information, see [RouteType]. * Traveling salesman optimization, which changes the order of the waypoints to obtain the best order to visit each stop The optimal route has the following waypoint order: 0, 5, 1, 2, 4, 3, and 6. You might have situations where you want to reconstruct a route to calculate zero or more alternative routes for a reference route. For example, you may want to show customers alternative routes that pass your retail store. In this case, you need to bias a location using supporting points. Here are the steps to bias a location: 1. Calculate a route as-is and get the path from the route response-2. Use the route path to find the desired locations along or near the route path. For example, you can use Azure Maps [Point of Interest API](/rest/api/maps/search/getsearchpoi) or query your own data in your database. +2. Use the route path to find the desired locations along or near the route path. For example, you can use the [Point of Interest] request or query your own data in your database. 3. 
Order the locations based on the distance from the start of the route-4. Add these locations as supporting points in a new route request to the [Post Route Directions API](/rest/api/maps/route/postroutedirections). To learn more about the supporting points, see the [Post Route Directions API documentation](/rest/api/maps/route/postroutedirections#supportingpoints). +4. Add these locations as supporting points in a new route request to [Post Route Directions]. To learn more about the supporting points, see the [Post Route Directions API documentation]. -When calling the [Post Route Directions API](/rest/api/maps/route/postroutedirections), you can set the minimum deviation time or the distance constraints, along with the supporting points. Use these parameters if you want to offer alternative routes, but you also want to limit the travel time. When these constraints are used, the alternative routes will follow the reference route from the origin point for the given time or distance. In other words, the other routes diverge from the reference route per the given constraints. +When calling [Post Route Directions], you can set the minimum deviation time or the distance constraints, along with the supporting points. Use these parameters if you want to offer alternative routes, but you also want to limit the travel time. When these constraints are used, the alternative routes will follow the reference route from the origin point for the given time or distance. In other words, the other routes diverge from the reference route per the given constraints. The image below is an example of rendering alternative routes with specified deviation limits for the time and the distance. The image below is an example of rendering alternative routes with specified dev ## Use the Routing service in a web app -The Azure Maps Web SDK provides a [Service module](/javascript/api/azure-maps-rest/). This module is a helper library that makes it easy to use the Azure Maps REST APIs in web or Node.js applications, using JavaScript or TypeScript. The Service module can be used to render the returned routes on the map. The module automatically determines which API to use with GET and POST requests. +The Azure Maps Web SDK provides a [Service module]. This module is a helper library that makes it easy to use the Azure Maps REST APIs in web or Node.js applications, using JavaScript or TypeScript. The Service module can be used to render the returned routes on the map. The module automatically determines which API to use with GET and POST requests. ## Next steps To learn more, please see: > [!div class="nextstepaction"] > [Azure Maps npm Package](https://www.npmjs.com/package/azure-maps-rest )++[Route service]: /rest/api/maps/route +[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account +[subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account +[Routing Coverage]: routing-coverage.md +[Postman]: https://www.postman.com/downloads/ +[RouteType]: /rest/api/maps/route/postroutedirections#routetype +[Point of Interest]: /rest/api/maps/search/getsearchpoi +[Post Route Directions]: /rest/api/maps/route/postroutedirections +[Post Route Directions API documentation]: /rest/api/maps/route/postroutedirections#supportingpoints +[Service module]: /javascript/api/azure-maps-rest/ |
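To make the GET request patterns from the routing article above concrete, the following is a minimal JavaScript sketch of a multi-stop Route Directions call with waypoint optimization and traffic sections. The subscription key and coordinates are placeholders; see the [Route service] reference for the full parameter list and response schema.

```javascript
// Request an optimized multi-stop route with traffic (jam) sections.
const key = "{Your-Azure-Maps-Subscription-key}"; // placeholder
const stops = "47.606544,-122.336502:47.609683,-122.336652:47.611002,-122.334749";

const url = "https://atlas.microsoft.com/route/directions/json" +
  "?api-version=1.0" +
  `&subscription-key=${key}` +
  `&query=${stops}` +         // lat,lon pairs separated by colons
  "&computeBestOrder=true" +  // traveling salesman waypoint optimization
  "&sectionType=traffic";     // return traffic sections along the route

fetch(url)
  .then(response => response.json())
  .then(data => {
    const summary = data.routes[0].summary;
    console.log("ETA (seconds):", summary.travelTimeInSeconds);
    console.log("Length (meters):", summary.lengthInMeters);
    // With computeBestOrder=true, the response maps provided to optimized indexes.
    console.log("Optimized waypoint order:", data.optimizedWaypoints);
  });
```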
azure-maps | How To Use Best Practices For Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-best-practices-for-search.md | Title: Best practices for Azure Maps Search Service | Microsoft Azure Maps -description: Learn how to apply the best practices when using the Search Service from Microsoft Azure Maps. + Title: Best practices for Azure Maps Search service ++description: Learn how to apply the best practices when using the Search service from Microsoft Azure Maps. Last updated 10/28/2021-# Best practices for Azure Maps Search Service +# Best practices for Azure Maps Search service -Azure Maps [Search Service](/rest/api/maps/search) includes APIs that offer various capabilities to help developers to search addresses, places, business listings by name or category, and other geographic information. For example,[Fuzzy Search API](/rest/api/maps/search/getsearchfuzzy) allows users to search for an address or Point of Interest (POI). +Azure Maps [Search service] includes APIs that offer various capabilities to help developers search addresses, places, business listings by name or category, and other geographic information. For example, [Search Fuzzy] allows users to search for an address or Point of Interest (POI). -This article explains how to apply sound practices when you call data from Azure Maps Search Service. You'll learn how to: +This article explains how to apply sound practices when you call data from Azure Maps Search service. You'll learn how to: > [!div class="checklist"] > > * Build queries to return relevant matches This article explains how to apply sound practices when you call data from Azure ## Prerequisites -1. [Make an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account) -2. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key. +1. An [Azure Maps account] +2. A [subscription key] -This article uses the [Postman app](https://www.postman.com/downloads/) to build REST calls, but you can choose any API development environment. +This article uses the [Postman] application to build REST calls, but you can choose any API development environment. ## Best practices to geocode addresses -When you search for a full or partial address by using Azure Maps Search Service, the API reads keywords from your search query. Then it returns the longitude and latitude coordinates of the address. This process is called *geocoding*. +When you search for a full or partial address by using Azure Maps Search service, the API reads keywords from your search query. Then it returns the longitude and latitude coordinates of the address. This process is called *geocoding*. -The ability to geocode in a country/region depends on the availability of road data and the precision of the geocoding service. For more information about Azure Maps geocoding capabilities by country or region, see [Geocoding coverage](./geocoding-coverage.md). +The ability to geocode in a country/region depends on the availability of road data and the precision of the geocoding service. For more information about Azure Maps geocoding capabilities by country or region, see [Geocoding coverage]. 
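As a rough illustration of geocoding as described above, the following JavaScript sketch sends a free-form address to the Search Address endpoint and reads the coordinates of the best match; the key and address are placeholders.

```javascript
// Geocode a free-form address and log the best match's coordinates.
const key = "{Your-Azure-Maps-Subscription-key}"; // placeholder
const query = encodeURIComponent("400 Broad St, Seattle, WA 98109");

fetch(`https://atlas.microsoft.com/search/address/json?api-version=1.0&subscription-key=${key}&query=${query}&countrySet=US`)
  .then(response => response.json())
  .then(data => {
    const best = data.results[0]; // results are ordered by match score
    console.log(best.position.lat, best.position.lon);
  });
```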
### Limit search results To geobias results to the relevant area for your user, always add as many locati #### Fuzzy search parameters -We recommend that you use the Azure Maps [Search Fuzzy API](/rest/api/maps/search/getsearchfuzzy) when you don't know your user inputs for a search query. For example, input from the user could be an address or the type of Point of Interest (POI), like *shopping mall*. The API combines POI searching and geocoding into a canonical *single-line search*: +We recommend that you use [Search Fuzzy] when you don't know what your users will input for a search query. For example, input from the user could be an address or the type of Point of Interest (POI), like *shopping mall*. The API combines POI searching and geocoding into a canonical *single-line search*: * The `minFuzzyLevel` and `maxFuzzyLevel` parameters help return relevant matches even when query parameters don't exactly match the information that the user wants. To maximize performance and reduce unusual results, set search queries to defaults of `minFuzzyLevel=1` and `maxFuzzyLevel=2`. We recommend that you use the Azure Maps [Search Fuzzy API](/rest/api/maps/searc * `Addr` - **Address ranges**: Address points that are interpolated from the beginning and end of the street. These points are represented as address ranges. * `Geo` - **Geographies**: Administrative divisions of land. A geography can be a country/region, state, or city, for example. * `PAD` - **Point addresses**: Addresses that include a street name and number. Point addresses can be found in an index. An example is *Soquel Dr 2501*. A point address provides the highest level of accuracy available for addresses. -* `POI` - **Points of interest**: Points on a map that are considered to be worth attention or that might be interesting. The [Search Address API](/rest/api/maps/search/getsearchaddress) doesn't return POIs. +* `POI` - **Points of interest**: Points on a map that are considered to be worth attention or that might be interesting. [Search Address] doesn't return POIs. * `Str` - **Streets**: Streets on the map. * `XStr` - **Cross streets or intersections**: Junctions or places where two streets intersect. We recommend that you use the Azure Maps [Search Fuzzy API](/rest/api/maps/searc ### Reverse-geocode and filter for a geography entity type -When you do a reverse-geocode search in the [Search Address Reverse API](/rest/api/maps/search/getsearchaddressreverse), the service can return polygons for administrative areas. For example, you might want to fetch the area polygon for a city. To narrow the search to specific geography entity types, include the `entityType` parameter in your requests. +When you do a reverse-geocode search using [Search Address Reverse], the service can return polygons for administrative areas. For example, you might want to fetch the area polygon for a city. To narrow the search to specific geography entity types, include the `entityType` parameter in your requests. -The resulting response contains the geography ID and the entity type that was matched. If you provide more than one entity, then the endpoint returns the *smallest entity available*. 
You can use the returned geometry ID to get the geography's geometry through the [Search Polygon service]. #### Sample request https://atlas.microsoft.com/search/address/reverse/json?api-version=1.0&subscrip ### Set the results language -Use the `language` parameter to set the language for the returned search results. If the request doesn't set the language, then by default Search Service uses the most common language in the country or region. When no data is available in the specified language, the default language is used. +Use the `language` parameter to set the language for the returned search results. If the request doesn't set the language, then by default Search service uses the most common language in the country or region. When no data is available in the specified language, the default language is used. -For more information, see [Azure Maps supported languages](./supported-languages.md). +For more information, see [Azure Maps supported languages]. ### Use predictive mode (automatic suggestions) To improve the relevance of the results and the information in the response, a P In a request, you can submit a comma-separated list of brand names. Use the list to restrict the results to specific brands by setting the `brandSet` parameter. In your list, item order doesn't matter. When you provide multiple brand lists, the results that are returned must belong to at least one of your lists. -To explore brand searching, let's make a [POI category search](/rest/api/maps/search/getsearchpoicategory) request. In the following example, we look for gas stations near the Microsoft campus in Redmond, Washington. The response shows brand information for each POI that was returned. +To explore brand searching, let's make a [POI category search] request. In the following example, we look for gas stations near the Microsoft campus in Redmond, Washington. The response shows brand information for each POI that was returned. #### Sample query https://atlas.microsoft.com/search/poi/json?subscription-key={Your-Azure-Maps-Su ### Nearby search -To retrieve POI results around a specific location, you can try using the [Search Nearby API](/rest/api/maps/search/getsearchnearby). The endpoint returns only POI results. It doesn't take in a search query parameter. +To retrieve POI results around a specific location, you can try using [Search Nearby]. The endpoint returns only POI results. It doesn't take in a search query parameter. To limit the results, we recommend that you set the radius. ## Understanding the responses -Let's find an address in Seattle by making an address-search request to the Azure Maps Search Service. In the following request URL, we set the `countrySet` parameter to `US` to search for the address in the USA. +Let's find an address in Seattle by making an address-search request to the Azure Maps Search service. In the following request URL, we set the `countrySet` parameter to `US` to search for the address in the USA. ### Sample query Let's look at the response structure. In the response that follows, the types of Notice that the address search doesn't return POIs. -The `Score` parameter for each response object indicates how the matching score relates to the scores of other objects in the same response. For more information about response object parameters, see [Get Search Address](/rest/api/maps/search/getsearchaddress). +The `Score` parameter for each response object indicates how the matching score relates to the scores of other objects in the same response. 
For more information about response object parameters, see [Get Search Address]. ```JSON { The `Score` parameter for each response object indicates how the matching score ### Geometry -A response type of *Geometry* can include the geometry ID that's returned in the `dataSources` object under `geometry` and `id`. For example, you can use the [Search Polygon service](/rest/api/maps/search/getsearchpolygon) to request the geometry data in a GeoJSON format. By using this format, you can get a city or airport outline for a set of entities. You can then use this boundary data to [Set up a geofence](./tutorial-geofence.md) or [Search POIs inside the geometry](/rest/api/maps/search/postsearchinsidegeometry). +A response type of *Geometry* can include the geometry ID that's returned in the `dataSources` object under `geometry` and `id`. For example, you can use the [Search Polygon service] to request the geometry data in a GeoJSON format. By using this format, you can get a city or airport outline for a set of entities. You can then use this boundary data to [Set up a geofence] or [Search POIs inside the geometry]. -Responses for the [Search Address](/rest/api/maps/search/getsearchaddress) API or the [Search Fuzzy](/rest/api/maps/search/getsearchfuzzy) API can include the geometry ID that's returned in the `dataSources` object under `geometry` and `id`: +Responses for [Search Address] or [Search Fuzzy] can include the geometry ID that's returned in the `dataSources` object under `geometry` and `id`: ```JSON "dataSources": { Responses for the [Search Address](/rest/api/maps/search/getsearchaddress) API o To learn more, please see: > [!div class="nextstepaction"]-> [How to build Azure Maps Search Service requests](./how-to-search-for-address.md) +> [How to build Azure Maps Search service requests](./how-to-search-for-address.md) > [!div class="nextstepaction"]-> [Search Service API documentation](/rest/api/maps/search) +> [Search service API documentation](/rest/api/maps/search) ++[Search service]: /rest/api/maps/search +[Search Fuzzy]: /rest/api/maps/search/getsearchfuzzy +[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account +[subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account +[Postman]: https://www.postman.com/downloads/ +[Geocoding coverage]: geocoding-coverage.md +[Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse +[POI category search]: /rest/api/maps/search/getsearchpoicategory +[Search Nearby]: /rest/api/maps/search/getsearchnearby +[Get Search Address]: /rest/api/maps/search/getsearchaddress ++[Azure Maps supported languages]: supported-languages.md +[Search Address]: /rest/api/maps/search/getsearchaddress +[Search Polygon service]: /rest/api/maps/search/getsearchpolygon +[Set up a geofence]: tutorial-geofence.md +[Search POIs inside the geometry]: /rest/api/maps/search/postsearchinsidegeometry |
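The fuzzy-search recommendations in the search article above combine naturally in a single request. Here's a minimal JavaScript sketch with a placeholder key, query, and bias coordinates; parameter names follow the Search Fuzzy reference.

```javascript
// Fuzzy search biased to a location, with the recommended fuzziness defaults.
const params = new URLSearchParams({
  "api-version": "1.0",
  "subscription-key": "{Your-Azure-Maps-Subscription-key}", // placeholder
  query: "shopping mall",
  minFuzzyLevel: "1",   // defaults recommended in the article
  maxFuzzyLevel: "2",
  typeahead: "true",    // predictive mode for partial user input
  lat: "47.6370891183", // geobias results near the user
  lon: "-122.123736172"
});

fetch(`https://atlas.microsoft.com/search/fuzzy/json?${params}`)
  .then(response => response.json())
  .then(data => data.results.forEach(r => console.log(r.type, r.score)));
```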
azure-maps | Map Get Information From Coordinate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-get-information-from-coordinate.md | -There are two ways to make a reverse address search. One way is to query the [Azure Maps Reverse Address Search API](/rest/api/maps/search/getsearchaddressreverse) through a service module. The other way is to use the [Fetch API](https://fetch.spec.whatwg.org/) to make a request to the [Azure Maps Reverse Address Search API](/rest/api/maps/search/getsearchaddressreverse) to find an address. Both ways are surveyed below. +There are two ways to make a reverse address search. One way is to query the [Reverse Address Search API] through a service module. The other way is to use the [Fetch API] to make a request to the [Reverse Address Search API] to find an address. Both ways are surveyed below. ## Make a reverse search request via service module <iframe height='500' scrolling='no' title='Get information from a coordinate (Service Module)' src='//codepen.io/azuremaps/embed/ejEYMZ/?height=265&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/ejEYMZ/'>Get information from a coordinate (Service Module)</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe> -In the code above, the first block constructs a map object and sets the authentication mechanism to use the access token. You can see [create a map](./map-create.md) for instructions. +In the code above, the first block constructs a map object and sets the authentication mechanism to use the access token. For more information, see [create a map]. -The second code block creates a `TokenCredential` to authenticate HTTP requests to Azure Maps with the access token. It then passes the `TokenCredential` to `atlas.service.MapsURL.newPipeline()` and creates a [Pipeline](/javascript/api/azure-maps-rest/atlas.service.pipeline) instance. The `searchURL` represents a URL to Azure Maps [Search](/rest/api/maps/search) operations. +The second code block creates a `TokenCredential` to authenticate HTTP requests to Azure Maps with the access token. It then passes the `TokenCredential` to `atlas.service.MapsURL.newPipeline()` and creates a [Pipeline] instance. The `searchURL` represents a URL to the [Search service]. -The third code block updates the style of mouse cursor to a pointer and creates a [popup](/javascript/api/azure-maps-control/atlas.popup#open) object. You can see [add a popup on the map](./map-add-popup.md) for instructions. +The third code block updates the style of the mouse cursor to a pointer and creates a [popup] object. For more information, see [add a popup on the map]. -The fourth block of code adds a mouse click [event listener](/javascript/api/azure-maps-control/atlas.map#events). When triggered, it creates a search query with the coordinates of the clicked point. It then uses the [getSearchAddressReverse](/javascript/api/azure-maps-rest/atlas.service.searchurl#searchaddressreverse-aborter--geojson-position--searchaddressreverseoptions-)method to query the [Get Search Address Reverse API](/rest/api/maps/search/getsearchaddressreverse) for the address of the coordinates. A GeoJSON feature collection is then extracted using the `geojson.getFeatures()` method from the response. +The fourth block of code adds a mouse click [event listener]. 
When triggered, it creates a search query with the coordinates of the clicked point. It then uses the [getSearchAddressReverse] method to query the [Get Search Address Reverse API] for the address of the coordinates. A GeoJSON feature collection is then extracted using the `geojson.getFeatures()` method from the response. The fifth block of code sets up the HTML popup content to display the response address for the clicked coordinate position. -The change of cursor, the popup object, and the click event are all created in the map's [load event listener](/javascript/api/azure-maps-control/atlas.map#events). This code structure ensures map fully loads before retrieving the coordinates information. +The change of cursor, the popup object, and the click event are all created in the map's [load event listener]. This code structure ensures the map fully loads before retrieving the coordinates information. ## Make a reverse search request via Fetch API Click on the map to make a reverse geocode request for that location using fetch <iframe height='500' scrolling='no' title='Get information from a coordinate' src='//codepen.io/azuremaps/embed/ddXzoB/?height=516&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/ddXzoB/'>Get information from a coordinate</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe> -In the code above, the first block of code constructs a map object and sets the authentication mechanism to use the access token. You can see [create a map](./map-create.md) for instructions. +In the code above, the first block of code constructs a map object and sets the authentication mechanism to use the access token. You can see [create a map] for instructions. The second block of code updates the style of the mouse cursor to a pointer. It instantiates a [popup](/javascript/api/azure-maps-control/atlas.popup#open) object. You can see [add a popup on the map] for instructions. -The third block of code adds an event listener for mouse clicks. Upon a mouse click, it uses the [Fetch API](https://fetch.spec.whatwg.org/) to query the [Azure Maps Reverse Address Search API](/rest/api/maps/search/getsearchaddressreverse) for the clicked coordinates address. For a successful response, it collects the address for the clicked location. It defines the popup content and position using the [setOptions](/javascript/api/azure-maps-control/atlas.popup#setoptions-popupoptions-) function of the popup class. +The third block of code adds an event listener for mouse clicks. Upon a mouse click, it uses the [Fetch API] to query the Azure Maps [Reverse Address Search API] for the address of the clicked coordinates. For a successful response, it collects the address for the clicked location. It defines the popup content and position using the [setOptions] function of the popup class. -The change of cursor, the popup object, and the click event are all created in the map's [load event listener](/javascript/api/azure-maps-control/atlas.map#events). This code structure ensures the map fully loads before retrieving the coordinates information. 
+The change of cursor, the popup object, and the click event are all created in the map's [load event listener]. This code structure ensures the map fully loads before retrieving the coordinates information. ## Next steps See the following articles for full code examples: > [Show directions from A to B](./map-route.md) > [!div class="nextstepaction"]-> [Show traffic](./map-show-traffic.md) +> [Show traffic](./map-show-traffic.md) ++[Reverse Address Search API]: /rest/api/maps/search/getsearchaddressreverse +[Fetch API]: https://fetch.spec.whatwg.org/ +[create a map]: map-create.md +[Search service]: /rest/api/maps/search +[Pipeline]: /javascript/api/azure-maps-rest/atlas.service.pipeline +[popup]: /javascript/api/azure-maps-control/atlas.popup#open +[add a popup on the map]: map-add-popup.md +[event listener]: /javascript/api/azure-maps-control/atlas.map#events +[getSearchAddressReverse]: /javascript/api/azure-maps-rest/atlas.service.searchurl#searchaddressreverse-aborter--geojson-position--searchaddressreverseoptions- +[Get Search Address Reverse API]: /rest/api/maps/search/getsearchaddressreverse +[load event listener]: /javascript/api/azure-maps-control/atlas.map#events +[setOptions]: /javascript/api/azure-maps-control/atlas.popup#setoptions-popupoptions- |
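The Fetch API flow described in the article above reduces to a short sketch; the key is a placeholder and the coordinates stand in for a map click position.

```javascript
// Reverse geocode a clicked position to a human-readable address.
const key = "{Your-Azure-Maps-Subscription-key}"; // placeholder
const position = [47.59093, -122.33263];          // lat, lon from a click event

fetch(`https://atlas.microsoft.com/search/address/reverse/json?api-version=1.0&subscription-key=${key}&query=${position.join(",")}`)
  .then(response => response.json())
  .then(data => {
    const first = data.addresses && data.addresses[0];
    console.log(first ? first.address.freeformAddress : "No address found");
  });
```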
azure-maps | Map Route | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-route.md | -There are two ways to do so. The first way is to query the [Azure Maps Route API](/rest/api/maps/route/getroutedirections) through a service module. The second way is to use the [Fetch API](https://fetch.spec.whatwg.org/) to make a search request to the [Azure Maps Route API](/rest/api/maps/route/getroutedirections). Both ways are discussed below. +There are two ways to do so. The first way is to query [Get Route Directions] in the Azure Maps Route service. The second way is to use the [Fetch API] to make a request to [Get Route Directions]. Both ways are discussed below. ## Query the route via service module <iframe height='500' scrolling='no' title='Show directions from A to B on a map (Service Module)' src='//codepen.io/azuremaps/embed/RBZbep/?height=265&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/RBZbep/'>Show directions from A to B on a map (Service Module)</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe> -In the above code, the first block constructs a map object and sets the authentication mechanism to use the access token. You can see [create a map](./map-create.md) for instructions. +In the above code, the first block constructs a map object and sets the authentication mechanism to use the access token. You can see [create a map] for instructions. -The second block of code creates a `TokenCredential` to authenticate HTTP requests to Azure Maps with the access token. It then passes the `TokenCredential` to `atlas.service.MapsURL.newPipeline()` and creates a [Pipeline](/javascript/api/azure-maps-rest/atlas.service.pipeline) instance. The `routeURL` represents a URL to Azure Maps [Route](/rest/api/maps/route) operations. +The second block of code creates a `TokenCredential` to authenticate HTTP requests to Azure Maps with the access token. It then passes the `TokenCredential` to `atlas.service.MapsURL.newPipeline()` and creates a [Pipeline] instance. The `routeURL` represents a URL to Azure Maps [Route service]. -The third block of code creates and adds a [DataSource](/javascript/api/azure-maps-control/atlas.source.datasource) object to the map. +The third block of code creates and adds a [DataSource] object to the map. -The fourth block of code creates start and end [points](/javascript/api/azure-maps-control/atlas.data.point) objects and adds them to the dataSource object. +The fourth block of code creates start and end [points] objects and adds them to the dataSource object. -A line is a [Feature](/javascript/api/azure-maps-control/atlas.data.feature) for LineString. A [LineLayer](/javascript/api/azure-maps-control/atlas.layer.linelayer) renders line objects wrapped in the [DataSource](/javascript/api/azure-maps-control/atlas.source.datasource) as lines on the map. The fourth block of code creates and adds a line layer to the map. See properties of a line layer at [LinestringLayerOptions](/javascript/api/azure-maps-control/atlas.linelayeroptions). +A line is a [Feature] for LineString. A [LineLayer] renders line objects wrapped in the [DataSource] as lines on the map. The fourth block of code creates and adds a line layer to the map. See properties of a line layer at [LinestringLayerOptions]. 
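For reference, the DataSource-plus-LineLayer pattern described above looks roughly like the following Web SDK sketch; it assumes an initialized `atlas.Map` named `map`, and the coordinates are placeholders.

```javascript
map.events.add("ready", () => {
  // Create a data source and add it to the map.
  const dataSource = new atlas.source.DataSource();
  map.sources.add(dataSource);

  // A route path is a LineString wrapped in the data source.
  dataSource.add(new atlas.data.LineString([
    [-122.33028, 47.60323],
    [-122.124, 47.67491]
  ]));

  // The line layer renders the LineString data as a line on the map.
  map.layers.add(new atlas.layer.LineLayer(dataSource, null, {
    strokeColor: "#2272B9",
    strokeWidth: 5
  }));
});
```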
-A [symbol layer](/javascript/api/azure-maps-control/atlas.layer.symbollayer) uses texts or icons to render point-based data wrapped in the [DataSource](/javascript/api/azure-maps-control/atlas.source.datasource). The texts or the icons render as symbols on the map. The fifth block of code creates and adds a symbol layer to the map. +A [symbol layer] uses text or icons to render point-based data wrapped in the [DataSource]. The text or icons render as symbols on the map. The fifth block of code creates and adds a symbol layer to the map. -The sixth block of code queries the Azure Maps routing service, which is part of the [service module](how-to-use-services-module.md). The [calculateRouteDirections](/javascript/api/azure-maps-rest/atlas.service.routeurl#methods) method of the RouteURL is used to get a route between the start and end points. A GeoJSON feature collection from the response is then extracted using the `geojson.getFeatures()` method and is added to the datasource. It then renders the response as a route on the map. For more information about adding a line to the map, see [add a line on the map](map-add-line-layer.md). +The sixth block of code queries the Azure Maps routing service, which is part of the [service module]. The [calculateRouteDirections] method of the `RouteURL` is used to get a route between the start and end points. A GeoJSON feature collection from the response is then extracted using the `geojson.getFeatures()` method and is added to the datasource. It then renders the response as a route on the map. For more information about adding a line to the map, see [add a line on the map]. -The last block of code sets the bounds of the map using the Map's [setCamera](/javascript/api/azure-maps-control/atlas.map#setcamera-cameraoptionscameraboundsoptionsanimationoptions-) property. +The last block of code sets the bounds of the map using the Map's [setCamera] property. -The route query, data source, symbol, line layers, and camera bounds are created inside the [event listener](/javascript/api/azure-maps-control/atlas.map#events). This code structure ensures the results are displayed only after the map fully loads. +The route query, data source, symbol, line layers, and camera bounds are created inside the [event listener]. This code structure ensures the results are displayed only after the map fully loads. ## Query the route via Fetch API <iframe height='500' scrolling='no' title='Show directions from A to B on a map' src='//codepen.io/azuremaps/embed/zRyNmP/?height=469&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/zRyNmP/'>Show directions from A to B on a map</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe> -In the code above, the first block of code constructs a map object and sets the authentication mechanism to use the access token. You can see [create a map](./map-create.md) for instructions. +In the code above, the first block of code constructs a map object and sets the authentication mechanism to use the access token. You can see [create a map] for instructions. -The second block of code creates and adds a [DataSource](/javascript/api/azure-maps-control/atlas.source.datasource) object to the map. +The second block of code creates and adds a [DataSource] object to the map. 
-The third code block creates the start and destination points for the route. Then, it adds them to the data source. You can see [add a pin on the map](map-add-pin.md) for instructions about using [addPins](/javascript/api/azure-maps-control/atlas.map). +The third code block creates the start and destination points for the route. Then, it adds them to the data source. For more information, see [add a pin on the map]. -A [LineLayer](/javascript/api/azure-maps-control/atlas.layer.linelayer) renders line objects wrapped in the [DataSource](/javascript/api/azure-maps-control/atlas.source.datasource) as lines on the map. The fourth block of code creates and adds a line layer to the map. See properties of a line layer at [LineLayerOptions](/javascript/api/azure-maps-control/atlas.linelayeroptions). +A [LineLayer] renders line objects wrapped in the [DataSource] as lines on the map. The fourth block of code creates and adds a line layer to the map. See properties of a line layer at [LineLayerOptions]. -A [symbol layer](/javascript/api/azure-maps-control/atlas.layer.symbollayer) uses text or icons to render point-based data wrapped in the [DataSource](/javascript/api/azure-maps-control/atlas.source.datasource) as symbols on the map. The fifth block of code creates and adds a symbol layer to the map. See properties of a symbol layer at [SymbolLayerOptions](/javascript/api/azure-maps-control/atlas.symbollayeroptions). +A [symbol layer] uses text or icons to render point-based data wrapped in the [DataSource] as symbols on the map. The fifth block of code creates and adds a symbol layer to the map. See properties of a symbol layer at [SymbolLayerOptions]. -The next code block creates `SouthWest` and `NorthEast` points from the start and destination points and sets the bounds of the map using the Map's [setCamera](/javascript/api/azure-maps-control/atlas.map#setcamera-cameraoptionscameraboundsoptionsanimationoptions-) property. +The next code block creates `SouthWest` and `NorthEast` points from the start and destination points and sets the bounds of the map using the Map's [setCamera] property. -The last block of code uses the [Fetch API](https://fetch.spec.whatwg.org/) to make a search request to the [Azure Maps Route API](/rest/api/maps/route/getroutedirections). The response is then parsed. If the response was successful, the latitude and longitude information is used to create an array a line by connecting those points. The line data is then added to data source to render the route on the map. You can see [add a line on the map](map-add-line-layer.md) for instructions. +The last block of code uses the [Fetch API] to make a search request to [Get Route Directions]. The response is then parsed. If the response was successful, the latitude and longitude information is used to create an array of points, and a line is drawn by connecting those points. The line data is then added to the data source to render the route on the map. For more information, see [add a line on the map]. -The route query, data source, symbol, line layers, and camera bounds are created inside the [event listener](/javascript/api/azure-maps-control/atlas.map#events). Again, we want to ensure that results are displayed after the map loads fully. +The route query, data source, symbol, line layers, and camera bounds are created inside the [event listener]. Again, we want to ensure that results are displayed after the map loads fully. 
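Putting the Fetch API steps above together, a minimal sketch looks like the following; the key and coordinates are placeholders, and `dataSource` is assumed to be the data source created earlier.

```javascript
// Request a route and render it by adding a LineString to the data source.
const key = "{Your-Azure-Maps-Subscription-key}"; // placeholder
const query = "47.64452,-122.13687:47.61559,-122.33817"; // start:end (lat,lon)

fetch(`https://atlas.microsoft.com/route/directions/json?api-version=1.0&subscription-key=${key}&query=${query}`)
  .then(response => response.json())
  .then(data => {
    // Flatten each leg's points into one [lon, lat] coordinate array.
    const coordinates = data.routes[0].legs.flatMap(leg =>
      leg.points.map(p => [p.longitude, p.latitude]));
    dataSource.add(new atlas.data.LineString(coordinates));
  });
```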
## Next steps See the following articles for full code examples: > [Show traffic on the map](./map-show-traffic.md) > [!div class="nextstepaction"]-> [Interacting with the map - mouse events](./map-events.md) +> [Interacting with the map - mouse events](./map-events.md) ++[Get Route Directions]: /rest/api/maps/route/getroutedirections +[Route service]: /rest/api/maps/route +[Fetch API]: https://fetch.spec.whatwg.org/ +[create a map]: map-create.md +[DataSource]: /javascript/api/azure-maps-control/atlas.source.datasource +[add a line on the map]: map-add-line-layer.md +[setCamera]: /javascript/api/azure-maps-control/atlas.map#setcamera-cameraoptionscameraboundsoptionsanimationoptions- +[SymbolLayerOptions]: /javascript/api/azure-maps-control/atlas.symbollayeroptions +[LineLayerOptions]: /javascript/api/azure-maps-control/atlas.linelayeroptions +[add a pin on the map]: map-add-pin.md +[LineLayer]: /javascript/api/azure-maps-control/atlas.layer.linelayer +[symbol layer]: /javascript/api/azure-maps-control/atlas.layer.symbollayer +[Pipeline]: /javascript/api/azure-maps-rest/atlas.service.pipeline +[event listener]: /javascript/api/azure-maps-control/atlas.map#events ++[service module]: how-to-use-services-module.md +[calculateRouteDirections]: /javascript/api/azure-maps-rest/atlas.service.routeurl#methods +[LinestringLayerOptions]: /javascript/api/azure-maps-control/atlas.linelayeroptions +[Feature]: /javascript/api/azure-maps-control/atlas.data.feature +[points]: /javascript/api/azure-maps-control/atlas.data.point |
azure-maps | Map Search Location | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-search-location.md | -There are two ways to search for a location of interest. One way is to use a service module to make a search request. The other way is to make a search request to [Azure Maps Fuzzy search API](/rest/api/maps/search/getsearchfuzzy) through the [Fetch API](https://fetch.spec.whatwg.org/). Both ways are discussed below. +There are two ways to search for a location of interest. One way is to use a service module to make a search request. The other way is to make a search request to Azure Maps [Fuzzy search API] through the [Fetch API]. Both ways are discussed below. ## Make a search request via service module <iframe height='500' scrolling='no' title='Show search results on a map (Service Module)' src='//codepen.io/azuremaps/embed/zLdYEB/?height=265&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/zLdYEB/'>Show search results on a map (Service Module)</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe> -In the code above, the first block constructs a map object and sets the authentication mechanism to use the access token. You can see [create a map](./map-create.md) for instructions. +In the code above, the first block constructs a map object and sets the authentication mechanism to use the access token. You can see [create a map] for instructions. -The second block of code creates a `TokenCredential` to authenticate HTTP requests to Azure Maps with the access token. It then passes the `TokenCredential` to `atlas.service.MapsURL.newPipeline()` and creates a [Pipeline](/javascript/api/azure-maps-rest/atlas.service.pipeline) instance. The `searchURL` represents a URL to Azure Maps [Search](/rest/api/maps/search) operations. +The second block of code creates a `TokenCredential` to authenticate HTTP requests to Azure Maps with the access token. It then passes the `TokenCredential` to `atlas.service.MapsURL.newPipeline()` and creates a [Pipeline] instance. The `searchURL` represents a URL to Azure Maps [Search service]. -The third block of code creates a data source object using the [DataSource](/javascript/api/azure-maps-control/atlas.source.datasource) class and add search results to it. A [symbol layer](/javascript/api/azure-maps-control/atlas.layer.symbollayer) uses text or icons to render point-based data wrapped in the [DataSource](/javascript/api/azure-maps-control/atlas.source.datasource) as symbols on the map. A symbol layer is then created. The data source is added to the symbol layer, which is then added to the map. +The third block of code creates a data source object using the [DataSource] class and adds search results to it. A [symbol layer] uses text or icons to render point-based data wrapped in the [DataSource] as symbols on the map. A symbol layer is then created. The data source is added to the symbol layer, which is then added to the map. -The fourth code block uses the [SearchFuzzy](/javascript/api/azure-maps-rest/atlas.service.models.searchgetsearchfuzzyoptionalparams) method in the [service module](how-to-use-services-module.md). It allows you to perform a free form text search via the [Get Search Fuzzy rest API](/rest/api/maps/search/getsearchfuzzy) to search for point of interest. 
Get requests to the Search Fuzzy API can handle any combination of fuzzy inputs. A GeoJSON feature collection from the response is then extracted using the `geojson.getFeatures()` method and added to the data source, which automatically results in the data being rendered on the map via the symbol layer. +The fourth code block uses the [SearchFuzzy] method in the [service module]. It allows you to perform a free-form text search via the [Get Search Fuzzy rest API] to search for points of interest. Get requests to the Search Fuzzy API can handle any combination of fuzzy inputs. A GeoJSON feature collection from the response is then extracted using the `geojson.getFeatures()` method and added to the data source, which automatically results in the data being rendered on the map via the symbol layer. -The last block of code adjusts the camera bounds for the map using the Map's [setCamera](/javascript/api/azure-maps-control/atlas.map#setcamera-cameraoptionscameraboundsoptionsanimationoptions-) property. +The last block of code adjusts the camera bounds for the map using the Map's [setCamera] property. -The search request, data source, symbol layer, and camera bounds are inside the [event listener](/javascript/api/azure-maps-control/atlas.map#events) of the map. We want to ensure that the results are displayed after the map fully loads. +The search request, data source, symbol layer, and camera bounds are inside the [event listener] of the map. We want to ensure that the results are displayed after the map fully loads. ## Make a search request via Fetch API The search request, data source, symbol layer, and camera bounds are inside the <iframe height='500' scrolling='no' title='Show search results on a map' src='//codepen.io/azuremaps/embed/KQbaeM/?height=265&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/KQbaeM/'>Show search results on a map</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe> -In the code above, the first block of code constructs a map object. It sets the authentication mechanism to use the access token. You can see [create a map](./map-create.md) for instructions. +In the code above, the first block of code constructs a map object. It sets the authentication mechanism to use the access token. You can see [create a map] for instructions. The second block of code creates a URL to make a search request to. It also creates two arrays to store bounds and pins for search results. -The third block of code uses the [Fetch API](https://fetch.spec.whatwg.org/). The [Fetch API](https://fetch.spec.whatwg.org/) is used to make a request to [Azure Maps Fuzzy search API](/rest/api/maps/search/getsearchfuzzy) to search for the points of interest. The Fuzzy search API can handle any combination of fuzzy inputs. It then handles and parses the search response and adds the result pins to the searchPins array. +The third block of code uses the [Fetch API]. The [Fetch API] is used to make a request to Azure Maps [Fuzzy search API] to search for the points of interest. The Fuzzy search API can handle any combination of fuzzy inputs. It then handles and parses the search response and adds the result pins to the searchPins array. -The fourth block of code creates a data source object using the [DataSource](/javascript/api/azure-maps-control/atlas.source.datasource) class. 
In the code, we add search results to the source object. A [symbol layer](/javascript/api/azure-maps-control/atlas.layer.symbollayer) uses text or icons to render point-based data wrapped in the [DataSource](/javascript/api/azure-maps-control/atlas.source.datasource) as symbols on the map. A symbol layer is then created. The data source is added to the symbol layer, which is then added to the map. +The fourth block of code creates a data source object using the [DataSource] class. In the code, we add search results to the source object. A [symbol layer] uses text or icons to render point-based data wrapped in the [DataSource] as symbols on the map. A symbol layer is then created. The data source is added to the symbol layer, which is then added to the map. -The last block of code creates a [BoundingBox](/javascript/api/azure-maps-control/atlas.data.boundingbox) object. It uses the array of results, and then it adjusts the camera bounds for the map using the Map's [setCamera](/javascript/api/azure-maps-control/atlas.map#setcamera-cameraoptionscameraboundsoptionsanimationoptions-). It then renders the result pins. +The last block of code creates a [BoundingBox] object. It uses the array of results, and then it adjusts the camera bounds for the map using the Map's [setCamera]. It then renders the result pins. -The search request, the data source, symbol layer, and the camera bounds are set within the map's [event listener](/javascript/api/azure-maps-control/atlas.map#events) to ensure that the results are displayed after the map loads fully. +The search request, the data source, symbol layer, and the camera bounds are set within the map's [event listener] to ensure that the results are displayed after the map loads fully. ## Next steps See the following articles for full code examples: > [Get information from a coordinate](map-get-information-from-coordinate.md) <!-- Comment added to suppress false positive warning --> > [!div class="nextstepaction"]-> [Show directions from A to B](map-route.md) +> [Show directions from A to B](map-route.md) ++[Fuzzy search API]: /rest/api/maps/search/getsearchfuzzy +[Fetch API]: https://fetch.spec.whatwg.org/ +[DataSource]: /javascript/api/azure-maps-control/atlas.source.datasource +[Search service]: /rest/api/maps/search +[Pipeline]: /javascript/api/azure-maps-rest/atlas.service.pipeline +[symbol layer]: /javascript/api/azure-maps-control/atlas.layer.symbollayer +[create a map]: map-create.md +[SearchFuzzy]: /javascript/api/azure-maps-rest/atlas.service.models.searchgetsearchfuzzyoptionalparams +[service module]: how-to-use-services-module.md +[Get Search Fuzzy rest API]: /rest/api/maps/search/getsearchfuzzy +[setCamera]: /javascript/api/azure-maps-control/atlas.map#setcamera-cameraoptionscameraboundsoptionsanimationoptions- +[event listener]: /javascript/api/azure-maps-control/atlas.map#events +[BoundingBox]: /javascript/api/azure-maps-control/atlas.data.boundingbox |
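The service-module flow this article describes condenses to roughly the following sketch; the subscription key is a placeholder, and `dataSource` is assumed to be an existing `atlas.source.DataSource` attached to a symbol layer.

```javascript
// Fuzzy search through the service module, rendered via an existing data source.
const pipeline = atlas.service.MapsURL.newPipeline(
  new atlas.service.SubscriptionKeyCredential("{Your-Azure-Maps-Subscription-key}"));
const searchURL = new atlas.service.SearchURL(pipeline);

searchURL.searchFuzzy(atlas.service.Aborter.timeout(10000), "coffee", {
  lat: 47.6292,   // bias results toward this position
  lon: -122.2337,
  radius: 5000
}).then(results => {
  // Extract a GeoJSON feature collection and add it to the data source.
  dataSource.add(results.geojson.getFeatures());
});
```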
azure-maps | Migrate From Bing Maps Web Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-services.md | Azure Maps has several additional REST web services that may be of interest: * [Batch routing](/rest/api/maps/route/postroutedirectionsbatchpreview) – Allows up to 1,000 route requests to be made in a single batch over a period of time. Routes are calculated in parallel on the server for faster processing. * [Traffic](/rest/api/maps/traffic) Flow – Access real-time traffic flow data as both raster and vector tiles. * [Geolocation API](/rest/api/maps/geolocation/get-ip-to-location) – Get the location of an IP address.-* [Weather Services](/rest/api/maps/weather) – Gain access to real-time and forecast weather data. +* [Weather services](/rest/api/maps/weather) – Gain access to real-time and forecast weather data. Be sure to also review the following best practices guides: |
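As one example from the list above, the Geolocation API is a single GET request. A minimal JavaScript sketch, with placeholder key and IP address:

```javascript
// Look up the country/region for an IP address.
const key = "{Your-Azure-Maps-Subscription-key}"; // placeholder

fetch(`https://atlas.microsoft.com/geolocation/ip/json?api-version=1.0&subscription-key=${key}&ip=2001:4898:80e8:b::189`)
  .then(response => response.json())
  .then(data => console.log("ISO country code:", data.countryRegion.isoCode));
```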
azure-maps | Rest Sdk Developer Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/rest-sdk-developer-guide.md | -You can call the Azure Maps [Rest API][Rest API] directly from any programming language, however that can be error prone work requiring extra effort. To make incorporating Azure Maps in your applications easier and less error prone, the Azure Maps team has encapsulated their REST API in SDKs for C# (.NET), Python, JavaScript/Typescript, and Java. +You can call the Azure Maps [Rest API] directly from any programming language; however, that can be error-prone work requiring extra effort. To make incorporating Azure Maps in your applications easier and less error prone, the Azure Maps team has encapsulated their REST API in SDKs for C# (.NET), Python, JavaScript/Typescript, and Java. This article lists the libraries currently available for each SDK with links to how-to articles to help you get started. ## C# SDK -Azure Maps C# SDK supports any .NET version that is compatible with [.NET standard 2.0][.NET Standard versions]. +Azure Maps C# SDK supports any .NET version that is compatible with [.NET standard 2.0]. -| Service Name  | NuGet package  | Samples  | +| Service name  | NuGet package  | Samples  | ||-|--| | [Search][C# search readme] | [Azure.Maps.Search][C# search package] | [search samples][C# search sample] | | [Routing][C# routing readme] | [Azure.Maps.Routing][C# routing package] | [routing samples][C# routing sample] | | [Rendering][C# rendering readme]| [Azure.Maps.Rendering][C# rendering package]|[rendering sample][C# rendering sample] | | [Geolocation][C# geolocation readme]|[Azure.Maps.Geolocation][C# geolocation package]|[geolocation sample][C# geolocation sample] | -For more information, see the [C# SDK Developers Guide](how-to-dev-guide-csharp-sdk.md). +For more information, see the [C# SDK Developers Guide]. -## Python SDK -Azure Maps Python SDK supports Python version 3.7 or later. Check the [Azure SDK for Python policy planning][Python-version-support-policy] for more details on future Python versions. +## Python SDK ++Azure Maps Python SDK supports Python version 3.7 or later. Check the [Azure SDK for Python policy planning] for more details on future Python versions. ++| Service name  | PyPi package  | Samples  | ||-|--| | [Search][py search readme] | [azure-maps-search][py search package] | [search samples][py search sample] | | [Route][py route readme] | [azure-maps-route][py route package] | [route samples][py route sample] | | [Render][py render readme]| [azure-maps-render][py render package]|[render sample][py render sample] | | [Geolocation][py geolocation readme]|[azure-maps-geolocation][py geolocation package]|[geolocation sample][py geolocation sample] | -For more information, see the [python SDK Developers Guide](how-to-dev-guide-py-sdk.md). +For more information, see the [python SDK Developers Guide]. ## JavaScript/TypeScript Azure Maps JavaScript/TypeScript SDK supports LTS versions of [Node.js][Node.js], including versions in Active status and Maintenance status. 
-| Service Name  | npm packages | Samples  | +| Service name  | npm packages | Samples  | ||-|--| | [Search][js search readme] | [@azure-rest/maps-search][js search package] | [search samples][js search sample] | | [Route][js route readme] | [@azure-rest/maps-route][js route package] | [route samples][js route sample] | | [Render][js render readme] | [@azure-rest/maps-render][js render package]|[render sample][js render sample] | | [Geolocation][js geolocation readme]|[@azure-rest/maps-geolocation][js geolocation package]|[geolocation sample][js geolocation sample] | -For more information, see the [JavaScript/TypeScript SDK Developers Guide](how-to-dev-guide-js-sdk.md). +For more information, see the [JavaScript/TypeScript SDK Developers Guide]. ## Java Azure Maps Java SDK supports [Java 8][Java 8] or above. -| Service Name  | Maven package  | Samples  | +| Service name  | Maven package  | Samples  | ||-|--| | [Search][java search readme] | [azure-maps-search][java search package] | [search samples][java search sample] | | [Routing][java routing readme] | [azure-maps-routing][java routing package] | [routing samples][java routing sample] | Azure Maps Java SDK supports [Java 8][Java 8] or above. | [Timezone][java timezone readme] | [azure-maps-timezone][java timezone package] | [timezone samples][java timezone sample] | | [Elevation][java elevation readme] | [azure-maps-elevation][java elevation package] | [elevation samples][java elevation sample] | -For more information, see the [Java SDK Developers Guide](how-to-dev-guide-java-sdk.md). +For more information, see the [Java SDK Developers Guide]. -<!-- C# SDK Developers Guide > [Rest API]: /rest/api/maps/-[.NET Standard versions]: https://dotnet.microsoft.com/platform/dotnet-standard#versions +[.NET standard 2.0]: https://dotnet.microsoft.com/platform/dotnet-standard#versions ++<!-- C# SDK Developers Guide > +[C# SDK Developers Guide]: how-to-dev-guide-csharp-sdk.md [C# search package]: https://www.nuget.org/packages/Azure.Maps.Search [C# search readme]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/maps/Azure.Maps.Search/README.md [C# search sample]: https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/maps/Azure.Maps.Search/samples For more information, see the [Java SDK Developers Guide](how-to-dev-guide-java- [C# geolocation sample]: https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/maps/Azure.Maps.Geolocation/samples <!-- Python SDK Developers Guide >-[Python-version-support-policy]: https://github.com/Azure/azure-sdk-for-python/wiki/Azure-SDKs-Python-version-support-policy +[python SDK Developers Guide]: how-to-dev-guide-py-sdk.md +[Azure SDK for Python policy planning]: https://github.com/Azure/azure-sdk-for-python/wiki/Azure-SDKs-Python-version-support-policy [py search package]: https://pypi.org/project/azure-maps-search [py search readme]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/maps/azure-maps-search/README.md [py search sample]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/maps/azure-maps-search/samples For more information, see the [Java SDK Developers Guide](how-to-dev-guide-java- <!-- JavaScript/TypeScript SDK Developers Guide > [Node.js]: https://nodejs.org/en/download/+[JavaScript/TypeScript SDK Developers Guide]: how-to-dev-guide-js-sdk.md [js search readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-search-rest/README.md [js search package]: https://www.npmjs.com/package/@azure-rest/maps-search [js search sample]: 
https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-search-rest/samples/v1-beta/javascript For more information, see the [Java SDK Developers Guide](how-to-dev-guide-java- <!-- Java SDK Developers Guide > [Java 8]: https://www.java.com/en/download/java8_update.jsp+[Java SDK Developers Guide]: how-to-dev-guide-java-sdk.md [java search package]: https://repo1.maven.org/maven2/com/azure/azure-maps-search [java search readme]: https://github.com/Azure/azure-sdk-for-jav [java search sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-search/src/samples/java/com/azure/maps/search/samples |
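For a sense of what these SDKs look like in practice, here's a rough JavaScript sketch using the search client listed above. It follows the general Azure REST-client pattern (`client.path(...).get(...)`), but treat the exact method shapes as assumptions and consult the linked search samples for authoritative, up-to-date usage.

```javascript
// Hypothetical usage sketch of @azure-rest/maps-search; verify against the samples.
import MapsSearch from "@azure-rest/maps-search";
import { AzureKeyCredential } from "@azure/core-auth";

const client = MapsSearch(new AzureKeyCredential("{Your-Azure-Maps-Subscription-key}"));

const response = await client.path("/search/address/{format}", "json").get({
  queryParameters: { query: "15127 NE 24th Street, Redmond, WA 98052" }
});
console.log(response.body.results?.[0]?.position);
```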
azure-maps | Supported Languages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-languages.md | Make sure you set up the **View** parameter as required for the REST APIs and th Ensure that you have set up the View parameter as required. The View parameter specifies which set of geopolitically disputed content is returned via Azure Maps services. -Affected Azure Maps REST +Affected Azure Maps REST * Get Map Tile * Get Map Image |
azure-maps | Traffic Coverage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/traffic-coverage.md | Title: Traffic coverage | Microsoft Azure Maps + Title: Traffic coverage + description: Learn about traffic coverage in Azure Maps. See whether information on traffic flow and incidents is available in various regions throughout the world. -The Azure Maps [Traffic API](/rest/api/maps/traffic) is a suite of web services designed for developers to create web and mobile applications around real-time traffic. This data can be visualized on maps or used to generate smarter routes that factor in current driving conditions. +The Azure Maps [Traffic service] is a suite of web services designed for developers to create web and mobile applications around real-time traffic. This data can be visualized on maps or used to generate smarter routes that factor in current driving conditions. The following tables provide information about what kind of traffic information you can request from each country or region. If a market is missing in the following tables, it isn't currently supported. See the following articles in the REST API documentation for detailed informatio > [!div class="nextstepaction"] > [Get Traffic Incident Tile](/rest/api/maps/traffic/get-traffic-incident-tile)++[Traffic service]: /rest/api/maps/traffic |
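Traffic flow data from the service described above can be pulled directly as map tiles; the sketch below fetches one real-time flow tile as a PNG, with the key, zoom, and tile coordinates as placeholders.

```javascript
// Fetch a single real-time traffic flow tile.
const params = new URLSearchParams({
  "api-version": "1.0",
  "subscription-key": "{Your-Azure-Maps-Subscription-key}", // placeholder
  style: "absolute", // color by absolute measured speed
  zoom: "12",
  x: "2044",
  y: "1360"
});

fetch(`https://atlas.microsoft.com/traffic/flow/tile/png?${params}`)
  .then(response => response.blob())
  .then(tile => console.log("Tile size in bytes:", tile.size));
```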
azure-maps | Tutorial Create Store Locator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-create-store-locator.md | In this tutorial, you'll learn how to: ## Prerequisites
-1. An [Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account) using the Gen 1 (S1) or Gen 2 pricing tier.
-2. An [Azure Maps primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account).
+1. An [Azure Maps account]
+2. A [subscription key]
-For more information about Azure Maps authentication, see [Manage authentication in Azure Maps](how-to-manage-authentication.md).
+For more information about Azure Maps authentication, see [Manage authentication in Azure Maps].
-[Visual Studio Code](https://code.visualstudio.com/) is recommended for this tutorial, but you can use any suitable integrated development environment (IDE).
+[Visual Studio Code] is recommended for this tutorial, but you can use any suitable integrated development environment (IDE). ## Sample code In this tutorial, you'll create a store locator for a fictional company named *Contoso Coffee*. Also, this tutorial includes some tips to help you learn about extending the store locator with other optional functionality.
-To see a live sample of what you will create in this tutorial, see [Simple Store Locator](https://samples.azuremaps.com/?sample=simple-store-locator) on the **Azure Maps Code Samples** site.
+To see a live sample of what you will create in this tutorial, see [Simple Store Locator] on the **Azure Maps Code Samples** site. To more easily follow and engage with this tutorial, you'll need to download the following resources: * Full source code for the [Simple Store Locator](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Simple%20Store%20Locator) on GitHub. * [Store location data](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Simple%20Store%20Locator/data) that you'll import into the store locator dataset.-* The [Map images](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Simple%20Store%20Locator/images).
+* The [Map images]. ## Store locator features This section lists the Azure Maps features that are demonstrated in the Contoso ## Store locator design
-The following screenshot shows the general layout of the Contoso Coffee store locator application. To view and interact with the live sample, see the [Simple Store Locator](https://samples.azuremaps.com/?sample=simple-store-locator) sample application on the **Azure Maps Code Samples** site.
+The following screenshot shows the general layout of the Contoso Coffee store locator application. To view and interact with the live sample, see the [Simple Store Locator] sample application on the **Azure Maps Code Samples** site.
:::image type="content" source="./media/tutorial-create-store-locator/store-locator-wireframe.png" alt-text="A screenshot of the Contoso Coffee store locator Azure Maps sample application."::: This section describes how to create a dataset of the stores that you want to di :::image type="content" source="./media/tutorial-create-store-locator/store-locator-data-spreadsheet.png" alt-text="Screenshot of the store locator data in an Excel workbook.":::
-The excel file containing the full dataset for the Contoso Coffee locator sample application can be downloaded from the [data](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Simple%20Store%20Locator/data) folder of the _Azure Maps code samples_ repository in GitHub.
+The Excel file containing the full dataset for the Contoso Coffee locator sample application can be downloaded from the [data] folder of the _Azure Maps code samples_ repository in GitHub. From the above screenshot of the data, we can make the following observations: * Location information is stored in the following six columns: **AddressLine**, **City**, **Municipality** (county), **AdminDivision** (state/province), **PostCode** (postal code), and **Country**.
-* The **Latitude** and **Longitude** columns contain the coordinates for each Contoso Coffee location. If you don't have coordinate information, you can use the Azure Maps [Search service](/rest/api/maps/search) to determine the location coordinates.
+* The **Latitude** and **Longitude** columns contain the coordinates for each Contoso Coffee location. If you don't have coordinate information, you can use the [Search service] to determine the location coordinates. * Some other columns contain metadata that's related to the coffee shops: a phone number, Boolean columns, and store opening and closing times in 24-hour format. The Boolean columns are for Wi-Fi and wheelchair accessibility. You can create your own columns that contain metadata that's more relevant to your location data. > [!NOTE]-> Azure Maps renders data in the [Spherical Mercator projection](glossary.md#spherical-mercator-projection) "[EPSG:3857](https://epsg.io/3857)" but reads data in "[EPSG:4326](https://epsg.io/4326)" that use the WGS84 datum.
+> Azure Maps renders data in the [Spherical Mercator projection] "[EPSG:3857]" but reads data in "[EPSG:4326]", which uses the WGS84 datum. ## Load Contoso Coffee shop locator dataset From the above screenshot of the data, we can make the following observations: To convert the Contoso Coffee shop location data from an Excel workbook into a tab-delimited text file:
-1. Download the Excel workbook [ContosoCoffee.xlsx](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Simple%20Store%20Locator/data) and Open it in Excel.
+1. Download the Excel workbook [ContosoCoffee.xlsx] and open it in Excel. 1. Select **File > Save As...**. If you open the text file in Notepad, it looks similar to the following text: ## Set up the project
-1. Open [Visual Studio Code](https://code.visualstudio.com/), or your choice of development environments.
+1. Open [Visual Studio Code], or your choice of development environment. 2. Select **File > Open Workspace...**. If you open the text file in Notepad, it looks similar to the following text: 8. Create another folder named *images*.
-9. 
If you haven't already, download the 10 [Map images](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Simple%20Store%20Locator/images) from the images directory in the GitHub Repository and add them to the *images* folder.
+9. If you haven't already, download the 10 [Map images] from the images directory in the GitHub repository and add them to the *images* folder. Your workspace folder should now look like the following screenshot: To create the HTML: </main> ```
-After you finish, *index.html* should look like [Simple Store Locator.html](https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/master/Samples/Tutorials/Simple%20Store%20Locator/Simple%20Store%20Locator.html).
+After you finish, *index.html* should look like [Simple Store Locator.html]. ## Define the CSS styles Run the application. You'll see the header, search box, and search button. Howev The JavaScript code in the Contoso Coffee shop locator app enables the following processes:
-1. Adds an [event listener](/javascript/api/azure-maps-control/atlas.map#events) called `ready` to wait until the page has completed its loading process. When the page loading is complete, the event handler creates more event listeners to monitor the loading of the map, and give functionality to the search and **My location** buttons.
+1. Adds an [event listener] called `ready` to wait until the page has completed its loading process. When the page loading is complete, the event handler creates more event listeners to monitor the loading of the map, and gives functionality to the search and **My location** buttons. 2. When the user selects the search button, or types a location in the search box then presses Enter, a fuzzy search against the user's query begins. The code passes in an array of country/region ISO 2 values to the `countrySet` option to limit the search results to those countries/regions. Limiting the countries/regions to search helps increase the accuracy of the results that are returned. If you resize the browser window to fewer than 700 pixels wide or open the appli In this tutorial, you learned how to create a basic store locator by using Azure Maps. The store locator you create in this tutorial might have all the functionality you need. You can add features to your store locator or use more advanced features for a more custom user experience:
-* Enable [suggestions as you type](https://samples.azuremaps.com/?sample=search-autosuggest-and-jquery-ui) in the search box.
-* Add [support for multiple languages](https://samples.azuremaps.com/?sample=map-localization).
-* Allow the user to [filter locations along a route](https://samples.azuremaps.com/?sample=filter-data-along-route).
-* Add the ability to [set filters](https://samples.azuremaps.com/?sample=filter-symbols-by-property).
+* Enable [suggestions as you type] in the search box.
+* Add [support for multiple languages].
+* Allow the user to [filter locations along a route].
+* Add the ability to [set filters]. * Add support to specify an initial search value by using a query string. When you include this option in your store locator, users are then able to bookmark and share searches. It also provides an easy method for you to pass searches to this page from another page.
-* Deploy your store locator as an [Azure App Service Web App](../app-service/quickstart-html.md).
-* Store your data in a database and search for nearby locations. 
To learn more, see the [SQL Server spatial data types overview](/sql/relational-databases/spatial/spatial-data-types-overview?preserve-view=true&view=sql-server-2017) and [Query spatial data for the nearest neighbor](/sql/relational-databases/spatial/query-spatial-data-for-nearest-neighbor?preserve-view=true&view=sql-server-2017).
+* Deploy your store locator as an [Azure App Service Web App].
+* Store your data in a database and search for nearby locations. To learn more, see the [SQL Server spatial data types overview] and [Query spatial data for the nearest neighbor]. ## Additional information * For the completed code used in this tutorial, see the [Simple Store Locator](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Simple%20Store%20Locator) tutorial on GitHub.-* To view this sample live, see [Simple Store Locator](https://samples.azuremaps.com/?sample=simple-store-locator) on the **Azure Maps Code Samples** site.
-* learn more about the coverage and capabilities of Azure Maps by using [Zoom levels and tile grid](zoom-levels-and-tile-grid.md).
-* You can also [Use data-driven style expressions](data-driven-style-expressions-web-sdk.md) to apply to your business logic.
+* To view this sample live, see [Simple Store Locator] on the **Azure Maps Code Samples** site.
+* Learn more about the coverage and capabilities of Azure Maps by using [Zoom levels and tile grid].
+* You can also [Use data-driven style expressions] to apply to your business logic. ## Next steps To see more code examples and an interactive coding experience: > [!div class="nextstepaction"] > [How to use the map control](how-to-use-map-control.md)++[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account
+[Manage authentication in Azure Maps]: how-to-manage-authentication.md
+[Visual Studio Code]: https://code.visualstudio.com
+[Simple Store Locator]: https://samples.azuremaps.com/?sample=simple-store-locator
+[data]: https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Simple%20Store%20Locator/data
+[Search service]: /rest/api/maps/search
+[Spherical Mercator projection]: glossary.md#spherical-mercator-projection
+[EPSG:3857]: https://epsg.io/3857
+[EPSG:4326]: https://epsg.io/4326
+[ContosoCoffee.xlsx]: https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Simple%20Store%20Locator/data
+[Map images]: https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Simple%20Store%20Locator/images
+[Simple Store Locator.html]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/master/Samples/Tutorials/Simple%20Store%20Locator/Simple%20Store%20Locator.html
+[event listener]: /javascript/api/azure-maps-control/atlas.map#events
+[suggestions as you type]: https://samples.azuremaps.com/?sample=search-autosuggest-and-jquery-ui
+[support for multiple languages]: https://samples.azuremaps.com/?sample=map-localization
+[filter locations along a route]: https://samples.azuremaps.com/?sample=filter-data-along-route
+[set filters]: https://samples.azuremaps.com/?sample=filter-symbols-by-property
+[Azure App Service Web App]: ../app-service/quickstart-html.md
+[SQL Server spatial data types overview]: /sql/relational-databases/spatial/spatial-data-types-overview?preserve-view=true&view=sql-server-2017
+[Query spatial data for the nearest neighbor]: 
/sql/relational-databases/spatial/query-spatial-data-for-nearest-neighbor?preserve-view=true&view=sql-server-2017 +[Zoom levels and tile grid]: zoom-levels-and-tile-grid.md +[Use data-driven style expressions]: data-driven-style-expressions-web-sdk.md |
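Because the store locator tutorial above loads its dataset from a tab-delimited file, here is a hedged sketch of the parsing step it implies. It assumes the `atlas` map control namespace is loaded and a `datasource` was created earlier; column names follow the dataset's header row described above.

```javascript
// Minimal sketch: turn a tab-delimited store list into GeoJSON point
// features for an Azure Maps data source (illustration, not tutorial code).
function parseStores(text) {
  const [headerLine, ...rows] = text.trim().split("\n");
  const headers = headerLine.split("\t");
  return rows.map((row) => {
    const cells = row.split("\t");
    const props = Object.fromEntries(headers.map((h, i) => [h, cells[i]]));
    // GeoJSON positions are [longitude, latitude].
    return new atlas.data.Feature(
      new atlas.data.Point([+props.Longitude, +props.Latitude]),
      props
    );
  });
}

// Usage: datasource.add(parseStores(tabDelimitedText));
```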
azure-maps | Tutorial Creator Indoor Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-indoor-maps.md | This tutorial describes how to create indoor maps for use in Microsoft Azure Map > * Get the default map configuration ID from your tileset. > [!TIP]-> You can also create a dataset from a GeoJSON package. For more information, see [Create a dataset using a GeoJson package (Preview)](how-to-dataset-geojson.md).
+> You can also create a dataset from a GeoJSON package. For more information, see [Create a dataset using a GeoJson package (Preview)]. ## Prerequisites
-1. [Make an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account).
-2. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key.
-3. [Create a Creator resource](how-to-manage-creator.md).
-4. Download the [Sample drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples/blob/master/Sample%20-%20Contoso%20Drawing%20Package.zip).
+1. An [Azure Maps account]
+2. A [subscription key]
+3. A [Creator resource]
+4. Download the [Sample drawing package]
-This tutorial uses the [Postman](https://www.postman.com/) application, but you can use a different API development environment.
+This tutorial uses the [Postman] application, but you can use a different API development environment. >[!IMPORTANT] >-> * This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services](how-to-manage-creator.md#access-to-creator-services).
+> * This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator services].
> * In the URL examples in this article, you will need to replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. ## Upload a drawing package
-Use the [Data Upload API](/rest/api/maps/data-v2/upload) to upload the drawing package to Azure Maps resources.
+Use the [Data Upload API] to upload the drawing package to Azure Maps resources.
-The Data Upload API is a long running transaction that implements the pattern defined in [Creator Long-Running Operation API V2](creator-long-running-operation-v2.md).
+The Data Upload API is a long-running transaction that implements the pattern defined in [Creator Long-Running Operation API V2]. To upload the drawing package: To upload the drawing package: 4. Select the **POST** HTTP method.
-5. Enter the following URL to the [Data Upload API](/rest/api/maps/data-v2/upload) The request should look like the following URL:
+5. Enter the following URL to the [Data Upload API]. The request should look like the following URL: ```http https://us.atlas.microsoft.com/mapData?api-version=2.0&dataFormat=dwgzippackage&subscription-key={Your-Azure-Maps-Subscription-key} To retrieve content metadata: ## Convert a drawing package
-Now that the drawing package is uploaded, you'll use the `udid` for the uploaded package to convert the package into map data. The [Conversion API](/rest/api/maps/v2/conversion) uses a long-running transaction that implements the pattern defined in the [Creator Long-Running Operation](creator-long-running-operation-v2.md) article. 
+Now that the drawing package is uploaded, you'll use the `udid` for the uploaded package to convert the package into map data. The [Conversion API] uses a long-running transaction that implements the pattern defined in the [Creator Long-Running Operation] article. To convert a drawing package: To convert a drawing package: 4. Select the **POST** HTTP method. -5. Enter the following URL to the [Conversion Service](/rest/api/maps/v2/conversion/convert) (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key and `udid` with the `udid` of the uploaded package): +5. Enter the following URL to the [Conversion service] (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key and `udid` with the `udid` of the uploaded package): ```http https://us.atlas.microsoft.com/conversions?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2.0&udid={udid}&inputType=DWG&outputOntology=facility-2.0 To check the status of the conversion process and retrieve the `conversionId`: :::image type="content" source="./media/tutorial-creator-indoor-maps/data-conversion-id.png" alt-text="A screenshot of Postman highlighting the conversion ID value that appears in the resource location key in the responses header."::: -The sample drawing package should be converted without errors or warnings. However, if you receive errors or warnings from your own drawing package, the JSON response includes a link to the [Drawing error visualizer](drawing-error-visualizer.md). You can use the Drawing Error visualizer to inspect the details of errors and warnings. To receive recommendations to resolve conversion errors and warnings, see [Drawing conversion errors and warnings](drawing-conversion-error-codes.md). +The sample drawing package should be converted without errors or warnings. However, if you receive errors or warnings from your own drawing package, the JSON response includes a link to the [Drawing error visualizer]. You can use the Drawing Error visualizer to inspect the details of errors and warnings. To receive recommendations to resolve conversion errors and warnings, see [Drawing conversion errors and warnings]. The following JSON fragment displays a sample conversion warning: The following JSON fragment displays a sample conversion warning: ## Create a dataset -A dataset is a collection of map features, such as buildings, levels, and rooms. To create a dataset, use the [Dataset Create API](/rest/api/maps/v2/dataset/create). The Dataset Create API takes the `conversionId` for the converted drawing package and returns a `datasetId` of the created dataset. +A dataset is a collection of map features, such as buildings, levels, and rooms. To create a dataset, use the [Dataset Create API]. The Dataset Create API takes the `conversionId` for the converted drawing package and returns a `datasetId` of the created dataset. To create a dataset: To create a dataset: 4. Select the **POST** HTTP method. -5. Enter the following URL to the [Dataset API](/rest/api/maps/v2/dataset). The request should look like the following URL (replace `{conversionId`} with the `conversionId` obtained in [Check drawing package conversion status](#check-the-drawing-package-conversion-status)): +5. Enter the following URL to the [Dataset service]. 
The request should look like the following URL (replace `{conversionId}` with the `conversionId` obtained in [Check drawing package conversion status](#check-the-drawing-package-conversion-status)): ```http https://us.atlas.microsoft.com/datasets?api-version=2.0&conversionId={conversionId}&subscription-key={Your-Azure-Maps-Subscription-key} To create a tileset: 4. Select the **POST** HTTP method.
-5. Enter the following URL to the [Tileset API](/rest/api/maps/v2/tileset). The request should look like the following URL (replace `{datasetId`} with the `datasetId` obtained in the [Check the dataset creation status](#check-the-dataset-creation-status) section above:
+5. Enter the following URL to the [Tileset service]. The request should look like the following URL (replace `{datasetId}` with the `datasetId` obtained in the [Check the dataset creation status](#check-the-dataset-creation-status) section above): ```http https://us.atlas.microsoft.com/tilesets?api-version=2022-09-01-preview&datasetID={datasetId}&subscription-key={Your-Azure-Maps-Primary-Subscription-key} To check the status of the tileset creation process and retrieve the `tilesetId` ## The map configuration (preview)
-Once your tileset creation completes, you can get the `mapConfigurationId` using the [tileset get](/rest/api/maps/v20220901preview/tileset/get) HTTP request:
+Once your tileset creation completes, you can get the `mapConfigurationId` using the [tileset get] HTTP request: 1. In the Postman app, select **New**. Once your tileset creation completes, you can get the `mapConfigurationId` using 4. Select the **GET** HTTP method.
-5. Enter the following URL to the [Tileset API](/rest/api/maps/v20220901preview/tileset), passing in the tileset ID you obtained in the previous step.
+5. Enter the following URL to the [Tileset service], passing in the tileset ID you obtained in the previous step. ```http https://us.atlas.microsoft.com/tilesets/{tilesetId}?api-version=2022-09-01-preview&subscription-key={Your-Azure-Maps-Subscription-key} Once your tileset creation completes, you can get the `mapConfigurationId` using "defaultMapConfigurationId": "5906cd57-2dba-389b-3313-ce6b549d4396" ```
-For more information, see [Map configuration](creator-indoor-maps.md#map-configuration) in the indoor maps concepts article.
+For more information, see [Map configuration] in the indoor maps concepts article. 
## Next steps > [!div class="nextstepaction"] > [Use the Azure Maps Indoor Maps module with custom styles](how-to-use-indoor-module.md)++[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account +[subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account +[Creator resource]: how-to-manage-creator.md +[Sample drawing package]: https://github.com/Azure-Samples/am-creator-indoor-data-examples/blob/master/Sample%20-%20Contoso%20Drawing%20Package.zip +[Postman]: https://www.postman.com +[Access to Creator services]: how-to-manage-creator.md#access-to-creator-services +[Create a dataset using a GeoJson package (Preview)]: how-to-dataset-geojson.md +[Data Upload API]: /rest/api/maps/data-v2/upload +[Creator Long-Running Operation API V2]: creator-long-running-operation-v2.md +[Conversion API]: /rest/api/maps/v2/conversion +[Conversion service]: /rest/api/maps/v2/conversion/convert +[Creator Long-Running Operation]: creator-long-running-operation-v2.md +[Drawing error visualizer]: drawing-error-visualizer.md +[Drawing conversion errors and warnings]: drawing-conversion-error-codes.md +[Dataset Create API]: /rest/api/maps/v2/dataset/create +[Dataset service]: /rest/api/maps/v2/dataset +[Tileset service]: /rest/api/maps/v20220901preview/tileset +[tileset get]: /rest/api/maps/v20220901preview/tileset/get +[Map configuration]: creator-indoor-maps.md#map-configuration |
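For reference alongside the Postman steps above, a hedged sketch of the same Data Upload call made with `fetch` in Node.js. The URL matches the tutorial; the local file path and `MAPS_KEY` variable are assumptions for illustration.

```javascript
// Hedged sketch: uploading the drawing package outside Postman.
import { readFile } from "node:fs/promises";

const body = await readFile("./Sample - Contoso Drawing Package.zip");
const res = await fetch(
  "https://us.atlas.microsoft.com/mapData?api-version=2.0" +
    `&dataFormat=dwgzippackage&subscription-key=${process.env.MAPS_KEY}`,
  {
    method: "POST",
    headers: { "Content-Type": "application/octet-stream" },
    body
  }
);

// The long-running operation pattern returns a status URL to poll for the udid.
console.log(res.status, res.headers.get("operation-location"));
```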
azure-maps | Tutorial Prioritized Routes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-prioritized-routes.md | In this tutorial, you learn how to: 1. An [Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account).
-1. An [Azure Maps primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key. For more information on authentication in Azure Maps, see [manage authentication in Azure Maps](how-to-manage-authentication.md).
+1. A [subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account).
++> [!NOTE]
+> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps](how-to-manage-authentication.md). ## Create a new web page using the map control API The following steps show you how to create and display the Map control in a web * The `onload` event in the body of the page calls the `GetMap` function when the body of the page has loaded. * The `GetMap` function will contain the inline JavaScript code used to access the Azure Maps API.
-3. Next, add the following JavaScript code to the `GetMap` function, just beneath the code added in the last step. This code creates a map control and initializes it using your Azure Maps primary subscription keys that you provide. Make sure and replace the string `<Your Azure Maps Key>` with the Azure Maps primary key that you copied from your Maps account.
+3. Next, add the following JavaScript code to the `GetMap` function, just beneath the code added in the last step. This code creates a map control and initializes it using your Azure Maps subscription key that you provide. Make sure to replace the string `<Your Azure Maps Subscription Key>` with the Azure Maps subscription key that you copied from your Maps account. ```JavaScript //Instantiate a map object var map = new atlas.Map("myMap", {- // Replace <Your Azure Maps Key> with your Azure Maps primary subscription key. https://aka.ms/am-primaryKey
+ // Replace <Your Azure Maps Subscription Key> with your Azure Maps subscription key. https://aka.ms/am-primaryKey authOptions: { authType: 'subscriptionKey',- subscriptionKey: '<Your Azure Maps Key>'
+ subscriptionKey: '<Your Azure Maps Subscription Key>' } }); ``` The following steps show you how to create and display the Map control in a web * [atlas](/javascript/api/azure-maps-control/atlas) is the namespace that contains the Azure Maps API and related visual components. * [atlas.Map](/javascript/api/azure-maps-control/atlas.map) provides the control for a visual and interactive web map.
-4. Save the file and open it in your browser. The browser will display a basic map by calling `atlas.Map` using your Azure Maps primary subscription key.
+4. Save the file and open it in your browser. The browser will display a basic map by calling `atlas.Map` using your Azure Maps subscription key.
- :::image type="content" source="./media/tutorial-prioritized-routes/basic-map.png" alt-text="A screenshot that shows the most basic map you can make by calling the atlas Map API, using your Azure Maps primary subscription key.":::
+ :::image type="content" source="./media/tutorial-prioritized-routes/basic-map.png" alt-text="A screenshot that shows the most basic map you can make by calling the atlas Map API, using your Azure Maps subscription key."::: ## Render real-time traffic data on a map |
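A hedged sketch of the traffic-rendering step that section leads into, using the map control's `setTraffic` options on the `map` object created above (call it once the map is ready):

```javascript
// Switch on real-time traffic rendering after the map has loaded.
map.events.add('ready', function () {
    map.setTraffic({
        flow: 'relative',   // color road segments relative to free-flow speed
        incidents: true     // overlay incident markers such as closures
    });
});
```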
azure-maps | Tutorial Search Location | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-search-location.md | This section shows how to use the Maps [Search API](/rest/api/maps/search) to fi * The [searchURL](/javascript/api/azure-maps-rest/atlas.service.searchurl) represents a URL to Azure Maps [Search](/rest/api/maps/search) operations.
-2. Next add the following script block just below the previous code just added in the map `ready` event handler. This is the code to build the search query. It uses the [Fuzzy Search Service](/rest/api/maps/search/get-search-fuzzy), a basic search API of the Search Service. Fuzzy Search Service handles most fuzzy inputs like addresses, places, and points of interest (POI). This code searches for nearby gas stations within the specified radius of the provided latitude and longitude. A GeoJSON feature collection from the response is then extracted using the `geojson.getFeatures()` method and added to the data source, which automatically results in the data being rendered on the maps symbol layer. The last part of this script block sets the maps camera view using the bounding box of the results using the Map's [setCamera](/javascript/api/azure-maps-control/atlas.map#setcamera-cameraoptionscameraboundsoptionsanimationoptions-) property.
+2. Next, add the following script block just below the previous code added in the map `ready` event handler. This is the code to build the search query. It uses the [Fuzzy Search service](/rest/api/maps/search/get-search-fuzzy), a basic search API of the Search Service. Fuzzy Search service handles most fuzzy inputs like addresses, places, and points of interest (POI). This code searches for nearby gas stations within the specified radius of the provided latitude and longitude. A GeoJSON feature collection from the response is then extracted using the `geojson.getFeatures()` method and added to the data source, which automatically results in the data being rendered on the map's symbol layer. The last part of this script block sets the map's camera view using the bounding box of the results using the Map's [setCamera](/javascript/api/azure-maps-control/atlas.map#setcamera-cameraoptionscameraboundsoptionsanimationoptions-) property. ```JavaScript var query = 'gasoline-station'; |
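A compact, hedged sketch of the flow described above, assuming the `map`, `datasource`, and `searchURL` objects created earlier in that tutorial; the coordinates and radius are placeholders.

```javascript
// Fuzzy search for nearby gas stations, render results, then frame them.
var query = 'gasoline-station';
var radius = 9000;                 // meters
var lat = 47.64452336193245;       // placeholder coordinates
var lon = -122.13687658309935;

searchURL.searchFuzzy(atlas.service.Aborter.timeout(10000), query, {
    lon: lon,
    lat: lat,
    radius: radius
}).then(function (results) {
    // Extract GeoJSON features; adding them to the data source renders
    // them on the symbol layer automatically.
    var data = results.geojson.getFeatures();
    datasource.add(data);

    // Set the camera to the bounding box of the results.
    map.setCamera({ bounds: data.bbox, padding: 40 });
});
```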
azure-maps | Weather Coverage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-coverage.md | Radar tiles, showing areas of rain, snow, ice and mixed conditions, are returned ### Severe weather alerts
-Azure Maps [Severe weather alerts][severe-weather-alerts] service returns severe weather alerts from both official Government Meteorological Agencies and other leading severe weather alert providers. The service can return details such as alert type, category, level and detailed description. Severe weather includes conditions like hurricanes, tornados, tsunamis, severe thunderstorms, and fires.
+The [Severe weather alerts][severe-weather-alerts] service returns severe weather alerts from both official Government Meteorological Agencies and other leading severe weather alert providers. The service can return details such as alert type, category, level and detailed description. Severe weather includes conditions like hurricanes, tornados, tsunamis, severe thunderstorms, and fires. ### Other Azure Maps [Severe weather alerts][severe-weather-alerts] service returns severe - **Daily indices**. The [Get Daily Indices](/rest/api/maps/weather/get-daily-indices) service returns index values that provide information that can help in planning activities. For example, a health mobile application can notify users that today is good weather for running or playing golf. - **Historical weather**. The Historical Weather service includes Daily Historical [Records][dh-records], [Actuals][dh-actuals] and [Normals][dh-normals] that return climatology data such as past daily record temperatures, precipitation and snowfall at a given coordinate location. - **Hourly forecast**. The [Get Hourly Forecast](/rest/api/maps/weather/get-hourly-forecast) service returns detailed weather forecast information by the hour for up to 10 days.-- **Quarter-day forecast**. The [Get Quarter Day Forecast](/rest/api/maps/weather/get-quarter-day-forecast) Service returns detailed weather forecast by quarter-day for up to 15 days.-- **Tropical storms**. The Tropical Storm Service provides information about [active storms][tropical-storm-active], tropical storm [forecasts][tropical-storm-forecasts] and [locations][tropical-storm-locations] and the ability to [search][tropical-storm-search] for tropical storms by year, basin ID, or government ID.-- **Weather along route**. The [Get Weather Along Route](/rest/api/maps/weather/get-weather-along-route) Service returns hyper local (1 kilometer or less), up-to-the-minute weather nowcasts, weather hazard assessments, and notifications along a route described as a sequence of waypoints.+- **Quarter-day forecast**. The [Get Quarter Day Forecast](/rest/api/maps/weather/get-quarter-day-forecast) service returns detailed weather forecast by quarter-day for up to 15 days.
+- **Tropical storms**. The Tropical Storm service provides information about [active storms][tropical-storm-active], tropical storm [forecasts][tropical-storm-forecasts] and [locations][tropical-storm-locations] and the ability to [search][tropical-storm-search] for tropical storms by year, basin ID, or government ID.
+- **Weather along route**. The [Get Weather Along Route](/rest/api/maps/weather/get-weather-along-route) service returns hyper local (1 kilometer or less), up-to-the-minute weather nowcasts, weather hazard assessments, and notifications along a route described as a sequence of waypoints. ## Azure Maps Weather coverage tables |
azure-maps | Weather Service Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-service-tutorial.md | for i in range(0, len(coords), 2): await session.close() ``` -The script below renders the turbine locations on the map by calling the Azure Maps [Get Map Image service](/rest/api/maps/render/getmapimage). +The script below renders the turbine locations on the map by calling the [Get Map Image service](/rest/api/maps/render/getmapimage). ```python # Render the turbine locations on the map by calling the Azure Maps Get Map Image service |
azure-monitor | Data Collection Text Log | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md | To create the data collection rule in the Azure portal: See [Structure of a data collection rule in Azure Monitor (preview)](../essentials/data-collection-rule-structure.md#custom-logs) if you want to modify the text log DCR. > [!IMPORTANT]- > Custom data collection rules have a suffix of *Custom-*; for example, *Custom-rulename*. The *Custom-rulename* in the stream declaration must match the *Custom-rulename* name in the Log Analytics workspace. + > Custom data collection rules have a prefix of *Custom-*; for example, *Custom-rulename*. The *Custom-rulename* in the stream declaration must match the *Custom-rulename* name in the Log Analytics workspace. 1. Select **Save**. Learn more about: - [Azure Monitor Agent](azure-monitor-agent-overview.md). - [Data collection rules](../essentials/data-collection-rule-overview.md).-- [Best practices for cost management in Azure Monitor](../best-practices-cost.md).+- [Best practices for cost management in Azure Monitor](../best-practices-cost.md). |
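As a hedged illustration of the matching rule described above, a fragment of a text-log DCR shown as a JavaScript object (real DCRs are JSON/ARM templates; `MyAppLog_CL` is a hypothetical table name):

```javascript
// The stream name carries the Custom- prefix and must match the custom
// table name in the Log Analytics workspace.
const dcrFragment = {
  streamDeclarations: {
    "Custom-MyAppLog_CL": {            // matches the workspace table MyAppLog_CL
      columns: [
        { name: "TimeGenerated", type: "datetime" },
        { name: "RawData", type: "string" }
      ]
    }
  }
};
```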
azure-monitor | Asp Net Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md | This SDK requires `HttpContext`. It doesn't work in any non-HTTP applications, i For the latest updates and bug fixes, see the [release notes](./release-notes.md). +## Release Notes ++For version 2.12 and newer: [.NET SDKs (Including ASP.NET, ASP.NET Core, and Logging Adapters)](https://github.com/Microsoft/ApplicationInsights-dotnet/releases) ++Our [Service Updates](https://azure.microsoft.com/updates/?service=application-insights) also summarize major Application Insights improvements. + ## Next steps * [Explore user flows](./usage-flows.md) to understand how users move through your app. |
azure-monitor | Asp Net | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net.md | There's a known issue in the current version of Visual Studio 2019: storing the For the latest updates and bug fixes, [consult the release notes](./release-notes.md). +## Release Notes ++For version 2.12 and newer: [.NET SDKs (Including ASP.NET, ASP.NET Core, and Logging Adapters)](https://github.com/Microsoft/ApplicationInsights-dotnet/releases) ++Our [Service Updates](https://azure.microsoft.com/updates/?service=application-insights) also summarize major Application Insights improvements. + ## Next steps * Add synthetic transactions to test that your website is available from all over the world with [availability monitoring](availability-overview.md). |
azure-monitor | Opencensus Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md | For more information about how to use queries and logs, see [Logs in Azure Monit [!INCLUDE [azure-monitor-app-insights-test-connectivity](../../../includes/azure-monitor-app-insights-test-connectivity.md)] +## Release Notes ++For the latest release notes, see [Python Azure Monitor Exporter](https://github.com/census-instrumentation/opencensus-python/blob/master/contrib/opencensus-ext-azure/CHANGELOG.md) ++Our [Service Updates](https://azure.microsoft.com/updates/?service=application-insights) also summarize major Application Insights improvements. + ## Next steps * [Tracking incoming requests](./opencensus-python-dependency.md) |
azure-monitor | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/release-notes.md | - Title: Release notes for Azure Application Insights | Microsoft Docs -description: The latest updates for Application Insights SDKs. -- Previously updated : 07/27/2020---# Release Notes - Application Insights --This page outlines where to find detailed release notes regarding updates and bug fixes for each of the Application Insights SDKs. --## SDK --* .NET SDKs - - For version 2.12 and newer: [.NET SDKs (Including ASP.NET, ASP.NET Core, and Logging Adapters)](https://github.com/Microsoft/ApplicationInsights-dotnet/releases) - - For older releases: - - [ASP.NET Web Server SDK](https://github.com/Microsoft/ApplicationInsights-server-dotnet/releases) - - [.NET SDK](https://github.com/Microsoft/ApplicationInsights-dotnet/releases) - - [.NET Logging Adapters](https://github.com/Microsoft/ApplicationInsights-dotnet-logging/releases) - - [ASP.NET Core](https://github.com/Microsoft/ApplicationInsights-aspnet5/releases) -* [Java](https://github.com/Microsoft/ApplicationInsights-Java/releases) -* [JavaScript](https://github.com/microsoft/ApplicationInsights-JS/releases) -* [Python Azure Monitor Exporter](https://github.com/census-instrumentation/opencensus-python/blob/master/contrib/opencensus-ext-azure/CHANGELOG.md) --Read also our [blogs](https://azure.microsoft.com/blog/tag/application-insights/) and [Service Updates](https://azure.microsoft.com/updates/?service=application-insights) which summarize major improvements in the Application Insights service as a whole. --## Next steps --Get started with codeless monitor codeless monitoring: --* [Azure VM and Azure virtual machine scale set IIS-hosted apps](./azure-vm-vmss-apps.md) -* [IIS server](./status-monitor-v2-overview.md) -* [Azure Web Apps](./azure-web-apps.md) --Get started with code-based monitoring: --* [ASP.NET](./asp-net.md) -* [ASP.NET Core](./asp-net-core.md) -* [Java](./opentelemetry-enable.md?tabs=java) -* [Node.js](./nodejs.md) -* [Python](./opencensus-python.md) |
azure-monitor | Container Insights Cost Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost-config.md | Cost presets and collection settings are available for selection in the Azure po | Cost-optimized | 5 m | Excludes kube-system, gatekeeper-system, azure-arc | Not enabled | | Syslog | 1 m | None | Enabled by default | +[](media/container-insights-cost-config/cost-profiles-options.png#lightbox) + ## Configuring AKS data collection settings using Azure CLI Using the CLI to enable monitoring for your AKS requires passing in configuration as a JSON file. az aks enable-addons -a monitoring --enable-msi-auth-for-monitoring -g <clusterR 2. From the resource pane on the left, select the 'Insights' item under the 'Monitoring' section. 3. If you have not previously configured Container Insights, select the 'Configure Azure Monitor' button. For clusters already onboarded to Insights, select the "Monitoring Settings" button in the toolbar 4. If you are configuring Container Insights for the first time or have not migrated to using [managed identity authentication (preview)](../containers/container-insights-onboard.md#authentication), select the "Use managed identity (preview)" checkbox-5. Using the dropdown, choose one of the "Cost presets", for more configuration, you may select the "Edit profile settings" +[](media/container-insights-cost-config/cost-settings-onboarding.png#lightbox) +5. Using the dropdown, choose one of the "Cost presets", for more configuration, you may select the "Edit collection settings" +[](media/container-insights-cost-config/advanced-collection-settings.png#lightbox) 6. Click the blue "Configure" button to finish The collection settings can be modified through the input of the `dataCollection ## [Azure portal](#tab/create-portal) 1. In the Azure portal, select the AKS hybrid cluster that you wish to monitor 2. From the resource pane on the left, select the 'Insights' item under the 'Monitoring' section.-3. If you have not previously configured Container Insights, select the 'Configure Azure Monitor' button. For clusters already onboarded to Insights, select the "Monitoring Settings" button in the toolbar -4. Using the dropdown, choose one of the "Cost presets", for more configuration, you may select the "Edit advanced collection settings" +3. If you have not previously configured Container Insights, select the 'Configure Azure Monitor' button. For clusters already onboarded to Insights, select the "Monitoring Settings" button in the toolbar +[](media/container-insights-cost-config/cost-settings-onboarding.png#lightbox) +4. Using the dropdown, choose one of the "Cost presets", for more configuration, you may select the "Edit collection settings" +[](media/container-insights-cost-config/advanced-collection-settings.png#lightbox) 5. Click the blue "Configure" button to finish + ## [ARM](#tab/create-arm) The collection settings can be modified through the input of the `dataCollection 2. From the resource pane on the left, select the 'Insights' item under the 'Monitoring' section. 3. If you have not previously configured Container Insights, select the 'Configure Azure Monitor' button. For clusters already onboarded to Insights, select the "Monitoring Settings" button in the toolbar 4. If you are configuring Container Insights for the first time, select the "Use managed identity (preview)" checkbox+[](media/container-insights-cost-config/cost-settings-onboarding.png#lightbox) 5. 
Using the dropdown, choose one of the "Cost presets"; for more configuration, you may select the "Edit advanced collection settings".
+[](media/container-insights-cost-config/advanced-collection-settings.png#lightbox)
6. Click the blue "Configure" button to finish. |
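As a hedged sketch of the JSON configuration the CLI onboarding above expects, with field names taken from this article's settings table and example values only:

```javascript
// Shown as a JS object for illustration; the CLI consumes this as a JSON file.
// Values mirror the cost-optimized preset described in the table above.
const dataCollectionSettings = {
  interval: "5m",                     // collection frequency
  namespaceFilteringMode: "Exclude",  // Include | Exclude | Off (assumed values)
  namespaces: ["kube-system", "gatekeeper-system", "azure-arc"]
};
```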
azure-monitor | Container Insights Cost | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md | Title: Monitoring cost for Container insights description: This article describes the monitoring cost for metrics and inventory data collected by Container insights to help customers manage their usage and associated costs. Previously updated : 01/24/2023 Last updated : 03/02/2023 # Understand monitoring costs for Container insights By using the default [pricing](https://azure.microsoft.com/pricing/details/monit ## Control ingestion to reduce cost
-Consider a scenario where your organization's different business units share Kubernetes infrastructure and a Log Analytics workspace. Each business unit is separated by a Kubernetes namespace. You can visualize how much data is ingested in each workspace by using the **Data Usage** runbook. The runbook is available from the **View Workbooks** dropdown list.
+Consider a scenario where your organization's different business units share Kubernetes infrastructure and a Log Analytics workspace. Each business unit is separated by a Kubernetes namespace. You can visualize how much data is ingested in each workspace by using the **Data Usage** workbook. The workbook is available from the **Reports** tab. [](media/container-insights-cost/workbooks-dropdown.png#lightbox) |
azure-monitor | Container Insights Enable Arc Enabled Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md | +- The following endpoints need to be enabled for outbound access in addition to the [Azure Arc-enabled Kubernetes network requirements](../../azure-arc/kubernetes/network-requirements.md). **Azure public cloud** |
azure-monitor | Container Insights Enable Provisioned Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-provisioned-clusters.md | +- The following endpoints need to be enabled for outbound access in addition to the [Azure Arc-enabled Kubernetes network requirements](../../azure-arc/kubernetes/network-requirements.md). - Azure CLI version 2.43.0 or higher - Azure k8s-extension version 1.3.7 or higher - Azure Resource-graph version 2.1.0 |
azure-monitor | Vminsights Enable Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-overview.md | The DCR is defined by the options in the following table. | Option | Description | |:|:|-| Guest performance | Specifies whether to collect performance data from the guest operating system. This option is required for all machines. |
+| Guest performance | Specifies whether to collect performance data from the guest operating system. This option is required for all machines. The collection interval for performance data is every 60 seconds. | | Processes and dependencies | Collects information about processes running on the virtual machine and dependencies between machines. This option is optional and enables the [Map feature in VM insights](vminsights-maps.md) for the machine. | | Log Analytics workspace | Workspace to store the data. Only workspaces with VM insights are listed. | |
azure-netapp-files | Azure Netapp Files Resource Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md | The service dynamically adjusts the `maxfiles` limit for a volume based on its p >[!IMPORTANT] > If your volume has a volume size (quota) of more than 4 TiB and you want to increase the `maxfiles` limit, you must initiate [a support request](#request-limit-increase). -If you've allocated at least 4 TiB of quota for a volume, you can initiate a support request to increase the `maxfiles` (inodes) limit beyond 106,255,630. For every 106,255,630 files you increase (or a fraction thereof), you need to increase the corresponding volume quota by 4 TiB. For example, if you increase the `maxfiles` limit from 106,255,630 files to 212,511,260 files (or any number in between), you need to increase the volume quota from 4 TiB to 8 TiB. +For volumes 100 TiB or under, if you've allocated at least 5 TiB of quota for a volume, you can initiate a support request to increase the `maxfiles` (inodes) limit beyond 106,255,630. For every 106,255,630 files you increase (or a fraction thereof), you need to increase the corresponding volume quota by 5 TiB. For example, if you increase the `maxfiles` limit from 106,255,630 files to 212,511,260 files (or any number in between), you need to increase the volume quota from 5 TiB to 10 TiB. -You can increase the `maxfiles` limit to 531,278,150 if your volume quota is at least 20 TiB. +For volumes 100 TiB or under, you can increase the `maxfiles` limit up to 531,278,150 if your volume quota is at least 25 TiB. >[!IMPORTANT] > Once a volume has exceeded a `maxfiles` limit, you cannot reduce volume size below the quota corresponding to that `maxfiles` limit even if you have reduced the actual used file count. For example, if you have crossed the 63,753,378 `maxfiles` limit, the volume quota cannot be reduced below its corresponding index of 2 TiB. |
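An illustration of the sizing rule above for volumes of 100 TiB or under (a hypothetical helper, not an Azure API): each block of 106,255,630 files, or fraction thereof, requires 5 TiB of volume quota.

```javascript
// Quota needed for a desired maxfiles value, per the rule described above.
const FILES_PER_BLOCK = 106255630;
const TIB_PER_BLOCK = 5;

function requiredQuotaTiB(maxfiles) {
  return Math.ceil(maxfiles / FILES_PER_BLOCK) * TIB_PER_BLOCK;
}

console.log(requiredQuotaTiB(212511260)); // 10 TiB, matching the example above
console.log(requiredQuotaTiB(531278150)); // 25 TiB, the documented ceiling
```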
azure-netapp-files | Configure Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md | The following diagram demonstrates how customer-managed keys work with Azure Net Azure NetApp Files customer-managed keys is supported for the following regions: +* Australia Central +* Australia Central 2 +* Australia East +* Australia Southeast +* Brazil South +* Canada Central +* Central US * East Asia+* East US * East US 2+* France Central +* Germany North +* Germany West Central +* Japan East +* Japan West +* Korea Central +* North Central US +* North Europe +* Norway East +* Norway West +* South Africa North +* South Central US +* South India +* Southeast Asia +* Sweden Central +* Switzerland North +* UAE Central +* UAE North +* UK South * West Europe+* West US +* West US 2 +* West US 3 ## Requirements |
azure-video-indexer | Animated Characters Recognition How To | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/animated-characters-recognition-how-to.md | - Title: Animated character detection with Azure Video Indexer how to -description: This topic demonstrates how to use animated character detection with Azure Video Indexer. ----- Previously updated : 12/07/2020----# Use the animated character detection with portal and API ---Azure Video Indexer supports detection, grouping, and recognition of characters in animated content, this functionality is available through the Azure portal and through API. Review [this overview](animated-characters-recognition.md) article. --This article demonstrates to how to use the animated character detection with the Azure portal and the Azure Video Indexer API. --## Use the animated character detection with portal --In the trial accounts the Custom Vision integration is managed by Azure Video Indexer, you can start creating and using the animated characters model. If using the trial account, you can skip the following ("Connect your Custom Vision account") section. --### Connect your Custom Vision account (paid accounts only) --If you own an Azure Video Indexer paid account, you need to connect a Custom Vision account first. If you don't have a Custom Vision account already, create one. For more information, see [Custom Vision](../cognitive-services/custom-vision-service/overview.md). --> [!NOTE] -> Both accounts need to be in the same region. The Custom Vision integration is currently not supported in the Japan region. --Paid accounts that have access to their Custom Vision account can see the models and tagged images there. Learn more about [improving your classifier in Custom Vision](../cognitive-services/custom-vision-service/getting-started-improving-your-classifier.md). --The training of the model should be done only via Azure Video Indexer, and not via the Custom Vision website. --#### Connect a Custom Vision account with API --Follow these steps to connect your Custom Vision account to Azure Video Indexer, or to change the Custom Vision account that is currently connected to Azure Video Indexer: --1. Browse to [www.customvision.ai](https://www.customvision.ai) and sign in. -1. Copy the keys for the Training and Prediction resources: -- > [!NOTE] - > To provide all the keys you need to have two separate resources in Custom Vision, one for training and one for prediction. -1. Provide other information: -- * Endpoint - * Prediction resource ID -1. Browse and sign in to the [Azure Video Indexer](https://vi.microsoft.com/). -1. Select the question mark on the top-right corner of the page and choose **API Reference**. -1. Make sure you're subscribed to API Management by clicking **Products** tab. If you have an API connected you can continue to the next step, otherwise, subscribe. -1. On the developer portal, select the **Complete API Reference** and browse to **Operations**. -1. Select **Connect Custom Vision Account** and select **Try it**. -1. Fill in the required fields and the access token and select **Send**. -- For more information about how to get the Azure Video Indexer access token, go to the [developer portal](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token), and see the [relevant documentation](video-indexer-use-apis.md#obtain-access-token-using-the-authorization-api). -1. Once the call return 200 OK response, your account is connected. -1. 
To verify your connection by browse to the [Azure Video Indexer](https://vi.microsoft.com/) portal: -1. Select the **Content model customization** button in the top-right corner. -1. Go to the **Animated characters** tab. -1. Once you select Manage models in Custom Vision, you'll be transferred to the Custom Vision account you just connected. --> [!NOTE] -> Currently, only models that were created via Azure Video Indexer are supported. Models that are created through Custom Vision will not be available. In addition, the best practice is to edit models that were created through Azure Video Indexer only through the Azure Video Indexer platform, since changes made through Custom Vision may cause unintended results. --### Create an animated characters model --1. Browse to the [Azure Video Indexer](https://vi.microsoft.com/) website and sign in. -1. To customize a model in your account, select the **Content model customization** button on the left of the page. -- > [!div class="mx-imgBorder"] - > :::image type="content" source="./media/content-model-customization/content-model-customization.png" alt-text="Customize content model in Azure Video Indexer "::: -1. Go to the **Animated characters** tab in the model customization section. -1. Select **Add model**. -1. Name your model and select enter to save the name. --> [!NOTE] -> The best practice is to have one custom vision model for each animated series. --### Index a video with an animated model --For the initial training, upload at least two videos. Each should be preferably longer than 15 minutes, before expecting good recognition model. If you have shorter episodes, we recommend uploading at least 30 minutes of video content before training. This will allow you to merge groups that belong to the same character from different scenes and backgrounds, and therefore increase the chance it will detect the character in the following episodes you index. To train a model on multiple videos (episodes), you need to index them all with the same animation model. --1. Select the **Upload** button. -1. Choose a video to upload (from a file or a URL). -1. Select **Advanced options**. -1. Under **People / Animated characters** choose **Animation models**. -1. If you have one model it will be chosen automatically, and if you have multiple models you can choose the relevant one out of the dropdown menu. -1. Select upload. -1. Once the video is indexed, you'll see the detected characters in the **Animated characters** section in the **Insights** pane. --Before tagging and training the model, all animated characters will be named “Unknown #X”. After you train the model, they'll also be recognized. --### Customize the animated characters models --1. Name the characters in Azure Video Indexer. -- 1. After the model created character group, it's recommended to review these groups in Custom Vision. - 1. To tag an animated character in your video, go to the **Insights** tab and select the **Edit** button on the top-right corner of the window. - 1. In the **Insights** pane, select any of the detected animated characters and change their names from "Unknown #X" to a temporary name (or the name that was previously assigned to the character). - 1. After typing in the new name, select the check icon next to the new name. This saves the new name in the model in Azure Video Indexer. -1. Paid accounts only: Review the groups in Custom Vision -- > [!NOTE] - > Paid accounts that have access to their Custom Vision account can see the models and tagged images there. 
Learn more about [improving your classifier in Custom Vision](../cognitive-services/custom-vision-service/getting-started-improving-your-classifier.md). It’s important to note that training of the model should be done only via Azure Video Indexer (as described in this topic), and not via the Custom Vision website. -- 1. Go to the **Custom Models** page in Azure Video Indexer and choose the **Animated characters** tab. - 1. Select the Edit button for the model you're working on to manage it in Custom Vision. - 1. Review each character group: -- * If the group contains unrelated images, it's recommended to delete these in the Custom Vision website. - * If there are images that belong to a different character, change the tag on these specific images by selecting the image, adding the right tag and deleting the wrong tag. - * If the group isn't correct, meaning it contains mainly non-character images or images from multiple characters, you can delete in Custom Vision website or in Azure Video Indexer insights. - * The grouping algorithm will sometimes split your characters to different groups. It's therefore recommended to give all the groups that belong to the same character the same name (in Azure Video Indexer Insights), which will immediately cause all these groups to appear as on in Custom Vision website. - 1. Once the group is refined, make sure the initial name you tagged it with reflects the character in the group. -1. Train the model -- 1. After you finished editing all names you want, you need to train the model. - 1. Once a character is trained into the model, it will be recognized it the next video indexed with that model. - 1. Open the customization page and select the **Animated characters** tab and then select the **Train** button to train your model. In order to keep the connection between Video - -Indexer and the model, don't train the model in the Custom Vision website (paid accounts have access to Custom Vision website), only in Azure Video Indexer. -Once trained, any video that will be indexed or reindexed with that model will recognize the trained characters. --## Delete an animated character and the model --1. Delete an animated character. -- 1. To delete an animated character in your video insights, go to the **Insights** tab and select the **Edit** button on the top-right corner of the window. - 1. Choose the animated character and then select the **Delete** button under their name. -- > [!NOTE] - > This will delete the insight from this video but will not affect the model. -1. Delete a model. -- 1. Select the **Content model customization** button on the top menu and go to the **Animated characters** tab. - 1. Select the ellipsis icon to the right of the model you wish to delete and then on the delete button. - - * Paid account: the model will be disconnected from Azure Video Indexer and you won't be able to reconnect it. - * Trial account: the model will be deleted from Customs vision as well. - - > [!NOTE] - > In a trial account, you only have one model you can use. After you delete it, you can’t train other models. --## Use the animated character detection with API --1. Connect a Custom Vision account. -- If you own an Azure Video Indexer paid account, you need to connect a Custom Vision account first. <br/> - If you don’t have a Custom Vision account already, create one. For more information, see [Custom Vision](../cognitive-services/custom-vision-service/overview.md). 
-- [Connect your Custom Vision account using API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Connect-Custom-Vision-Account). -1. Create an animated characters model. -- Use the [create animation model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Animation-Model) API. -1. Index or reindex a video. -- Use the [re-indexing](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) API. -1. Customize the animated characters models. -- Use the [train animation model](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Train-Animation-Model) API. --### View the output --See the animated characters in the generated JSON file. --```json -"animatedCharacters": [ - { - "videoId": "e867214582", - "confidence": 0, - "thumbnailId": "00000000-0000-0000-0000-000000000000", - "seenDuration": 201.5, - "seenDurationRatio": 0.3175, - "isKnownCharacter": true, - "id": 4, - "name": "Bunny", - "appearances": [ - { - "startTime": "0:00:52.333", - "endTime": "0:02:02.6", - "startSeconds": 52.3, - "endSeconds": 122.6 - }, - { - "startTime": "0:02:40.633", - "endTime": "0:03:16.6", - "startSeconds": 160.6, - "endSeconds": 196.6 - } - ] - } -] -``` --## Limitations --* Currently, the "animation identification" capability isn't supported in the East Asia region. -* Characters that appear small or far away in the video may not be identified properly if the video's quality is poor. -* The recommendation is to use a model per set of animated characters (for example, per animated series). --## Next steps --[Azure Video Indexer overview](video-indexer-overview.md) |
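As a quick illustration of consuming this output, here's a minimal Python sketch (not part of the original article) that walks the `animatedCharacters` fragment shown above and prints each character's screen time. It assumes you've saved that fragment, wrapped in braces as an object, to a local file; the file name `insights.json` is a placeholder.

```python
import json

# Load the JSON fragment shown above, saved locally as an object,
# e.g. {"animatedCharacters": [...]} -- the file name is a placeholder.
with open("insights.json", encoding="utf-8") as f:
    insights = json.load(f)

for character in insights["animatedCharacters"]:
    # Trained characters carry a real name; untrained ones stay "Unknown #X".
    if character["isKnownCharacter"]:
        name = character["name"]
    else:
        name = f"Unknown #{character['id']}"
    ratio = character["seenDurationRatio"]
    print(f"{name}: on screen {character['seenDuration']}s ({ratio:.1%} of the video)")
    for appearance in character["appearances"]:
        print(f"  {appearance['startTime']} -> {appearance['endTime']}")
```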
azure-video-indexer | Animated Characters Recognition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/animated-characters-recognition.md | - Title: Animated character detection with Azure Video Indexer -description: Azure Video Indexer supports detection, grouping, and recognition of characters in animated content via integration with Cognitive Services custom vision. This functionality is available both through the portal and through the API. - Previously updated : 11/19/2019---# Animated character detection ---Azure Video Indexer supports detection, grouping, and recognition of characters in animated content via integration with [Cognitive Services custom vision](https://azure.microsoft.com/services/cognitive-services/custom-vision-service/). This functionality is available both through the portal and through the API. --After uploading an animated video with a specific animation model, Azure Video Indexer extracts keyframes, detects animated characters in these frames, groups similar characters, and chooses the best sample. Then, it sends the grouped characters to Custom Vision, which identifies characters based on the models it was trained on. --Before you start training your model, the characters are detected namelessly. As you add names and train the model, Azure Video Indexer will recognize the characters and name them accordingly. --## Flow diagram --The following diagram demonstrates the flow of the animated character detection process. --> [!div class="mx-imgBorder"] -> :::image type="content" source="./media/animated-characters-recognition/flow.png" alt-text="Flow diagram of the animated character detection process." lightbox="./media/animated-characters-recognition/flow.png"::: --## Accounts --Depending on the type of your Azure Video Indexer account, different feature sets are available. For information on how to connect your account to Azure, see [Create an Azure Video Indexer account connected to Azure](connect-to-azure.md). --* The trial account: Azure Video Indexer uses an internal Custom Vision account to create a model and connect it to your Azure Video Indexer account. -* The paid account: you connect your Custom Vision account to your Azure Video Indexer account (if you don't already have one, you need to create an account first). --### Trial vs. paid --|Functionality|Trial|Paid| -|||| -|Custom Vision account|Managed behind the scenes by Azure Video Indexer. |Your Custom Vision account is connected to Azure Video Indexer.| -|Number of animation models|One|Up to 100 models per account (Custom Vision limitation).| -|Training the model|Azure Video Indexer trains the model for new characters and additional examples of existing characters.|The account owner trains the model when they're ready to make changes.| -|Advanced options in Custom Vision|No access to the Custom Vision portal.|You can adjust the models yourself in the Custom Vision portal.| --## Use the animated character detection with portal and API --For details, see [Use the animated character detection with portal and API](animated-characters-recognition-how-to.md). --## Limitations --* Currently, the "animation identification" capability isn't supported in the East Asia region. -* Characters that appear small or far away in the video may not be identified properly if the video's quality is poor. -* The recommendation is to use a model per set of animated characters (for example, per animated series). --## Next steps --[Azure Video Indexer overview](video-indexer-overview.md) |
azure-video-indexer | Customize Content Models Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-content-models-overview.md | Azure Video Indexer allows you to customize some of its models to be adapted to This article gives links to articles that explain the benefits of each type of customization. The article also links to how-to guides that show how you can implement the customization of each model. -## Animated characters --* [Animated character detection](animated-characters-recognition.md) - ## Brands model * [Customizing the brands model overview](customize-brands-model-overview.md) |
azure-video-indexer | Indexing Configuration Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/indexing-configuration-guide.md | When indexing a video, default streaming settings are applied. Below are the str Output minutes (standard encoder): 130 x $0.015/minute = $1.95. - **No streaming**: Insights are generated but no streaming operation is performed and the video isn't available on the Azure Video Indexer website. When No streaming is selected, you aren't billed for encoding. -### Customizing content models - People/Animated characters and Brand categories +### Customizing content models -Azure Video Indexer allows you to customize some of its models to be adapted to your specific use case. These models include animated characters, brands, language, and person. If you have customized models, this section enables you to configure if one of the created models should be used for the indexing. +Azure Video Indexer allows you to customize some of its models to be adapted to your specific use case. These models include brands, language, and person. If you have customized models, this section enables you to configure if one of the created models should be used for the indexing. ## Next steps |
azure-video-indexer | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md | You can now create an Azure Video Indexer paid account in the Switzerland West a ## October 2020 -### Animated character identification improvements --Azure Video Indexer supports detection, grouping, and recognition of characters in animated content via integration with Cognitive Services custom vision. We added a major improvement to this AI algorithm's detection and character recognition; as a result, insight accuracy and character identification are significantly improved. - ### Planned Azure Video Indexer website authentication changes Starting March 1st 2021, you will no longer be able to sign up and sign in to the [Azure Video Indexer website](https://www.videoindexer.ai/) [developer portal](video-indexer-use-apis.md) using Facebook or LinkedIn. To fix the account configuration, in the Azure Video Indexer website, navigate t ### Configure the custom vision account -Configure the custom vision account on paid accounts using the Azure Video Indexer website (previously, this was only supported by API). To do that, sign in to the Azure Video Indexer website, choose Model Customization > Animated characters > Configure. +Configure the custom vision account on paid accounts using the Azure Video Indexer website (previously, this was only supported by API). To do that, sign in to the Azure Video Indexer website, choose Model Customization > <*model*> > Configure. ### Scenes, shots and keyframes – now in one insight pane Status code 409 will now be returned from [Re-Index Video](https://api-portal.vi * Search for animated characters in the gallery - When indexing animated characters, you can now search for them in the account's video gallery. For more information, see [Animated characters recognition](animated-characters-recognition.md). + When indexing animated characters, you can now search for them in the account's video gallery. ## September 2019 Multiple advancements announced at IBC 2019: * Animated character recognition (public preview) - Ability to detect, group, and recognize characters in animated content, via integration with custom vision. For more information, see [Animated character detection](animated-characters-recognition.md). + Ability to detect, group, and recognize characters in animated content, via integration with custom vision. * Multi-language identification (public preview) Detect segments in multiple languages in the audio track and create a multilingual transcript based on them. Initial support: English, Spanish, German and French. For more information, see [Automatically identify and transcribe multi-language content](multi-language-identification-transcription.md). |
azure-video-indexer | Video Indexer Embed Widgets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-embed-widgets.md | A Cognitive Insights widget includes all visual insights that were extracted fro |Name|Definition|Description| ||||-|`widgets` | Strings separated by comma | Allows you to control the insights that you want to render.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?widgets=people,keywords` renders only people and keywords UI insights.<br/>Available options: `people`, `animatedCharacters`, `keywords`, `audioEffects`, `labels`, `sentiments`, `emotions`, `topics`, `keyframes`, `transcript`, `ocr`, `speakers`, `scenes`, `spokenLanguage`, `observedPeople`, `namedEntities`.| +|`widgets` | Strings separated by comma | Allows you to control the insights that you want to render.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?widgets=people,keywords` renders only people and keywords UI insights.<br/>Available options: `people`, `keywords`, `audioEffects`, `labels`, `sentiments`, `emotions`, `topics`, `keyframes`, `transcript`, `ocr`, `speakers`, `scenes`, `spokenLanguage`, `observedPeople`, `namedEntities`.| |`controls`|Strings separated by comma|Allows you to control the controls that you want to render.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?controls=search,download` renders only search option and download button.<br/>Available options: `search`, `download`, `presets`, `language`.| |`language`|A short language code (language name)|Controls insights language.<br/>Example: `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?language=es-es` <br/>or `https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?language=spanish`| |`locale` | A short language code | Controls the language of the UI. The default value is `en`. <br/>Example: `locale=de`.| If you embed Azure Video Indexer insights with your own [Azure Media Player](htt You can choose the types of insights that you want. To do this, specify them as a value to the following URL parameter that's added to the embed code that you get (from the [API](https://aka.ms/avam-dev-portal) or from the [Azure Video Indexer](https://www.videoindexer.ai/) website): `&widgets=<list of wanted widgets>`. -The possible values are: `people`, `animatedCharacters` , `keywords`, `labels`, `sentiments`, `emotions`, `topics`, `keyframes`, `transcript`, `ocr`, `speakers`, `scenes`, `namedEntities`, `logos`. +The possible values are: `people`, `keywords`, `labels`, `sentiments`, `emotions`, `topics`, `keyframes`, `transcript`, `ocr`, `speakers`, `scenes`, `namedEntities`, `logos`. For example, if you want to embed a widget that contains only people and keywords insights, the iframe embed URL will look like this: |
azure-video-indexer | Video Indexer Output Json V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-output-json-v2.md | This section shows a summary of the insights. |`duration`|The time when an insight occurred, in seconds.| |`thumbnailVideoId`|The ID of the video from which the thumbnail was taken.| |`thumbnailId`|The video's thumbnail ID. To get the actual thumbnail, call [Get-Thumbnail](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Thumbnail) and pass it `thumbnailVideoId` and `thumbnailId`.|-|`faces/animatedCharacters`|Contains zero or more faces. For more information, see [faces/animatedCharacters](#facesanimatedcharacters).| +|`faces`|Contains zero or more faces. For more information, see [faces](#faces).| |`keywords`|Contains zero or more keywords. For more information, see [keywords](#keywords).| |`sentiments`|Contains zero or more sentiments. For more information, see [sentiments](#sentiments).| |`audioEffects`| Contains zero or more audio effects. For more information, see [audioEffects](#audioeffects-preview).| A face might have an ID, a name, a thumbnail, other metadata, and a list of its |`ocr`|The [OCR](#ocr) insight.| |`keywords`|The [keywords](#keywords) insight.| |`transcripts`|Might contain one or more [transcript](#transcript).|-|`faces/animatedCharacters`|The [faces/animatedCharacters](#facesanimatedcharacters) insight.| +|`faces`|The [faces](#faces) insight.| |`labels`|The [labels](#labels) insight.| |`shots`|The [shots](#shots) insight.| |`brands`|The [brands](#brands) insight.| Example: } ``` -#### faces/animatedCharacters +#### faces -The `animatedCharacters` element replaces `faces` if the video was indexed with an animated characters model. This indexing is done through a custom model in Custom Vision. Azure Video Indexer runs it on keyframes. --If faces (not animated characters) are present, Azure Video Indexer uses the Face API on all the video's frames to detect faces and celebrities. +If faces are present, Azure Video Indexer uses the Face API on all the video's frames to detect faces and celebrities. |Name|Description| ||| |
batch | Batch Custom Image Pools To Azure Compute Gallery Migration Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-custom-image-pools-to-azure-compute-gallery-migration-guide.md | + + Title: Migrate Azure Batch custom image pools to Azure Compute Gallery +description: Learn how to migrate Azure Batch custom image pools to Azure Compute Gallery and plan for feature end of support. ++ Last updated : 03/07/2023+++# Migrate Azure Batch custom image pools to Azure Compute Gallery ++To improve reliability and scale, and to align with modern Azure offerings, Azure Batch will retire custom image Batch pools specified +from virtual hard disk (VHD) blobs in Azure Storage and Azure Managed Images on *March 31, 2026*. Learn how to migrate your Azure +Batch custom image pools using Azure Compute Gallery. ++## Feature end of support ++When you create an Azure Batch pool using the Virtual Machine Configuration, you specify an image reference that provides the +operating system for each compute node in the pool. You can create a pool of virtual machines either with a supported Azure +Marketplace image or with a custom image. Custom images from VHD blobs and managed images are either legacy offerings or +non-scalable solutions for Azure Batch. To ensure reliable infrastructure provisioning at scale, all custom image sources other +than Azure Compute Gallery will be retired on *March 31, 2026*. ++## Alternative: Use Azure Compute Gallery references for Batch custom image pools ++When you use the Azure Compute Gallery (formerly known as Shared Image Gallery) for your custom image, you have control over +the operating system type and configuration, and the type of data disks. Your shared image can include applications and reference +data that become available on all the Batch pool nodes as soon as they're provisioned. You can also have multiple versions of an +image as needed for your environment. When you use an image version to create a VM, the image version is used to create new +disks for the VM. ++Using a shared image saves time in preparing your pool's compute nodes to run your Batch workload. It's possible to use an +Azure Marketplace image and install software on each compute node after allocation. However, using a shared image can lead +to greater efficiency, such as compute nodes reaching the ready state faster and more reproducible workloads. Additionally, you can specify multiple +replicas for the shared image so that when you create pools with many compute nodes, provisioning latencies can be lower. ++## Migrate your eligible pools ++To migrate your Batch custom image pools from managed image to shared image, review the Azure Batch guide on using +[Azure Compute Gallery to create a custom image pool](batch-sig-images.md). ++If you have either a VHD blob or a managed image, you can convert either one directly to a Compute Gallery image that can be used +with Azure Batch custom image pools. When you're creating a VM image definition for a Compute Gallery, on the Version tab, +you can select a source option to migrate from, including types being retired for Batch custom image pools: ++| Source | Other fields | +||| +| Managed image | Select the **Source image** from the drop-down. The managed image must be in the same region that you chose in **Instance details**. | +| VHD in a storage account | Select **Browse** to choose the storage account for the VHD. 
| ++For more information about this process, see +[creating an image definition and version for Compute Gallery](../virtual-machines/image-version.md#create-an-image). ++## FAQs ++- How can I create an Azure Compute Gallery? ++ See the [guide](../virtual-machines/create-gallery.md#create-a-private-gallery) for Compute Gallery creation. ++- How do I create a pool with a Compute Gallery image? ++ See the [guide](batch-sig-images.md) for creating a pool with a Compute Gallery image. ++- What considerations are there for Compute Gallery image-based pools? ++ See the [considerations for large pools](batch-sig-images.md#considerations-for-large-pools). ++- Can I use Azure Compute Gallery images in different subscriptions or in different Azure AD tenants? ++ If the shared image isn't in the same subscription as the Batch account, you must register the `Microsoft.Batch` resource provider for that subscription. The two subscriptions must be in the same Azure AD tenant. The image can be in a different region as long as it has replicas in the same region as your Batch account. ++## Next steps ++For more information, see [Azure Compute Gallery](../virtual-machines/azure-compute-gallery.md). |
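To make the target state concrete, here's a minimal Python sketch (an illustration, not part of the migration guide) that creates a Batch pool from a Compute Gallery image version by its ARM resource ID, using the `azure-batch` SDK. The account URL, service principal values, image ID, VM size, and node agent SKU are placeholders; note that pools referencing gallery images generally require Azure AD authentication rather than shared-key authentication, and the node agent SKU must match the image's operating system.

```python
from azure.batch import BatchServiceClient
from azure.batch.models import (
    ImageReference, PoolAddParameter, VirtualMachineConfiguration)
from azure.common.credentials import ServicePrincipalCredentials

# Azure AD (service principal) authentication -- all values are placeholders.
credentials = ServicePrincipalCredentials(
    client_id="<app-client-id>",
    secret="<app-secret>",
    tenant="<tenant-id>",
    resource="https://batch.core.windows.net/",
)
client = BatchServiceClient(
    credentials, batch_url="https://<account>.<region>.batch.azure.com")

# Reference the Compute Gallery image version by its ARM resource ID.
image = ImageReference(
    virtual_machine_image_id=(
        "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
        "Microsoft.Compute/galleries/<gallery>/images/<definition>/versions/<version>"
    )
)

client.pool.add(PoolAddParameter(
    id="gallery-image-pool",
    vm_size="Standard_D2s_v3",
    virtual_machine_configuration=VirtualMachineConfiguration(
        image_reference=image,
        node_agent_sku_id="batch.node.ubuntu 20.04",  # must match the image OS
    ),
    target_dedicated_nodes=2,
))
```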
batch | Batch Pools To Simplified Compute Node Communication Model Migration Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pools-to-simplified-compute-node-communication-model-migration-guide.md | + + Title: Migrate Azure Batch pools to the simplified compute node communication model +description: Learn how to migrate Azure Batch pools to the simplified compute node communication model and plan for feature end of support. ++ Last updated : 03/07/2023+++# Migrate Azure Batch pools to the simplified compute node communication model ++To improve security, simplify the user experience, and enable key future improvements, Azure Batch will retire the classic +compute node communication model on *March 31, 2026*. Learn how to migrate your Batch pools to the simplified compute +node communication model. ++## About the feature ++An Azure Batch pool contains one or more compute nodes, which execute user-specified workloads in the form of Batch tasks. +To enable Batch functionality and Batch pool infrastructure management, compute nodes must communicate with the Azure Batch +service. In the classic compute node communication model, the Batch service initiates communication to the compute nodes, and +compute nodes must be able to communicate with Azure Storage for baseline operations. In the simplified compute node +communication model, Batch pools only require outbound access to the Batch service for baseline operations. ++## Feature end of support ++The simplified compute node communication model will replace the classic compute node communication model after *March 31, 2026*. +The change is introduced in two phases: ++- From now until *September 30, 2024*, the default node communication mode for newly created +[Batch pools with virtual networks](./batch-virtual-network.md) will remain classic. +- After *September 30, 2024*, the default node communication mode for newly created Batch pools with virtual networks will +switch to simplified. ++After *March 31, 2026*, the option to use the classic compute node communication mode will no longer be honored. Batch pools +without user-specified virtual networks are generally unaffected by this change, and the Batch service controls the default +communication mode. ++## Alternative: Use the simplified compute node communication model ++The simplified compute node communication mode streamlines the way Batch pool infrastructure is managed on behalf of users. +This communication mode reduces the complexity and scope of inbound and outbound networking connections required for +baseline operations. ++The simplified model also provides more fine-grained data exfiltration control, since outbound communication to +*Storage.region* is no longer required. You can explicitly lock down outbound communication to Azure Storage if necessary for +your workflow. For example, autostorage accounts for AppPackages and other storage accounts for resource files or output files +can be scoped appropriately. ++## Migrate your eligible pools ++To migrate your Batch pools from the classic to the simplified compute node communication model, follow this document +from the section entitled +[potential impact between classic and simplified communication modes](simplified-compute-node-communication.md#potential-impact-between-classic-and-simplified-communication-modes). +You can either create new pools or update existing pools with simplified compute node communication. ++## FAQs ++- Are public IP addresses still required for my pools? 
++ By default, a public IP address is still needed to initiate the outbound connection to the Azure Batch service from compute nodes. If you want to eliminate the need for public IP addresses from compute nodes entirely, see the guide to [create a simplified node communication pool without public IP addresses](./simplified-node-communication-pool-no-public-ip.md). ++- How can I connect to my nodes for diagnostic purposes? ++ RDP or SSH connectivity to the node is unaffected. Load balancers are still created, which can route those requests through to the node when it's provisioned with a public IP address. ++- Are there any differences in billing? ++ There should be no cost or billing implications for the new model. ++- Are there any changes to Azure Batch agents on the compute node? ++ An extra agent on compute nodes is invoked in simplified compute node communication mode for both Linux and Windows, `Microsoft.BatchClusters.Agent` and `Microsoft.BatchClusters.Agent.exe`, respectively. ++- Are there any changes to how my linked resources from Azure Storage in Batch pools and tasks are downloaded? ++ This behavior is unaffected. All downloads of user-specified resources that require Azure Storage, such as resource files, output files, or application packages, are made from the compute node directly to Azure Storage. You need to ensure your networking configuration allows these flows. ++## Next steps ++For more information, see [Simplified compute node communication](./simplified-compute-node-communication.md). |
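As a sketch of what the pool update might look like in code, the following Python example uses the `azure-batch` SDK to set an existing pool's target communication mode to simplified. It assumes a recent SDK version that exposes `target_node_communication_mode` (mirroring the REST `targetNodeCommunicationMode` property); the account name, key, URL, and pool ID are placeholders.

```python
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials
from azure.batch.models import PoolPatchParameter

# Placeholder credentials and account URL -- replace with your own.
credentials = SharedKeyCredentials("<account-name>", "<account-key>")
client = BatchServiceClient(
    credentials, batch_url="https://<account>.<region>.batch.azure.com")

# Ask the Batch service to move the pool to simplified node communication.
# The mode is a *target*: nodes adopt it as the pool is scaled or reimaged,
# so check the pool's reported current mode afterward.
client.pool.patch(
    "my-existing-pool",
    PoolPatchParameter(target_node_communication_mode="simplified"),
)
```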
cloud-shell | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/overview.md | description: Overview of the Azure Cloud Shell. ms.contributor: jahelmic Previously updated : 11/14/2022 Last updated : 03/03/2023 vm-linux Title: Azure Cloud Shell overview # Overview of Azure Cloud Shell -Azure Cloud Shell is an interactive, authenticated, browser-accessible shell for managing Azure +Azure Cloud Shell is an interactive, authenticated, browser-accessible terminal for managing Azure resources. It provides the flexibility of choosing the shell experience that best suits the way you work, either Bash or PowerShell. -You can access Cloud Shell in three ways: +Cloud Shell runs on a temporary host provided on a per-session, per-user basis. Your Cloud Shell +session times out after 20 minutes without interactive activity. Cloud Shell persists your files in +your `$HOME` location using a 5-GB file share. -- **Direct link**: Open a browser to [https://shell.azure.com][11].--- **Azure portal**: Select the Cloud Shell icon on the [Azure portal][10]:-- ![Icon to launch Cloud Shell from the Azure portal][14] --- **Code samples**: In Microsoft [technical documentation][02] and [training resources][05], select- the **Try It** button that appears with Azure CLI and Azure PowerShell code snippets: -- ```azurecli-interactive - az account show - ``` -- ```azurepowershell-interactive - Get-AzSubscription - ``` -- The **Try It** button opens Cloud Shell directly alongside the documentation using Bash (for - Azure CLI snippets) or PowerShell (for Azure PowerShell snippets). -- To run the command, use **Copy** in the code snippet, use - <kbd>Ctrl</kbd>+<kbd>Shift</kbd>+<kbd>V</kbd> (Windows/Linux) or - <kbd>Cmd</kbd>+<kbd>Shift</kbd>+<kbd>V</kbd> (macOS) to paste the command, and then press - <kbd>Enter</kbd>. --## Features --### Browser-based shell experience --Cloud Shell enables access to a browser-based command-line experience built with Azure management -tasks in mind. Use Cloud Shell to work untethered from a local machine in a way only the cloud -can provide. --### Choice of preferred shell experience --Users can choose between Bash or PowerShell. --1. Select **Cloud Shell**. -- ![Cloud Shell icon][13] --1. Select **Bash** or **PowerShell**. -- ![Choose either Bash or PowerShell][12] -- After first launch, you can use the shell type drop-down control to switch between Bash and - PowerShell: -- ![Drop-down control to select Bash or PowerShell][15] --### Authenticated and configured Azure workstation --Cloud Shell is managed by Microsoft so it comes with popular command-line tools and language -support. Cloud Shell also securely authenticates automatically for instant access to your resources -through the Azure CLI or Azure PowerShell cmdlets. --View the full [list of tools installed in Cloud Shell.][07] --### Integrated Cloud Shell editor --Cloud Shell offers an integrated graphical text editor based on the open source Monaco Editor. -Create and edit configuration files by running `code .` for seamless deployment through Azure CLI or -Azure PowerShell. --[Learn more about the Cloud Shell editor][20]. 
--### Multiple access points +## Multiple access points Cloud Shell is a flexible tool that can be used from: -- [portal.azure.com][10]-- [shell.azure.com][11]-- [Azure CLI documentation][03]-- [Azure PowerShell documentation][04]-- [Azure mobile app][08]-- [Visual Studio Code Azure Account extension][09]--### Connect your Microsoft Azure Files storage --Cloud Shell machines are temporary, but your files are persisted in two ways: through a disk image, -and through a mounted file share named `clouddrive`. On first launch, Cloud Shell prompts to create -a resource group, storage account, and Azure Files share on your behalf. This is a one-time step and -the resources created are automatically attached for all future sessions. A single file share can be -mapped and is used by both Bash and PowerShell in Cloud Shell. +- [portal.azure.com][06] +- [shell.azure.com][07] +- [Azure CLI documentation][01] +- [Azure PowerShell documentation][02] +- [Azure mobile app][04] +- [Visual Studio Code Azure Account extension][05] -Read more to learn how to mount a [new or existing storage account][16] or to learn about the -[persistence mechanisms used in Cloud Shell][17]. +## Authenticated and configured Azure workstation -> [!NOTE] -> Azure storage firewall isn't supported for cloud shell storage accounts. +Microsoft manages Cloud Shell so you don't have to. Cloud Shell comes with popular command-line +tools and language support. Cloud Shell automatically and securely authenticates for instant access to +your resources through the Azure CLI or Azure PowerShell cmdlets. See the +[list of tools installed][03] in Cloud Shell. -## Concepts +Cloud Shell offers an integrated graphical text editor so you can create and edit files for seamless +deployment through Azure CLI or Azure PowerShell. For more information, see +[Using the Azure Cloud Shell editor][09]. -- Cloud Shell runs on a temporary host provided on a per-session, per-user basis-- Cloud Shell times out after 20 minutes without interactive activity-- Cloud Shell requires an Azure file share to be mounted-- Cloud Shell uses the same Azure file share for both Bash and PowerShell-- Cloud Shell is assigned one machine per user account-- Cloud Shell persists $HOME using a 5-GB image held in your file share-- Permissions are set as a regular Linux user in Bash+## Security and compliance -Learn more about features in [Bash in Cloud Shell][06] and [PowerShell in Cloud Shell][01]. +- Encryption at rest -## Compliance + All Cloud Shell infrastructure is compliant with double encryption at rest by default. You don't + have to configure anything. -### Encryption at rest +- Shell permissions -All Cloud Shell infrastructure is compliant with double encryption at rest by default. No action is -required by users. + Your user account has the permissions of a regular Linux user. ## Pricing -The machine hosting Cloud Shell is free, with a pre-requisite of a mounted Azure Files share. -Regular storage costs apply. +Use of the machine hosting Cloud Shell is free. Cloud Shell requires a storage account to host the +mounted Azure Files share. Regular storage costs apply. 
## Next steps -- [Bash in Cloud Shell quickstart][19]-- [PowerShell in Cloud Shell quickstart][18]+- [Cloud Shell quickstart][08] <!-- link references -->-[01]: ./features.md -[02]: /samples/browse -[03]: /cli/azure -[04]: /powershell/azure -[05]: /training -[06]: features.md -[07]: features.md#pre-installed-tools -[08]: https://azure.microsoft.com/features/azure-portal/mobile-app/ -[09]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.azure-account -[10]: https://portal.azure.com -[11]: https://shell.azure.com -[12]: media/overview/overview-choices.png -[13]: media/overview/overview-cloudshell-icon.png -[14]: media/overview/portal-launch-icon.png -[15]: media/overview/select-shell-drop-down.png -[16]: persisting-shell-storage.md -[17]: persisting-shell-storage.md#how-cloud-shell-storage-works -[18]: quickstart-powershell.md -[19]: quickstart.md -[20]: using-cloud-shell-editor.md +[01]: /cli/azure +[02]: /powershell/azure +[03]: features.md#pre-installed-tools +[04]: https://azure.microsoft.com/features/azure-portal/mobile-app/ +[05]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.azure-account +[06]: https://portal.azure.com +[07]: https://shell.azure.com +[08]: quickstart.md +[09]: using-cloud-shell-editor.md |
cloud-shell | Quickstart Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/quickstart-powershell.md | -- -description: Learn how to use the PowerShell in your browser with Azure Cloud Shell. ---ms.contributor: jahelmic - Previously updated : 11/14/2022----tags: azure-resource-manager Title: Quickstart for PowerShell in Azure Cloud Shell--# Quickstart for PowerShell in Azure Cloud Shell --This document details how to use the PowerShell in Cloud Shell in the [Azure portal][06]. --The PowerShell experience in Azure Cloud Shell now runs [PowerShell 7.2][02] in a Linux environment. -There are differences in the PowerShell experience in Cloud Shell compared Windows PowerShell. --The filesystem in Linux is case-sensitive. Windows considers `file.txt` and `FILE.txt` to be the -same file. In Linux, they're considered to be different files. Proper casing must be used while -tab-completing in the filesystem. PowerShell specific experiences, such as tab-completing cmdlet -names, parameters, and values, aren't case-sensitive. --For a detailed list of differences, see [PowerShell differences on non-Windows platforms][01]. --## Start Cloud Shell --1. Select on **Cloud Shell** button from the top navigation bar of the Azure portal -- ![Screenshot showing how to start Azure Cloud Shell from the Azure portal.][09] --1. Select the PowerShell environment from the drop-down and you'll be in Azure drive `(Azure:)` -- ![Screenshot showing how to select the PowerShell environment for the Azure Cloud Shell.][08] --## Registering your subscription with Azure Cloud Shell --Azure Cloud Shell needs access to manage resources. Access is provided through namespaces that must -be registered to your subscription. Use the following commands to register the Microsoft.CloudShell -RP namespace in your subscription: --```azurepowershell-interactive -Select-AzSubscription -SubscriptionId <SubscriptionId> -Register-AzResourceProvider -ProviderNamespace Microsoft.CloudShell -``` --> [!NOTE] -> You only need to register the namespace once per subscription. --## Run PowerShell commands --Run regular PowerShell commands in the Cloud Shell, such as: --```azurepowershell-interactive -PS Azure:\> Get-Date -``` --```output -# You will see output similar to the following: -Friday, July 27, 2018 7:08:48 AM -``` --```azurepowershell-interactive -PS Azure:\> Get-AzVM -Status -``` --```output -# You will see output similar to the following: -ResourceGroupName Name Location VmSize OsType ProvisioningState PowerState - -- -- --MyResourceGroup2 Demo westus Standard_DS1_v2 Windows Succeeded running -MyResourceGroup MyVM1 eastus Standard_DS1 Windows Succeeded running -MyResourceGroup MyVM2 eastus Standard_DS2_v2_Promo Windows Succeeded deallocated -``` --### Interact with virtual machines --You can find all your virtual machines under the current subscription via `VirtualMachines` -directory. 
--```azurepowershell-interactive -PS Azure:\MySubscriptionName\VirtualMachines> dir -``` --```output -# You will see output similar to the following: - Directory: Azure:\MySubscriptionName\VirtualMachines ---Name ResourceGroupName Location VmSize OsType NIC ProvisioningState PowerState -- -- -- -- --TestVm1 MyResourceGroup1 westus Standard_DS2_v2 Windows my2008r213 Succeeded stopped -TestVm2 MyResourceGroup1 westus Standard_DS1_v2 Windows jpstest Succeeded deallocated -TestVm10 MyResourceGroup2 eastus Standard_DS1_v2 Windows mytest Succeeded running -``` --#### Invoke PowerShell script across remote VMs -- > [!WARNING] - > Please refer to [Troubleshooting remote management of Azure VMs][11]. --Assuming you have a VM, MyVM1, let's use `Invoke-AzVMCommand` to invoke a PowerShell script block on -the remote machine. --```azurepowershell-interactive -Enable-AzVMPSRemoting -Name MyVM1 -ResourceGroupname MyResourceGroup -Invoke-AzVMCommand -Name MyVM1 -ResourceGroupName MyResourceGroup -Scriptblock {Get-ComputerInfo} -Credential (Get-Credential) -``` --You can also navigate to the VirtualMachines directory first and run `Invoke-AzVMCommand` as follows. --```azurepowershell-interactive -PS Azure:\> cd MySubscriptionName\ResourceGroups\MyResourceGroup\Microsoft.Compute\virtualMachines -PS Azure:\MySubscriptionName\ResourceGroups\MyResourceGroup\Microsoft.Compute\virtualMachines> Get-Item MyVM1 | Invoke-AzVMCommand -Scriptblock {Get-ComputerInfo} -Credential (Get-Credential) -``` --```output -# You will see output similar to the following: --PSComputerName : 65.52.28.207 -RunspaceId : 2c2b60da-f9b9-4f42-a282-93316cb06fe1 -WindowsBuildLabEx : 14393.1066.amd64fre.rs1_release_sec.170327-1835 -WindowsCurrentVersion : 6.3 -WindowsEditionId : ServerDatacenter -WindowsInstallationType : Server -WindowsInstallDateFromRegistry : 5/18/2017 11:26:08 PM -WindowsProductId : 00376-40000-00000-AA947 -WindowsProductName : Windows Server 2016 Datacenter -WindowsRegisteredOrganization : -... -``` --#### Interactively sign-in to a remote VM --You can use `Enter-AzVM` to interactively log into a VM running in Azure. --```azurepowershell-interactive -Enter-AzVM -Name MyVM1 -ResourceGroupName MyResourceGroup -Credential (Get-Credential) -``` --You can also navigate to the `VirtualMachines` directory first and run `Enter-AzVM` as follows: --```azurepowershell-interactive -Get-Item MyVM1 | Enter-AzVM -Credential (Get-Credential) -``` --### Discover WebApps --By entering into the `WebApps` directory, you can easily navigate your web apps resources --```azurepowershell-interactive -dir .\WebApps\ -``` --```output -# You will see output similar to the following: - Directory: Azure:\MySubscriptionName\WebApps --Name State ResourceGroup EnabledHostNames Location -- -- - - ---mywebapp1 Stopped MyResourceGroup1 {mywebapp1.azurewebsites.net... West US -mywebapp2 Running MyResourceGroup2 {mywebapp2.azurewebsites.net... West Europe -mywebapp3 Running MyResourceGroup3 {mywebapp3.azurewebsites.net... South Central US -``` --```azurepowershell-interactive -# You can use Azure cmdlets to Start/Stop your web apps -PS Azure:\MySubscriptionName\WebApps> Start-AzWebApp -Name mywebapp1 -ResourceGroupName MyResourceGroup1 -``` --```output -# You will see output similar to the following: -Name State ResourceGroup EnabledHostNames Location -- -- - - ---mywebapp1 Running MyResourceGroup1 {mywebapp1.azurewebsites.net ... 
West US -``` --```azurepowershell-interactive -# Refresh the current state with -Force -PS Azure:\MySubscriptionName\WebApps> dir -Force -``` --```output -# You will see output similar to the following: - Directory: Azure:\MySubscriptionName\WebApps --Name State ResourceGroup EnabledHostNames Location -- -- - - ---mywebapp1 Running MyResourceGroup1 {mywebapp1.azurewebsites.net... West US -mywebapp2 Running MyResourceGroup2 {mywebapp2.azurewebsites.net... West Europe -mywebapp3 Running MyResourceGroup3 {mywebapp3.azurewebsites.net... South Central US -``` --## SSH --To authenticate to servers or VMs using SSH, generate the public-private key pair in Cloud Shell and -publish the public key to `authorized_keys` on the remote machine, such as -`/home/user/.ssh/authorized_keys`. --> [!NOTE] -> You can create SSH private-public keys using `ssh-keygen` and publish them to -> `$env:USERPROFILE\.ssh` in Cloud Shell. --### Using SSH --Follow instructions [here][03] to create a new VM configuration using Azure PowerShell cmdlets. -Before calling into `New-AzVM` to kick off the deployment, add SSH public key to the VM -configuration. The newly created VM will contain the public key in the `~\.ssh\authorized_keys` -location, thereby enabling credential-free SSH session to the VM. --```azurepowershell-interactive -# Create VM config object - $vmConfig using instructions on linked page above --# Generate SSH keys in Cloud Shell -ssh-keygen -t rsa -b 2048 -f $HOME\.ssh\id_rsa --# Ensure VM config is updated with SSH keys -$sshPublicKey = Get-Content "$HOME\.ssh\id_rsa.pub" -Add-AzVMSshPublicKey -VM $vmConfig -KeyData $sshPublicKey -Path "/home/azureuser/.ssh/authorized_keys" --# Create a virtual machine -New-AzVM -ResourceGroupName <yourResourceGroup> -Location <vmLocation> -VM $vmConfig --# SSH to the VM -ssh azureuser@MyVM.Domain.Com -``` --## List available commands --Under `Azure` drive, type `Get-AzCommand` to get context-specific Azure commands. --Alternatively, you can always use `Get-Command *az* -Module Az.*` to find out the available Azure -commands. --## Install custom modules --You can run `Install-Module` to install modules from the [PowerShell Gallery][07]. --## Get-Help --Type `Get-Help` to get information about PowerShell in Azure Cloud Shell. --```azurepowershell-interactive -Get-Help -``` --For a specific command, you can still do `Get-Help` followed by a cmdlet. --```azurepowershell-interactive -Get-Help Get-AzVM -``` --## Use Azure Files to store your data --You can create a script, say `helloworld.ps1`, and save it to your `clouddrive` to use it across -shell sessions. --```azurepowershell-interactive -cd $HOME\clouddrive -# Create a new file in clouddrive directory -New-Item helloworld.ps1 -# Open the new file for editing -code .\helloworld.ps1 -# Add the content, such as 'Hello World!' -.\helloworld.ps1 -Hello World! -``` --Next time when you use PowerShell in Cloud Shell, the `helloworld.ps1` file will exist under the -`$HOME\clouddrive` directory that mounts your Azure Files share. --## Use custom profile --You can customize your PowerShell environment, by creating PowerShell profiles - `profile.ps1` (or -`Microsoft.PowerShell_profile.ps1`). Save it under `$profile.CurrentUserAllHosts` (or -`$profile.CurrentUserCurrentHost`), so that it can be loaded in every PowerShell in Cloud Shell -session. --For how to create a profile, refer to [About Profiles][04]. 
--## Use Git --To clone a Git repo in Cloud Shell, you need to create a [personal access token][05] and use it as -the username. Once you have your token, clone the repository as follows: --```azurepowershell-interactive -git clone https://<your-access-token>@github.com/username/repo.git -``` --## Exit the shell --Type `exit` to terminate the session. --<!-- link references --> -[01]: /powershell/scripting/whats-new/unix-support -[02]: /powershell/scripting/whats-new/what-s-new-in-powershell-72 -[03]: ../virtual-machines/linux/quick-create-powershell.md -[04]: /powershell/module/microsoft.powershell.core/about/about_profiles -[05]: https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/ -[06]: https://portal.azure.com/ -[07]: https://www.powershellgallery.com/ -[08]: media/quickstart-powershell/environment-ps.png -[09]: media/quickstart-powershell/shell-icon.png -[11]: troubleshooting.md#troubleshooting-remote-management-of-azure-vms |
cloud-shell | Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/quickstart.md | -description: Learn how to use the Bash command line in your browser with Azure Cloud Shell. +description: Learn how to start using Azure Cloud Shell. ms.contributor: jahelmic Previously updated : 11/14/2022 Last updated : 03/06/2023 vm-linux tags: azure-resource-manager Title: Quickstart for Bash in Azure Cloud Shell+ Title: Quickstart for Azure Cloud Shell -# Quickstart for Bash in Azure Cloud Shell +# Quickstart for Azure Cloud Shell -This document details how to use Bash in Azure Cloud Shell in the [Azure portal][03]. --> [!NOTE] -> A [PowerShell in Azure Cloud Shell][09] Quickstart is also available. +This document details how to use Bash and PowerShell in Azure Cloud Shell from the +[Azure portal][03]. ## Start Cloud Shell 1. Launch **Cloud Shell** from the top navigation of the Azure portal. - ![Screenshot showing how to start Azure Cloud Shell in the Azure portal.][05] + ![Screenshot showing how to start Azure Cloud Shell in the Azure portal.][06] -1. Select a subscription to create a storage account and Microsoft Azure Files share. -1. Select "Create storage" + The first time you start Cloud Shell you're prompted to create an Azure Storage account for the + Azure file share. -> [!TIP] -> You are automatically authenticated for Azure CLI in every session. + ![Screenshot showing the create storage prompt.][05] ++1. Select the **Subscription** used to create the storage account and file share. +1. Select **Create storage**. ++### Select your shell environment ++Cloud Shell allows you to select either **Bash** or **PowerShell** for your command-line experience. ++![Screenshot showing the shell selector.][04] ### Registering your subscription with Azure Cloud Shell Azure Cloud Shell needs access to manage resources. Access is provided through namespaces that must-be registered to your subscription. Use the following commands to register the Microsoft.CloudShell -RP namespace in your subscription: +be registered to your subscription. Use the following commands to register the +**Microsoft.CloudShell** namespace in your subscription: ++<!-- markdownlint-disable MD023 --> +<!-- markdownlint-disable MD024 --> +<!-- markdownlint-disable MD051 --> +#### [Azure CLI](#tab/azurecli) -```azurecli-interactive +```azurecli-interactive az account set --subscription <Subscription Name or Id> az provider register --namespace Microsoft.CloudShell ``` +#### [Azure PowerShell](#tab/powershell) ++```azurepowershell-interactive +Select-AzSubscription -SubscriptionId <SubscriptionId> +Register-AzResourceProvider -ProviderNamespace Microsoft.CloudShell +``` +<!-- markdownlint-enable MD023 --> +<!-- markdownlint-enable MD024 --> +<!-- markdownlint-enable MD051 --> +++ > [!NOTE] > You only need to register the namespace once per subscription. az provider register --namespace Microsoft.CloudShell 1. List subscriptions you have access to. +<!-- markdownlint-disable MD023 --> +<!-- markdownlint-disable MD024 --> +<!-- markdownlint-disable MD051 --> + #### [Azure CLI](#tab/azurecli) + ```azurecli-interactive az account list ``` + #### [Azure PowerShell](#tab/powershell) ++ ```azurepowershell-interactive + Get-AzSubscription + ``` +<!-- markdownlint-enable MD023 --> +<!-- markdownlint-enable MD024 --> +<!-- markdownlint-enable MD051 --> ++ + 1. 
Set your preferred subscription: +<!-- markdownlint-disable MD023 --> +<!-- markdownlint-disable MD024 --> +<!-- markdownlint-disable MD051 --> + #### [Azure CLI](#tab/azurecli) + ```azurecli-interactive az account set --subscription 'my-subscription-name' ``` + #### [Azure PowerShell](#tab/powershell) ++ ```azurepowershell-interactive + Set-AzContext -Subscription <SubscriptionId> + ``` +<!-- markdownlint-enable MD023 --> +<!-- markdownlint-enable MD024 --> +<!-- markdownlint-enable MD051 --> ++ + > [!TIP] > Your subscription is remembered for future sessions using `/home/<user>/.azure/azureProfile.json`. -### Create a resource group +### Get a list of Azure commands -Create a new resource group in WestUS named "MyRG". +<!-- markdownlint-disable MD023 --> +<!-- markdownlint-disable MD024--> +<!-- markdownlint-disable MD051 --> +#### [Azure CLI](#tab/azurecli) ++Run the following command to see a list of all Azure CLI commands. ```azurecli-interactive-az group create --location westus --name MyRG +az ``` -### Create a Linux VM --Create an Ubuntu VM in your new resource group. The Azure CLI will create SSH keys and set up the VM -with them. +Run the following command to get a list of Azure CLI commands that apply to WebApps: ```azurecli-interactive-az vm create -n myVM -g MyRG --image UbuntuLTS --generate-ssh-keys +az webapp --help ``` -> [!NOTE] -> Using `--generate-ssh-keys` instructs Azure CLI to create and set up public and private keys in -> your VM and `$Home` directory. By default keys are placed in Cloud Shell at -> `/home/<user>/.ssh/id_rsa` and `/home/<user>/.ssh/id_rsa.pub`. Your `.ssh` folder is persisted in -> your attached file share's 5-GB image used to persist `$Home`. +#### [Azure PowerShell](#tab/powershell) -Your username on this VM will be your username used in Cloud Shell ($User@Azure:). +Run the following command to see a list of all Azure PowerShell cmdlets. -### SSH into your Linux VM --1. Search for your VM name in the Azure portal search bar. -1. Select **Connect** to get your VM name and public IP address. -- ![Screenshot showing how to connect to a Linux VM using SSH.][06] --1. SSH into your VM with the `ssh` cmd. -- ```bash - ssh username@ipaddress - ``` --Upon establishing the SSH connection, you should see the Ubuntu welcome prompt. --![Screenshot showing the Ubuntu initialization and welcome prompt after you establish an SSH connection.][07] --## Cleaning up +```azurepowershell-interactive +Get-Command -Module Az.* +``` -1. Exit your ssh session. +Under the `Azure` drive, `Get-AzCommand` lists context-specific Azure commands. - ``` - exit - ``` +Run the following commands to get a list of the Azure PowerShell commands that apply to WebApps. -1. Delete your resource group and any resources within it. +```azurepowershell-interactive +cd 'Azure:/My Subscription/WebApps' +Get-AzCommand +``` +<!-- markdownlint-enable MD023 --> +<!-- markdownlint-enable MD024 --> +<!-- markdownlint-enable MD051 --> - ```azurecli-interactive - az group delete -n MyRG - ``` + ## Next steps -- [Learn about persisting files for Bash in Cloud Shell][08]+- [Learn about persisting files in Cloud Shell][07] - [Learn about Azure CLI][02] - [Learn about Azure Files storage][01] Upon establishing the SSH connection, you should see the Ubuntu welcome prompt. 
[01]: ../storage/files/storage-files-introduction.md [02]: /cli/azure/ [03]: https://portal.azure.com/-[04]: media/quickstart/env-selector.png -[05]: media/quickstart/shell-icon.png -[06]: medi-copy.png -[07]: media/quickstart/ubuntu-welcome.png -[08]: persisting-shell-storage.md -[09]: quickstart-powershell.md +[04]: media/quickstart/choose-shell.png +[05]: media/quickstart/create-storage.png +[06]: media/quickstart/shell-icon.png +[07]: persisting-shell-storage.md |
cognitive-services | Concept Background Removal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-background-removal.md | + + Title: Background removal - Image Analysis ++description: Learn about background removal, an operation of Image Analysis +++++++ Last updated : 03/02/2023+++++# Background removal (version 4.0 preview) ++The Image Analysis service can divide images into multiple segments or regions to help the user identify different objects or parts of the image. Background removal creates an alpha matte that separates the foreground object from the background in an image. ++> [!div class="nextstepaction"] +> [Call the Background removal API](./how-to/background-removal.md) ++This feature provides two possible outputs based on the customer's needs: ++- The foreground object of the image without the background. This edited image shows the foreground object and makes the background transparent, allowing the foreground to be placed on a new background. +- An alpha matte that shows the opacity of the detected foreground object. This matte can be used to separate the foreground object from the background for further processing. ++This service is currently in preview, and the API may change in the future. ++## Background removal examples ++The following example images illustrate what the Image Analysis service returns when removing the background of an image and creating an alpha matte. ++|Original image |With background removed |Alpha matte | +|||| +| :::image type="content" source="media/background-removal/building-1.png" alt-text="Photo of a city near water."::: | :::image type="content" source="media/background-removal/building-1-result.png" alt-text="Photo of a city near water; sky is transparent."::: | :::image type="content" source="media/background-removal/building-1-matte.png" alt-text="Alpha matte of a city skyline."::: | +| :::image type="content" source="media/background-removal/person-5.png" alt-text="Photo of a group of people using a tablet."::: | :::image type="content" source="media/background-removal/person-5-result.png" alt-text="Photo of a group of people using a tablet; background is transparent."::: | :::image type="content" source="media/background-removal/person-5-matte.png" alt-text="Alpha matte of a group of people."::: | +| :::image type="content" source="media/background-removal/bears.png" alt-text="Photo of a group of bears in the woods."::: | :::image type="content" source="media/background-removal/bears-result.png" alt-text="Photo of a group of bears; background is transparent."::: | :::image type="content" source="media/background-removal/bears-alpha.png" alt-text="Alpha matte of a group of bears."::: | +++## Limitations ++It's important to note the limitations of background removal: ++* Background removal works best for categories such as people and animals, buildings and environmental structures, furniture, vehicles, food, text and graphics, and personal belongings. +* Objects that aren't prominent in the foreground may not be identified as part of the foreground. +* Images with thin and detailed structures, like hair or fur, may show some artifacts when overlaid on backgrounds with strong contrast to the original background. +* The latency of the background removal operation will be higher, up to several seconds, for large images. 
We suggest you experiment with integrating both modes into your workflow to find the best usage for your needs (for instance, calling background removal on the original image versus calling foreground matting on a downsampled version of the image, then resizing the alpha matte to the original size and applying it to the original image). ++## Use the API ++The background removal feature is available through the [Image Analysis - Segment](https://aka.ms/vision-4-0-ref) API (`imageanalysis:segment`). You can call this API through REST calls. See the [Background removal how-to guide](./how-to/background-removal.md) for more information. ++## Next steps ++* [Call the background removal API](./how-to/background-removal.md) |
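For reference, here's a minimal Python sketch of calling the segment endpoint described above. The endpoint name and `mode` values follow the preview API reference; the resource URL, key, and image URL are placeholders, and because the service is in preview, check the current reference for the exact `api-version`.

```python
import requests

# Placeholders -- replace with your Computer Vision resource and image.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-key>"

response = requests.post(
    f"{endpoint}/computervision/imageanalysis:segment",
    params={
        "api-version": "2023-02-01-preview",
        # "backgroundRemoval" returns the foreground on a transparent
        # background; "foregroundMatting" returns the alpha matte instead.
        "mode": "backgroundRemoval",
    },
    json={"url": "https://example.com/photo.jpg"},
    headers={"Ocp-Apim-Subscription-Key": key},
)
response.raise_for_status()

# The response body is the resulting PNG image.
with open("foreground.png", "wb") as f:
    f.write(response.content)
```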
cognitive-services | Concept Describe Images 40 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-describe-images-40.md | + + Title: Image captions - Image Analysis 4.0 ++description: Concepts related to the image captioning feature of the Image Analysis 4.0 API. +++++++ Last updated : 01/24/2023+++++# Image captions (version 4.0 preview) +Image captions in Image Analysis 4.0 (preview) are available through the Caption and Dense Captions features. ++Caption generates a one-sentence description of all the image content. Dense Captions provides more detail by generating one-sentence descriptions of up to 10 regions of the image in addition to describing the whole image. Dense Captions also returns bounding box coordinates of the described image regions. Both features use the latest groundbreaking Florence-based AI models. ++At this time, image captioning is available only in English. ++### Gender-neutral captions +By default, captions contain the gender terms "man", "woman", "boy", and "girl". You have the option to replace these terms with "person" in your results and receive gender-neutral captions. You can do so by setting the optional API request parameter **gender-neutral-caption** to `true` in the request URL. ++> [!IMPORTANT] +> Image captioning in Image Analysis 4.0 is only available in the following Azure data center regions at this time: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US. You must use a Computer Vision resource located in one of these regions to get results from Caption and Dense Captions features. +> +> If you have to use a Computer Vision resource outside these regions to generate image captions, please use [Image Analysis 3.2](concept-describing-images.md) which is available in all Computer Vision regions. +++Try out the image captioning features quickly and easily in your browser using Vision Studio. ++> [!div class="nextstepaction"] +> [Try Vision Studio](https://portal.vision.cognitive.azure.com/) ++## Caption example ++#### [Caption](#tab/image) ++The following JSON response illustrates what the Analysis 4.0 API returns when describing the example image based on its visual features. ++ ++```json +"captions": [ + { + "text": "a man pointing at a screen", + "confidence": 0.4891590476036072 + } +] +``` ++#### [Dense Captions](#tab/dense) ++The following JSON response illustrates what the Analysis 4.0 API returns when generating dense captions for the example image. 
++ ++```json +{ + "denseCaptionsResult": { + "values": [ + { + "text": "a man driving a tractor in a farm", + "confidence": 0.535620927810669, + "boundingBox": { + "x": 0, + "y": 0, + "w": 850, + "h": 567 + } + }, + { + "text": "a man driving a tractor in a field", + "confidence": 0.5428450107574463, + "boundingBox": { + "x": 132, + "y": 266, + "w": 209, + "h": 219 + } + }, + { + "text": "a blurry image of a tree", + "confidence": 0.5139822363853455, + "boundingBox": { + "x": 147, + "y": 126, + "w": 76, + "h": 131 + } + }, + { + "text": "a man riding a tractor", + "confidence": 0.4799223840236664, + "boundingBox": { + "x": 206, + "y": 264, + "w": 64, + "h": 97 + } + }, + { + "text": "a blue sky above a hill", + "confidence": 0.35495415329933167, + "boundingBox": { + "x": 0, + "y": 0, + "w": 837, + "h": 166 + } + }, + { + "text": "a tractor in a field", + "confidence": 0.47338250279426575, + "boundingBox": { + "x": 0, + "y": 243, + "w": 838, + "h": 311 + } + } + ] + }, + "modelVersion": "2023-02-01-preview", + "metadata": { + "width": 850, + "height": 567 + } +} +``` ++++## Use the API ++#### [Image captions](#tab/image) ++The image captioning feature is part of the [Analyze Image](https://aka.ms/vision-4-0-ref) API. Include `Caption` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"captionResult"` section. ++#### [Dense captions](#tab/dense) ++The dense captioning feature is part of the [Analyze Image](https://aka.ms/vision-4-0-ref) API. You can call this API using REST. Include `denseCaptions` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"denseCaptionsResult"` section. ++++## Next steps ++* Learn the related concept of [object detection](concept-object-detection-40.md). +* [Quickstart: Image Analysis REST API or client libraries](./quickstarts-sdk/image-analysis-client-library-40.md?pivots=programming-language-csharp) +* [Call the Analyze Image API](./how-to/call-analyze-image-40.md) |
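To show how the `captionResult` and `denseCaptionsResult` payloads above are requested and read, here's a minimal Python sketch against the Analyze Image 4.0 preview API. The resource URL, key, and image URL are placeholders; the query parameter names (`features`, `gender-neutral-caption`) follow the preview reference and may change while the API is in preview.

```python
import requests

# Placeholders -- use a Computer Vision resource in a supported region.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-key>"

response = requests.post(
    f"{endpoint}/computervision/imageanalysis:analyze",
    params={
        "api-version": "2023-02-01-preview",
        "features": "caption,denseCaptions",
        "gender-neutral-caption": "true",  # replaces gender terms with "person"
    },
    json={"url": "https://example.com/photo.jpg"},
    headers={"Ocp-Apim-Subscription-Key": key},
)
response.raise_for_status()
result = response.json()

# Whole-image caption, then one line per dense-caption region.
print(result["captionResult"]["text"], result["captionResult"]["confidence"])
for item in result["denseCaptionsResult"]["values"]:
    print(item["text"], item["boundingBox"])
```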
cognitive-services | Concept Describing Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-describing-images.md | -# Image description generation +# Image descriptions Computer Vision can analyze an image and generate a human-readable phrase that describes its contents. The algorithm returns several descriptions based on different visual features, and each description is given a confidence score. The final output is a list of descriptions ordered from highest to lowest confidence. The following JSON response illustrates what the Analyze API returns when descri  -#### [Version 3.2](#tab/3-2) - ```json { "description":{ The following JSON response illustrates what the Analyze API returns when descri "modelVersion":"2021-05-01" } ```-#### [Version 4.0](#tab/4-0) --```json -{ - "metadata": - { - "width": 239, - "height": 300 - }, - "descriptionResult": - { - "values": - [ - { - "text": "a city with tall buildings", - "confidence": 0.3551448881626129 - } - ] - } -} -``` - ## Use the API -#### [Version 3.2](#tab/3-2) The image description feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Description` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"description"` section. -#### [Version 4.0](#tab/4-0) --The image description feature is part of the [Analyze Image](https://aka.ms/vision-4-0-ref) API. You can call this API using REST. Include `Description` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"description"` section. --- * [Quickstart: Image Analysis REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp) ## Next steps |
cognitive-services | Concept Generate Thumbnails 40 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-generate-thumbnails-40.md | + + Title: Smart-cropped thumbnails - Image Analysis 4.0 ++description: Concepts related to generating thumbnails for images using the Image Analysis 4.0 API. +++++++ Last updated : 01/24/2023+++++# Smart-cropped thumbnails (version 4.0 preview) ++A thumbnail is a reduced-size representation of an image. Thumbnails are used to represent images and other data in a more economical, layout-friendly way. The Computer Vision API uses smart cropping to create intuitive image thumbnails that include the most important regions of an image, with priority given to any detected faces. ++The Computer Vision smart-cropping utility takes one or more aspect ratios in the range [0.75, 1.80] and returns the bounding box coordinates (in pixels) of the region(s) identified. Your app can then crop and return the image using those coordinates. ++> [!IMPORTANT] +> This feature uses face detection to help determine important regions in the image. The detection does not involve distinguishing one face from another face, predicting or classifying facial attributes, or creating a facial template (a unique set of numbers generated from an image that represents the distinctive features of a face). ++## Examples ++The generated bounding box can vary widely depending on what you specify for aspect ratio, as shown in the following images. ++| Aspect ratio | Bounding box | +|-|--| +| original | :::image type="content" source="Images/cropped-original.png" alt-text="Photo of a man with a dog at a table."::: | +| 0.75 | :::image type="content" source="Images/cropped-075-bb.png" alt-text="Photo of a man with a dog at a table. A 0.75 ratio bounding box is drawn."::: | +| 1.00 | :::image type="content" source="Images/cropped-1-0-bb.png" alt-text="Photo of a man with a dog at a table. A 1.00 ratio bounding box is drawn."::: | +| 1.50 | :::image type="content" source="Images/cropped-150-bb.png" alt-text="Photo of a man with a dog at a table. A 1.50 ratio bounding box is drawn."::: | +++## Use the API ++The smart cropping feature is available through the [Analyze Image API](https://aka.ms/vision-4-0-ref). Include `SmartCrops` in the **features** query parameter. Also include a **smartcrops-aspect-ratios** query parameter, and set it to a decimal value for the aspect ratio you want (defined as width / height) in the range [0.75, 1.80]. Multiple aspect ratio values should be comma-separated. If no aspect ratio value is provided, the API returns a crop with an aspect ratio that best preserves the image's most important region. ++## Next steps ++* [Call the Analyze Image API](./how-to/call-analyze-image-40.md) |
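To make the crop-and-return step concrete, here's a minimal Python sketch that requests smart crops and cuts thumbnails with Pillow. The endpoint and key variables are assumptions, and the `smartCropsResult` and `aspectRatio` names in the response are assumptions as well; inspect the raw JSON returned by your API version before relying on them.

```python
import os
from io import BytesIO

import requests
from PIL import Image

endpoint = os.environ["VISION_ENDPOINT"]
key = os.environ["VISION_KEY"]
image_url = "https://learn.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg"

response = requests.post(
    f"{endpoint}/computervision/imageanalysis:analyze",
    params={
        "api-version": "2023-02-01-preview",
        "features": "smartCrops",
        # One or more comma-separated width/height ratios in the range [0.75, 1.80].
        "smartcrops-aspect-ratios": "0.75,1.5",
    },
    headers={"Ocp-Apim-Subscription-Key": key},
    json={"url": image_url},
)
response.raise_for_status()

# Crop the original image to each suggested bounding box (pixel coordinates x, y, w, h).
image = Image.open(BytesIO(requests.get(image_url).content))
for crop in response.json().get("smartCropsResult", {}).get("values", []):
    box = crop["boundingBox"]
    thumbnail = image.crop((box["x"], box["y"], box["x"] + box["w"], box["y"] + box["h"]))
    thumbnail.save(f"thumbnail_{crop['aspectRatio']}.png")
```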
cognitive-services | Concept Generating Thumbnails | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-generating-thumbnails.md | -#### [Version 3.2](#tab/3-2) The Computer Vision thumbnail generation algorithm works as follows: 1. Remove distracting elements from the image and identify the _area of interest_—the area of the image in which the main object(s) appears. The following table illustrates thumbnails defined by smart-cropping for the exa | |  | | |  | -#### [Version 4.0](#tab/4-0) --The Computer Vision smart-cropping utility takes one or more aspect ratios in the range [0.75, 1.80] and returns the bounding box coordinates (in pixels) of the region(s) identified. Your app can then crop and return the image using those coordinates. --> [!IMPORTANT] -> This feature uses face detection to help determine important regions in the image. The detection does not involve distinguishing one face from another face, predicting or classifying facial attributes, or creating a facial template (a unique set of numbers generated from an image that represents the distinctive features of a face). --## Examples --The generated bounding box can vary widely depending on what you specify for aspect ratio, as shown in the following images. --| Aspect ratio | Bounding box | -|-|--| -| original | :::image type="content" source="Images/cropped-original.png" alt-text="Photo of a man with a dog at a table."::: | -| 0.75 | :::image type="content" source="Images/cropped-075-bb.png" alt-text="Photo of a man with a dog at a table. A 0.75 ratio bounding box is drawn."::: | -| 1.00 | :::image type="content" source="Images/cropped-1-0-bb.png" alt-text="Photo of a man with a dog at a table. A 1.00 ratio bounding box is drawn."::: | -| 1.50 | :::image type="content" source="Images/cropped-150-bb.png" alt-text="Photo of a man with a dog at a table. A 1.50 ratio bounding box is drawn."::: | --- ## Use the API -#### [Version 3.2](#tab/3-2) The generate thumbnail feature is available through the [Get Thumbnail](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20c) and [Get Area of Interest](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/b156d0f5e11e492d9f64418d) APIs. You can call this API through a native SDK or through REST calls. -#### [Version 4.0](#tab/4-0) --The smart cropping feature is available through the [Analyze](https://aka.ms/vision-4-0-ref) API. You can call this API using REST. Include `SmartCrops` in the **visualFeatures** query parameter. Also include a **smartcrops-aspect-ratios** query parameter, and set it to a decimal value for the aspect ratio you want (defined as width / height). Multiple aspect ratio values should be comma-separated. -- * [Generate a thumbnail (how-to)](./how-to/generate-thumbnail.md) |
cognitive-services | Concept Image Retrieval | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-image-retrieval.md | + + Title: Image Retrieval concepts - Image Analysis 4.0 ++description: Concepts related to image vectorization using the Image Analysis 4.0 API. +++++++ Last updated : 03/06/2023++++# Image retrieval (version 4.0 preview) ++Image retrieval is the process of searching a large collection of images to find those that are most similar to a given query image. Image retrieval systems have traditionally used features extracted from the images, such as content labels, tags, and image descriptors, to compare images and rank them by similarity. However, vector similarity search is gaining more popularity due to a number of benefits over traditional keyword-based search and is becoming a vital component in popular content search services. ++## What's the difference between vector search and keyword-based search? ++Keyword search is the most basic and traditional method of information retrieval. In this approach, the search engine looks for the exact match of the keywords or phrases entered by the user in the search query and compares them with the labels and tags provided for the images. The search engine then returns images that contain those exact keywords as content tags and image labels. Keyword search relies heavily on the user's ability to input relevant and specific search terms. ++Vector search, on the other hand, searches large collections of vectors in high-dimensional space to find vectors that are similar to a given query. Vector search looks for semantic similarities by capturing the context and meaning of the search query. This approach is often more efficient than traditional image retrieval techniques, as it can reduce the search space and improve the accuracy of the results. ++## Business applications ++Image retrieval has a variety of applications in different fields, including: ++- Digital asset management: Image retrieval can be used to manage large collections of digital images, such as in museums, archives, or online galleries. Users can search for images based on visual features and retrieve the images that match their criteria. +- Medical image retrieval: Image retrieval can be used in medical imaging to search for images based on their diagnostic features or disease patterns. This can help doctors or researchers identify similar cases or track disease progression. +- Security and surveillance: Image retrieval can be used in security and surveillance systems to search for images based on specific features or patterns, such as people and object tracking or threat detection. +- Forensic image retrieval: Image retrieval can be used in forensic investigations to search for images based on their visual content or metadata, such as in cases of cybercrime. +- E-commerce: Image retrieval can be used in online shopping applications to search for similar products based on their features or descriptions, or to provide recommendations based on previous purchases. +- Fashion and design: Image retrieval can be used in fashion and design to search for images based on their visual features, such as color, pattern, or texture. This can help designers or retailers identify similar products or trends. ++## What are vector embeddings? ++Vector embeddings are a way of representing content (text or images) as vectors of real numbers in a high-dimensional space.
Vector embeddings are often learned from large amounts of textual and visual data using machine learning algorithms, such as neural networks. Each dimension of the vector corresponds to a different feature or attribute of the content, such as its semantic meaning, syntactic role, or the context in which it commonly appears. ++> [!NOTE] +> Vector embeddings can only be meaningfully compared if they come from the same model type. ++## How does it work? ++1. Vectorize images and text: the Image Retrieval APIs, **VectorizeImage** and **VectorizeText**, extract feature vectors from an image or a text string, respectively. The APIs return a single feature vector representing the entire input. +2. Measure similarity: Vector search systems typically use distance metrics, such as cosine distance or Euclidean distance, to compare vectors and rank them by similarity (see the sketch below). The [Vision Studio](https://portal.vision.cognitive.azure.com/) demo uses [cosine distance](./how-to/image-retrieval.md#calculate-vector-similarity) to measure similarity. +3. Retrieve images: Use the top _N_ vectors most similar to the search query and retrieve the corresponding images from your photo library as the final result. ++## Next steps ++Enable image retrieval for your search service and follow the steps to generate vector embeddings for text and images. +* [Call the Image retrieval APIs](./how-to/image-retrieval.md) + |
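As referenced in step 2 above, similarity between embeddings is typically measured with cosine distance. Here's a minimal, self-contained Python sketch of that ranking step; the query and library vectors are placeholders standing in for values the **VectorizeText** and **VectorizeImage** APIs would return.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: the dot product divided by the product of the norms."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder embeddings; in practice these come from VectorizeText / VectorizeImage,
# and can only be compared when they're produced by the same model type.
query = np.array([0.12, -0.45, 0.83, 0.07])
library = {
    "beach.jpg": np.array([0.10, -0.40, 0.80, 0.10]),
    "city.jpg": np.array([-0.70, 0.20, 0.05, 0.66]),
}

# Rank library images by similarity; cosine distance = 1 - cosine similarity.
ranked = sorted(library.items(), key=lambda item: cosine_similarity(query, item[1]), reverse=True)
for name, vector in ranked:
    print(name, round(1 - cosine_similarity(query, vector), 4))  # lower distance = more similar
```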
cognitive-services | Concept Model Customization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-model-customization.md | + + Title: Model customization concepts - Image Analysis 4.0 ++description: Concepts related to the custom model feature of the Image Analysis 4.0 API. +++++++ Last updated : 02/06/2023++++# Model customization (version 4.0 preview) ++Model customization lets you train a specialized Image Analysis model for your own use case. Custom models can do either image classification (tags apply to the whole image) or object detection (tags apply to specific areas of the image). Once your custom model is created and trained, it belongs to your Computer Vision resource, and you can call it using the [Analyze Image API](./how-to/call-analyze-image-40.md). ++> [!div class="nextstepaction"] +> [Vision Studio quickstart](./how-to/model-customization.md?tabs=studio) ++> [!div class="nextstepaction"] +> [REST quickstart](./how-to/model-customization.md?tabs=rest) +++## Scenario components ++The main components of a model customization system are the training images, COCO file, dataset object, and model object. ++### Training images ++Your set of training images should include several examples of each of the labels you want to detect. You'll also want to collect a few extra images to test your model with once it's trained. The images need to be stored in an Azure Storage container so they're accessible to the model. ++To train your model effectively, use images with visual variety. Select images that vary by: ++- camera angle +- lighting +- background +- visual style +- individual/grouped subject(s) +- size +- type ++Additionally, make sure all of your training images meet the following criteria: ++- The image must be presented in JPEG, PNG, GIF, BMP, WEBP, ICO, TIFF, or MPO format +- The file size of the image must be less than 20 megabytes (MB) +- The dimensions of the image must be greater than 50 x 50 pixels and less than 16,000 x 16,000 pixels ++### COCO file ++The COCO file references all of the training images and associates them with their labeling information. In the case of object detection, it specifies the bounding box coordinates of each tag on each image. This file must be in the COCO format, which is a specific type of JSON file. The COCO file should be stored in the same Azure Storage container as the training images. A minimal sketch of a COCO file appears at the end of this article. ++> [!TIP] +> [!INCLUDE [coco-files](includes/coco-files.md)] ++### Dataset object ++The **Dataset** object is a data structure stored by the Image Analysis service that references the association file. You need to create a **Dataset** object before you can create and train a model. ++### Model object ++The **Model** object is a data structure stored by the Image Analysis service that represents a custom model. It must be associated with a **Dataset** in order to do initial training. Once it's trained, you can query your model by entering its name in the `model-version` query parameter of the [Analyze Image API call](./how-to/call-analyze-image-40.md). ++## Quota limits ++The following table describes the limits on the scale of your custom model projects.
++ +| Category | Generic image classifier | Generic object detector | +| - | - | - | +| Max # training hours | 288 (12 days) | 288 (12 days) | +| Max # training images | 1,000,000 | 200,000 | +| Max # evaluation images | 100,000 | 100,000 | +| Min # training images per category | 2 | 2 | +| Max # tags per image | multiclass: 1 | NA | +| Max # regions per image | NA | 1,000 | +| Max # categories | 2,000 | 1,000 | +| Min # categories | 2 | 1 | +| Max image size (Training) | 20 MB | 20 MB | +| Max image size (Prediction) | Sync: 6 MB, Batch: 20 MB | Sync: 6 MB, Batch: 20 MB | +| Max image width/height (Training) | 10,240 | 10,240 | +| Min image width/height (Prediction) | 50 | 50 | +| Available regions | West US 2, East US, West Europe | West US 2, East US, West Europe | +| Accepted image types | jpg, png, bmp, gif, jpeg | jpg, png, bmp, gif, jpeg | ++## Frequently asked questions ++### Why is my COCO file import failing when importing from blob storage? ++Currently, Microsoft is addressing an issue that causes COCO file import to fail with large datasets when initiated in Vision Studio. To train using a large dataset, we recommend using the REST API instead. ++### Why does training take longer/shorter than my specified budget? ++The specified training budget is the calibrated **compute time**, not the **wall-clock time**. Some common reasons for the difference are listed: ++- **Longer than specified budget:** + - Image Analysis experiences high training traffic, and GPU resources may be tight. Your job may wait in the queue or be put on hold during training. + - The backend training process ran into unexpected failures, which triggered retry logic. The failed runs don't consume your budget, but this can lead to longer training time in general. + - Your data is stored in a different region than your created Computer Vision resource, which leads to longer data transmission time. ++- **Shorter than specified budget:** The following factors speed up training at the cost of consuming more budget per unit of wall-clock time. + - Image Analysis sometimes trains with multiple GPUs depending on your data. + - Image Analysis sometimes trains multiple exploration trials on multiple GPUs at the same time. + - Image Analysis sometimes uses premier (faster) GPU SKUs to train. ++### Why does my training fail, and what should I do? ++The following are some common reasons for training failure: ++- `diverged`: The training can't learn meaningful things from your data. Some common causes are: + - Not enough data: providing more data should help. + - Poor data quality: check whether your images are low resolution, have extreme aspect ratios, or have wrong annotations. +- `notEnoughBudget`: Your specified budget isn't enough for the size of your dataset and the model type you're training. Specify a larger budget. +- `datasetCorrupt`: Usually this means your provided images aren't accessible or the annotation file is in the wrong format. +- `datasetNotFound`: The dataset couldn't be found. +- `unknown`: This could be a backend issue. Reach out to support for investigation. ++### What metrics are used for evaluating the models? ++The following metrics are used: ++- Image classification: Average Precision, Accuracy Top 1, Accuracy Top 5 +- Object detection: Mean Average Precision @ 30, Mean Average Precision @ 50, Mean Average Precision @ 75 ++### Why does my dataset registration fail? ++The API responses should be informative enough.
Possible errors include: +- `DatasetAlreadyExists`: A dataset with the same name already exists. +- `DatasetInvalidAnnotationUri`: An invalid URI was provided among the annotation URIs at dataset registration time. ++### How many images are required for reasonable/good/best model quality? ++Although Florence models have strong few-shot capability (achieving good model performance with limited data availability), in general more data makes your trained model better and more robust. Some scenarios require little data (like classifying an apple against a banana), but others require more (like detecting 200 kinds of insects in a rainforest). This makes it difficult to give a single recommendation. ++If your data labeling budget is constrained, our recommended workflow is to repeat the following steps: ++1. Collect `N` images per class, where `N` is a number that's easy for you to collect (for example, `N=3`). +1. Train a model and test it on your evaluation set. +1. If the model performance is: ++ - **Good enough** (performance is better than your expectation, or close to your previous experiment with less data collected): Stop here and use this model. + - **Not good** (performance is still below your expectation, or only marginally better than your previous experiment with less data collected): + - Collect more images for each class (a number that's easy for you to collect) and go back to Step 2. + - If you notice the performance isn't improving anymore after a few iterations, it could be because: + - This problem isn't well defined or is too hard. Reach out to us for case-by-case analysis. + - The training data might be of low quality: check whether there are wrong annotations or very low-resolution images. +++### How much training budget should I specify? ++You should specify the upper limit of budget that you're willing to consume. Image Analysis uses an AutoML system in its backend to try out different models and training recipes to find the best model for your use case. The more budget that's given, the higher the chance of finding a better model. ++The AutoML system also stops automatically if it concludes there's no need to try more, even if there is still remaining budget. So, it doesn't always exhaust your specified budget. You're guaranteed not to be billed over your specified budget. ++### Can I control the hyperparameters or use my own models in training? ++No, the Image Analysis model customization service uses a low-code AutoML training system that handles hyperparameter search and base model selection in the backend. ++### Can I export my model after training? ++No. The prediction API is only supported through the cloud service. ++### Why does the evaluation fail for my object detection model? ++The following are the possible reasons: +- `internalServerError`: An unknown error occurred. Please try again later. +- `modelNotFound`: The specified model wasn't found. +- `datasetNotFound`: The specified dataset wasn't found. +- `datasetAnnotationsInvalid`: An error occurred while trying to download or parse the ground truth annotations associated with the test dataset. +- `datasetEmpty`: The test dataset didn't contain any "ground truth" annotations. ++### What is the expected latency for predictions with custom models? ++We don't recommend using custom models in business-critical environments because of potentially high latency.
When you train a custom model in Vision Studio, that model belongs to the Computer Vision resource it was trained under, and you can call it using the **Analyze Image** API. When you make these calls, the custom model is loaded into memory and the prediction infrastructure is initialized. While this happens, you might experience longer-than-expected latency before you receive prediction results. ++## Data privacy and security ++As with all of the Cognitive Services, developers using Image Analysis model customization should be aware of Microsoft's policies on customer data. See the [Cognitive Services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center to learn more. ++## Next steps ++[Create and train a custom model](./how-to/model-customization.md) |
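As promised in the COCO file section above, here's a minimal, hypothetical sketch of a COCO-style annotation file for object detection, written as a short Python script that emits the JSON. The field names follow common COCO conventions, and `absolute_url` in particular is an assumption; consult the COCO file reference linked earlier for the exact schema the service expects.

```python
import json

# Hypothetical minimal COCO-style dataset: one image, one annotation, one category.
coco = {
    "images": [
        {
            "id": 1,
            "file_name": "fruit_01.jpg",
            "width": 1024,
            "height": 768,
            # Assumed field pointing at the blob in the same storage container as this file.
            "absolute_url": "https://<storage-account>.blob.core.windows.net/train/fruit_01.jpg",
        }
    ],
    "annotations": [
        # For object detection, bbox is [x, y, width, height] in pixels.
        {"id": 1, "image_id": 1, "category_id": 1, "bbox": [120, 80, 200, 150]}
    ],
    "categories": [{"id": 1, "name": "apple"}],
}

with open("train_coco.json", "w") as f:
    json.dump(coco, f, indent=2)
```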
cognitive-services | Concept Object Detection 40 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-object-detection-40.md | + + Title: Object detection - Image Analysis 4.0 ++description: Learn concepts related to the object detection feature of the Image Analysis 4.0 API - usage and limits. +++++++ Last updated : 01/24/2023+++++# Object detection (version 4.0 preview) ++Object detection is similar to [tagging](concept-tag-images-40.md), but the API returns the bounding box coordinates (in pixels) for each object found in the image. For example, if an image contains a dog, cat and person, the object detection operation will list those objects with their coordinates in the image. You can use this functionality to process the relationships between the objects in an image. It also lets you determine whether there are multiple instances of the same object in an image. ++The object detection function applies tags based on the objects or living things identified in the image. There is currently no formal relationship between the tagging taxonomy and the object detection taxonomy. At a conceptual level, the object detection function only finds objects and living things, while the tag function can also include contextual terms like "indoor", which can't be localized with bounding boxes. ++Try out the capabilities of object detection quickly and easily in your browser using Vision Studio. ++> [!div class="nextstepaction"] +> [Try Vision Studio](https://portal.vision.cognitive.azure.com/) ++## Object detection example ++The following JSON response illustrates what the Analysis 4.0 API returns when detecting objects in the example image. ++ ++++```json +{ + "metadata": + { + "width": 1260, + "height": 473 + }, + "objectsResult": + { + "values": + [ + { + "name": "kitchen appliance", + "confidence": 0.501, + "boundingBox": {"x":730,"y":66,"w":135,"h":85} + }, + { + "name": "computer keyboard", + "confidence": 0.51, + "boundingBox": {"x":523,"y":377,"w":185,"h":46} + }, + { + "name": "Laptop", + "confidence": 0.85, + "boundingBox": {"x":471,"y":218,"w":289,"h":226} + }, + { + "name": "person", + "confidence": 0.855, + "boundingBox": {"x":654,"y":0,"w":584,"h":473} + } + ] + } +} +``` ++## Limitations ++It's important to note the limitations of object detection so you can avoid or mitigate the effects of false negatives (missed objects) and limited detail. ++* Objects are generally not detected if they're small (less than 5% of the image). +* Objects are generally not detected if they're arranged closely together (a stack of plates, for example). +* Objects are not differentiated by brand or product names (different types of sodas on a store shelf, for example). However, you can get brand information from an image by using the [Brand detection](concept-brand-detection.md) feature. ++## Use the API ++The object detection feature is part of the [Analyze Image](https://aka.ms/vision-4-0-ref) API. You can call this API using REST. Include `Objects` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"objects"` section. ++## Next steps ++* [Call the Analyze Image API](./how-to/call-analyze-image-40.md) + |
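To show how the `objectsResult` section above is consumed in practice, here's a minimal Python sketch of the REST call and response parsing. The endpoint and key environment variables are assumptions; the response fields match the example JSON shown in this article.

```python
import os

import requests

endpoint = os.environ["VISION_ENDPOINT"]
key = os.environ["VISION_KEY"]

response = requests.post(
    f"{endpoint}/computervision/imageanalysis:analyze",
    params={"api-version": "2023-02-01-preview", "features": "objects"},
    headers={"Ocp-Apim-Subscription-Key": key},
    json={"url": "https://learn.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg"},
)
response.raise_for_status()

# Walk the objectsResult section shown in the example response above.
for obj in response.json()["objectsResult"]["values"]:
    box = obj["boundingBox"]
    print(f"{obj['name']} ({obj['confidence']:.2f}) at x={box['x']}, y={box['y']}, w={box['w']}, h={box['h']}")
```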
cognitive-services | Concept Object Detection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-object-detection.md | The following JSON response illustrates what the Analyze API returns when detect  -#### [Version 3.2](#tab/3-2) ```json { The following JSON response illustrates what the Analyze API returns when detect } ``` -#### [Version 4.0](#tab/4-0) --```json -{ - "metadata": - { - "width": 1260, - "height": 473 - }, - "objectsResult": - { - "values": - [ - { - "name": "kitchen appliance", - "confidence": 0.501, - "boundingBox": {"x":730,"y":66,"w":135,"h":85} - }, - { - "name": "computer keyboard", - "confidence": 0.51, - "boundingBox": {"x":523,"y":377,"w":185,"h":46} - }, - { - "name": "Laptop", - "confidence": 0.85, - "boundingBox": {"x":471,"y":218,"w":289,"h":226} - }, - { - "name": "person", - "confidence": 0.855, - "boundingBox": {"x":654,"y":0,"w":584,"h":473} - } - ] - } -} -``` - ## Limitations It's important to note the limitations of object detection so you can avoid or m ## Use the API -#### [Version 3.2](#tab/3-2) - The object detection feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Objects` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"objects"` section. -#### [Version 4.0](#tab/4-0) --The object detection feature is part of the [Analyze Image](https://aka.ms/vision-4-0-ref) API. You can call this API using REST. Include `Objects` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"objects"` section. -- * [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp) |
cognitive-services | Concept Ocr | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-ocr.md | Title: OCR for images - Computer Vision -description: Extract text from in-the-wild and non-document images with a fast and synchronous Computer Vision API. +description: Extract text from in-the-wild and non-document images with a fast and synchronous Computer Vision Image Analysis 4.0 API. -# OCR for images -+# OCR for images (version 4.0 preview) > [!NOTE] >- > For extracting text from PDF, Office, and HTML documents and document images, use the [Form Recognizer Read OCR model](../../applied-ai-services/form-recognizer/concept-read.md) optimized for text-heavy digital and scanned documents with an asynchronous API that makes it easy to power your intelliegnt document processing scenarios. -> +> For extracting text from PDF, Office, and HTML documents and document images, use the [Form Recognizer Read OCR model](../../applied-ai-services/form-recognizer/concept-read.md) optimized for text-heavy digital and scanned documents with an asynchronous API that makes it easy to power your intelligent document processing scenarios. OCR traditionally started as a machine-learning-based technique for extracting text from in-the-wild and non-document images like product labels, user-generated images, screenshots, street signs, and posters. For several scenarios that include running OCR on single images that are not text-heavy, you need a fast, synchronous API or service. This allows OCR to be embedded in near real-time user experiences to enrich content understanding and follow-up user actions with fast turn-around times. ## What is Computer Vision v4.0 Read OCR (preview)? -The new Computer Vision v4.0 Image Analysis REST API preview offers the ability to extract printed or handwritten text from images in a unified performance-enhanced synchronous API that makes it easy to get all image insights including OCR results in a single API operation. The Read OCR engine is built on top of multiple deep learning models supported by universal script-based models for [global language support](./language-support.md). --## Use the V4.0 REST API preview --The text extraction feature is part of the [v4.0 Analyze Image REST API](https://aka.ms/vision-4-0-ref). Include `Read` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"readResult"` section. --For an example, copy the following command into a text editor and replace the `<key>` with your API key and optionally, your API endpoint URL. Then open a command prompt window and run the command. +The new Computer Vision Image Analysis 4.0 REST API offers the ability to extract printed or handwritten text from images in a unified performance-enhanced synchronous API that makes it easy to get all image insights including OCR results in a single API operation. The Read OCR engine is built on top of multiple deep learning models supported by universal script-based models for [global language support](./language-support.md).
-```bash - curl.exe -H "Ocp-Apim-Subscription-Key: <key>" -H "Content-Type: application/json" "https://westcentralus.api.cognitive.microsoft.com/computervision/imageanalysis:analyze?features=Read&model-version=latest&language=en&api-version=2022-10-12-preview" -d "{'url':'https://upload.wikimedia.org/wikipedia/commons/thumb/3/3c/Salto_del_Angel-Canaima-Venezuela08.JPG/800px-Salto_del_Angel-Canaima-Venezuela08.JPG'}" - -``` --## Text extraction output +## Text extraction example -The following JSON response illustrates what the v4.0 Analyze Image API returns when extracting text from the given image. +The following JSON response illustrates what the Image Analysis 4.0 API returns when extracting text from the given image.  The following JSON response illustrates what the v4.0 Analyze Image API returns } ``` +## Use the API ++The text extraction feature is part of the [Analyze Image API](https://aka.ms/vision-4-0-ref). Include `Read` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"readResult"` section. ++ ## Next steps -Follow the v4.0 REST API sections in the [Image Analysis quickstart](./quickstarts-sdk/image-analysis-client-library.md) to extract text from an image using the Analyze API. +Follow the [Image Analysis quickstart](./quickstarts-sdk/image-analysis-client-library.md) to extract text from an image using the Image Analysis 4.0 API. |
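For a quick reference, here's a minimal Python sketch of the Image Analysis 4.0 REST call described above, requesting the `Read` feature. The endpoint and key environment variables are assumptions, and the raw `readResult` section is printed as-is because its exact nesting can vary across preview versions.

```python
import os

import requests

endpoint = os.environ["VISION_ENDPOINT"]
key = os.environ["VISION_KEY"]

response = requests.post(
    f"{endpoint}/computervision/imageanalysis:analyze",
    params={"api-version": "2023-02-01-preview", "features": "read", "language": "en"},
    headers={"Ocp-Apim-Subscription-Key": key},
    json={"url": "https://upload.wikimedia.org/wikipedia/commons/thumb/3/3c/Salto_del_Angel-Canaima-Venezuela08.JPG/800px-Salto_del_Angel-Canaima-Venezuela08.JPG"},
)
response.raise_for_status()

# The extracted text lives in the "readResult" section of the JSON payload.
print(response.json().get("readResult"))
```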
cognitive-services | Concept People Detection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-people-detection.md | Last updated 09/12/2022 -# People detection (preview) +# People detection (version 4.0 preview) Version 4.0 of Image Analysis offers the ability to detect people appearing in images. The bounding box coordinates of each detected person are returned, along with a confidence score. Version 4.0 of Image Analysis offers the ability to detect people appearing in i ## People detection example -The following JSON response illustrates what the Analyze API returns when describing the example image based on its visual features. +The following JSON response illustrates what the Analysis 4.0 API returns when describing the example image based on its visual features. - + ```json {- "metadata": - { - "width": 1260, - "height": 473 - }, - "peopleResult": - { - "values": - [ - { - "boundingBox": - { - "x": 660, - "y": 0, - "w": 582, - "h": 473 - }, - "confidence": 0.9680353999137878 - } - ] - } + "modelVersion": "2023-02-01-preview", + "metadata": { + "width": 300, + "height": 231 + }, + "peopleResult": { + "values": [ + { + "boundingBox": { + "x": 0, + "y": 41, + "w": 95, + "h": 189 + }, + "confidence": 0.9474349617958069 + }, + { + "boundingBox": { + "x": 204, + "y": 96, + "w": 95, + "h": 134 + }, + "confidence": 0.9470965266227722 + }, + { + "boundingBox": { + "x": 53, + "y": 20, + "w": 136, + "h": 210 + }, + "confidence": 0.8943784832954407 + }, + { + "boundingBox": { + "x": 170, + "y": 31, + "w": 91, + "h": 199 + }, + "confidence": 0.2713555097579956 + } + ] + } } ``` ## Use the API -The people detection feature is part of the [Analyze Image](https://aka.ms/vision-4-0-ref) API. You can call this API using REST. Include `People` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"people"` section. +The people detection feature is part of the [Image Analysis 4.0 API](https://aka.ms/vision-4-0-ref). Include `People` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"people"` section. ## Next steps -Learn the related concept of [Face detection](concept-face-detection.md). +* [Call the Analyze Image API](./how-to/call-analyze-image-40.md) |
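Note that the example response above includes one detection with a confidence of about 0.27; most applications drop low-confidence entries before using the bounding boxes. Here's a minimal post-processing sketch, where the 0.75 threshold is an arbitrary example value to tune for your scenario.

```python
CONFIDENCE_THRESHOLD = 0.75  # arbitrary example cutoff; tune for your scenario

def confident_people(analysis_result: dict) -> list[dict]:
    """Return bounding boxes for people detected above the confidence threshold."""
    values = analysis_result.get("peopleResult", {}).get("values", [])
    return [person["boundingBox"] for person in values if person["confidence"] >= CONFIDENCE_THRESHOLD]
```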
cognitive-services | Concept Tag Images 40 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-tag-images-40.md | + + Title: Content tags - Image Analysis 4.0 ++description: Learn concepts related to the images tagging feature of the Image Analysis 4.0 API. +++++++ Last updated : 01/24/2023+++++# Image tagging (version 4.0 preview) ++Image Analysis can return content tags for thousands of recognizable objects, living beings, scenery, and actions that appear in images. Tagging is not limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on. Tags are not organized as a taxonomy and do not have inheritance hierarchies. When tags are ambiguous or not common knowledge, the API response provides hints to clarify the meaning of the tag in context of a known setting. ++Try out the image tagging features quickly and easily in your browser using Vision Studio. ++> [!div class="nextstepaction"] +> [Try Vision Studio](https://portal.vision.cognitive.azure.com/) ++## Image tagging example ++The following JSON response illustrates what Computer Vision returns when tagging visual features detected in the example image. ++. +++```json +{ + "metadata": + { + "width": 300, + "height": 200 + }, + "tagsResult": + { + "values": + [ + { + "name": "grass", + "confidence": 0.9960499405860901 + }, + { + "name": "outdoor", + "confidence": 0.9956876635551453 + }, + { + "name": "building", + "confidence": 0.9893627166748047 + }, + { + "name": "property", + "confidence": 0.9853052496910095 + }, + { + "name": "plant", + "confidence": 0.9791355729103088 + }, + { + "name": "sky", + "confidence": 0.976455569267273 + }, + { + "name": "home", + "confidence": 0.9732913374900818 + }, + { + "name": "house", + "confidence": 0.9726771116256714 + }, + { + "name": "real estate", + "confidence": 0.972320556640625 + }, + { + "name": "yard", + "confidence": 0.9480281472206116 + }, + { + "name": "siding", + "confidence": 0.945357620716095 + }, + { + "name": "porch", + "confidence": 0.9410697221755981 + }, + { + "name": "cottage", + "confidence": 0.9143695831298828 + }, + { + "name": "tree", + "confidence": 0.9111745357513428 + }, + { + "name": "farmhouse", + "confidence": 0.8988940119743347 + }, + { + "name": "window", + "confidence": 0.894851803779602 + }, + { + "name": "lawn", + "confidence": 0.894050121307373 + }, + { + "name": "backyard", + "confidence": 0.8931854963302612 + }, + { + "name": "garden buildings", + "confidence": 0.8859137296676636 + }, + { + "name": "roof", + "confidence": 0.8695330619812012 + }, + { + "name": "driveway", + "confidence": 0.8670969009399414 + }, + { + "name": "land lot", + "confidence": 0.856428861618042 + }, + { + "name": "landscaping", + "confidence": 0.8540748357772827 + } + ] + } +} +``` ++## Use the API ++The tagging feature is part of the [Analyze Image](https://aka.ms/vision-4-0-ref) API. You can call this API using REST. Include `Tags` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"tags"` section. +++* [Quickstart: Image Analysis REST API or client libraries](./quickstarts-sdk/image-analysis-client-library-40.md?pivots=programming-language-csharp) ++## Next steps ++* Learn the related concept of [describing images](concept-describe-images-40.md). +* [Call the Analyze Image API](./how-to/call-analyze-image-40.md) + |
cognitive-services | Concept Tagging Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-tagging-images.md | The following JSON response illustrates what Computer Vision returns when taggin . -#### [Version 3.2](#tab/3-2) - ```json { "tags":[ The following JSON response illustrates what Computer Vision returns when taggin } ``` -#### [Version 4.0](#tab/4-0) --```json -{ - "metadata": - { - "width": 300, - "height": 200 - }, - "tagsResult": - { - "values": - [ - { - "name": "grass", - "confidence": 0.9960499405860901 - }, - { - "name": "outdoor", - "confidence": 0.9956876635551453 - }, - { - "name": "building", - "confidence": 0.9893627166748047 - }, - { - "name": "property", - "confidence": 0.9853052496910095 - }, - { - "name": "plant", - "confidence": 0.9791355729103088 - }, - { - "name": "sky", - "confidence": 0.976455569267273 - }, - { - "name": "home", - "confidence": 0.9732913374900818 - }, - { - "name": "house", - "confidence": 0.9726771116256714 - }, - { - "name": "real estate", - "confidence": 0.972320556640625 - }, - { - "name": "yard", - "confidence": 0.9480281472206116 - }, - { - "name": "siding", - "confidence": 0.945357620716095 - }, - { - "name": "porch", - "confidence": 0.9410697221755981 - }, - { - "name": "cottage", - "confidence": 0.9143695831298828 - }, - { - "name": "tree", - "confidence": 0.9111745357513428 - }, - { - "name": "farmhouse", - "confidence": 0.8988940119743347 - }, - { - "name": "window", - "confidence": 0.894851803779602 - }, - { - "name": "lawn", - "confidence": 0.894050121307373 - }, - { - "name": "backyard", - "confidence": 0.8931854963302612 - }, - { - "name": "garden buildings", - "confidence": 0.8859137296676636 - }, - { - "name": "roof", - "confidence": 0.8695330619812012 - }, - { - "name": "driveway", - "confidence": 0.8670969009399414 - }, - { - "name": "land lot", - "confidence": 0.856428861618042 - }, - { - "name": "landscaping", - "confidence": 0.8540748357772827 - } - ] - } -} -``` -- ## Use the API -#### [Version 3.2](#tab/3-2) - The tagging feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Tags` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"tags"` section. -#### [Version 4.0](#tab/4-0) --The tagging feature is part of the [Analyze Image](https://aka.ms/vision-4-0-ref) API. You can call this API using REST. Include `Tags` in the **features** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"tags"` section. -- * [Quickstart: Image Analysis REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp) |
cognitive-services | Background Removal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/background-removal.md | + + Title: Remove the background in images ++description: Learn how to call the Image Analysis - Segment API to isolate and remove the background from images. ++++++ Last updated : 03/03/2023++++# Remove the background from images ++This article demonstrates how to call the Image Analysis 4.0 API to segment an image. It also shows you how to parse the returned information. ++This guide assumes you've already [created a Computer Vision resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision) and obtained a key and endpoint URL. ++> [!IMPORTANT] +> These APIs are only available in the following geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US. ++## Submit data to the service ++When calling the **Image Analysis - Segment** API, you specify the image's URL by formatting the request body like this: `{"url":"https://docs.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg"}`. ++To analyze a local image, you'd put the binary image data in the HTTP request body. ++## Determine how to process the data ++### Select a mode +++|URL parameter |Value |Description | +|||| +|`mode` | `backgroundRemoval` | Outputs an image of the detected foreground object with a transparent background. | +|`mode` | `foregroundMatting` | Outputs a grayscale alpha matte image showing the opacity of the detected foreground object. | +++A populated URL for backgroundRemoval would look like this: `https://{endpoint}/computervision/imageanalysis:segment?api-version=2023-02-01-preview&mode=backgroundRemoval` ++## Get results from the service ++This section shows you how to parse the results of the API call. ++The service returns a `200` HTTP response, and the body contains the returned image in the form of a binary stream. The following is an example of the 4-channel PNG image response for the `backgroundRemoval` mode: +++The following is an example of the 1-channel PNG image response for the `foregroundMatting` mode: +++The API will return an image the same size as the original for the `foregroundMatting` mode, but at most 16 megapixels (preserving image aspect ratio) for the `backgroundRemoval` mode. ++## Error codes ++See the following list of possible errors and their causes: ++- `400 - InvalidRequest` + - `Value for mode is invalid.` Ensure you have selected exactly one of the valid options for the `mode` parameter. + - `This operation is not enabled in this region.` Ensure that your resource is in one of the geographic regions where the API is supported. + - `The image size is not allowed to be zero or larger than {number} bytes.` Ensure your image is within the specified size limits. + - `The image dimension is not allowed to be smaller than {min number of pixels} and larger than {max number of pixels}`. Ensure both dimensions of the image are within the specified dimension limits. + - `Image format is not valid.` Ensure the input data is a valid JPEG, GIF, TIFF, BMP, or PNG image. +- `500` + - `InternalServerError.` The processing resulted in an internal error. +- `503` + - `ServiceUnavailable.` The service is unavailable. ++## Next steps ++[Background removal concepts](../concept-background-removal.md) ++ |
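Putting the pieces above together, here's a minimal Python sketch that calls the segment endpoint and writes the returned 4-channel PNG to disk. The endpoint and key environment variables are assumptions; the URL shape, mode value, and binary response format follow the article above.

```python
import os

import requests

endpoint = os.environ["VISION_ENDPOINT"]
key = os.environ["VISION_KEY"]

response = requests.post(
    f"{endpoint}/computervision/imageanalysis:segment",
    params={"api-version": "2023-02-01-preview", "mode": "backgroundRemoval"},
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"url": "https://docs.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg"},
)
response.raise_for_status()

# The response body is the segmented image itself (a 4-channel PNG for
# backgroundRemoval), so write the raw bytes to disk instead of parsing JSON.
with open("foreground.png", "wb") as f:
    f.write(response.content)
```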
cognitive-services | Blob Storage Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/blob-storage-search.md | + + Title: Configure your blob storage for image retrieval and video search in Vision Studio ++description: To get started with the **Search photos with natural language** or with **Video summary and frame locator** in Vision Studio, you will need to select or create a new storage account. +++++++ Last updated : 03/06/2023+++++# Configure your blob storage for image retrieval and video search in Vision Studio ++To get started with the **Search photos with natural language** or **Video summary and frame locator** scenario in Vision Studio, you need to select or create a new Azure storage account. Your storage account can be in any region, but creating it in the same region as your Azure Computer Vision resource is more efficient and reduces cost. ++> [!IMPORTANT] +> You need to create your storage account on the same Azure subscription as the Computer Vision resource you're using in the **Search photos with natural language** or **Video summary and frame locator** scenarios as shown below. +++## Create a new storage account ++To get started, <a href="https://ms.portal.azure.com/#create/Microsoft.StorageAccount" title="create a new storage account" target="_blank">create a new storage account</a>. ++++Fill in the required parameters to configure your storage account, then select `Review` and `Create`. ++Once your storage account has been deployed, select `Go to resource` to open the storage account overview. ++++## Configure CORS rule on the storage account ++In your storage account overview, find the **Settings** section in the left-hand navigation and select `Resource sharing (CORS)`, shown below. ++++Create a CORS rule by setting the **Allowed Origins** field to `https://portal.vision.cognitive.azure.com`. ++In the **Allowed Methods** field, select the `GET` checkbox to allow an authenticated request from a different domain. In the **Max age** field, enter the value `9999`, and select `Save`. ++[Learn more about CORS support for Azure Storage](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services). +++++This allows Vision Studio to access images and videos in your blob storage container to extract insights from your data. ++## Upload images and videos in Vision Studio ++In the **Try with your own video** or **Try with your own image** section in Vision Studio, select the storage account that you configured with the CORS rule. Select the container in which your images or videos are stored. If you don't have a container, you can create one and upload the images or videos from your local device. If you have updated the CORS rules on the storage account, refresh the **Blob container** or **Video files on container** sections. ++++++++ |
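If you'd rather script the CORS rule than set it in the portal, a sketch like the following should apply the same origin, method, and max-age values described above. It uses the `azure-storage-blob` Python package; the connection-string environment variable is an assumption.

```python
import os

from azure.storage.blob import BlobServiceClient, CorsRule

# Assumes the storage account's connection string is exported in the environment.
client = BlobServiceClient.from_connection_string(os.environ["AZURE_STORAGE_CONNECTION_STRING"])

# Mirror the portal steps above: allow GET requests from Vision Studio with a
# max age of 9999 seconds.
rule = CorsRule(
    allowed_origins=["https://portal.vision.cognitive.azure.com"],
    allowed_methods=["GET"],
    max_age_in_seconds=9999,
)
client.set_service_properties(cors=[rule])
```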
cognitive-services | Call Analyze Image 40 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/call-analyze-image-40.md | + + Title: Call the Image Analysis 4.0 Analyze API ++description: Learn how to call the Image Analysis 4.0 API and configure its behavior. ++++++ Last updated : 01/24/2023++++# Call the Image Analysis 4.0 Analyze API (preview) ++This article demonstrates how to call the Image Analysis 4.0 API to return information about an image's visual features. It also shows you how to parse the returned information. ++This guide assumes you've already <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Computer Vision resource" target="_blank">created a Computer Vision resource </a> and obtained a key and endpoint URL. If you're using a client SDK, you'll also need to authenticate a client object. If you haven't done these steps, follow the [quickstart](../quickstarts-sdk/image-analysis-client-library-40.md) to get started. + +## Submit data to the service ++The code in this guide uses remote images referenced by URL. You may want to try different images on your own to see the full capability of the Image Analysis features. ++#### [C#](#tab/csharp) ++In your main class, save a reference to the URL of the image you want to analyze. ++```csharp +var imageSource = VisionSource.FromUrl(new Uri("https://docs.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg")); +``` ++> [!TIP] +> You can also analyze a local image. See the [reference documentation](/dotnet/api/azure.ai.vision.imageanalysis) for alternative **Analyze** methods. Or, see the sample code on [GitHub](https://github.com/Azure-Samples/azure-ai-vision-sdk) for scenarios involving local images. +++#### [Python](#tab/python) ++Save a reference to the URL of the image you want to analyze. ++```python +image_url = 'https://learn.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg' +vision_source = visionsdk.VisionSource(url=image_url) +``` ++> [!TIP] +> You can also analyze a local image. See the [reference documentation](/python/api/azure-ai-vision) for alternative **Analyze** methods. Or, see the sample code on [GitHub](https://github.com/Azure-Samples/azure-ai-vision-sdk) for scenarios involving local images. ++#### [C++](#tab/cpp) ++Save a reference to the URL of the image you want to analyze. ++```cpp +auto imageSource = VisionSource::FromUrl("https://learn.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg"); +``` ++> [!TIP] +> You can also analyze a local image. See the [reference documentation]() for alternative **Analyze** methods. Or, see the sample code on [GitHub](https://github.com/Azure-Samples/azure-ai-vision-sdk) for scenarios involving local images. ++#### [REST](#tab/rest) ++When analyzing a remote image, you specify the image's URL by formatting the request body like this: `{"url":"https://docs.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg"}`. ++To analyze a local image, you'd put the binary image data in the HTTP request body. +++++## Determine how to process the data ++### Select visual features ++The Analysis 4.0 API gives you access to all of the service's image analysis features. Choose which operations to do based on your own use case. See the [overview](../overview.md) for a description of each feature. 
The examples in the sections below add all of the available visual features, but for practical usage you'll likely only need one or two. +++#### [C#](#tab/csharp) ++Define an **ImageAnalysisOptions** object, which specifies visual features you'd like to extract in your analysis. ++```csharp +var analysisOptions = new ImageAnalysisOptions() +{ + // Mandatory. You must set one or more features to analyze. Here we use the full set of features. + // Note that 'Captions' is only supported in Azure GPU regions (East US, France Central, Korea Central, + // North Europe, Southeast Asia, West Europe, West US) + Features = + ImageAnalysisFeature.CropSuggestions + | ImageAnalysisFeature.Captions + | ImageAnalysisFeature.Objects + | ImageAnalysisFeature.People + | ImageAnalysisFeature.Text + | ImageAnalysisFeature.Tags +}; +``` ++#### [Python](#tab/python) ++Specify which visual features you'd like to extract in your analysis. ++```python +# Set the language and one or more visual features as analysis options +image_analysis_options = visionsdk.ImageAnalysisOptions() ++# Mandatory. You must set one or more features to analyze. Here we use the full set of features. +# Note that 'Captions' is only supported in Azure GPU regions (East US, France Central, Korea Central, +# North Europe, Southeast Asia, West Europe, West US) +image_analysis_options.features = ( + visionsdk.ImageAnalysisFeature.CROP_SUGGESTIONS | + visionsdk.ImageAnalysisFeature.CAPTIONS | + visionsdk.ImageAnalysisFeature.OBJECTS | + visionsdk.ImageAnalysisFeature.PEOPLE | + visionsdk.ImageAnalysisFeature.TEXT | + visionsdk.ImageAnalysisFeature.TAGS +) +``` +#### [C++](#tab/cpp) ++Define an **ImageAnalysisOptions** object, which specifies visual features you'd like to extract in your analysis. ++```cpp +auto analysisOptions = ImageAnalysisOptions::Create(); ++analysisOptions->SetFeatures( + { + ImageAnalysisFeature::CropSuggestions, + ImageAnalysisFeature::Caption, + ImageAnalysisFeature::Objects, + ImageAnalysisFeature::People, + ImageAnalysisFeature::Text, + ImageAnalysisFeature::Tags + }); +``` ++#### [REST](#tab/rest) ++You can specify which features you want to use by setting the URL query parameters of the [Analysis 4.0 API](https://aka.ms/vision-4-0-ref). A parameter can have multiple values, separated by commas. Each feature you specify will require more computation time, so only specify what you need. ++|URL parameter | Value | Description| +|||--| +|`features`|`Read` | reads the visible text in the image and outputs it as structured JSON data.| +|`features`|`Caption` | describes the image content with a complete sentence in supported languages.| +|`features`|`DenseCaption` | generates detailed captions for individual regions in the image. | +|`features`|`SmartCrops` | finds the rectangle coordinates that would crop the image to a desired aspect ratio while preserving the area of interest.| +|`features`|`Objects` | detects various objects within an image, including the approximate location. The Objects argument is only available in English.| +|`features`|`Tags` | tags the image with a detailed list of words related to the image content.| ++A populated URL might look like this: ++`https://{endpoint}/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=tags,read,caption,denseCaption,smartCrops,objects,people` +++++### Use a custom model ++You can also do image analysis with a custom trained model. To create and train a model, see [Create a custom Image Analysis model](./model-customization.md). 
Once your model is trained, all you need is the model's name value. ++#### [C#](#tab/csharp) ++To use a custom model, create the ImageAnalysisOptions with no features, and set the name of your model. ++```csharp +var analysisOptions = new ImageAnalysisOptions() +{ + ModelName = "MyCustomModelName" +}; +``` ++#### [Python](#tab/python) ++To use a custom model, create an **ImageAnalysisOptions** object with no features set, and set the name of your model. +++```python +analysis_options = sdk.ImageAnalysisOptions() ++analysis_options.model_name = "MyCustomModelName" +``` ++#### [C++](#tab/cpp) ++To use a custom model, create an **ImageAnalysisOptions** object with no features set, and set the name of your model. ++```cpp +auto analysisOptions = ImageAnalysisOptions::Create(); ++analysisOptions->SetModelName("MyCustomModelName"); +``` +++#### [REST](#tab/rest) ++To use a custom model, do not use the features query parameter. Set the model-name parameter to the name of your model. ++`https://{endpoint}/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&model-name=MyCustomModelName` ++++### Specify languages ++You can also specify the language of the returned data. This is optional, and the default language is English. See [Language support](https://aka.ms/cv-languages) for a list of supported language codes and which visual features are supported for each language. +++#### [C#](#tab/csharp) ++Use the *language* property of your **ImageAnalysisOptions** object to specify a language. ++```csharp +var analysisOptions = new ImageAnalysisOptions() +{ ++ // Optional. Default is "en" for English. See https://aka.ms/cv-languages for a list of supported + // language codes and which visual features are supported for each language. + Language = "en", +}; +``` +++#### [Python](#tab/python) ++Use the *language* property of your **ImageAnalysisOptions** object to specify a language. ++```python +# Optional. Default is "en" for English. See https://aka.ms/cv-languages for a list of supported +# language codes and which visual features are supported for each language. +image_analysis_options.language = 'en' +``` ++#### [C++](#tab/cpp) ++Use the *language* property of your **ImageAnalysisOptions** object to specify a language. ++```cpp +// Optional. Default is "en" for English. See https://aka.ms/cv-languages for a list of supported +// language codes and which visual features are supported for each language. +analysisOptions->SetLanguage("en"); +``` ++#### [REST](#tab/rest) ++The following URL query parameter specifies the language. The default value is `en`. ++|URL parameter | Value | Description| +|||--| +|`language`|`en` | English| +|`language`|`es` | Spanish| +|`language`|`ja` | Japanese| +|`language`|`pt` | Portuguese| +|`language`|`zh` | Simplified Chinese| ++A populated URL might look like this: ++`https://{endpoint}/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=tags,read,caption,denseCaption,smartCrops,objects,people&language=en` ++++++## Get results from the service ++This section shows you how to parse the results of the API call. It includes the API call itself. +++#### [C#](#tab/csharp) ++### With visual features ++The following code calls the Image Analysis API and prints the results for all standard visual features. 
++```csharp +using var analyzer = new ImageAnalyzer(serviceOptions, imageSource, analysisOptions); ++var result = analyzer.Analyze(); ++if (result.Reason == ImageAnalysisResultReason.Analyzed) +{ + Console.WriteLine($" Image height = {result.ImageHeight}"); + Console.WriteLine($" Image width = {result.ImageWidth}"); + Console.WriteLine($" Model version = {result.ModelVersion}"); ++ if (result.Caption != null) + { + Console.WriteLine(" Caption:"); + Console.WriteLine($" \"{result.Caption.Content}\", Confidence {result.Caption.Confidence:0.0000}"); + } ++ if (result.Objects != null) + { + Console.WriteLine(" Objects:"); + foreach (var detectedObject in result.Objects) + { + Console.WriteLine($" \"{detectedObject.Name}\", Bounding box {detectedObject.BoundingBox}, Confidence {detectedObject.Confidence:0.0000}"); + } + } ++ if (result.Tags != null) + { + Console.WriteLine($" Tags:"); + foreach (var tag in result.Tags) + { + Console.WriteLine($" \"{tag.Name}\", Confidence {tag.Confidence:0.0000}"); + } + } ++ if (result.People != null) + { + Console.WriteLine($" People:"); + foreach (var person in result.People) + { + Console.WriteLine($" Bounding box {person.BoundingBox}, Confidence {person.Confidence:0.0000}"); + } + } ++ if (result.CropSuggestions != null) + { + Console.WriteLine($" Crop Suggestions:"); + foreach (var cropSuggestion in result.CropSuggestions) + { + Console.WriteLine($" Aspect ratio {cropSuggestion.AspectRatio}: " + + $"Crop suggestion {cropSuggestion.BoundingBox}"); + } + } ++ if (result.Text != null) + { + Console.WriteLine($" Text:"); + foreach (var line in result.Text.Lines) + { + string pointsToString = "{" + string.Join(',', line.BoundingPolygon.Select(point => point.ToString())) + "}"; + Console.WriteLine($" Line: '{line.Content}', Bounding polygon {pointsToString}"); ++ foreach (var word in line.Words) + { + pointsToString = "{" + string.Join(',', word.BoundingPolygon.Select(point => point.ToString())) + "}"; + Console.WriteLine($" Word: '{word.Content}', Bounding polygon {pointsToString}, Confidence {word.Confidence:0.0000}"); + } + } + } ++ var resultDetails = ImageAnalysisResultDetails.FromResult(result); + Console.WriteLine($" Result details:"); + Console.WriteLine($" Image ID = {resultDetails.ImageId}"); + Console.WriteLine($" Result ID = {resultDetails.ResultId}"); + Console.WriteLine($" Connection URL = {resultDetails.ConnectionUrl}"); + Console.WriteLine($" JSON result = {resultDetails.JsonResult}"); +} +else if (result.Reason == ImageAnalysisResultReason.Error) +{ + var errorDetails = ImageAnalysisErrorDetails.FromResult(result); + Console.WriteLine(" Analysis failed."); + Console.WriteLine($" Error reason : {errorDetails.Reason}"); + Console.WriteLine($" Error code : {errorDetails.ErrorCode}"); + Console.WriteLine($" Error message: {errorDetails.Message}"); +} +``` ++### With custom model ++The following code calls the Image Analysis API and prints the results for custom model analysis. 
++```csharp +using var analyzer = new ImageAnalyzer(serviceOptions, imageSource, analysisOptions); ++var result = analyzer.Analyze(); ++if (result.Reason == ImageAnalysisResultReason.Analyzed) +{ + if (result.CustomObjects != null) + { + Console.WriteLine(" Custom Objects:"); + foreach (var detectedObject in result.CustomObjects) + { + Console.WriteLine($" \"{detectedObject.Name}\", Bounding box {detectedObject.BoundingBox}, Confidence {detectedObject.Confidence:0.0000}"); + } + } ++ if (result.CustomTags != null) + { + Console.WriteLine($" Custom Tags:"); + foreach (var tag in result.CustomTags) + { + Console.WriteLine($" \"{tag.Name}\", Confidence {tag.Confidence:0.0000}"); + } + } +} +else if (result.Reason == ImageAnalysisResultReason.Error) +{ + var errorDetails = ImageAnalysisErrorDetails.FromResult(result); + Console.WriteLine(" Analysis failed."); + Console.WriteLine($" Error reason : {errorDetails.Reason}"); + Console.WriteLine($" Error code : {errorDetails.ErrorCode}"); + Console.WriteLine($" Error message: {errorDetails.Message}"); +} +``` ++#### [Python](#tab/python) ++### With visual features ++The following code calls the Image Analysis API and prints the results for all standard visual features. ++```python +image_analyzer = sdk.ImageAnalyzer(service_options, vision_source, analysis_options) ++result = image_analyzer.analyze() ++if result.reason == sdk.ImageAnalysisResultReason.ANALYZED: ++ print(" Image height: {}".format(result.image_height)) + print(" Image width: {}".format(result.image_width)) + print(" Model version: {}".format(result.model_version)) ++ if result.caption is not None: + print(" Caption:") + print(" '{}', Confidence {:.4f}".format(result.caption.content, result.caption.confidence)) ++ if result.objects is not None: + print(" Objects:") + for object in result.objects: + print(" '{}', {} Confidence: {:.4f}".format(object.name, object.bounding_box, object.confidence)) ++ if result.tags is not None: + print(" Tags:") + for tag in result.tags: + print(" '{}', Confidence {:.4f}".format(tag.name, tag.confidence)) ++ if result.people is not None: + print(" People:") + for person in result.people: + print(" {}, Confidence {:.4f}".format(person.bounding_box, person.confidence)) ++ if result.crop_suggestions is not None: + print(" Crop Suggestions:") + for crop_suggestion in result.crop_suggestions: + print(" Aspect ratio {}: Crop suggestion {}" + .format(crop_suggestion.aspect_ratio, crop_suggestion.bounding_box)) ++ if result.text is not None: + print(" Text:") + for line in result.text.lines: + points_string = "{" + ", ".join([str(int(point)) for point in line.bounding_polygon]) + "}" + print(" Line: '{}', Bounding polygon {}".format(line.content, points_string)) + for word in line.words: + points_string = "{" + ", ".join([str(int(point)) for point in word.bounding_polygon]) + "}" + print(" Word: '{}', Bounding polygon {}, Confidence {:.4f}" + .format(word.content, points_string, word.confidence)) ++ result_details = sdk.ImageAnalysisResultDetails.from_result(result) + print(" Result details:") + print(" Image ID: {}".format(result_details.image_id)) + print(" Result ID: {}".format(result_details.result_id)) + print(" Connection URL: {}".format(result_details.connection_url)) + print(" JSON result: {}".format(result_details.json_result)) ++elif result.reason == sdk.ImageAnalysisResultReason.ERROR: ++ error_details = sdk.ImageAnalysisErrorDetails.from_result(result) + print(" Analysis failed.") + print(" Error reason: {}".format(error_details.reason)) + print(" 
Error code: {}".format(error_details.error_code)) + print(" Error message: {}".format(error_details.message)) +``` ++### With custom model ++The following code calls the Image Analysis API and prints the results for custom model analysis. ++```python +image_analyzer = sdk.ImageAnalyzer(service_options, vision_source, analysis_options) ++result = image_analyzer.analyze() ++if result.reason == sdk.ImageAnalysisResultReason.ANALYZED: ++ if result.custom_objects is not None: + print(" Custom Objects:") + for object in result.custom_objects: + print(" '{}', {} Confidence: {:.4f}".format(object.name, object.bounding_box, object.confidence)) ++ if result.custom_tags is not None: + print(" Custom Tags:") + for tag in result.custom_tags: + print(" '{}', Confidence {:.4f}".format(tag.name, tag.confidence)) ++elif result.reason == sdk.ImageAnalysisResultReason.ERROR: ++ error_details = sdk.ImageAnalysisErrorDetails.from_result(result) + print(" Analysis failed.") + print(" Error reason: {}".format(error_details.reason)) + print(" Error code: {}".format(error_details.error_code)) + print(" Error message: {}".format(error_details.message)) +``` +#### [C++](#tab/cpp) ++### With visual features ++The following code calls the Image Analysis API and prints the results for all standard visual features. ++```cpp +auto analyzer = ImageAnalyzer::Create(serviceOptions, imageSource, analysisOptions); ++auto result = analyzer->Analyze(); ++if (result->GetReason() == ImageAnalysisResultReason::Analyzed) +{ + std::cout << " Image height = " << result->GetImageHeight().Value() << std::endl; + std::cout << " Image width = " << result->GetImageWidth().Value() << std::endl; + std::cout << " Model version = " << result->GetModelVersion().Value() << std::endl; ++ const auto caption = result->GetCaption(); + if (caption.HasValue()) + { + std::cout << " Caption:" << std::endl; + std::cout << " \"" << caption.Value().Content << "\", Confidence " << caption.Value().Confidence << std::endl; + } ++ const auto objects = result->GetObjects(); + if (objects.HasValue()) + { + std::cout << " Objects:" << std::endl; + for (const auto object : objects.Value()) + { + std::cout << " \"" << object.Name << "\", "; + std::cout << "Bounding box " << object.BoundingBox.ToString(); + std::cout << ", Confidence " << object.Confidence << std::endl; + } + } ++ const auto tags = result->GetTags(); + if (tags.HasValue()) + { + std::cout << " Tags:" << std::endl; + for (const auto tag : tags.Value()) + { + std::cout << " \"" << tag.Name << "\""; + std::cout << ", Confidence " << tag.Confidence << std::endl; + } + } ++ const auto people = result->GetPeople(); + if (people.HasValue()) + { + std::cout << " People:" << std::endl; + for (const auto person : people.Value()) + { + std::cout << " Bounding box " << person.BoundingBox.ToString(); + std::cout << ", Confidence " << person.Confidence << std::endl; + } + } ++ const auto cropSuggestions = result->GetCropSuggestions(); + if (cropSuggestions.HasValue()) + { + std::cout << " Crop Suggestions:" << std::endl; + for (const auto cropSuggestion : cropSuggestions.Value()) + { + std::cout << " Aspect ratio " << cropSuggestion.AspectRatio; + std::cout << ": Crop suggestion " << cropSuggestion.BoundingBox.ToString() << std::endl; + } + } ++ const auto detectedText = result->GetText(); + if (detectedText.HasValue()) + { + std::cout << " Text:\n"; + for (const auto line : detectedText.Value().Lines) + { + std::cout << " Line: \"" << line.Content << "\""; + std::cout << ", Bounding polygon " << 
PolygonToString(line.BoundingPolygon) << std::endl; ++ for (const auto word : line.Words) + { + std::cout << " Word: \"" << word.Content << "\""; + std::cout << ", Bounding polygon " << PolygonToString(word.BoundingPolygon); + std::cout << ", Confidence " << word.Confidence << std::endl; + } + } + } ++ auto resultDetails = ImageAnalysisResultDetails::FromResult(result); + std::cout << " Result details:\n"; + std::cout << " Image ID = " << resultDetails->GetImageId() << std::endl; + std::cout << " Result ID = " << resultDetails->GetResultId() << std::endl; + std::cout << " Connection URL = " << resultDetails->GetConnectionUrl() << std::endl; + std::cout << " JSON result = " << resultDetails->GetJsonResult() << std::endl; +} +else if (result->GetReason() == ImageAnalysisResultReason::Error) +{ + auto errorDetails = ImageAnalysisErrorDetails::FromResult(result); + std::cout << " Analysis failed." << std::endl; + std::cout << " Error reason = " << (int)errorDetails->GetReason() << std::endl; + std::cout << " Error code = " << errorDetails->GetErrorCode() << std::endl; + std::cout << " Error message = " << errorDetails->GetMessage() << std::endl; +} +``` ++Use the following helper method to display rectangle coordinates: ++```cpp +std::string PolygonToString(std::vector<int32_t> boundingPolygon) +{ + std::string out = "{"; + for (size_t i = 0; i < boundingPolygon.size(); i += 2) + { + out += ((i == 0) ? "{" : ",{") + + std::to_string(boundingPolygon[i]) + "," + + std::to_string(boundingPolygon[i + 1]) + "}"; + } + out += "}"; + return out; +} +``` ++### With custom model ++The following code calls the Image Analysis API and prints the results for custom model analysis. ++```cpp +auto analyzer = ImageAnalyzer::Create(serviceOptions, imageSource, analysisOptions); ++auto result = analyzer->Analyze(); ++if (result->GetReason() == ImageAnalysisResultReason::Analyzed) +{ + const auto objects = result->GetCustomObjects(); + if (objects.HasValue()) + { + std::cout << " Custom objects:" << std::endl; + for (const auto object : objects.Value()) + { + std::cout << " \"" << object.Name << "\", "; + std::cout << "Bounding box " << object.BoundingBox.ToString(); + std::cout << ", Confidence " << object.Confidence << std::endl; + } + } ++ const auto tags = result->GetCustomTags(); + if (tags.HasValue()) + { + std::cout << " Custom tags:" << std::endl; + for (const auto tag : tags.Value()) + { + std::cout << " \"" << tag.Name << "\""; + std::cout << ", Confidence " << tag.Confidence << std::endl; + } + } +} +else if (result->GetReason() == ImageAnalysisResultReason::Error) +{ + auto errorDetails = ImageAnalysisErrorDetails::FromResult(result); + std::cout << " Analysis failed." << std::endl; + std::cout << " Error reason = " << (int)errorDetails->GetReason() << std::endl; + std::cout << " Error code = " << errorDetails->GetErrorCode() << std::endl; + std::cout << " Error message = " << errorDetails->GetMessage() << std::endl; +} +``` ++#### [REST](#tab/rest) ++The service returns a `200` HTTP response, and the body contains the returned data in the form of a JSON string. The following text is an example of a JSON response. 
++```json +{ + "captionResult": + { + "text": "a person using a laptop", + "confidence": 0.55291348695755 + }, + "objectsResult": + { + "values": + [ + {"boundingBox":{"x":730,"y":66,"w":135,"h":85},"tags":[{"name":"kitchen appliance","confidence":0.501}]}, + {"boundingBox":{"x":523,"y":377,"w":185,"h":46},"tags":[{"name":"computer keyboard","confidence":0.51}]}, + {"boundingBox":{"x":471,"y":218,"w":289,"h":226},"tags":[{"name":"Laptop","confidence":0.85}]}, + {"boundingBox":{"x":654,"y":0,"w":584,"h":473},"tags":[{"name":"person","confidence":0.855}]} + ] + }, + "modelVersion": "2023-02-01-preview", + "metadata": + { + "width": 1260, + "height": 473 + }, + "tagsResult": + { + "values": + [ + {"name":"computer","confidence":0.9865934252738953}, + {"name":"clothing","confidence":0.9695653915405273}, + {"name":"laptop","confidence":0.9658201932907104}, + {"name":"person","confidence":0.9536289572715759}, + {"name":"indoor","confidence":0.9420197010040283}, + {"name":"wall","confidence":0.8871886730194092}, + {"name":"woman","confidence":0.8632704019546509}, + {"name":"using","confidence":0.5603535771369934} + ] + }, + "readResult": + { + "stringIndexType": "TextElements", + "content": "", + "pages": + [ + {"height":473,"width":1260,"angle":0,"pageNumber":1,"words":[],"spans":[{"offset":0,"length":0}],"lines":[]} + ], + "styles": [], + "modelVersion": "2022-04-30" + }, + "smartCropsResult": + { + "values": + [ + {"aspectRatio":1.94,"boundingBox":{"x":158,"y":20,"w":840,"h":433}} + ] + }, + "peopleResult": + { + "values": + [ + {"boundingBox":{"x":660,"y":0,"w":584,"h":471},"confidence":0.9698998332023621}, + {"boundingBox":{"x":566,"y":276,"w":24,"h":30},"confidence":0.022009700536727905}, + {"boundingBox":{"x":587,"y":273,"w":20,"h":28},"confidence":0.01859394833445549}, + {"boundingBox":{"x":609,"y":271,"w":19,"h":30},"confidence":0.003902678843587637}, + {"boundingBox":{"x":563,"y":279,"w":15,"h":28},"confidence":0.0034854013938456774}, + {"boundingBox":{"x":566,"y":299,"w":22,"h":41},"confidence":0.0031260766554623842}, + {"boundingBox":{"x":570,"y":311,"w":29,"h":38},"confidence":0.0026493810582906008}, + {"boundingBox":{"x":588,"y":275,"w":24,"h":54},"confidence":0.001754675293341279}, + {"boundingBox":{"x":574,"y":274,"w":53,"h":64},"confidence":0.0012078586732968688}, + {"boundingBox":{"x":608,"y":270,"w":32,"h":59},"confidence":0.0011869356967508793}, + {"boundingBox":{"x":591,"y":305,"w":29,"h":42},"confidence":0.0010676260571926832} + ] + } +} +``` ++### Error codes ++See the following list of possible errors and their causes: ++* 400 + * `InvalidImageUrl` - Image URL is badly formatted or not accessible. + * `InvalidImageFormat` - Input data is not a valid image. + * `InvalidImageSize` - Input image is too large. + * `NotSupportedVisualFeature` - Specified feature type isn't valid. + * `NotSupportedImage` - Unsupported image, for example child pornography. + * `InvalidDetails` - Unsupported `detail` parameter value. + * `NotSupportedLanguage` - The requested operation isn't supported in the language specified. + * `BadArgument` - More details are provided in the error message. +* 415 - Unsupported media type error. The Content-Type isn't in the allowed types: + * For an image URL, Content-Type should be `application/json` + * For binary image data, Content-Type should be `application/octet-stream` or `multipart/form-data` +* 500 + * `FailedToProcess` + * `Timeout` - Image processing timed out. 
+ * `InternalServerError` +++++> [!TIP] +> While working with Computer Vision, you might encounter transient failures caused by [rate limits](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) enforced by the service, or other transient problems like network outages. For information about handling these types of failures, see [Retry pattern](/azure/architecture/patterns/retry) in the Cloud Design Patterns guide, and the related [Circuit Breaker pattern](/azure/architecture/patterns/circuit-breaker). +++## Next steps ++* Explore the [concept articles](../concept-describe-images-40.md) to learn more about each feature. +* Explore the [code samples on GitHub](https://github.com/Azure-Samples/azure-ai-vision-sdk/blob/main/samples/). +* See the [API reference](https://aka.ms/vision-4-0-ref) to learn more about the API functionality. |
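The REST tab above shows the request URL and a sample JSON response. As a minimal sketch of making that same call from code, the following Python example uses the third-party `requests` package; the endpoint, key, and image URL values are placeholders you must replace with your own.

```python
import requests

# Placeholders: use your own Computer Vision endpoint and key.
endpoint = "https://<endpoint>"
key = "<subscription-key>"

response = requests.post(
    f"{endpoint}/computervision/imageanalysis:analyze",
    params={
        "api-version": "2023-02-01-preview",
        # Request only the features you need; each one adds computation time.
        "features": "tags,caption,objects",
    },
    headers={
        "Ocp-Apim-Subscription-Key": key,
        # For an image URL, the Content-Type must be application/json.
        "Content-Type": "application/json",
    },
    json={"url": "https://example.com/image.jpg"},  # placeholder image URL
)

if response.ok:
    result = response.json()
    # Each requested feature gets its own result object, such as tagsResult.
    for tag in result.get("tagsResult", {}).get("values", []):
        print(tag["name"], tag["confidence"])
else:
    # 400, 415, and 500 responses carry the error details listed above.
    print(response.status_code, response.text)
```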
cognitive-services | Call Analyze Image | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/call-analyze-image.md | You can specify which features you want to use by setting the URL query paramete A populated URL might look like this: -`https://{endpoint}/computervision/imageanalysis:analyze?api-version=2022-10-12-preview&features=Tags` +`https://{endpoint}/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=Tags` #### [C#](#tab/csharp) The following URL query parameter specifies the language. The default value is ` A populated URL might look like this: -`https://{endpoint}/computervision/imageanalysis:analyze?api-version=2022-10-12-preview&features=Tags&language=en` +`https://{endpoint}/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=Tags&language=en` #### [C#](#tab/csharp) |
cognitive-services | Image Retrieval | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/image-retrieval.md | + + Title: Do image retrieval using vectorization - Image Analysis 4.0 ++description: Learn how to call the image retrieval API to vectorize image and search terms. +++++++ Last updated : 02/21/2023+++++# Do image retrieval using vectorization (version 4.0 preview) ++The Image Retrieval APIs enable the _vectorization_ of images and text queries. They convert images to coordinates in a multi-dimensional vector space. Then, incoming text queries can also be converted to vectors, and images can be matched to the text based on semantic closeness. This allows the user to search a set of images using text, without the need to use image tags or other metadata. Semantic closeness often produces better results in search. ++> [!IMPORTANT] +> These APIs are only available in the following geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US. ++## Prerequisites ++* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services) +* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource </a> in the Azure portal to get your key and endpoint. Be sure to create it in one of the permitted geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US. + * After it deploys, select **Go to resource**. Copy the key and endpoint to a temporary location to use later on. ++## Try out Image Retrieval ++You can try out the Image Retrieval feature quickly and easily in your browser using Vision Studio. ++> [!div class="nextstepaction"] +> [Try Vision Studio](https://portal.vision.cognitive.azure.com/) ++## Call the Vectorize Image API ++The `retrieval:vectorizeImage` API lets you convert an image's data to a vector. To call it, make the following changes to the cURL command below: ++1. Replace `<endpoint>` with your Computer Vision endpoint. +1. Replace `<subscription-key>` with your Computer Vision key. +1. In the request body, set `"url"` to the URL of a remote image you want to use. ++```bash +curl.exe -v -X POST "https://<endpoint>/computervision/retrieval:vectorizeImage?api-version=2023-02-01-preview&modelVersion=latest" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii " +{ +'url':'https://upload.wikimedia.org/wikipedia/commons/thumb/a/af/Atomist_quote_from_Democritus.png/338px-Atomist_quote_from_Democritus.png' +}" +``` ++The API call returns a **vector** JSON object, which defines the image's coordinates in the high-dimensional vector space. ++```json +{ + "modelVersion": "2022-04-11", + "vector": [ -0.09442752, -0.00067171326, -0.010985051, ... ] +} +``` ++## Call the Vectorize Text API ++The `retrieval:vectorizeText` API lets you convert a text string to a vector. To call it, make the following changes to the cURL command below: ++1. Replace `<endpoint>` with your Computer Vision endpoint. +1. Replace `<subscription-key>` with your Computer Vision key. +1. In the request body, set `"text"` to the example search term you want to use. 
++```bash +curl.exe -v -X POST "https://<endpoint>/computervision/retrieval:vectorizeText?api-version=2023-02-01-preview&modelVersion=latest" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii " +{ +'text':'cat jumping' +}" +``` ++The API call returns a **vector** JSON object, which defines the text string's coordinates in the high-dimensional vector space. ++```json +{ + "modelVersion": "2022-04-11", + "vector": [ -0.09442752, -0.00067171326, -0.010985051, ... ] +} +``` ++## Calculate vector similarity ++Cosine similarity is a method for measuring the similarity of two vectors. In an Image Retrieval scenario, you'll compare the search query vector with each image's vector. Images that are above a certain threshold of similarity can then be returned as search results. ++The following example C# code calculates the cosine similarity between two vectors. It's up to you to decide what similarity threshold to use for returning images as search results. ++```csharp +public static float GetCosineSimilarity(float[] vector1, float[] vector2) +{ + float dotProduct = 0; + int length = Math.Min(vector1.Length, vector2.Length); + for (int i = 0; i < length; i++) + { + dotProduct += vector1[i] * vector2[i]; + } + float magnitude1 = MathF.Sqrt(vector1.Select(x => x * x).Sum()); + float magnitude2 = MathF.Sqrt(vector2.Select(x => x * x).Sum()); + + return dotProduct / (magnitude1 * magnitude2); +} +``` ++## Next steps ++[Image retrieval concepts](../concept-image-retrieval.md) |
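To see how the two APIs and the similarity measure fit together, here's a sketch of a small end-to-end retrieval flow in Python. It assumes the third-party `requests` package; the endpoint, key, and image URLs are placeholders, and the ranking helper is illustrative rather than part of the API.

```python
import math
import requests

ENDPOINT = "https://<endpoint>"    # placeholder
KEY = "<subscription-key>"         # placeholder
PARAMS = {"api-version": "2023-02-01-preview", "modelVersion": "latest"}
HEADERS = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}


def vectorize_image(image_url):
    # Calls retrieval:vectorizeImage and returns the image's vector.
    r = requests.post(f"{ENDPOINT}/computervision/retrieval:vectorizeImage",
                      params=PARAMS, headers=HEADERS, json={"url": image_url})
    r.raise_for_status()
    return r.json()["vector"]


def vectorize_text(query):
    # Calls retrieval:vectorizeText and returns the query's vector.
    r = requests.post(f"{ENDPOINT}/computervision/retrieval:vectorizeText",
                      params=PARAMS, headers=HEADERS, json={"text": query})
    r.raise_for_status()
    return r.json()["vector"]


def cosine_similarity(v1, v2):
    dot = sum(a * b for a, b in zip(v1, v2))
    mag1 = math.sqrt(sum(a * a for a in v1))
    mag2 = math.sqrt(sum(b * b for b in v2))
    return dot / (mag1 * mag2)


# Rank candidate images against a text query; choosing a similarity
# threshold for what counts as a search result is up to you.
image_urls = ["https://example.com/a.jpg", "https://example.com/b.jpg"]
query_vector = vectorize_text("cat jumping")
ranked = sorted(image_urls, reverse=True,
                key=lambda u: cosine_similarity(query_vector, vectorize_image(u)))
print(ranked)
```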
cognitive-services | Migrate From Custom Vision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/migrate-from-custom-vision.md | + + Title: "Migrate a Custom Vision project to Image Analysis 4.0" ++description: Learn how to generate an annotation file from an old Custom Vision project, so you can train a custom Image Analysis model on previous training data. +++++ Last updated : 02/06/2023++++# Migrate a Custom Vision project to Image Analysis 4.0 preview ++You can migrate an existing Azure Custom Vision project to the new Image Analysis 4.0 system. [Custom Vision](/azure/cognitive-services/custom-vision-service/overview) is a model customization service that existed before Image Analysis 4.0. ++This guide uses a Python script to take all of the training data from an existing Custom Vision project (images and their label data) and convert it to a COCO file. You can then import the COCO file into Vision Studio to train a custom model. See [Create and train a custom model](model-customization.md) and go to the section on importing a COCO file—you can follow the guide from there to the end. ++## Prerequisites ++* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/) +* [Python 3.x](https://www.python.org/) +* A Custom Vision resource where an existing project is stored. +* An Azure Storage resource - [Create one](/azure/storage/common/storage-account-create?tabs=azure-portal) ++## Install libraries ++This script requires certain Python libraries. Install them in your project directory with the following command. ++```bash +pip install azure-storage-blob azure-cognitiveservices-vision-customvision cffi +``` ++## Prepare the migration script ++Create a new Python file—_export-cvs-data-to-coco.py_, for example. Then open it in a text editor and paste in the following contents. ++```python +from typing import List, Union +from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient +from azure.cognitiveservices.vision.customvision.training.models import Image, ImageTag, ImageRegion, Project +from msrest.authentication import ApiKeyCredentials +import argparse +import time +import json +import pathlib +import logging +from azure.storage.blob import ContainerClient, BlobClient +import multiprocessing +++N_PROCESS = 8 +++def get_file_name(sub_folder, image_id): + return f'{sub_folder}/images/{image_id}' +++def blob_copy(params): + container_client, sub_folder, image = params + blob_client: BlobClient = container_client.get_blob_client(get_file_name(sub_folder, image.id)) + blob_client.start_copy_from_url(image.original_image_uri) + return blob_client +++def wait_for_completion(blobs, time_out=5): + pendings = blobs + time_break = 0.5 + while pendings and time_out > 0: + pendings = [b for b in pendings if b.get_blob_properties().copy.status == 'pending'] + if pendings: + logging.info(f'{len(pendings)} pending copies. 
wait for {time_break} seconds.') + time.sleep(time_break) + time_out -= time_break +++def copy_images_with_retry(pool, container_client, sub_folder, images: List, batch_id, n_retries=5): + retry_limit = n_retries + urls = [] + while images and n_retries > 0: + params = [(container_client, sub_folder, image) for image in images] + # Materialize the zip so it can be iterated more than once below. + img_and_blobs = list(zip(images, pool.map(blob_copy, params))) + logging.info(f'Batch {batch_id}: Copied {len(images)} images.') + urls = urls or [b.url for _, b in img_and_blobs] ++ wait_for_completion([b for _, b in img_and_blobs]) + images = [image for image, b in img_and_blobs if b.get_blob_properties().copy.status in ['failed', 'aborted']] + n_retries -= 1 + if images: + time.sleep(0.5 * (retry_limit - n_retries)) ++ if images: + raise RuntimeError(f'Copy failed for some images in batch {batch_id}') ++ return urls +++class CocoOperator: + def __init__(self): + self._images = [] + self._annotations = [] + self._categories = [] + self._category_name_to_id = {} ++ @property + def num_imges(self): + return len(self._images) ++ @property + def num_categories(self): + return len(self._categories) ++ @property + def num_annotations(self): + return len(self._annotations) ++ def add_image(self, width, height, coco_url, file_name): + self._images.append( + { + 'id': len(self._images) + 1, + 'width': width, + 'height': height, + 'coco_url': coco_url, + 'file_name': file_name, + }) ++ def add_annotation(self, image_id, category_id_or_name: Union[int, str], bbox: List[float] = None): + self._annotations.append({ + 'id': len(self._annotations) + 1, + 'image_id': image_id, + 'category_id': category_id_or_name if isinstance(category_id_or_name, int) else self._category_name_to_id[category_id_or_name]}) ++ if bbox: + self._annotations[-1]['bbox'] = bbox ++ def add_category(self, name): + self._categories.append({ + 'id': len(self._categories) + 1, + 'name': name + }) ++ self._category_name_to_id[name] = len(self._categories) ++ def to_json(self) -> str: + coco_dict = { + 'images': self._images, + 'categories': self._categories, + 'annotations': self._annotations, + } ++ return json.dumps(coco_dict, ensure_ascii=False, indent=2) +++def log_project_info(training_client: CustomVisionTrainingClient, project_id): + project: Project = training_client.get_project(project_id) + proj_settings = project.settings + project.settings = None + logging.info(f'Project info dict: {project.__dict__}') + logging.info(f'Project setting dict: {proj_settings.__dict__}') + logging.info(f'Project info: n tags: {len(training_client.get_tags(project_id))},' + f' n images: {training_client.get_image_count(project_id)} (tagged: {training_client.get_tagged_image_count(project_id)},' + f' untagged: {training_client.get_untagged_image_count(project_id)})') +++def export_data(azure_storage_account_name, azure_storage_key, azure_storage_container_name, custom_vision_endpoint, custom_vision_training_key, custom_vision_project_id, n_process): + azure_storage_account_url = f"https://{azure_storage_account_name}.blob.core.windows.net" + container_client = ContainerClient(azure_storage_account_url, azure_storage_container_name, credential=azure_storage_key) + credentials = ApiKeyCredentials(in_headers={"Training-key": custom_vision_training_key}) + trainer = CustomVisionTrainingClient(custom_vision_endpoint, credentials) ++ coco_operator = CocoOperator() + for tag in trainer.get_tags(custom_vision_project_id): + coco_operator.add_category(tag.name) ++ skip = 0 + batch_id = 0 + project_name = 
trainer.get_project(custom_vision_project_id).name + log_project_info(trainer, custom_vision_project_id) + sub_folder = f'{project_name}_{custom_vision_project_id}' + with multiprocessing.Pool(n_process) as pool: + while True: + images: List[Image] = trainer.get_images(project_id=custom_vision_project_id, skip=skip) + if not images: + break + urls = copy_images_with_retry(pool, container_client, sub_folder, images, batch_id) + for i, image in enumerate(images): + coco_operator.add_image(image.width, image.height, urls[i], get_file_name(sub_folder, image.id)) + image_tags: List[ImageTag] = image.tags + image_regions: List[ImageRegion] = image.regions + if image_regions: + for img_region in image_regions: + coco_operator.add_annotation(coco_operator.num_imges, img_region.tag_name, [img_region.left, img_region.top, img_region.width, img_region.height]) + elif image_tags: + for img_tag in image_tags: + coco_operator.add_annotation(coco_operator.num_imges, img_tag.tag_name) ++ skip += len(images) + batch_id += 1 ++ coco_json_file_name = 'train.json' + local_json = pathlib.Path(coco_json_file_name) + local_json.write_text(coco_operator.to_json(), encoding='utf-8') + coco_json_blob_client: BlobClient = container_client.get_blob_client(f'{sub_folder}/{coco_json_file_name}') + if coco_json_blob_client.exists(): + logging.warning(f'coco json file exists in blob. Skipped uploading. If existing one is outdated, please manually upload your new coco json from ./train.json to {coco_json_blob_client.url}') + else: + coco_json_blob_client.upload_blob(local_json.read_bytes()) + logging.info(f'coco file train.json uploaded to {coco_json_blob_client.url}.') +++def parse_args(): + parser = argparse.ArgumentParser('Export Custom Vision workspace data to blob storage.') ++ parser.add_argument('--custom_vision_project_id', '-p', type=str, required=True, help='Custom Vision Project Id.') + parser.add_argument('--custom_vision_training_key', '-k', type=str, required=True, help='Custom Vision training key.') + parser.add_argument('--custom_vision_endpoint', '-e', type=str, required=True, help='Custom Vision endpoint.') ++ parser.add_argument('--azure_storage_account_name', '-a', type=str, required=True, help='Azure storage account name.') + parser.add_argument('--azure_storage_account_key', '-t', type=str, required=True, help='Azure storage account key.') + parser.add_argument('--azure_storage_container_name', '-c', type=str, required=True, help='Azure storage container name.') ++ parser.add_argument('--n_process', '-n', type=int, required=False, default=8, help='Number of processes used in exporting data.') ++ return parser.parse_args() +++def main(): + args = parse_args() ++ export_data(args.azure_storage_account_name, args.azure_storage_account_key, args.azure_storage_container_name, + args.custom_vision_endpoint, args.custom_vision_training_key, args.custom_vision_project_id, args.n_process) +++if __name__ == '__main__': + main() +``` ++## Run the script ++Run the script using the `python` command. ++```console +python export-cvs-data-to-coco.py -p <project ID> -k <training key> -e <endpoint url> -a <storage account> -t <storage key> -c <container name> +``` ++You need to fill in the correct parameter values. 
You need the following information: ++- The project ID of your Custom Vision project +- Your Custom Vision training key +- Your Custom Vision endpoint URL +- The name of the Azure Storage account you want to use with your new custom model project +- The key for that storage account +- The name of the container you want to use in that storage account ++## Use COCO file in a new project ++The script generates a COCO file and uploads it to the blob storage location you specified. You can now import it to your model customization project. See [Create and train a custom model](model-customization.md) and go to the section on selecting a COCO file—you can follow the guide from there to the end. ++## Next steps ++* [Create and train a custom model](model-customization.md) |
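If you want to sanity-check the exported file before importing it, the fields written by the `CocoOperator` class above are easy to inspect. The following Python sketch reads the generated _train.json_ from the local working directory; the printed values are illustrative.

```python
import json

with open("train.json", encoding="utf-8") as f:
    coco = json.load(f)

print(f'{len(coco["images"])} images, '
      f'{len(coco["categories"])} categories, '
      f'{len(coco["annotations"])} annotations')

# Each image entry has "id", "width", "height", "coco_url", and "file_name".
first_image = coco["images"][0]
print(first_image["file_name"], first_image["coco_url"])

# Each annotation has "id", "image_id", and "category_id", plus an optional
# "bbox" of [left, top, width, height] for object detection projects.
for annotation in coco["annotations"][:5]:
    print(annotation["image_id"], annotation["category_id"], annotation.get("bbox"))
```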
cognitive-services | Model Customization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/model-customization.md | + + Title: Create a custom Image Analysis model ++description: Learn how to create and train a custom model to do image classification and object detection that's specific to your use case. +++++ Last updated : 02/06/2023+++++# Create a custom Image Analysis model (preview) ++Image Analysis 4.0 allows you to train a custom model using your own training images. By manually labeling your images, you can train a model to apply custom tags to the images (image classification) or detect custom objects (object detection). Image Analysis 4.0 models are especially effective at few-shot learning, so you can get accurate models with less training data. ++This guide shows you how to create and train a custom image classification model. The few differences between this and object detection models are noted. ++## Prerequisites ++* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services) +* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource </a> in the Azure portal to get your key and endpoint. If you're following this guide using Vision Studio, you must create your resource in the East US region. After it deploys, select **Go to resource**. Copy the key and endpoint to a temporary location to use later on. +* An Azure Storage resource - [Create one](/azure/storage/common/storage-account-create?tabs=azure-portal) +* A set of images with which to train your classification model. You can use the set of [sample images on GitHub](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ImageClassification/Images). Or, you can use your own images. You only need about 3-5 images per class. ++> [!NOTE] +> We do not recommend you use custom models for business critical environments due to potential high latency. When customers train custom models in Vision Studio, those custom models belong to the Computer Vision resource that they were trained under and the customer is able to make calls to those models using the **Analyze Image** API. When they make these calls, the custom model is loaded in memory and the prediction infrastructure is initialized. While this happens, customers might experience longer than expected latency to receive prediction results. ++#### [Vision Studio](#tab/studio) ++## Create a new custom model ++Begin by going to [Vision Studio](https://portal.vision.cognitive.azure.com/) and selecting the **Image analysis** tab. Then select either the **Extract common tags from images** tile for image classification or the **Extract common objects in images** tile for object detection. This guide will demonstrate a custom image classification model. ++> [!IMPORTANT] +> To train a custom model in Vision Studio, your Azure subscription needs to be approved for access. Please request access using [this form](https://aka.ms/visionaipublicpreview). ++On the next screen, the **Choose the model you want to try out** drop-down lets you select the Pretrained Vision model (to do ordinary Image Analysis) or a custom trained model. Since you don't have a custom model yet, select **Train a custom model**. 
++ ++## Prepare training images ++You need to upload your training images to an Azure Blob Storage container. Go to your storage resource in the Azure portal and navigate to the **Storage browser** tab. Here you can create a blob container and upload your images. Put them all at the root of the container. ++## Add a dataset ++To train a custom model, you need to associate it with a **Dataset** where you provide images and their label information as training data. In Vision Studio, select the **Datasets** tab to view your datasets. ++To create a new dataset, select **add new dataset**. Enter a name and select a dataset type: If you'd like to do image classification, select `Multi-class image classification`. If you'd like to do object detection, select `Object detection`. ++++ ++Then, select the container from the Azure Blob Storage account where you stored the training images. Check the box to allow Vision Studio to read and write to the blob storage container. This is a necessary step to import labeled data. Create the dataset. ++## Create an Azure Machine Learning labeling project ++You need a COCO file to convey the labeling information. An easy way to generate a COCO file is to create an Azure Machine Learning project, which comes with a data-labeling workflow. ++In the dataset details page, select **Add a new Data Labeling project**. Name it and select **Create a new workspace**. That will open a new Azure portal tab where you can create the Azure Machine Learning project. ++ ++Once the Azure Machine Learning project is created, return to the Vision Studio tab and select it under **Workspace**. The Azure Machine Learning portal will then open in a new browser tab. ++## Azure Machine Learning: Create labels ++To start labeling, follow the **Please add label classes** prompt to add label classes. ++ ++ ++Once you've added all the class labels, save them, select **start** on the project, and then select **Label data** at the top. ++ +++### Azure Machine Learning: Manually label training data ++Choose **Start labeling** and follow the prompts to label all of your images. When you're finished, return to the Vision Studio tab in your browser. ++Now select **Add COCO file**, then select **Import COCO file from an Azure ML Data Labeling project**. This will import the labeled data from Azure Machine Learning. ++The COCO file you just created is now stored in the Azure Storage container that you linked to this project. You can now import it into the model customization workflow. Select it from the drop-down list. Once the COCO file is imported into the dataset, the dataset can be used for training a model. ++> [!NOTE] +> ## Import COCO files from elsewhere +> +> If you have a ready-made COCO file you want to import, go to the **Datasets** tab and select `Add COCO files to this dataset`. You can choose to add a specific COCO file from a Blob storage account or import from the Azure Machine Learning labeling project. +> +> Currently, Microsoft is addressing an issue which causes COCO file import to fail with large datasets when initiated in Vision Studio. To train using a large dataset, it's recommended to use the REST API instead. +> +>  +> +> [!INCLUDE [coco-files](../includes/coco-files.md)] ++## Train the custom model ++To start training a model with your COCO file, go to the **Custom models** tab and select **Add a new model**. Enter a name for the model and select `Image classification` or `Object detection` as the model type. 
++ ++Select your dataset, which is now associated with the COCO file containing the labeling information. ++Then select a time budget and train the model. For small examples, you can use a `1 hour` budget. ++ ++It may take some time for the training to complete. Image Analysis 4.0 models can be very accurate with only a small set of training data, but they take longer to train than previous models. ++## Evaluate the trained model ++After training has completed, you can view the model's performance evaluation. The following metrics are used: ++- Image classification: Average Precision, Accuracy Top 1, Accuracy Top 5 +- Object detection: Mean Average Precision @ 30, Mean Average Precision @ 50, Mean Average Precision @ 75 ++If an evaluation set is not provided when training the model, the reported performance is estimated based on part of the training set. We strongly recommend you use an evaluation dataset (using the same process as above) to have a reliable estimation of your model performance. ++ ++## Test custom model in Vision Studio ++Once you've built a custom model, you can go back to the **Extract common tags from images** tile in Vision Studio and test it by selecting it in the drop-down menu and then uploading new images. ++ ++The prediction results will appear in the right column. ++#### [REST API](#tab/rest) ++## Prepare training data ++The first thing you need to do is create a COCO file from your training data. You can create a COCO file by converting an old Custom Vision project using the [migration script](migrate-from-custom-vision.md). Or, you can create a COCO file from scratch using some other labeling tool. See the following specification: +++## Upload to storage ++Upload your COCO file to a blob storage container, ideally the same blob container that holds the training images themselves. ++## Create your training dataset ++The `datasets/<dataset-name>` API lets you create a new dataset object that references the training data. Make the following changes to the cURL command below: ++1. Replace `<endpoint>` with your Computer Vision endpoint. +1. Replace `<dataset-name>` with a name for your dataset. +1. Replace `<subscription-key>` with your Computer Vision key. +1. In the request body, set `"annotationKind"` to either `"MultiClassClassification"` or `"ObjectDetection"`, depending on your project. +1. In the request body, set the `"annotationFileUris"` array to an array of string(s) that show the URI location(s) of your COCO file(s) in blob storage. ++```bash +curl.exe -v -X PUT "https://<endpoint>/computervision/datasets/<dataset-name>?api-version=2023-02-01-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii " +{ +'annotationKind':'MultiClassClassification', +'annotationFileUris':['<URI>'] +}" +``` ++## Create and train a model ++The `models/<model-name>` API lets you create a new custom model and associate it with an existing dataset. It also starts the training process. Make the following changes to the cURL command below: ++1. Replace `<endpoint>` with your Computer Vision endpoint. +1. Replace `<model-name>` with a name for your model. +1. Replace `<subscription-key>` with your Computer Vision key. +1. In the request body, set `"trainingDatasetName"` to the name of the dataset from the previous step. +1. In the request body, set `"modelKind"` to either `"Generic-Classifier"` or `"Generic-Detector"`, depending on your project. 
++```bash +curl.exe -v -X PUT "https://<endpoint>/computervision/models/<model-name>?api-version=2023-02-01-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii " +{ +'trainingParameters': { + 'trainingDatasetName':'<dataset-name>', + 'timeBudgetInHours':1, + 'modelKind':'Generic-Classifier' + } +}" +``` ++## Evaluate the model's performance on a dataset ++The `models/<model-name>/evaluations/<eval-name>` API evaluates the performance of an existing model. Make the following changes to the cURL command below: ++1. Replace `<endpoint>` with your Computer Vision endpoint. +1. Replace `<model-name>` with the name of your model. +1. Replace `<eval-name>` with a name that can be used to uniquely identify the evaluation. +1. Replace `<subscription-key>` with your Computer Vision key. +1. In the request body, set `"testDatasetName"` to the name of the dataset you want to use for evaluation. If you don't have a dedicated dataset, you can use the same dataset you used for training. ++```bash +curl.exe -v -X PUT "https://<endpoint>/computervision/models/<model-name>/evaluations/<eval-name>?api-version=2023-02-01-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii " +{ +'evaluationParameters':{ + 'testDatasetName':'<dataset-name>' + } +}" +``` ++The API call returns a **ModelPerformance** JSON object, which lists the model's scores in several categories. The following metrics are used: ++- Image classification: Average Precision, Accuracy Top 1, Accuracy Top 5 +- Object detection: Mean Average Precision @ 30, Mean Average Precision @ 50, Mean Average Precision @ 75 ++## Test the custom model on an image ++The `imageanalysis:analyze` API does ordinary Image Analysis operations. By specifying some parameters, you can use this API to query your own custom model instead of the prebuilt Image Analysis models. Make the following changes to the cURL command below: ++1. Replace `<endpoint>` with your Computer Vision endpoint. +1. Replace `<model-name>` with the name of your model. +1. Replace `<subscription-key>` with your Computer Vision key. +1. In the request body, set `"url"` to the URL of a remote image you want to test your model on. ++```bash +curl.exe -v -X POST "https://<endpoint>/computervision/imageanalysis:analyze?model-name=<model-name>&api-version=2023-02-01-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii " +{'url':'https://upload.wikimedia.org/wikipedia/commons/thumb/a/af/Atomist_quote_from_Democritus.png/338px-Atomist_quote_from_Democritus.png' +}" +``` ++The API call returns an **ImageAnalysisResult** JSON object, which contains all the detected tags for an image classifier, or objects for an object detector, with their confidence scores. ++```json +{ + "kind": "imageAnalysisResult", + "metadata": { + "height": 900, + "width": 1260 + }, + "customModelResult": { + "classifications": [ + { + "confidence": 0.97970027, + "label": "hemlock" + }, + { + "confidence": 0.020299695, + "label": "japanese-cherry" + } + ], + "objects": [], + "imageMetadata": { + "width": 1260, + "height": 900 + } + } +} +``` ++++## Next steps ++In this guide, you created and trained a custom image classification model using Image Analysis. Next, learn more about the Analyze Image 4.0 API, so you can call your custom model from an application using REST or library SDKs. 
++* [Call the Analyze Image API](./call-analyze-image-40.md#use-a-custom-model) +* See the [Model customization concepts](../concept-model-customization.md) guide for a broad overview of this feature and a list of frequently asked questions. |
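If you'd rather script the REST workflow above than run the cURL commands one by one, the following Python sketch chains the same three calls using the third-party `requests` package. The endpoint, key, dataset, model, and evaluation names are placeholders, and the sketch assumes training has finished before the evaluation call is made.

```python
import requests

ENDPOINT = "https://<endpoint>"            # placeholder
HEADERS = {"Ocp-Apim-Subscription-Key": "<subscription-key>",
           "Content-Type": "application/json"}
PARAMS = {"api-version": "2023-02-01-preview"}


def put(path, body):
    # Issues a PUT request against the Computer Vision resource.
    r = requests.put(f"{ENDPOINT}/computervision/{path}",
                     params=PARAMS, headers=HEADERS, json=body)
    r.raise_for_status()
    return r


# 1. Register a dataset that points at the COCO file in blob storage.
put("datasets/my-dataset", {
    "annotationKind": "MultiClassClassification",
    "annotationFileUris": ["<URI>"],
})

# 2. Create the model and start training with a one-hour time budget.
put("models/my-model", {
    "trainingParameters": {
        "trainingDatasetName": "my-dataset",
        "timeBudgetInHours": 1,
        "modelKind": "Generic-Classifier",
    },
})

# 3. After training completes, evaluate the model against a test dataset.
#    The call returns a ModelPerformance JSON object.
print(put("models/my-model/evaluations/my-eval", {
    "evaluationParameters": {"testDatasetName": "my-dataset"},
}).json())
```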
cognitive-services | Overview Image Analysis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-image-analysis.md | This documentation contains the following types of articles: For a more structured approach, follow a Training module for Image Analysis. * [Analyze images with the Computer Vision service](/training/modules/analyze-images-computer-vision/) -## Image Analysis features -You can analyze images to provide insights about their visual features and characteristics. All of the features in the list below are provided by the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. Follow a [quickstart](./quickstarts-sdk/image-analysis-client-library.md) to get started. +## Analyze Image -### Extract text from images (preview) +You can analyze images to provide insights about their visual features and characteristics. All of the features in the list below are provided by the Analyze Image API. Follow a [quickstart](./quickstarts-sdk/image-analysis-client-library-40.md) to get started. -Version 4.0 preview of Image Analysis offers the ability to extract text from images. Compared with the async Computer Vision 3.2 GA Read, the new version offers the familiar Read OCR engine in a unified performance-enhanced synchronous API that makes it easy to get all image insights including OCR in a single API operation. [Extract text from images](concept-ocr.md) +### Model customization (v4.0 preview only) +You can create and train custom models to do image classification or object detection. Bring your own images, label them with custom tags, and Image Analysis will train a model customized for your use case. [Model customization](./concept-model-customization.md) -### Detect people in images (preview) +### Read text from images (v4.0 preview only) ++Version 4.0 preview of Image Analysis offers the ability to extract readable text from images. Compared with the async Computer Vision 3.2 Read API, the new version offers the familiar Read OCR engine in a unified performance-enhanced synchronous API that makes it easy to get OCR along with other insights in a single API call. [Extract text from images](concept-ocr.md) ++### Detect people in images (v4.0 preview only) Version 4.0 preview of Image Analysis offers the ability to detect people appearing in images. The bounding box coordinates of each detected person are returned, along with a confidence score. [People detection](concept-people-detection.md) -### Tag visual features +### Generate image captions ++Generate a caption of an image in human-readable language, using complete sentences. Computer Vision's algorithms generate captions based on the objects identified in the image. ++The version 4.0 image captioning model is a more advanced implementation and works with a wider range of input images. It is only available in the following geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US. -Identify and tag visual features in an image, from a set of thousands of recognizable objects, living things, scenery, and actions. When the tags are ambiguous or not common knowledge, the API response provides hints to clarify the context of the tag. Tagging isn't limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on. 
[Tag visual features](concept-tagging-images.md) +Version 4.0 also lets you use dense captioning, which generates detailed captions for individual objects that are found in the image. The API returns the bounding box coordinates (in pixels) of each object found in the image, plus a caption. You can use this functionality to generate descriptions of separate parts of an image. +[Image captions (v3.2)](concept-describing-images.md) [(v4.0 preview)](concept-describe-images-40.md) ### Detect objects -Object detection is similar to tagging, but the API returns the bounding box coordinates for each tag applied. For example, if an image contains a dog, cat and person, the Detect operation will list those objects together with their coordinates in the image. You can use this functionality to process further relationships between the objects in an image. It also lets you know when there are multiple instances of the same tag in an image. [Detect objects](concept-object-detection.md) +Object detection is similar to tagging, but the API returns the bounding box coordinates for each tag applied. For example, if an image contains a dog, cat and person, the Detect operation will list those objects together with their coordinates in the image. You can use this functionality to process further relationships between the objects in an image. It also lets you know when there are multiple instances of the same tag in an image. [Detect objects (v3.2)](concept-object-detection.md) [(v4.0 preview)](concept-object-detection-40.md) +### Tag visual features -### Detect brands +Identify and tag visual features in an image, from a set of thousands of recognizable objects, living things, scenery, and actions. When the tags are ambiguous or not common knowledge, the API response provides hints to clarify the context of the tag. Tagging isn't limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on. [Tag visual features (v3.2)](concept-tagging-images.md) [(v4.0 preview)](concept-tag-images-40.md) -Identify commercial brands in images or videos from a database of thousands of global logos. You can use this feature, for example, to discover which brands are most popular on social media or most prevalent in media product placement. [Detect brands](concept-brand-detection.md) -### Categorize an image +### Get the area of interest / smart crop -Identify and categorize an entire image, using a [category taxonomy](Category-Taxonomy.md) with parent/child hereditary hierarchies. Categories can be used alone, or with our new tagging models.<br/>Currently, English is the only supported language for tagging and categorizing images. [Categorize an image](concept-categorizing-images.md) +Analyze the contents of an image to return the coordinates of the *area of interest* that matches a specified aspect ratio. Computer Vision returns the bounding box coordinates of the region, so the calling application can modify the original image as desired. -### Describe an image +The version 4.0 smart cropping model is a more advanced implementation and works with a wider range of input images. It is only available in the following geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US. -Generate a description of an entire image in human-readable language, using complete sentences. 
Computer Vision's algorithms generate various descriptions based on the objects identified in the image. The descriptions are each evaluated and a confidence score generated. A list is then returned ordered from highest confidence score to lowest. [Describe an image](concept-describing-images.md) +[Generate a thumbnail (v3.2)](concept-generating-thumbnails.md) [(v4.0 preview)](concept-generate-thumbnails-40.md) +### Detect brands (v3.2 only) -### Detect faces +Identify commercial brands in images or videos from a database of thousands of global logos. You can use this feature, for example, to discover which brands are most popular on social media or most prevalent in media product placement. [Detect brands](concept-brand-detection.md) ++### Categorize an image (v3.2 only) ++Identify and categorize an entire image, using a [category taxonomy](Category-Taxonomy.md) with parent/child hereditary hierarchies. Categories can be used alone, or with our new tagging models.<br/>Currently, English is the only supported language for tagging and categorizing images. [Categorize an image](concept-categorizing-images.md) ++### Detect faces (v3.2 only) Detect faces in an image and provide information about each detected face. Computer Vision returns the coordinates, rectangle, gender, and age for each detected face. [Detect faces](concept-detecting-faces.md) You can also use the dedicated [Face API](./index-identity.yml) for these purposes. It provides more detailed analysis, such as facial identification and pose detection. -### Detect image types +### Detect image types (v3.2 only) Detect characteristics about an image, such as whether an image is a line drawing or the likelihood of whether an image is clip art. [Detect image types](concept-detecting-image-types.md) -### Detect domain-specific content +### Detect domain-specific content (v3.2 only) Use domain models to detect and identify domain-specific content in an image, such as celebrities and landmarks. For example, if an image contains people, Computer Vision can use a domain model for celebrities to determine if the people detected in the image are known celebrities. [Detect domain-specific content](concept-detecting-domain-content.md) -### Detect the color scheme +### Detect the color scheme (v3.2 only) Analyze color usage within an image. Computer Vision can determine whether an image is black & white or color and, for color images, identify the dominant and accent colors. [Detect the color scheme](concept-detecting-color-schemes.md) -### Get the area of interest / smart crop +### Moderate content in images (v3.2 only) -Analyze the contents of an image to return the coordinates of the *area of interest* that matches a specified aspect ratio. Computer Vision returns the bounding box coordinates of the region, so the calling application can modify the original image as desired. [Generate a thumbnail](concept-generating-thumbnails.md) +You can use Computer Vision to [detect adult content](concept-detecting-adult-content.md) in an image and return confidence scores for different classifications. The threshold for flagging content can be set on a sliding scale to accommodate your preferences. +## Image Retrieval (v4.0 preview only) -### Moderate content in images +The Image Retrieval APIs enable the _vectorization_ of images and text queries. They convert images to coordinates in a multi-dimensional vector space. Then, incoming text queries can also be converted to vectors, and images can be matched to the text based on semantic closeness. 
This allows the user to search a set of images using text, without the need to use image tags or other metadata. Semantic closeness often produces better results in search. -You can use Computer Vision to [detect adult content](concept-detecting-adult-content.md) in an image and return confidence scores for different classifications. The threshold for flagging content can be set on a sliding scale to accommodate your preferences. +These APIs are only available in the following geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US. -## Image requirements +[Image Retrieval](./concept-image-retrieval.md) -#### [Version 3.2](#tab/3-2) +## Background removal (v4.0 preview only) -Image Analysis works on images that meet the following requirements: +Image Analysis 4.0 (preview) offers the ability to remove the background of an image. This feature can either output an image of the detected foreground object with a transparent background, or a grayscale alpha matte image showing the opacity of the detected foreground object. [Background removal](./concept-background-removal.md) -- The image must be presented in JPEG, PNG, GIF, or BMP format-- The file size of the image must be less than 4 megabytes (MB)-- The dimensions of the image must be greater than 50 x 50 pixels and less than 16,000 x 16,000 pixels++## Image requirements #### [Version 4.0](#tab/4-0) Image Analysis works on images that meet the following requirements: - The file size of the image must be less than 20 megabytes (MB) - The dimensions of the image must be greater than 50 x 50 pixels and less than 16,000 x 16,000 pixels ++#### [Version 3.2](#tab/3-2) ++Image Analysis works on images that meet the following requirements: ++- The image must be presented in JPEG, PNG, GIF, or BMP format +- The file size of the image must be less than 4 megabytes (MB) +- The dimensions of the image must be greater than 50 x 50 pixels and less than 16,000 x 16,000 pixels + ## Data privacy and security As with all of the Cognitive Services, developers using the Computer Vision serv Get started with Image Analysis by following the quickstart guide in your preferred development language: -- [Quickstart: Computer Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md)+- [Quickstart (v4.0 preview): Computer Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library-40.md) +- [Quickstart (v3.2): Computer Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md) |
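For illustration, here's a minimal Python sketch of the v3.2 tagging, captioning, and object detection operations described above. It assumes the `azure-cognitiveservices-vision-computervision` package; the endpoint, key, and image URL are placeholders, not values from this article.

```python
# Minimal sketch: tag, describe, and detect objects with Image Analysis v3.2.
# Assumes: pip install azure-cognitiveservices-vision-computervision
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
KEY = "<your-key>"  # placeholder
IMAGE_URL = "https://example.com/sample.jpg"  # hypothetical image URL

client = ComputerVisionClient(ENDPOINT, CognitiveServicesCredentials(KEY))

# Tag visual features: each tag comes back with a confidence score.
tags = client.tag_image(IMAGE_URL)
for tag in tags.tags:
    print(f"tag: {tag.name} ({tag.confidence:.2f})")

# Describe the image: captions are ordered from highest to lowest confidence.
description = client.describe_image(IMAGE_URL)
for caption in description.captions:
    print(f"caption: {caption.text} ({caption.confidence:.2f})")

# Detect objects: each detected object includes bounding box coordinates in pixels.
objects = client.detect_objects(IMAGE_URL)
for obj in objects.objects:
    r = obj.rectangle
    print(f"object: {obj.object_property} at x={r.x}, y={r.y}, w={r.w}, h={r.h}")
```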
cognitive-services | Image Analysis Client Library 40 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/quickstarts-sdk/image-analysis-client-library-40.md | + + Title: "Quickstart: Image Analysis 4.0" ++description: Learn how to tag images in your application using Image Analysis 4.0 through a native client library in the language of your choice. ++++++ Last updated : 01/24/2023++ms.devlang: csharp, golang, java, javascript, python ++zone_pivot_groups: programming-languages-computer-vision-40 +keywords: computer vision, computer vision service +++# Quickstart: Image Analysis 4.0 ++Get started with the Image Analysis 4.0 REST API or client libraries to set up a basic image analysis script. The Analyze Image service provides you with AI algorithms for processing images and returning information on their visual features. Follow these steps to install a package to your application and try out the sample code. ++++++++++++ |
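As a rough sketch of what this quickstart sets up, the 4.0 preview REST API can also be called directly. The `imageanalysis:analyze` path, `api-version`, and response field names below are assumptions based on the 4.0 preview and may change; the quickstart itself is the authoritative reference.

```python
# Minimal sketch: call the Image Analysis 4.0 (preview) REST API directly.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-key>"  # placeholder

response = requests.post(
    f"{ENDPOINT}/computervision/imageanalysis:analyze",
    params={"api-version": "2023-02-01-preview", "features": "caption,tags"},
    headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
    json={"url": "https://example.com/sample.jpg"},  # hypothetical image URL
)
response.raise_for_status()
result = response.json()

# Caption and tags are returned under feature-specific result objects
# (field names assumed from the preview response shape).
print(result.get("captionResult", {}).get("text"))
for tag in result.get("tagsResult", {}).get("values", []):
    print(tag["name"], tag["confidence"])
```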
cognitive-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/whats-new.md | +## March 2023 ++### Computer Vision Image Analysis 4.0 SDK public preview ++Image Analysis 4.0 is now available through client library SDKs in C#, C++, and Python. This update also includes the Florence-powered image captioning model that achieved human parity performance. ++### Image Analysis V4.0 Captioning and Dense Captioning (public preview): ++"Caption" replaces "Describe" in V4.0 as the significantly improved image captioning feature rich with details and semantic understanding. Dense Captions provides more detail by generating one-sentence descriptions of up to 10 regions of the image in addition to describing the whole image. Dense Captions also returns bounding box coordinates of the described image regions. There's also a new gender-neutral parameter to allow customers to choose whether to enable probabilistic gender inference for alt-text and Seeing AI applications. Automatically deliver rich captions, accessible alt-text, SEO optimization, and intelligent photo curation to support digital content. [Image captions](./concept-describe-images-40.md). ++### Video summary and frame locator (public preview): +Search and interact with video content in the same intuitive way you think and write. Locate relevant content without the need for additional metadata. Available only in [Vision Studio](https://aka.ms/VisionStudio). +++### Image Analysis 4.0 model customization (public preview) ++You can now create and train your own [custom image classification and object detection models](./concept-model-customization.md), using Vision Studio or the v4.0 REST APIs. ++### Image Retrieval APIs (public preview) ++The [Image Retrieval APIs](./how-to/image-retrieval.md), part of the Image Analysis 4.0 API, enable the _vectorization_ of images and text queries. They let you convert images and text to coordinates in a multi-dimensional vector space. You can now search with natural language and find relevant images using vector similarity search. ++### Background removal APIs (public preview) ++As part of the Image Analysis 4.0 API, the [Background removal API](./concept-background-removal.md) lets you remove the background of an image. This operation can either output an image of the detected foreground object with a transparent background, or a grayscale alpha matte image showing the opacity of the detected foreground object. + ## October 2022 ### Computer Vision Image Analysis 4.0 public preview |
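To illustrate the vector-matching idea behind the Image Retrieval APIs, here's a minimal sketch that ranks images against a text query by cosine similarity. The real embeddings would come from the retrieval vectorize-image and vectorize-text operations; the toy three-dimensional vectors below are stand-ins for the actual high-dimensional embeddings.

```python
# Minimal sketch of vector similarity search: rank images against a text query.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Semantic closeness of two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embeddings returned by the Image Retrieval APIs.
image_vectors = {
    "beach.jpg": [0.9, 0.1, 0.2],
    "office.jpg": [0.1, 0.8, 0.3],
}
text_query_vector = [0.85, 0.15, 0.25]  # e.g., the vectorized query "sunny coastline"

# Sort images by semantic closeness to the text query, best match first.
ranked = sorted(
    image_vectors.items(),
    key=lambda item: cosine_similarity(item[1], text_query_vector),
    reverse=True,
)
print(ranked[0][0])  # best match: beach.jpg
```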
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/overview.md | keywords: image recognition, image identifier, image recognition app, custom vis Azure Custom Vision is an image recognition service that lets you build, deploy, and improve your own image identifier models. An image identifier applies labels to images, according to their visual characteristics. Each label represents a classification or object. Unlike the [Computer Vision](../computer-vision/overview.md) service, Custom Vision allows you to specify your own labels and train custom models to detect them. +> [!TIP] +> The Azure Computer Vision Image Analysis API now supports custom models. [Use Image Analysis 4.0](../computer-vision/how-to/model-customization.md) to create custom image identifier models using the latest technology from Azure. To migrate a Custom Vision project to the new Image Analysis 4.0 system, see the [Migration guide](../computer-vision/how-to/migrate-from-custom-vision.md). + You can use Custom Vision through a client library SDK, REST API, or through the [Custom Vision web portal](https://customvision.ai/). Follow a quickstart to get started. > [!div class="nextstepaction"] |
cognitive-services | Intent Recognition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/intent-recognition.md | keywords: intent recognition # What is intent recognition? -In this overview, you will learn about the benefits and capabilities of intent recognition. The Cognitive Services Speech SDK provides two ways to recognize intents, both described below. An intent is something the user wants to do: book a flight, check the weather, or make a call. Using intent recognition, your applications, tools, and devices can determine what the user wants to initiate or do based on options you define in the Intent Recognizer or LUIS. +In this overview, you will learn about the benefits and capabilities of intent recognition. The Cognitive Services Speech SDK provides two ways to recognize intents, both described below. An intent is something the user wants to do: book a flight, check the weather, or make a call. Using intent recognition, your applications, tools, and devices can determine what the user wants to initiate or do based on options you define in the Intent Recognizer or Conversational Language Understanding (CLU) model. ## Pattern matching -The Speech SDK provides an embedded pattern matcher that you can use to recognize intents in a very strict way. This is useful for when you need a quick offline solution. This works especially well when the user is going to be trained in some way or can be expected to use specific phrases to trigger intents. For example: "Go to floor seven", or "Turn on the lamp" etc. It is recommended to start here and if it no longer meets your needs, switch to using LUIS or a combination of the two. +The Speech SDK provides an embedded pattern matcher that you can use to recognize intents in a very strict way. This is useful for when you need a quick offline solution. This works especially well when the user is going to be trained in some way or can be expected to use specific phrases to trigger intents. For example: "Go to floor seven", or "Turn on the lamp" etc. It is recommended to start here and if it no longer meets your needs, switch to using [CLU](#conversational-language-understanding) or a combination of the two. Use pattern matching if: * You're only interested in matching strictly what the user said. These patterns match more aggressively than [conversational language understanding (CLU)](/azure/cognitive-services/language-service/conversational-language-understanding/overview). |
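A minimal sketch of the embedded pattern matcher follows, assuming the `azure-cognitiveservices-speech` Python package and placeholder credentials; the phrases and intent IDs are illustrative, not taken from the article.

```python
# Minimal sketch: offline intent recognition with simple phrase patterns.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription="<your-key>", region="<your-region>"  # placeholders
)
intent_recognizer = speechsdk.intent.IntentRecognizer(speech_config=speech_config)

# Simple phrase patterns; {floorName} captures an entity from the utterance.
intent_recognizer.add_intents([
    ("Go to floor {floorName}", "ChangeFloors"),
    ("Turn on the lamp", "TurnOnLamp"),
])

result = intent_recognizer.recognize_once()  # listens on the default microphone
if result.reason == speechsdk.ResultReason.RecognizedIntent:
    print(f'Recognized "{result.text}" as intent: {result.intent_id}')
```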
cognitive-services | Openai Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/openai-speech.md | + + Title: "Azure OpenAI speech to speech chat - Speech service" ++description: In this how-to guide, you can use Speech to converse with Azure OpenAI. The text recognized by the Speech service is sent to Azure OpenAI. The text response from Azure OpenAI is then synthesized by the Speech service. ++++++ Last updated : 03/07/2023++ms.devlang: python +keywords: speech to text, openai +++# Azure OpenAI speech to speech chat +++## Next steps ++- [Learn more about Speech](overview.md) +- [Learn more about Azure OpenAI](/azure/cognitive-services/openai/overview) |
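A minimal sketch of the loop this article describes: recognize speech, send the recognized text to Azure OpenAI, then synthesize the reply. The deployment name, API version, and credentials below are placeholder assumptions, not values from the article.

```python
# Minimal sketch: speech to text -> Azure OpenAI -> text to speech.
import azure.cognitiveservices.speech as speechsdk
import openai

openai.api_type = "azure"
openai.api_base = "https://<your-openai-resource>.openai.azure.com/"  # placeholder
openai.api_version = "2022-12-01"  # assumed API version
openai.api_key = "<your-openai-key>"  # placeholder

speech_config = speechsdk.SpeechConfig(
    subscription="<your-speech-key>", region="<your-region>"  # placeholders
)
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

# 1. Speech to text from the default microphone.
speech_result = recognizer.recognize_once()
print(f"You said: {speech_result.text}")

# 2. Send the recognized text to an Azure OpenAI completions deployment.
completion = openai.Completion.create(
    engine="text-davinci-003",  # hypothetical deployment name
    prompt=speech_result.text,
    max_tokens=100,
)
reply = completion.choices[0].text.strip()
print(f"Azure OpenAI replied: {reply}")

# 3. Text to speech for the reply.
synthesizer.speak_text_async(reply).get()
```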
cognitive-services | Model Lifecycle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/model-lifecycle.md | Use the table below to find which model versions are supported by each feature: | Language Detection | `2021-11-20`, `2022-10-01*` | | Entity Linking | `2021-06-01*` | | Named Entity Recognition (NER) | `2021-06-01*`, `2022-10-01-preview` |-| Personally Identifiable Information (PII) detection | `2020-07-01`, `2021-01-15*` | +| Personally Identifiable Information (PII) detection | `2020-07-01`, `2021-01-15*`, `2023-01-01-preview**` | | PII detection for conversations (Preview) | `2022-05-15-preview**` | | Question answering | `2021-10-01*` |-| Text Analytics for health | `2021-05-15`, `2022-03-01*`, `2022-08-15-preview**` | +| Text Analytics for health | `2021-05-15`, `2022-03-01*`, `2022-08-15-preview`, `2023-01-01-preview**` | | Key phrase extraction | `2021-06-01`, `2022-07-01`,`2022-10-01*` | | Document summarization - extractive only (preview) | `2022-08-31-preview**` | Use the table below to find which model versions are supported by each feature: As new training configs and new functionality become available, older and less accurate configs are retired; see the following timelines for config expiration: -New configs are being released every few months. So, training configs expiration of any publicly available config is **six months** after its release. If you have assigned a trained model to a deployment, this deployment expires after **twelve months** from the training config expiration. If your models are about to expire, you can retrain and redeploy your models with the latest training configuration version. +New configs are being released every few months. So, training configs expiration of any publicly available config is **six months** after its release. If you've assigned a trained model to a deployment, this deployment expires after **twelve months** from the training config expiration. If your models are about to expire, you can retrain and redeploy your models with the latest training configuration version. -After training config version expires, API calls will return an error when called or used if called with an expired config version. By default, training requests will use the latest available training config version. To change the config version, use `trainingConfigVersion` when submitting a training job and assign the version you want. +After a training config version expires, API calls will return an error when called with an expired config version. By default, training requests use the latest available training config version. To change the config version, use `trainingConfigVersion` when submitting a training job and assign the version you want. > [!Tip] > It's recommended to use the latest supported config version -You can train and deploy a custom AI model from the date of training config version release, up until the **Training config expiration** date. After this date, you will have to use another supported training config version for submitting any training or deployment jobs. +You can train and deploy a custom AI model from the date of training config version release, up until the **Training config expiration** date. After this date, you'll have to use another supported training config version for submitting any training or deployment jobs. Deployment expiration is when your deployed model will be unavailable to be used for prediction. 
Use the table below to find which model versions are supported by each feature: When you're making API calls to the following features, you need to specify the `API-VERSION` you want to use to complete your request. It's recommended to use the latest available API versions. -If you're using the [Language Studio](https://aka.ms/languageStudio) for building your project you will be using the latest API version available. If you need to use another API version this is only available directly through APIs. +If you're using [Language Studio](https://aka.ms/languageStudio) for your projects, you'll use the latest API version available. Other API versions are only available through the REST APIs and client libraries. -Use the table below to find which API versions are supported by each feature: +Use the following table to find which API versions are supported by each feature: | Feature | Supported versions | Latest Generally Available version | Latest preview version | |--|||| |
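As an illustration of where these versions appear in a request, the sketch below sets the API version as a query parameter and the model version in the request body. The request shape assumes the generally available `2022-05-01` analyze-text API; the endpoint and key are placeholders.

```python
# Minimal sketch: specify the API version and model version in a Language request.
import requests

ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-key>"  # placeholder

response = requests.post(
    f"{ENDPOINT}/language/:analyze-text",
    params={"api-version": "2022-05-01"},  # the API version for the request
    headers={"Ocp-Apim-Subscription-Key": KEY},
    json={
        "kind": "EntityRecognition",
        "parameters": {"modelVersion": "latest"},  # or a dated model version
        "analysisInput": {
            "documents": [{"id": "1", "language": "en", "text": "I visited Seattle."}]
        },
    },
)
print(response.json())
```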
cognitive-services | Connect Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/tutorials/connect-services.md | This tutorial will include creating a **chit chat** knowledge base and **email c ## Prerequisites -- Create a [Language resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) **and select the custom question answering feature** in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.+- Create a [Language resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) **and select the custom question answering feature** in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**. - You will need the key and endpoint from the resource you create to connect your bot to the API. You'll paste your key and endpoint into the code below later in the tutorial. Copy them from the **Keys and Endpoint** tab in your resource. - When you enable custom question answering, you must select an Azure search resource to connect to. - Make sure the region of your resource is supported by [conversational language understanding](../../conversational-language-understanding/service-limits.md#regional-availability). |
cognitive-services | Power Automate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/tutorials/power-automate.md | In this tutorial, you'll create a Power Automate flow to extract entities found ## Prerequisites * Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)-* <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics" title="Create a Language resource" target="_blank">A Language resource </a> - * (optional) A trained model if you're using a custom capability such as [custom NER](../custom-named-entity-recognition/overview.md), [custom text classification](../custom-text-classification/overview.md), or [conversational language understanding](../conversational-language-understanding/overview.md). - * You will need the key and endpoint from your Language resource to authenticate your Power Automate flow. +* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics" title="Create a Language resource" target="_blank">create a Language resource </a> in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**. + * You will need the key and endpoint from the resource you create to connect your application to the API. You'll paste your key and endpoint into the code below later in the quickstart. + * You can use the free pricing tier (`Free F0`) to try the service, and upgrade later to a paid tier for production. +* Optional for this tutorial: A trained model is required if you're using a custom capability such as [custom NER](../custom-named-entity-recognition/overview.md), [custom text classification](../custom-text-classification/overview.md), or [conversational language understanding](../conversational-language-understanding/overview.md). ## Create a Power Automate flow +For this tutorial, you will create a flow that extracts named entities from text. + 1. [Sign in to power automate](https://make.powerautomate.com/) -2. From the left side menu, select **My flows** and create a **Automated cloud flow** +1. From the left side menu, select **My flows**. Then select **New flow** > **Automated cloud flow**. :::image type="content" source="../media/create-flow.png" alt-text="A screenshot of the menu for creating an automated cloud flow." lightbox="../media/create-flow.png"::: -3. Enter a name your flow. For example `Languageflow`. +1. Enter a name for your flow such as `LanguageFlow`. Then select **Skip** to continue without choosing a trigger. :::image type="content" source="../media/language-flow.png" alt-text="A screenshot of automated cloud flow screen." lightbox="../media/language-flow.png"::: -4. Start by selecting **Manually trigger flow**. +1. Under **Triggers** select **Manually trigger a flow**. :::image type="content" source="../media/trigger-flow.png" alt-text="A screenshot of how to manually trigger a flow." lightbox="../media/trigger-flow.png"::: -5. To add a Language service connector, search for **Azure Language**. +1. Select **+ New step** to begin adding a Language service connector. ++1. Under **Choose an operation** search for **Azure Language**. Then select **Azure Cognitive Service for Language**. This will narrow down the list of actions to only those that are available for Language. :::image type="content" source="../media/language-connector.png" alt-text="A screenshot of An Azure language connector." lightbox="../media/language-connector.png"::: -6. 
For this tutorial, you will create a flow that extracts named entities from text. Search for **Named entity recognition**, and select the connector. +1. Under **Actions** search for **Named Entity Recognition**, and select the connector. :::image type="content" source="../media/entity-connector.png" alt-text="A screenshot of a named entity recognition connector." lightbox="../media/entity-connector.png"::: -7. Add endpoint and key for your Language resource, which will be used for authentication. You can find your key and endpoint by navigating to your resource in the [Azure portal](https://portal.azure.com), and selecting **Keys and endpoint** from the left navigation menu. +1. Get the endpoint and key for your Language resource, which will be used for authentication. You can find your key and endpoint by navigating to your resource in the [Azure portal](https://portal.azure.com), and selecting **Keys and Endpoint** from the left side menu. :::image type="content" source="../media/azure-portal-resource-credentials.png" alt-text="A screenshot of A language resource key and endpoint in the Azure portal." lightbox="../media/azure-portal-resource-credentials.png"::: -8. Once you have your key and endpoint, add it to the connector in Power Automate. +1. Once you have your key and endpoint, add it to the connector in Power Automate. :::image type="content" source="../media/language-auth.png" alt-text="A screenshot of adding the language key and endpoint to the Power Automate flow." lightbox="../media/language-auth.png"::: -9. Add the data in the connector +1. Add the data in the connector :::image type="content" source="../media/connector-data.png" alt-text="A screenshot of data being added to the connector." lightbox="../media/connector-data.png"::: > [!NOTE] > You will need deployment name and project name if you are using custom language capability. -9. From the top navigation menu, save the flow and select **Test the flow**. In the window that appears, select **Test**. +1. From the top navigation menu, save the flow and select **Test the flow**. In the window that appears, select **Test**. :::image type="content" source="../media/test-connector.png" alt-text="A screenshot of how to run the flow." lightbox="../media/test-connector.png"::: -10. After the flow runs, you will see the response in the **outputs** field. +1. After the flow runs, you will see the response in the **outputs** field. :::image type="content" source="../media/response-connector.png" alt-text="A screenshot of flow response." lightbox="../media/response-connector.png"::: |
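The connector wraps the same Named Entity Recognition feature that's available programmatically. As a point of comparison, here's a minimal sketch using the `azure-ai-textanalytics` package with placeholder credentials.

```python
# Minimal sketch: call Named Entity Recognition directly with the SDK.
# Assumes: pip install azure-ai-textanalytics
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-key>"  # placeholder

client = TextAnalyticsClient(endpoint=ENDPOINT, credential=AzureKeyCredential(KEY))

documents = ["I had a wonderful trip to Seattle last week."]
result = client.recognize_entities(documents)[0]

# Each entity comes back with a category and a confidence score.
for entity in result.entities:
    print(f"{entity.text} -> {entity.category} ({entity.confidence_score:.2f})")
```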
cognitive-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md | Azure Cognitive Service for Language is updated on an ongoing basis. To stay up- * China East 2 (Authoring and Prediction) * China North 2 (Prediction) * New model evaluation updates for Conversational language understanding and Orchestration workflow.+* New model version ('2023-01-01-preview') for Text Analytics for health featuring new [entity categories](./text-analytics-for-health/concepts/health-entity-categories.md) for social determinants of health ## December 2022 |
communication-services | European Union Data Boundary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/european-union-data-boundary.md | + + Title: European Union Data Boundary compliance for Azure Communication Services +description: Learn about how Azure Communication Services meets European Union data handling compliance laws +++++ Last updated : 02/01/2023++++++# European Union Data Boundary (EUDB) ++Azure Communication Services complies with European Union Data Boundary (EUDB) [announced by Microsoft Dec 15, 2022](https://blogs.microsoft.com/eupolicy/2022/12/15/eu-data-boundary-cloud-rollout/). ++This boundary defines data residency and processing rules for resources based on the data location selected when creating a new communication resource. When a data location for a resource is one of the European countries in scope of EUDB, then all processing and storage of personal data remain within the European Union. The EU Data Boundary consists of the countries in the European Union (EU) and the European Free Trade Association (EFTA). The EU Countries are Austria, Belgium, Bulgaria, Croatia, Cyprus, Czechia, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, and Sweden; and the EFTA countries are Liechtenstein, Iceland, Norway, and Switzerland. ++## Calling ++Calls and meetings can be established in various ways by various users. We define a few terms: +- Organizer: person who created the meeting, for example, set it up using Outlook +- Initiator: the first person who joins the meeting (the meeting only exists as a calendar item before the first person joins it) +- Guest: a participant who isn't a member of the tenant of the Organizer. May include a member of a different tenant, PSTN (dial-in) user, etc. (Note that this use of Guest is specific to this description and broader than used within IC3 generally, but useful for the discussion here) +- Call: refers to a 1:1 call and/or a Group call to a larger group. For the purposes of this conversation, they should be the same. ++For EU communication resources, when the organizer, initiator, or guests join a call from the EU, processing and storage of personal data will be limited to the EU. ++## Messaging ++All threads created from an EU resource will process and store personal data in the EU. +++## Other resources ++For more information, please refer to the Microsoft documentation on the EUDB: +- [Microsoft EU Data Boundary Overview](https://www.microsoft.com/en-us/trust-center/privacy/european-data-boundary-eudb) |
communication-services | Privacy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/privacy.md | Azure Communication Services is committed to helping our customers meet their pr ## Data residency -When [creating](../quickstarts/create-communication-resource.md) an Azure Communication Services resource, you specify a **geography** (not an Azure data center). All chat messages, and resource data stored by Communication Services at rest will be retained in that geography, in a data center selected internally by Communication Services. Data may transit or be processed in other geographies. These global endpoints are necessary to provide a high-performance, low-latency experience to end-users no matter their location. +When [creating](../quickstarts/create-communication-resource.md) an Azure Communication Services resource, you specify a **geography** (not an Azure data center). All chat messages, and resource data stored by Communication Services at rest are retained in that geography, in a data center selected internally by Communication Services. Data **may** transit or be processed in other geographies. These global endpoints are necessary to provide a high-performance, low-latency experience to end-users no matter their location. The list of geographies you can choose from includes: - Africa Azure Communication Services only collects diagnostic data required to deliver t ## Data residency and events -Any Event Grid system topic configured with Azure Communication Services will be created in a global location. To support reliable delivery, a global Event Grid system topic may store the event data in any Microsoft data center. When you configure Event Grid with Azure Communication Services, you're delivering your event data to Event Grid, which is an Azure resource under your control. While Azure Communication Services may be configured to utilize Azure Event Grid, you're responsible for managing your Event Grid resource and the data stored within it. +Any Event Grid system topic configured with Azure Communication Services is created in a global location. To support reliable delivery, a global Event Grid system topic may store the event data in any Microsoft data center. When you configure Event Grid with Azure Communication Services, you're delivering your event data to Event Grid, which is an Azure resource under your control. While Azure Communication Services may be configured to utilize Azure Event Grid, you're responsible for managing your Event Grid resource and the data stored within it. ## Relating humans to Azure Communication Services identities Your application manages the relationship between human users and Communication There are two categories of Communication Service data: - **API Data.** This data is created and managed by Communication Service APIs, a typical example being Chat messages managed through Chat APIs.-- **Azure Monitor Logs** This data is created by the service and managed through the Azure Monitor data platform. This data includes telemetry and metrics to help you understand your Communication Services usage. This is not managed by Communication Service APIs.+- **Azure Monitor Logs** This data is created by the service and managed through the Azure Monitor data platform. This data includes telemetry and metrics to help you understand your Communication Services usage. 
## API data ### Identities -Azure Communication Services maintains a directory of identities, use the [DeleteIdentity](/rest/api/communication/communication-identity/delete?tabs=HTTP) API to remove them. Deleting an identity will revoke all associated access tokens and delete their chat messages. For more information on how to remove an identity [see this page](../quickstarts/identity/access-tokens.md). +Azure Communication Services maintains a directory of identities; use the [DeleteIdentity](/rest/api/communication/communication-identity/delete?tabs=HTTP) API to remove them. Deleting an identity revokes all associated access tokens and deletes their chat messages. For more information on how to remove an identity [see this page](../quickstarts/identity/access-tokens.md). - DeleteIdentity Azure Communication Services maintains a directory of phone numbers associated w ### Chat -Chat threads and messages are retained until explicitly deleted. Use [Chat APIs](/rest/api/communication/chat/chatthread) to get, list, update, and delete messages. +Chat threads and messages are kept for 90 days unless explicitly deleted by the customer sooner due to their internal policies. Customers that require the option of keeping messages longer need to submit [a request to Azure Support](../../azure-portal/supportability/how-to-create-azure-support-request.md). ++Use [Chat APIs](/rest/api/communication/chat/chatthread) to get, list, update, and delete messages. - `Get Thread` - `Get Message` Call recordings are stored temporarily in the same geography that was selected f ## Azure Monitor and Log Analytics -Azure Communication Services will feed into Azure Monitor logging data for understanding operational health and utilization of the service. Some of these logs include Communication Service identities and phone numbers as field data. To delete any potentially personal data [use these procedures for Azure Monitor](../../azure-monitor/logs/personal-data-mgmt.md). You may also want to configure [the default retention period for Azure Monitor](../../azure-monitor/logs/data-retention-archive.md). +Azure Communication Services feeds into Azure Monitor logging data for understanding operational health and utilization of the service. Some of these logs include Communication Service identities and phone numbers as field data. To delete any potentially personal data, [use these procedures for Azure Monitor](../../azure-monitor/logs/personal-data-mgmt.md). You may also want to configure [the default retention period for Azure Monitor](../../azure-monitor/logs/data-retention-archive.md). ## Additional resources |
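For customers deleting messages ahead of the retention window, here's a minimal sketch using the `azure-communication-chat` Python package; the endpoint, user access token, and thread ID are placeholders.

```python
# Minimal sketch: explicitly delete chat messages in a thread.
# Assumes: pip install azure-communication-chat
from azure.communication.chat import (
    ChatClient,
    ChatMessageType,
    CommunicationTokenCredential,
)

ENDPOINT = "https://<your-acs-resource>.communication.azure.com"  # placeholder
chat_client = ChatClient(ENDPOINT, CommunicationTokenCredential("<user-access-token>"))

thread_client = chat_client.get_chat_thread_client("<thread-id>")  # placeholder

# Delete each text message in the thread explicitly.
for message in list(thread_client.list_messages()):
    if message.type == ChatMessageType.TEXT:
        thread_client.delete_message(message.id)
```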
container-apps | Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/metrics.md | -Azure Monitor collects metric data from your container app at regular interval to help you gain insights into the performance and health of your container app. +Azure Monitor collects metric data from your container app at regular intervals to help you gain insights into the performance and health of your container app. The metrics explorer in the Azure portal allows you to visualize the data. You can also retrieve raw metric data through the [Azure CLI](/cli/azure/monitor/metrics) and Azure [PowerShell cmdlets](/powershell/module/az.monitor/get-azmetric). You can add more scopes to view metrics across multiple container apps. :::image type="content" source="media/observability/metrics-across-apps.png" alt-text="Screenshot of the metrics explorer that shows a chart with metrics for multiple container apps."::: > [!div class="nextstepaction"]-> [Set up alerts in Azure Container Apps](alerts.md) +> [Set up alerts in Azure Container Apps](alerts.md) |
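A minimal sketch of pulling raw metric data programmatically follows, assuming the `azure-monitor-query` and `azure-identity` packages; the `Requests` metric name and the resource ID are placeholder assumptions, so check the metrics available for your container app.

```python
# Minimal sketch: query raw metric data for a container app.
# Assumes: pip install azure-monitor-query azure-identity
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())

# Full Azure resource ID of the container app (placeholder values).
resource_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.App/containerApps/<app-name>"
)

# Retrieve the last hour of the assumed "Requests" metric.
response = client.query_resource(
    resource_id, metric_names=["Requests"], timespan=timedelta(hours=1)
)
for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.total)
```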
cosmos-db | Quickstart Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-portal.md | Title: Quickstart - Create Azure Cosmos DB resources from the Azure portal -description: This quickstart shows how to create an Azure Cosmos DB database, container, and items by using the Azure portal. +description: Use this quickstart to learn how to create an Azure Cosmos DB database, container, and items by using the Azure portal. Previously updated : 08/26/2021 Last updated : 03/03/2023 # Quickstart: Create an Azure Cosmos DB account, database, container, and items from the Azure portal [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)] > [!div class="op_single_selector"]-> * [Azure portal](quickstart-portal.md) -> * [.NET](quickstart-dotnet.md) -> * [Java](quickstart-java.md) -> * [Node.js](quickstart-nodejs.md) -> * [Python](quickstart-python.md) +> - [Azure portal](quickstart-portal.md) +> - [.NET](quickstart-dotnet.md) +> - [Java](quickstart-java.md) +> - [Node.js](quickstart-nodejs.md) +> - [Python](quickstart-python.md) > -Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can use Azure Cosmos DB to quickly create and query key/value databases, document databases, and graph databases, all of which benefit from the global distribution and horizontal scale capabilities at the core of Azure Cosmos DB. +Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can use Azure Cosmos DB to quickly create and query key/value databases, document databases, and graph databases. This approach benefits from the global distribution and horizontal scale capabilities at the core of Azure Cosmos DB. -This quickstart demonstrates how to use the Azure portal to create an Azure Cosmos DB [API for NoSQL](../introduction.md) account, create a document database, and container, and add data to the container. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb) +This quickstart demonstrates how to use the Azure portal to create an Azure Cosmos DB [API for NoSQL](../introduction.md) account. In that account, you create a document database, and container, and add data to the container. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). ## Prerequisites -An Azure subscription or free Azure Cosmos DB trial account -- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] +An Azure subscription or free Azure Cosmos DB trial account. ++- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)] An Azure subscription or free Azure Cosmos DB trial account [!INCLUDE [cosmos-db-create-dbaccount](../includes/cosmos-db-create-dbaccount.md)] -## <a id="create-container-database"></a>Add a database and a container +## <a id="create-container-database"></a>Add a database and a container You can use the Data Explorer in the Azure portal to create a database and container. -1. Select **Data Explorer** from the left navigation on your Azure Cosmos DB account page, and then select **New Container**. +1. Select **Data Explorer** from the left navigation on your Azure Cosmos DB account page, and then select **New Container** > **New Container**. - You may need to scroll right to see the **Add Container** window. 
+ You may need to scroll right to see the **New Container** window. - :::image type="content" source="./media/create-cosmosdb-resources-portal/add-database-container.png" alt-text="The Azure portal Data Explorer, Add Container pane"::: + :::image type="content" source="./media/quickstart-portal/add-database-container.png" alt-text="Screenshot shows the Azure portal Data Explorer page with the New Container pane open." lightbox="./media/quickstart-portal/add-database-container.png"::: -1. In the **Add container** pane, enter the settings for the new container. +1. In the **New Container** pane, enter the settings for the new container. - |Setting|Suggested value|Description + |Setting|Suggested value|Description| ||||- |**Database ID**|ToDoList|Enter *ToDoList* as the name for the new database. Database names must contain from 1 through 255 characters, and they cannot contain `/, \\, #, ?`, or a trailing space. Check the **Share throughput across containers** option, it allows you to share the throughput provisioned on the database across all the containers within the database. This option also helps with cost savings. | - | **Database throughput**| You can provision **Autoscale** or **Manual** throughput. Manual throughput allows you to scale RU/s yourself whereas autoscale throughput allows the system to scale RU/s based on usage. Select **Manual** for this example. <br><br> Leave the throughput at 400 request units per second (RU/s). If you want to reduce latency, you can scale up the throughput later by estimating the required RU/s with the [capacity calculator](estimate-ru-with-capacity-planner.md).<br><br>**Note**: This setting is not available when creating a new container in a serverless account. | - |**Container ID**|Items|Enter *Items* as the name for your new container. Container IDs have the same character requirements as database names.| + |**Database id**|ToDoList|Enter *ToDoList* as the name for the new database. Database names must contain 1-255 characters, and they can't contain `/`, `\`, `#`, `?`, or a trailing space. Check the **Share throughput across containers** option. It allows you to share the throughput provisioned on the database across all the containers within the database. This option also helps with cost savings. | + | **Database throughput**|**Autoscale** or **Manual**|Manual throughput allows you to scale request units per second (RU/s) yourself whereas autoscale throughput allows the system to scale RU/s based on usage. Select **Manual** for this example.| + |**Database Max RU/s**| 400 RU/s|If you want to reduce latency, you can scale up the throughput later by estimating the required RU/s with the [capacity calculator](estimate-ru-with-capacity-planner.md). **Note**: This setting isn't available when creating a new container in a serverless account. | + |**Container id**|Items|Enter *Items* as the name for your new container. Container IDs have the same character requirements as database names.| |**Partition key**| /category| The sample described in this article uses */category* as the partition key.| - Don't add **Unique keys** or turn on **Analytical store** for this example. Unique keys let you add a layer of data integrity to the database by ensuring the uniqueness of one or more values per partition key. For more information, see [Unique keys in Azure Cosmos DB.](../unique-keys.md) [Analytical store](../analytical-store-introduction.md) is used to enable large-scale analytics against operational data without any impact to your transactional workloads. 
+ Don't add **Unique keys** or turn on **Analytical store** for this example. ++ - Unique keys let you add a layer of data integrity to the database by ensuring the uniqueness of one or more values per partition key. For more information, see [Unique keys in Azure Cosmos DB](../unique-keys.md). + - [Analytical store](../analytical-store-introduction.md) is used to enable large-scale analytics against operational data without any effect on your transactional workloads. 1. Select **OK**. The Data Explorer displays the new database and the container that you created. You can use the Data Explorer in the Azure portal to create a database and conta Add data to your new database using Data Explorer. -1. In **Data Explorer**, expand the **ToDoList** database, and expand the **Items** container. Next, select **Items**, and then select **New Item**. - - :::image type="content" source="./media/quickstart-portal/azure-cosmosdb-new-document.png" alt-text="Create new documents in Data Explorer in the Azure portal"::: - +1. In **Data Explorer**, expand the **ToDoList** database, and expand the **Items** container. ++1. Next, select **Items**, and then select **New Item**. ++ :::image type="content" source="./media/quickstart-portal/azure-cosmosdb-new-document.png" alt-text="Screenshot shows the New Item option in Data Explorer in the Azure portal." lightbox="./media/quickstart-portal/azure-cosmosdb-new-document.png"::: + 1. Add the following structure to the document on the right side of the **Documents** pane: - ```json - { - "id": "1", - "category": "personal", - "name": "groceries", - "description": "Pick up apples and strawberries.", - "isComplete": false - } - ``` + ```json + { + "id": "1", + "category": "personal", + "name": "groceries", + "description": "Pick up apples and strawberries.", + "isComplete": false + } + ``` 1. Select **Save**.- - :::image type="content" source="./media/quickstart-portal/azure-cosmosdb-save-document.png" alt-text="Copy in json data and select Save in Data Explorer in the Azure portal"::: - ++ :::image type="content" source="./media/quickstart-portal/azure-cosmosdb-save-document.png" alt-text="Screenshot shows where you can copy json data and select Save in Data Explorer in the Azure portal." lightbox="./media/quickstart-portal/azure-cosmosdb-save-document.png"::: + 1. Select **New Item** again, and create and save another document with a unique `id`, and any other properties and values you want. Your documents can have any structure, because Azure Cosmos DB doesn't impose any schema on your data. ## Query your data ## Clean up resources Add data to your new database using Data Explorer. If you wish to delete just the database and use the Azure Cosmos DB account in future, you can delete the database with the following steps: -* Go to your Azure Cosmos DB account. -* Open **Data Explorer**, right click on the database that you want to delete and select **Delete Database**. -* Enter the Database ID/database name to confirm the delete operation. +1. Go to your Azure Cosmos DB account. +1. Open **Data Explorer**, select the **More** (**...**) for the database that you want to delete and select **Delete Database**. +1. Enter the database ID or database name to confirm the delete operation. ## Next steps -In this quickstart, you learned how to create an Azure Cosmos DB account, create a database and container using the Data Explorer. You can now import additional data to your Azure Cosmos DB account. +You can now import more data to your Azure Cosmos DB account. 
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. -* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md) -* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) +- [Convert the number of vCores or vCPUs in your nonrelational database to Azure Cosmos DB RU/s](../convert-vcore-to-request-unit.md) +- [Estimate RU/s using the Azure Cosmos DB capacity planner - API for NoSQL](estimate-ru-with-capacity-planner.md) |
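As a follow-on to the portal steps, here's a minimal sketch of querying the ToDoList/Items container from code, assuming the `azure-cosmos` package; the account URL and key are placeholders.

```python
# Minimal sketch: query the container created in the quickstart.
# Assumes: pip install azure-cosmos
from azure.cosmos import CosmosClient

client = CosmosClient(
    url="https://<your-account>.documents.azure.com:443/",  # placeholder
    credential="<your-key>",  # placeholder
)
container = client.get_database_client("ToDoList").get_container_client("Items")

# /category is the partition key, so this query targets a single partition.
items = container.query_items(
    query="SELECT * FROM c WHERE c.category = @category",
    parameters=[{"name": "@category", "value": "personal"}],
    partition_key="personal",
)
for item in items:
    print(item["name"], item["isComplete"])
```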
cost-management-billing | Group Filter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/group-filter.md | Title: Group and filter options in Cost Management + Title: Group and filter options in Cost analysis and budgets -description: This article explains how to use group and filter options in Cost Management. +description: This article explains how to use group and filter options. Previously updated : 10/12/2021 Last updated : 03/06/2023 -+ -# Group and filter options in Cost analysis +# Group and filter options in Cost analysis and budgets Cost analysis has many grouping and filtering options. This article helps you understand when to use them. To watch a video about grouping and filtering options, watch the [Cost Managemen ## Group and filter properties -The following table lists some of the most common grouping and filtering options available in Cost analysis and when you should use them. +The following table lists some of the most common grouping and filtering options available in Cost analysis and budgets. See the notes column to learn when to use them. ++Some filters are only available to specific offers. For example, a billing profile isn't available for an enterprise agreement. For more information, see [Supported Microsoft Azure offers](understand-cost-mgt-data.md#supported-microsoft-azure-offers). | Property | When to use | Notes | | | | | | **Availability zones** | Break down AWS costs by availability zone. | Applicable only to AWS scopes and management groups. Azure data doesn't include availability zone and will show as **No availability zone**. | | **Billing period** | Break down PAYG costs by the month that they were, or will be, invoiced. | Use **Billing period** to get a precise representation of invoiced PAYG charges. Include two extra days before and after the billing period if filtering down to a custom date range. Limiting to the exact billing period dates won't match the invoice. Will show costs from all invoices in the billing period. Use **Invoice ID** to filter down to a specific invoice. Applicable only to PAYG subscriptions because EA and MCA are billed by calendar months. EA/MCA accounts can use calendar months in the date picker or monthly granularity to accomplish the same goal. |-| **Charge type** | Break down usage, purchase, refund, and unused reservation costs. | Reservation purchases and refunds are available only when using actual costs and not when using amortized costs. Unused reservation costs are available only when looking at amortized costs. | +| **BillingProfileId** | The ID of the billing profile that is billed for the subscription's charges. | Unique identifier of the EA enrollment, pay-as-you-go subscription, MCA billing profile, or AWS consolidated account.| +| **BillingProfileName** | Name of the EA enrollment, pay-as-you-go subscription, MCA billing profile, or AWS consolidated account. | Name of the EA enrollment, pay-as-you-go subscription, MCA billing profile, or AWS consolidated account.| +| **Charge type** | Break down usage, purchase, refund, and unused reservation and savings plan costs. | Reservation purchases, savings plan purchases, and refunds are available only when using actual costs and not when using amortized costs. Unused reservation and savings plan costs are available only when looking at amortized costs. | | **Department** | Break down costs by EA department. | Available only for EA and management groups. 
PAYG subscriptions don't have a department and will show as **No department** or **unassigned**. | | **Enrollment account** | Break down costs by EA account owner. | Available only for EA billing accounts, departments, and management groups. PAYG subscriptions don't have EA enrollment accounts and will show as **No enrollment account** or **unassigned**. |-| **Frequency** | Break down usage-based, one-time, and recurring costs. | | +| **Frequency** | Break down usage-based, one-time, and recurring costs. | Indicates whether a charge is expected to repeat. Charges can either happen once **OneTime**, repeat on a monthly or yearly basis **Recurring**, or be based on usage **UsageBased**.| | **Invoice ID** | Break down costs by billed invoice. | Unbilled charges don't have an invoice ID yet and EA costs don't include invoice details and will show as **No invoice ID**. |+| **InvoiceSectionId**| Unique identifier for the MCA invoice section. | Unique identifier for the EA department or MCA invoice section. | +| **InvoiceSectionName**| Name of the invoice section. | Name of the EA department or MCA invoice section. | | **Location** | Break down costs by resource location or region. | Purchases and Marketplace usage may be shown as **unassigned**, or **No resource location**. | | **Meter** | Break down costs by usage meter. | Purchases and Marketplace usage will show as **unassigned** or **No meter**. Refer to **Charge type** to identify purchases and **Publisher type** to identify Marketplace charges. | | **Operation** | Break down AWS costs by operation. | Applicable only to AWS scopes and management groups. Azure data doesn't include operation and will show as **No operation** - use **Meter** instead. | | **Pricing model** | Break down costs by on-demand, reservation, or spot usage. | Purchases show as **OnDemand**. If you see **Not applicable**, group by **Reservation** to determine whether the usage is reservation or on-demand usage and **Charge type** to identify purchases.+| **PartNumber** | The identifier used to get specific meter pricing. | | +| **Product** | Name of the product. | | +| **ProductOrderId** | Unique identifier for the product order | | +| **ProductOrderName** | Unique name for the product order. | | | **Provider** | Break down costs by the provider type: Azure, Microsoft 365, Dynamics 365, AWS, and so on. | Identifier for product and line of business. | | **Publisher type** | Break down Microsoft, Azure, AWS, and Marketplace costs. | Values are **Microsoft** for MCA accounts and **Azure** for EA and pay-as-you-go accounts. | | **Reservation** | Break down costs by reservation. | Any usage or purchases that aren't associated with a reservation will show as **No reservation** or **No values**. Group by **Publisher type** to identify other Azure, AWS, or Marketplace purchases. |+| **ReservationId**| Unique identifier for the purchased reservation instance. | In actual costs, use ReservationID to know which reservation the charge is for. | +| **ReservationName**| Name of the purchased reservation instance. | In actual costs, use ReservationName to know which reservation the charge is for. | | **Resource** | Break down costs by resource. | Marketplace purchases show as **Other Marketplace purchases** and Azure purchases, like Reservations and Support charges, show as **Other Azure purchases**. Group by or filter on **Publisher type** to identify other Azure, AWS, or Marketplace purchases. | | **Resource group** | Break down costs by resource group. 
| Purchases, tenant resources not associated with subscriptions, subscription resources not deployed to a resource group, and classic resources don't have a resource group and will show as **Other Marketplace purchases**, **Other Azure purchases**, **Other tenant resources**, **Other subscription resources**, **$system**, or **Other charges**. |-| **Resource type** | Break down costs by resource type. | Purchases and classic services don't have an Azure Resource Manager resource type and will show as **others**, **classic services**, or **No resource type**. | +| **ResourceId**| Unique identifier of the [Azure Resource Manager](/rest/api/resources/resources) resource. | | +| **Resource type** | Break down costs by resource type. | Type of resource instance. Not all charges come from deployed resources. Charges that don't have a resource type will be shown as null or empty, **Others**, or **Not applicable**. For example, purchases and classic services will show as **others**, **classic services**, or **No resource type**. | +| **ServiceFamily**| Type of Azure service. For example, Compute, Analytics, and Security. | | +| **ServiceName**| Name of the Azure service. | Name of the classification category for the meter. For example, Cloud services and Networking. | | **Service name** or **Meter category** | Break down cost by Azure service. | Purchases and Marketplace usage will show as **No service name** or **unassigned**. | | **Service tier** or **Meter subcategory** | Break down cost by Azure usage meter subclassification. | Purchases and Marketplace usage will be empty or show as **unassigned**. | | **Subscription** | Break down costs by Azure subscription and AWS linked account. | Purchases and tenant resources may show as **No subscription**. | | **Tag** | Break down costs by tag values for a specific tag key. | Purchases, tenant resources not associated with subscriptions, subscription resources not deployed to a resource group, and classic resources cannot be tagged and will show as **Tags not supported**. Services that don't include tags in usage data will show as **Tags not available**. Any remaining cases where tags aren't specified on a resource will show as **Untagged**. Learn more about [tags support for each resource type](../../azure-resource-manager/management/tag-support.md). |+| **UnitOfMeasure**| The billing unit of measure for the service. For example, compute services are billed per hour. | | For more information about terms, see [Understand the terms used in the Azure usage and charges file](../understand/understand-usage.md). ## Publisher Type value changes -In Cost Management, the PublisherType field indicates whether charges are for Microsoft, Marketplace, or AWS (if you have a [Cross Cloud connector](aws-integration-set-up-configure.md) configured) products. +In Cost Management, the `PublisherType` field indicates whether charges are for Microsoft, Marketplace, or AWS (if you have a [Cross Cloud connector](aws-integration-set-up-configure.md) configured) products. -What's changing? +What changed? -Effective 14 October 2021, the PublisherType field with the value "Azure" will be updated to "Microsoft" for all customers with a [Microsoft Customer Agreement](../understand/review-customer-agreement-bill.md#check-access-to-a-microsoft-customer-agreement). This change is being made to accommodate upcoming enhancements to support Microsoft products other than Azure like Microsoft 365 and Dynamics 365. 
+Effective 14 October 2021, the `PublisherType` field with the value `Azure` was updated to `Microsoft` for all customers with a [Microsoft Customer Agreement](../understand/review-customer-agreement-bill.md#check-access-to-a-microsoft-customer-agreement). The change was made to accommodate enhancements to support Microsoft products other than Azure like Microsoft 365 and Dynamics 365. -Values of "Marketplace" and "AWS" will remain unchanged. +Values of `Marketplace` and `AWS` remain unchanged. -This change doesn't affect customers with an Enterprise Agreement or pay-as-you-go offers. +The change didn't affect customers with an Enterprise Agreement or pay-as-you-go offers. **Impact and action** <a name="impact-action"></a> -For any Cost Management data that you've downloaded before 14 October 2021, you'll need to consider the older "Azure" and the new "Microsoft" PublisherType field values. The data could have been downloaded through exports, usage details, or from Cost Management. +For any Cost Management data that you've downloaded before 14 October 2021, consider the `PublisherType` change from the older `Azure` to the new `Microsoft` field values. The data could have been downloaded through exports, usage details, or from Cost Management. -If you use Cost Management + Billing REST API calls that filter the PublisherType field by the value "Azure", you'll need to address the change and filter by the new value "Microsoft" after 14 October 2021. Afterward, if you make any API calls with a filter for Publisher type = "Azure", data won't be returned. +If you use Cost Management + Billing REST API calls that filter the `PublisherType` field by the value `Azure`, you need to address the change and filter by the new value `Microsoft` after 14 October 2021. If you make any API calls with a filter for Publisher type = `Azure`, data won't be returned. There's no impact to Cost analysis or budgets because the changes are automatically reflected in the filters. Any saved views or budgets created with Publisher Type = "Azure" filter will be automatically updated. |
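For API callers updating their filters, here's a minimal sketch of a Cost Management Query request that filters on the new `Microsoft` value; the scope, `api-version`, and request shape are assumptions based on the Query API, and the subscription ID is a placeholder.

```python
# Minimal sketch: query costs filtered by PublisherType = "Microsoft".
# Assumes: pip install requests azure-identity
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
scope = "subscriptions/<subscription-id>"  # placeholder

body = {
    "type": "ActualCost",
    "timeframe": "MonthToDate",
    "dataset": {
        "granularity": "Daily",
        "aggregation": {"totalCost": {"name": "Cost", "function": "Sum"}},
        # Filter by the post-October-2021 value "Microsoft", not "Azure".
        "filter": {
            "dimensions": {
                "name": "PublisherType",
                "operator": "In",
                "values": ["Microsoft"],
            }
        },
    },
}

response = requests.post(
    f"https://management.azure.com/{scope}/providers/Microsoft.CostManagement/query",
    params={"api-version": "2021-10-01"},
    headers={"Authorization": f"Bearer {token}"},
    json=body,
)
print(response.json())
```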
cost-management-billing | Tutorial Acm Create Budgets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-acm-create-budgets.md | Title: Tutorial - Create and manage Azure budgets description: This tutorial helps you plan and account for the costs of Azure services that you consume. Previously updated : 10/07/2022 Last updated : 03/02/2023 Select **Add**. :::image type="content" source="./media/tutorial-acm-create-budgets/budgets-cost-management.png" alt-text="Screenshot showing a list of budgets already created." lightbox="./media/tutorial-acm-create-budgets/budgets-cost-management.png" ::: -In the **Create budget** window, make sure that the scope shown is correct. Choose any filters that you want to add. Filters allow you to create budgets on specific costs, such as resource groups in a subscription or a service like virtual machines. Any filter you can use in cost analysis can also be applied to a budget. +In the **Create budget** window, make sure that the scope shown is correct. Choose any filters that you want to add. Filters allow you to create budgets on specific costs, such as resource groups in a subscription or a service like virtual machines. For more information about the common filter properties that you can use in budgets and cost analysis, see [Group and filter properties](group-filter.md#group-and-filter-properties). After you identify your scope and filters, type a budget name. Then, choose a monthly, quarterly, or annual budget reset period. The reset period determines the time window that's analyzed by the budget. The cost evaluated by the budget starts at zero at the beginning of each new period. When you create a quarterly budget, it works in the same way as a monthly budget. The difference is that the budget amount for the quarter is evenly divided among the three months of the quarter. An annual budget amount is evenly divided among all 12 months of the calendar year. |
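Budgets can also be created programmatically. A minimal sketch against the `Microsoft.Consumption/budgets` resource type follows; the `api-version` and property shape are assumptions based on that resource type, and the scope, dates, and budget name are placeholders.

```python
# Minimal sketch: create a monthly cost budget through the Budgets REST API.
# Assumes: pip install requests azure-identity
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
scope = "subscriptions/<subscription-id>"  # placeholder

budget = {
    "properties": {
        "category": "Cost",
        "amount": 100,           # budget amount in the billing currency
        "timeGrain": "Monthly",  # monthly reset period, as in the tutorial
        "timePeriod": {
            "startDate": "2023-03-01T00:00:00Z",
            "endDate": "2024-02-29T00:00:00Z",
        },
    }
}

response = requests.put(
    f"https://management.azure.com/{scope}/providers/Microsoft.Consumption/budgets/MyBudget",
    params={"api-version": "2021-10-01"},
    headers={"Authorization": f"Bearer {token}"},
    json=budget,
)
print(response.status_code, response.json())
```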
cost-management-billing | Cancel Azure Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/cancel-azure-subscription.md | tags: billing Previously updated : 01/17/2023 Last updated : 03/07/2023 The following table describes the permission required to cancel a subscription. An account administrator without the service administrator or subscription owner role can't cancel an Azure subscription. However, an account administrator can make themselves the service administrator and then they can cancel a subscription. For more information, see [Change the Service Administrator](../../role-based-access-control/clas |