Updates from: 04/05/2023 01:13:45
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Api Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-api-connector.md
Last updated 12/20/2022
-+ zone_pivot_groups: b2c-policy-type
Content-type: application/json
} ], "displayName": "John Smith",
- "objectId": "11111111-0000-0000-0000-000000000000",
"givenName":"John", "surname":"Smith", "step": "PostFederationSignup",
Content-type: application/json
} ], "displayName": "John Smith",
- "objectId": "11111111-0000-0000-0000-000000000000",
"givenName":"John", "surname":"Smith", "jobTitle":"Supplier",
active-directory-b2c Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/billing.md
Your Azure AD B2C tenant must also be linked to the appropriate Azure pricing ti
## About Go-Local add-on
-Azure AD B2C's [Go-Local add-on](data-residency.md#go-local-add-on) enables you to create Azure AD B2C tenant within the country you choose when you [create your Azure AD B2C](tutorial-create-tenant.md). *Go-Local* refers to Microsoft's commitment to allow some customers to configure some services to store their data at rest in the Geo of the customer's choice, typically a country. This feature isn't available in all countries.
+Azure AD B2C's [Go-Local add-on](data-residency.md#go-local-add-on) enables you to create an Azure AD B2C tenant within the country/region you choose when you [create your Azure AD B2C tenant](tutorial-create-tenant.md). *Go-Local* refers to Microsoft's commitment to allow some customers to configure some services to store their data at rest in the Geo of the customer's choice, typically a country/region. This feature isn't available in all countries/regions.
> [!NOTE] > If you enable the Go-Local add-on, the 50,000 free MAUs per month included with your Azure AD B2C subscription don't apply to the Go-Local add-on. You'll incur a charge per MAU on the Go-Local add-on from the first MAU. However, you'll continue to enjoy 50,000 free MAUs per month on the other features available on your Azure AD B2C [Premium P1 or P2 pricing](https://azure.microsoft.com/pricing/details/active-directory-b2c/).
active-directory-b2c Custom Policy Developer Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policy-developer-notes.md
The following table summarizes the Security Assertion Markup Language (SAML) app
| Feature | Status | Notes | | - | :--: | -- |
-| [Go-Local add-on](data-residency.md#go-local-add-on) | Preview | Azure AD B2C's [Go-Local add-on](data-residency.md#go-local-add-on) enables you to create Azure AD B2C tenant within the country you choose when you [create your Azure AD B2C](tutorial-create-tenant.md). |
+| [Go-Local add-on](data-residency.md#go-local-add-on) | Preview | Azure AD B2C's [Go-Local add-on](data-residency.md#go-local-add-on) enables you to create an Azure AD B2C tenant within the country/region you choose when you [create your Azure AD B2C tenant](tutorial-create-tenant.md). |
## Responsibilities of custom policy feature-set developers
active-directory-b2c Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/data-residency.md
Azure AD B2C is **generally available worldwide** with the option for **data res
[Region availability](#region-availability) refers to where a service is available for use. [Data residency](#data-residency) refers to where customer data is stored. For customers in the EU and EFTA, see [EU Data Boundary](#eu-data-boundary).
-If you enable [Go-Local add-on](#go-local-add-on), you can store your data exclusively in a specific country.
+If you enable [Go-Local add-on](#go-local-add-on), you can store your data exclusively in a specific country/region.
## Region availability
The following locations are in the process of being added to the list. For now,
> Argentina, Brazil, Chile, Colombia, Ecuador, Iraq, Paraguay, Peru, Uruguay, and Venezuela
-To find the exact location where your data is located per region or country, refer to [where Azure Active Directory data is located](https://aka.ms/aaddatamap)service.
+To find the exact location where your data is located per region or country/region, refer to the [where Azure Active Directory data is located](https://aka.ms/aaddatamap) service.
### Go-Local add-on
-*Go-Local* refers to Microsoft's commitment to allow some customers to configure some services to store their data at rest in the Geo of the customer's choice, typically a country. Go-Local is as way fulfilling corporate policies and compliance requirements. You choose the country where you want to store your data when you [create your Azure AD B2C](tutorial-create-tenant.md).
+*Go-Local* refers to Microsoft's commitment to allow some customers to configure some services to store their data at rest in the Geo of the customer's choice, typically a country/region. Go-Local is a way of fulfilling corporate policies and compliance requirements. You choose the country/region where you want to store your data when you [create your Azure AD B2C tenant](tutorial-create-tenant.md).
The Go-Local add-on is a paid add-on, but it's optional. If you choose to use it, you'll incur an extra charge in addition to your Azure AD B2C Premium P1 or P2 licenses. See more information in [Billing model](billing.md).
-At the moment, the following countries have the local data residence option:
+At the moment, the following countries/regions have the local data residence option:
- Japan
active-directory-b2c Tutorial Create Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-create-tenant.md
Before you create your Azure AD B2C tenant, you need to take the following consi
- For **Organization name**, enter a name for your Azure AD B2C tenant. - For **Initial domain name**, enter a domain name for your Azure AD B2C tenant.
- - For **Location**, select your country from the list. If the country you select has a [Go-Local add-on](data-residency.md#go-local-add-on) option, such as Japan or Australia, and you want to store your data exclusively within that country, select the **Store Azure AD Core Store data, components and service data in the location selected above** checkbox. Go-Local add-on is a paid add-on whose charge is added to your Azure AD B2C Premium P1 or P2 licenses charges, see [Billing model](billing.md#about-go-local-add-on). You can't change the data residency region after you create your Azure AD B2C tenant.
+ - For **Location**, select your country/region from the list. If the country/region you select has a [Go-Local add-on](data-residency.md#go-local-add-on) option, such as Japan or Australia, and you want to store your data exclusively within that country/region, select the **Store Azure AD Core Store data, components and service data in the location selected above** checkbox. Go-Local add-on is a paid add-on whose charge is added to your Azure AD B2C Premium P1 or P2 license charges. For more information, see [Billing model](billing.md#about-go-local-add-on). You can't change the data residency region after you create your Azure AD B2C tenant.
- For **Subscription**, select your subscription from the list. - For **Resource group**, select or search for the resource group that will contain the tenant.
active-directory On Premises Scim Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-scim-provisioning.md
Previously updated : 08/25/2022 Last updated : 04/04/2023
The Azure Active Directory (Azure AD) provisioning service supports a [SCIM 2.0]
## Deploying Azure AD provisioning agent The Azure AD Provisioning agent can be deployed on the same server hosting a SCIM-enabled application, or on a separate server, provided it has line of sight to the application's SCIM endpoint. A single agent also supports provisioning to multiple applications hosted locally on the same server or on separate hosts, again as long as each SCIM endpoint is reachable by the agent.
- 1. [Download](https://aka.ms/OnPremProvisioningAgent) the provisioning agent and copy it onto the virtual machine or server that your SCIM application endpoint is hosted on.
+ 1. [Download](https://aka.ms/OnPremProvisioningAgent) the provisioning agent and copy it onto the virtual machine or server that your SCIM application endpoint is hosted on.
 2. Run the provisioning agent installer, agree to the terms of service, and select **Install**. 3. Once installed, locate and launch the **AAD Connect Provisioning Agent wizard**, and when prompted for an extension, select **On-premises provisioning**. 4. For the agent to register itself with your tenant, provide credentials for an Azure AD admin with Hybrid administrator or global administrator permissions.
Once the agent is installed, no further configuration is necessary on-prem, and a
3. Select **Automatic** from the dropdown list and expand the **On-Premises Connectivity** option. 4. Select the agent that you installed from the dropdown list and select **Assign Agent(s)**. 5. Now either wait 10 minutes or restart the **Microsoft Azure AD Connect Provisioning Agent** before proceeding to the next step & testing the connection.
- 6. In the **Tenant URL** field, provide the SCIM endpoint URL for your application. The URL is typically unique to each target application and must be resolveable by DNS. An example for a scenario where the agent is installed on the same host as the application is https://localhost:8585/scim ![Screenshot that shows assigning an agent.](./media/on-premises-scim-provisioning/scim-2.png)
+ 6. In the **Tenant URL** field, provide the SCIM endpoint URL for your application. The URL is typically unique to each target application and must be resolvable by DNS. An example for a scenario where the agent is installed on the same host as the application is https://localhost:8585/scim ![Screenshot that shows assigning an agent.](./media/on-premises-scim-provisioning/scim-2.png)
+>[!NOTE]
+>The Azure AD provisioning service currently drops everything in the URL after the hostname.
+ 7. Select **Test Connection**, and save the credentials. The application SCIM endpoint must be actively listening for inbound provisioning requests, otherwise the test will fail. Use the steps [here](on-premises-ecma-troubleshoot.md#troubleshoot-test-connection-issues) if you run into connectivity issues. 8. Configure any [attribute mappings](customize-application-attributes.md) or [scoping](define-conditional-rules-for-provisioning-user-accounts.md) rules required for your application. 9. Add users to scope by [assigning users and groups](../../active-directory/manage-apps/add-application-portal-assign-users.md) to the application.
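Before selecting **Test Connection**, it can help to confirm that something is actually listening at the Tenant URL entered in step 6. The small sketch below is a hedged illustration only: it assumes the example `https://localhost:8585/scim` base URL shown above, and `/ServiceProviderConfig` is the standard SCIM 2.0 discovery resource.

```csharp
// Quick probe of the local SCIM endpoint before running Test Connection from the portal.
// The base URL matches the example above; adjust it for your own deployment.
using System;
using System.Net.Http;

using var handler = new HttpClientHandler
{
    // Local test endpoints often use self-signed certificates; don't do this in production.
    ServerCertificateCustomValidationCallback = HttpClientHandler.DangerousAcceptAnyServerCertificateValidator
};
using var client = new HttpClient(handler);

var response = await client.GetAsync("https://localhost:8585/scim/ServiceProviderConfig");

Console.WriteLine($"SCIM endpoint responded with {(int)response.StatusCode} {response.StatusCode}");
Console.WriteLine(await response.Content.ReadAsStringAsync());
```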
active-directory Plan Cloud Hr Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-cloud-hr-provision.md
The cloud HR app to Active Directory user provisioning solution requires that yo
To prepare the on-premises environment, the Azure AD Connect provisioning agent configuration wizard registers the agent with your Azure AD tenant, [opens ports](../app-proxy/application-proxy-add-on-premises-application.md#open-ports), [allows access to URLs](../app-proxy/application-proxy-add-on-premises-application.md#allow-access-to-urls), and supports [outbound HTTPS proxy configuration](../saas-apps/workday-inbound-tutorial.md#how-do-i-configure-the-provisioning-agent-to-use-a-proxy-server-for-outbound-http-communication). The provisioning agent configures a [Global Managed Service Account (GMSA)](../cloud-sync/how-to-prerequisites.md#group-managed-service-accounts)
-to communicate with the Active Directory domains. If you want to use a non-GMSA service account for provisioning, you can [skip GMSA configuration](../cloud-sync/how-to-manage-registry-options.md#skip-gmsa-configuration) and specify your service account during configuration.
+to communicate with the Active Directory domains.
You can select domain controllers that should handle provisioning requests. If you have several geographically distributed domain controllers, install the provisioning agent in the same site as your preferred domain controllers. This positioning improves the reliability and performance of the end-to-end solution.
This topology supports business requirements where attribute mapping and provisi
Use this topology to manage multiple independent child AD domains belonging to the same forest, if managers always exist in the same domain as the user and your unique ID generation rules for attributes like *userPrincipalName*, *samAccountName* and *mail* don't require a forest-wide lookup. It also offers the flexibility of delegating the administration of each provisioning job by domain boundary.
-For example: In the diagram below, the provisioning apps are setup for each geographic region: North America (NA), Europe, Middle East and Africa (EMEA) and Asia Pacific (APAC). Depending on the location, users are provisioned to the respective AD domain. Delegated administration of the provisioning app is possible so that *EMEA administrators* can independently manage the provisioning configuration of users belonging to the EMEA region.
+For example: In the diagram below, the provisioning apps are set up for each geographic region: North America (NA), Europe, Middle East and Africa (EMEA) and Asia Pacific (APAC). Depending on the location, users are provisioned to the respective AD domain. Delegated administration of the provisioning app is possible so that *EMEA administrators* can independently manage the provisioning configuration of users belonging to the EMEA region.
:::image type="content" source="media/plan-cloud-hr-provision/topology-3-separate-apps-with-multiple-ad-domains-no-cross-domain.png" alt-text="Screenshot of separate apps to provision users from Cloud HR to multiple AD domains" lightbox="media/plan-cloud-hr-provision/topology-3-separate-apps-with-multiple-ad-domains-no-cross-domain.png":::
For example: In the diagram below, the provisioning apps are setup for each geog
Use this topology to manage multiple independent child AD domains belonging to the same forest, if a user's manager may exist in a different domain and your unique ID generation rules for attributes like *userPrincipalName*, *samAccountName* and *mail* require a forest-wide lookup.
-For example: In the diagram below, the provisioning apps are setup for each geographic region: North America (NA), Europe, Middle East and Africa (EMEA) and Asia Pacific (APAC). Depending on the location, users are provisioned to the respective AD domain. Cross-domain manager references and forest-wide lookup is handled by enabling referral chasing on the provisioning agent.
+For example: In the diagram below, the provisioning apps are set up for each geographic region: North America (NA), Europe, Middle East and Africa (EMEA) and Asia Pacific (APAC). Depending on the location, users are provisioned to the respective AD domain. Cross-domain manager references and forest-wide lookup are handled by enabling referral chasing on the provisioning agent.
:::image type="content" source="media/plan-cloud-hr-provision/topology-4-separate-apps-with-multiple-ad-domains-cross-domain.png" alt-text="Screenshot of separate apps to provision users from Cloud HR to multiple AD domains with cross domain support" lightbox="media/plan-cloud-hr-provision/topology-4-separate-apps-with-multiple-ad-domains-cross-domain.png":::
For example: In the diagram below, the provisioning apps are setup for each geog
Use this topology if you want to use a single provisioning app to manage users belonging to all your parent and child AD domains. This topology is recommended if provisioning rules are consistent across all domains and there is no requirement for delegated administration of provisioning jobs. This topology supports resolving cross-domain manager references and can perform a forest-wide uniqueness check.
-For example: In the diagram below, a single provisioning app manages users present in three different child domains grouped by region: North America (NA), Europe, Middle East and Africa (EMEA) and Asia Pacific (APAC). The attribute mapping for *parentDistinguishedName* is used to dynamically create a user in the appropriate child domain. Cross-domain manager references and forest-wide lookup is handled by enabling referral chasing on the provisioning agent.
+For example: In the diagram below, a single provisioning app manages users present in three different child domains grouped by region: North America (NA), Europe, Middle East and Africa (EMEA) and Asia Pacific (APAC). The attribute mapping for *parentDistinguishedName* is used to dynamically create a user in the appropriate child domain. Cross-domain manager references and forest-wide lookup are handled by enabling referral chasing on the provisioning agent.
:::image type="content" source="media/plan-cloud-hr-provision/topology-5-single-app-with-multiple-ad-domains-cross-domain.png" alt-text="Screenshot of single app to provision users from Cloud HR to multiple AD domains with cross domain support" lightbox="media/plan-cloud-hr-provision/topology-5-single-app-with-multiple-ad-domains-cross-domain.png":::
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
Minimum Microsoft Authenticator version for number matching which prompts to ent
- Android 6.2111.7701 - iOS 6.5.85
+### How can users recheck the number on mobile iOS devices after the match request appears?
+
+During mobile iOS broker flows, the number match request appears over the number after a two-second delay. To recheck the number, click **Show me the number again**. This action only occurs in mobile iOS broker flows.
+ ## Next steps [Authentication methods in Azure Active Directory](concept-authentication-authenticator-app.md)
active-directory How To Manage Registry Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-manage-registry-options.md
na Previously updated : 01/11/2023 Last updated : 04/03/2023
Use the following steps to turn on referral chasing:
1. Restart the Azure AD Connect Provisioning Service from the *Services* console. 1. If you have deployed multiple provisioning agents, apply this registry change to all agents for consistency.
-## Skip GMSA configuration
-With agent version 1.1.281.0+, by default, when you run the agent configuration wizard, you are prompted to setup [Group Managed Service Account (GMSA)](/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview). The GMSA setup by the wizard is used at runtime for all sync and provisioning operations.
-
-If you are upgrading from a prior version of the agent and have setup a custom service account with delegated OU-level permissions specific to your Active Directory topology, you may want to skip/postpone GMSA configuration and plan for this change.
-
-> [!NOTE]
-> This guidance specifically applies to customers who have configured HR (Workday/SuccessFactors) inbound provisioning with agent versions prior to 1.1.281.0 and have setup a custom service account for agent operations. In the long run, we recommend switching to GMSA as a best practice.
-
-In this scenario, you can still upgrade the agent binaries and skip the GMSA configuration using the following steps:
-
-1. Log on as Administrator on the Windows server running the Azure AD Connect Provisioning Agent.
-1. Run the agent installer to install the new agent binaries. Close the agent configuration wizard which opens up automatically after the installation is successful.
-1. Use the *Run* menu item to open the registry editor (regedit.exe)
-1. Locate the key folder **HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure AD Connect Agents\Azure AD Connect Provisioning Agent**
-1. Right-click and select "New -> DWORD Value"
-1. Provide the name:
- `UseCredentials`
-1. Double-click on the **Value Name** and enter the value data as `1`.
- > [!div class="mx-imgBorder"]
- > ![Use Credentials](media/how-to-manage-registry-options/use-credentials.png)
-1. Restart the Azure AD Connect Provisioning Service from the *Services* console.
-1. If you have deployed multiple provisioning agents, apply this registry change to all agents for consistency.
-1. From the desktop short cut, run the agent configuration wizard. The wizard will skip the GMSA configuration.
> [!NOTE]
active-directory Howto Conditional Access Policy Authentication Strength External https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-authentication-strength-external.md
Previously updated : 10/12/2022 Last updated : 04/03/2023
Use the following steps to create a Conditional Access policy that applies an au
<!![Screenshot showing where to select guest and external user types.](media/howto-conditional-access-policy-authentication-strength-external/assignments-external-user-types.png)>
-1. Select the types of [guest or external users](../external-identities/authentication-conditional-access.md#assigning-conditional-access-policies-to-external-user-types-preview) you want to apply the policy to.
+1. Select the types of [guest or external users](../external-identities/authentication-conditional-access.md#assigning-conditional-access-policies-to-external-user-types) you want to apply the policy to.
1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts. 1. Under **Cloud apps or actions**, under **Include** or **Exclude**, select any applications you want to include in or exclude from the authentication strength requirements.
active-directory Location Condition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/location-condition.md
Title: Using networks and countries in Azure Active Directory
+ Title: Using networks and countries/regions in Azure Active Directory
description: Use GPS locations and public IPv4 and IPv6 networks in Conditional Access policy to make access decisions.
The IP address used in policy evaluation is the public IPv4 or IPv6 address of t
A policy that uses the location condition to block access is considered restrictive, and should be done with care after thorough testing. Some instances of using the location condition to block authentication may include: -- Blocking countries where your organization never does business.
+- Blocking countries/regions where your organization never does business.
- Blocking specific IP ranges like: - Known malicious IPs before a firewall policy can be changed. - For highly sensitive or privileged actions and cloud applications.
active-directory Scenario Web Api Call Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-app-configuration.md
using Microsoft.Identity.Web;
builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme) .AddMicrosoftIdentityWebApi(Configuration, "AzureAd") .EnableTokenAcquisitionToCallDownstreamApi()
- .AddDownstreamApi("MyApi", Configuration.GetSection("GraphBeta"))
+ .AddDownstreamWebApi("MyApi", Configuration.GetSection("GraphBeta"))
.AddInMemoryTokenCaches(); // ... ```
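Once `AddDownstreamWebApi` is configured as shown above, the protected web API can request `IDownstreamWebApi` from dependency injection to call the downstream service on behalf of the user. The controller below is a hedged sketch: only the "MyApi" service name comes from the snippet above; the controller name, route, and relative path are illustrative assumptions.

```csharp
// Hedged sketch of calling the configured downstream web API on behalf of the signed-in user.
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Identity.Web;

[Authorize]
[ApiController]
[Route("api/[controller]")]
public class ProfileController : ControllerBase
{
    private readonly IDownstreamWebApi _downstreamWebApi;

    public ProfileController(IDownstreamWebApi downstreamWebApi) =>
        _downstreamWebApi = downstreamWebApi;

    [HttpGet]
    public async Task<IActionResult> Get()
    {
        // Acquires a token for the downstream API and calls the relative path "me" (an assumption).
        var response = await _downstreamWebApi.CallWebApiForUserAsync(
            "MyApi",
            options => options.RelativePath = "me");

        var content = await response.Content.ReadAsStringAsync();
        return Ok(content);
    }
}
```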
active-directory V2 Oauth2 Device Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-device-code.md
Title: OAuth 2.0 device code flow
+ Title: OAuth 2.0 device authorization grant
description: Sign in users without a browser. Build embedded and browser-less authentication flows using the device authorization grant.
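For context, the device authorization grant lets a user sign in on an input-constrained device by entering a one-time code on a second device. A minimal MSAL.NET sketch follows; the client ID, tenant, and scope are placeholders, not values from this article.

```csharp
// Minimal sketch of the device authorization grant using MSAL.NET (Microsoft.Identity.Client).
using System;
using System.Threading.Tasks;
using Microsoft.Identity.Client;

var app = PublicClientApplicationBuilder
    .Create("11111111-1111-1111-1111-111111111111")   // placeholder client ID
    .WithTenantId("contoso.onmicrosoft.com")          // placeholder tenant
    .Build();

var result = await app.AcquireTokenWithDeviceCode(
    new[] { "User.Read" },
    deviceCodeResult =>
    {
        // Shows the user the verification URL and the code to enter on another device.
        Console.WriteLine(deviceCodeResult.Message);
        return Task.CompletedTask;
    }).ExecuteAsync();

Console.WriteLine($"Signed in as {result.Account.Username}");
```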
active-directory Authentication Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/authentication-conditional-access.md
Previously updated : 10/12/2022 Last updated : 04/03/2023
The following diagram illustrates the flow when email one-time passcode authenti
Organizations can enforce [Conditional Access](../conditional-access/overview.md) policies for external B2B collaboration and B2B direct connect users in the same way that they're enabled for full-time employees and members of the organization. With the introduction of cross-tenant access settings, you can also trust MFA and device claims from external Azure AD organizations. This section describes important considerations for applying Conditional Access to users outside of your organization.
-### Assigning Conditional Access policies to external user types (preview)
-
-> [!NOTE]
-> This section describes a preview feature of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+### Assigning Conditional Access policies to external user types
When configuring a Conditional Access policy, you have granular control over the types of external users you want to apply the policy to. External users are categorized based on how they authenticate (internally or externally) and their relationship to your organization (guest or member).
When configuring a Conditional Access policy, you have granular control over the
- **Service provider users** - Organizations that serve as cloud service providers for your organization (the isServiceProvider property in the Microsoft Graph [partner-specific configuration](/graph/api/resources/crosstenantaccesspolicyconfigurationpartner) is true). - **Other external users** - Applies to any users who don't fall into the categories above, but who are not considered internal members of your organization, meaning they don't authenticate internally via Azure AD, and the user object created in the resource Azure AD directory does not have a UserType of Member.
+>[!NOTE]
+> The "All guest and external users" selection has now been replaced with "Guest and external users" and all its sub types. For customers who previously had a Condtional Access policy with "All guest and external users" selected will now see "Guest and external users" along with all sub types being selected. This change in UX does not have any functional impact on how policy is evaluated by Conditional Access backend. The new selection provides customers the needed granularity to choose specifc types of guest and external users to include/exclude from user scope when creating their Conditional Access policy.
+ Learn more about [Conditional Access user assignments](../conditional-access/concept-conditional-access-users-groups.md). ### Comparing External Identities Conditional Access policies
The following PowerShell cmdlets are available to *proof up* or request MFA regi
[Authentication strength](https://aka.ms/b2b-auth-strengths) is a Conditional Access control that lets you define a specific combination of multifactor authentication (MFA) methods that an external user must complete to access your resources. This control is especially useful for restricting external access to sensitive apps in your organization because you can enforce specific authentication methods, such as a phishing-resistant method, for external users.
-You also have the ability to apply authentication strength to the different types of [guest or external users](#assigning-conditional-access-policies-to-external-user-types-preview) that you collaborate or connect with. This means you can enforce authentication strength requirements that are unique to your B2B collaboration, B2B direct connect, and other external access scenarios.
+You also have the ability to apply authentication strength to the different types of [guest or external users](#assigning-conditional-access-policies-to-external-user-types) that you collaborate or connect with. This means you can enforce authentication strength requirements that are unique to your B2B collaboration, B2B direct connect, and other external access scenarios.
Azure AD provides three [built-in authentication strengths](https://aka.ms/b2b-auth-strengths):
For more information, see the following articles:
- [What is Azure AD B2B collaboration?](./what-is-b2b.md) - [Identity Protection and B2B users](../identity-protection/concept-identity-protection-b2b.md) - [External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/)-- [Frequently Asked Questions (FAQs)](./faq.yml)
+- [Frequently Asked Questions (FAQs)](./faq.yml)
active-directory B2b Tutorial Require Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-tutorial-require-mfa.md
Previously updated : 02/03/2023 Last updated : 04/03/2023
To complete the scenario in this tutorial, you need:
1. On the **Conditional Access** page, in the toolbar on the top, select **New policy**. 1. On the **New** page, in the **Name** textbox, type **Require MFA for B2B portal access**. 1. In the **Assignments** section, choose the link under **Users and groups**.
-1. On the **Users and groups** page, choose **Select users and groups**, and then choose **Guest or external users**. You can assign the policy to different [external user types](authentication-conditional-access.md#assigning-conditional-access-policies-to-external-user-types-preview), built-in [directory roles](../conditional-access/concept-conditional-access-users-groups.md#include-users), or users and groups.
+1. On the **Users and groups** page, choose **Select users and groups**, and then choose **Guest or external users**. You can assign the policy to different [external user types](authentication-conditional-access.md#assigning-conditional-access-policies-to-external-user-types), built-in [directory roles](../conditional-access/concept-conditional-access-users-groups.md#include-users), or users and groups.
:::image type="content" source="media/tutorial-mfa/tutorial-mfa-user-access.png" alt-text="Screenshot showing selecting all guest users.":::
active-directory Certificate Authorities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/certificate-authorities.md
Title: Azure Active Directory certificate authorities description: Listing of trusted certificates used in Azure --++ Last updated 10/10/2020--++
active-directory Create Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review.md
If you are reviewing access to an application, then before creating the review,
If you choose either **Managers of users** or **Group owner(s)**, you can also specify a fallback reviewer. Fallback reviewers are asked to do a review when the user has no manager specified in the directory or if the group doesn't have an owner.
+ > [!NOTE]
+ > In a team or group access review, only the group owners (at the time the review starts) are considered reviewers. If the list of group owners is updated during the course of a review, new group owners won't be added as reviewers, and old group owners will still be considered reviewers. However, in the case of a recurring review, any changes to the group owners list will be taken into account in the next instance of that review.
+ >[!IMPORTANT] > For PIM for Groups (Preview), you must select **Group owner(s)**. It is mandatory to assign at least one fallback reviewer to the review. The review will only assign active owner(s) as the reviewer(s). Eligible owners are not included. If there are no active owners when the review begins, the fallback reviewer(s) will be assigned to the review.
active-directory Migrate Application Authentication To Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-application-authentication-to-azure-active-directory.md
Previously updated : 03/15/2023 Last updated : 03/30/2023
The process is broken into four phases, each with detailed planning and exit cri
## Introduction
-Today, your organization requires a slew of applications (apps) for users to get work done. You likely continue to add, develop, or retire apps every day. Users access these applications from a vast range of corporate and personal devices, and locations. They open apps in many ways, including:
+Today, your organization requires a lot of applications for users to get work done. You likely continue to add, develop, or retire apps every day. Users access these applications from a vast range of corporate and personal devices, and locations. They open apps in many ways, including:
- Through a company homepage or portal-- By bookmarking on their browsers
+- By using bookmarks or favorites in their browsers
- Through a vendor's URL for software as a service (SaaS) apps - Links pushed directly to user's desktops or mobile devices via a mobile device/application management (MDM/ MAM) solution Your applications are likely using the following types of authentication: -- On-premises federation solutions (such as Active Directory Federation Services (ADFS) and Ping)-- Active Directory (such as Kerberos Auth and Windows-Integrated Auth)-- Other cloud-based identity and access management (IAM) solutions (such as Okta or Oracle)-- Header based authentication
+- Security Assertion Markup Language (SAML) or OpenID Connect (OIDC) via an on-premises or cloud-hosted identity and access management (IAM) or federation solution (such as Active Directory Federation Services (ADFS), Okta, or Ping)
+
+- Kerberos or NTLM via Active Directory
+
+- Header-based authentication via Ping Access
To ensure that the users can easily and securely access applications, your goal is to have a single set of access controls and policies across your on-premises and cloud environments. [Azure Active Directory (Azure AD)](../fundamentals/active-directory-whatis.md) offers a universal identity platform that provides your employees, partners, and customers a single identity to access the applications they want and collaborate from any platform and device.
-![A diagram of Azure AD connectivity.](media/migrate-apps-to-azure-ad/azure-ad-connectivity.png)
+ [![A diagram of Azure AD connectivity.](media/migrate-apps-to-azure-ad/azure-ad-connectivity.png)](media/migrate-apps-to-azure-ad/azure-ad-connectivity.png#lightbox)
Azure AD has a [full suite of identity management capabilities](../fundamentals/active-directory-whatis.md#which-features-work-in-azure-ad). Standardizing your app authentication and authorization to Azure AD gets you the benefits that these capabilities provide.
You can find more migration resources at [https://aka.ms/migrateapps](./migratio
Moving app authentication to Azure AD helps you manage risk and cost, increase productivity, and address compliance and governance requirements.
-### Manage risk
+### Increase your security posture
-Safeguarding your apps requires that you have a full view of all the risk factors. Migrating your apps to Azure AD consolidates your security solutions. With it you can:
+Securing your apps requires that you have a full view of all the risk factors. Migrating your apps to Azure AD consolidates your security solutions. With it you can:
- Improve secure user access to applications and associated corporate data using [Conditional Access policies](../conditional-access/overview.md), [Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md), and real-time risk-based [Identity Protection](../identity-protection/overview-identity-protection.md) technologies. - Protect privileged user's access to your environment with [Just-In-Time](../../azure-resource-manager/managed-applications/request-just-in-time-access.md) admin access.-- Use the [multi-tenant, geo-distributed, high availability design of Azure AD](https://cloudblogs.microsoft.com/enterprisemobility/2014/09/02/azure-ad-under-the-hood-of-our-geo-redundant-highly-available-distributed-cloud-directory/) for your most critical business needs. - Protect your legacy applications with one of our [secure hybrid access partner integrations](https://aka.ms/secure-hybrid-access) that you may have already deployed. ### Manage cost
-Your organization may have multiple Identity Access Management (IAM) solutions in place. Migrating to one Azure AD infrastructure is an opportunity to reduce dependencies on IAM licenses (on-premises or in the cloud) and infrastructure costs. In cases where you may have already paid for Azure AD via Microsoft 365 licenses, there's no reason to pay the added cost of another IAM solution.
+Your organization may have multiple IAM solutions in place. Migrating to one Azure AD infrastructure is an opportunity to reduce your on-premises footprint, consolidate vendor solutions, and therefore reduce costs. In cases where you may have already paid for Azure AD via Microsoft 365 licenses, there's no reason to pay the added cost of another IAM solution. Ways to reduce costs:
+
+- Eliminate the need for an on-premises federation provider like ADFS or Ping Federate.
+
+- Eliminate the need for a cloud-hosted IAM solution like Okta or Ping One.
-With Azure AD, you can reduce infrastructure costs by providing secure remote access to on-premises apps using [Azure AD Application Proxy](../app-proxy/application-proxy.md).
+- Eliminate the need for on-premises remote access solutions like Ping Access or other WAM solutions.
### Increase productivity Economics and security benefits drive organizations to adopt Azure AD, but full adoption and compliance are more likely if users benefit too. With Azure AD, you can: -- Improve end-user [single sign-on (SSO)](./what-is-single-sign-on.md) experience through seamless and secure access to any application, from any device and any location.
+- Improve end-user [single sign-on (SSO)](./what-is-single-sign-on.md) experience through seamless and secure access to any application, from any device and any location with technologies like Hybrid Azure AD Join, Azure AD Join, or Azure AD Passwordless.
- Use self-service IAM capabilities, such as [Self-Service Password Resets](../authentication/concept-sspr-howitworks.md) and [Self-Service Group Management](../enterprise-users/groups-self-service-management.md).-- Reduce administrative overhead by managing only a single identity for each user across cloud and on-premises environments:
- - Faster onboarding of new applications from the Azure AD app gallery.
+- Faster onboarding of new applications from the [Azure AD app gallery](overview-application-gallery.md).
- - [Automate provisioning](../app-provisioning/user-provisioning.md) of user accounts (in [Azure AD Gallery](https://azuremarketplace.microsoft.com/marketplace/apps/category/azure-active-directory-apps))based on Azure AD identities
+- [Automate provisioning](../app-provisioning/user-provisioning.md) of user accounts into applications.
- - Access all your apps from MyApps panel in the [Azure portal](https://portal.azure.com/)
+- Use Azure AD Lifecycle workflows to automate onboarding or offboarding, which might have previously been done with scripts.
- - Using Azure AD Lifecycle workflows automate onboarding or offboarding, which was previously done with scripts.
+- Create developer efficiencies and improve the end-user experience by building your applications using the Microsoft Identity Platform with the [Microsoft Authentication Library (MSAL)](../develop/msal-overview.md).
- Empower your partners with access to cloud resources using [Azure AD B2B collaboration](../external-identities/what-is-b2b.md). Cloud resources remove the overhead of configuring point-to-point federation with your partners. ### Address compliance and governance
-To comply with regulatory requirements, enforce corporate access policies and monitor user access to applications and associated data using integrated audit tools and APIs. With Azure AD, you can monitor application sign-ins through reports that use [Security Incident and Event Monitoring (SIEM) tools](../reports-monitoring/plan-monitoring-and-reporting.md). You can access the reports from the portal or APIs, and programmatically audit who has access to your applications and remove access to inactive users via access reviews.
+To comply with regulatory requirements, enforce corporate access policies and monitor user access to applications and associated data using integrated audit tools and APIs. With Azure AD, you can monitor application sign-ins through reports that use [Security Incident and Event Monitoring (SIEM) tools](../reports-monitoring/plan-monitoring-and-reporting.md) or [Azure Sentinel](https://azure.microsoft.com/products/microsoft-sentinel). You can access the reports from the portal or APIs, and programmatically audit who has access to your applications and remove access to inactive users via access reviews.
## Plan your migration phases and project strategy
Before we get into the tools, you should understand how to think through the mig
### Assemble the project team
-Application migration is a team effort, and you need to ensure that you have all the vital positions filled. Support from senior business leaders is important. Ensure that you involve the right set of executive sponsors, business decision-makers, and subject matter experts (SMEs.)
+Application migration is a team effort, and you need to ensure that you have all the vital positions filled. Support from senior business leaders is important. Ensure that you involve the right set of executive sponsors, business decision-makers, and subject matter experts (SMEs).
During the migration project, one person may fulfill multiple roles, or multiple people fulfill each role, depending on your organization's size and structure. You may also have a dependency on other teams that play a key role in your security landscape.
The following table includes the key roles and their contributions:
| Role | Contributions | | - | - | | **Project Manager** | Project coach accountable for guiding the project, including:<br /> - gain executive support<br /> - bring in stakeholders<br /> - manage schedules, documentation, and communications |
-| **Identity Architect / Azure AD App Administrator** | They're responsible for the following:<br /> - design the solution in cooperation with stakeholders<br /> - document the solution design and operational procedures for handoff to the operations team<br /> - manage the pre-production and production environments |
+| **Identity Architect / Azure AD App Administrator** | Responsible for the following:<br /> - design the solution in cooperation with stakeholders<br /> - document the solution design and operational procedures for handoff to the operations team<br /> - manage the pre-production and production environments |
| **On premises AD operations team** | The organization that manages the different on-premises identity sources such as AD forests, LDAP directories, HR systems etc.<br /> - perform any remediation tasks needed before synchronizing<br /> - Provide the service accounts required for synchronization<br /> - provide access to configure federation to Azure AD | | **IT Support Manager** | A representative from the IT support organization who can provide input on the supportability of this change from a helpdesk perspective. | | **Security Owner** | A representative from the security team that can ensure that the plan meets the security requirements of your organization. |
The following table includes the key roles and their contributions:
Effective business engagement and communication are the keys to success. It's important to give stakeholders and end-users an avenue to get information and keep informed of schedule updates. Educate everyone about the value of the migration, what the expected timelines are, and how to plan for any temporary business disruption. Use multiple avenues such as briefing sessions, emails, one-to-one meetings, banners, and townhalls.
-Based on the communication strategy that you have chosen for the app you may want to remind users of the pending downtime. You should also verify that there are no recent changes or business impacts that would require to postpone the deployment.
+Based on the communication strategy that you've chosen for the app, you may want to remind users of the pending downtime. You should also verify that there are no recent changes or business impacts that would require postponing the deployment.
-In the following table you find the minimum suggested communication to keep your stakeholders informed:
+In the following table, you find the minimum suggested communication to keep your stakeholders informed:
#### Plan phases and project strategy
In the following table you find the minimum suggested communication to keep your
| | - | | Available analytics and how to access | - App technical owners<br />- App business owners |
-There are two main categories of users of your apps and resources that Azure AD supports.
- ### Migration states communication dashboard Communicating the overall state of the migration project is crucial, as it shows progress, and helps app owners whose apps are coming up for migration to prepare for the move. You can put together a simple dashboard using Power BI or other reporting tools to provide visibility into the status of applications during the migration.
This ensures app owners know what the app migration and testing schedule are whe
### Best practices
-The following are our customer and partner's success stories, and suggested best practices:
+The following articles are about our customers' and partners' success stories and suggested best practices:
- [Five tips to improve the migration process to Azure Active Directory](https://techcommunity.microsoft.com/t5/Azure-Active-Directory-Identity/Five-tips-to-improve-the-migration-process-to-Azure-Active/ba-p/445364) by Patriot Consulting, a member of our partner network that focuses on helping customers deploy Microsoft cloud solutions securely.
The following are our customer and partner's success stories, and suggested be
### Find your apps
-The first decision point in an application migration is which apps to migrate, which if any should remain, and which apps to deprecate. There is always an opportunity to deprecate the apps that you won't use in your organization. There are several ways to find apps in your organization. While discovering apps, ensure you include in-development and planned apps. Use Azure AD for authentication in all future apps.
-
-Using Active Directory Federation Services (AD FS) to gather a correct app inventory:
+The first decision point in an application migration is which apps to migrate, which if any should remain, and which apps to deprecate. There's always an opportunity to deprecate the apps that you won't use in your organization. There are several ways to find apps in your organization. While discovering apps, ensure you include in-development and planned apps. Use Azure AD for authentication in all future apps.
-- **Use Azure AD Connect Health.** If you have an Azure AD Premium license, we recommend deploying [Azure AD Connect Health](../hybrid/how-to-connect-health-adfs.md) to analyze the app usage in your on-premises environment. You can use the [ADFS application report](./migrate-adfs-application-activity.md) to discover ADFS applications that can be migrated and evaluate the readiness of the application to be migrated. After completing your migration, deploy [Cloud Discovery](/cloud-app-security/set-up-cloud-discovery) that allows you to continuously monitor Shadow IT in your organization once you're in the cloud.
+Discover applications using ADFS:
-- **Use ADFS to Azure AD app migration tool**: If you don't have Azure AD Premium licenses, we recommend using the ADFS to Azure AD app migration tools based on [PowerShell](https://github.com/AzureAD/Deployment-Plans/tree/master/ADFS%20to%20AzureAD%20App%20Migration). Refer to [solution guide](./migrate-adfs-apps-to-azure.md):
+- **Use Azure AD Connect Health for ADFS**: If you have an Azure AD Premium license, we recommend deploying [Azure AD Connect Health](../hybrid/how-to-connect-health-adfs.md) to analyze the app usage in your on-premises environment. You can use the [ADFS application report](./migrate-adfs-application-activity.md) to discover ADFS applications that can be migrated and evaluate the readiness of the application to be migrated.
-- **AD FS log parsing**. Parse the log files from your authentication servers to identify which apps are being used in your environment, and what their typical access patterns and access volumes are.
+- If you don't have Azure AD Premium licenses, we recommend using the ADFS to Azure AD app migration tools based on [PowerShell](https://github.com/AzureAD/Deployment-Plans/tree/master/ADFS%20to%20AzureAD%20App%20Migration). Refer to the [solution guide](./migrate-adfs-apps-to-azure.md).
### Using other identity providers (IdPs)
-For other identity providers (such as Okta or Ping), you can use their tools to export the application inventory.
+- If you're currently using Okta, refer to our [Okta to Azure AD migration guide](migrate-applications-from-okta-to-azure-active-directory.md).
+
+- If you're currently using Ping Federate, then consider using the [Ping Administrative API](https://docs.pingidentity.com/r/en-us/pingfederate-112/pf_admin_api) to discover applications.
+
+- If the applications are integrated with Active Directory, search for service principals or service accounts that may be used for applications.
### Using cloud discovery tools In the cloud environment, you need rich visibility, control over data travel, and sophisticated analytics to find and combat cyber threats across all your cloud services. You can gather your cloud app inventory using the following tools: - **Cloud Access Security Broker (CASB)** – A [CASB](/cloud-app-security/) typically works alongside your firewall to provide visibility into your employees' cloud application usage and helps you protect your corporate data from cybersecurity threats. The CASB report can help you determine the most used apps in your organization, and the early targets to migrate to Azure AD.-- **Cloud Discovery** - By configuring [Cloud Discovery](/cloud-app-security/set-up-cloud-discovery), you gain visibility into the cloud app usage, and can discover unsanctioned or Shadow IT apps.-- **APIs** - For apps connected to cloud infrastructure, you can use the APIs and tools on those systems to begin to take an inventory of hosted apps. In the Azure environment:
+- **Cloud Discovery** - By configuring [Microsoft Defender for Cloud Apps](/defender-cloud-apps/what-is-defender-for-cloud-apps), you gain visibility into the cloud app usage, and can discover unsanctioned or Shadow IT apps.
+- **Azure Hosted Applications** - For apps connected to Azure infrastructure, you can use the APIs and tools on those systems to begin to take an inventory of hosted apps. In the Azure environment:
 - Use the [Get-AzureWebsite](/powershell/module/servicemanagement/azure.service/get-azurewebsite) cmdlet to get information about Azure websites. - Use the [Get-AzureRMWebApp](/powershell/module/azurerm.websites/get-azurermwebapp) cmdlet to get information about your Azure Web Apps.
- - You can find all the apps running on Microsoft IIS from the Windows command line using [AppCmd.exe](/iis/get-started/getting-started-with-iis/getting-started-with-appcmdexe#working-with-sites-applications-virtual-directories-and-application-pools).
- - Use [Applications](/previous-versions/azure/ad/graph/api/entity-and-complex-type-reference#application-entity) and [Service Principals](/previous-versions/azure/ad/graph/api/entity-and-complex-type-reference#serviceprincipal-entity) to get your information on web apps and app instance in a directory in Azure AD.
+ - Query Azure AD looking for [Applications](/previous-versions/azure/ad/graph/api/entity-and-complex-type-reference#application-entity) and [Service Principals](/previous-versions/azure/ad/graph/api/entity-and-complex-type-reference#serviceprincipal-entity).
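As a hedged illustration of that last approach, the sketch below uses the Microsoft Graph .NET SDK (v5-style calls assumed) with Azure.Identity to list application registrations and service principals. The authentication method, scopes, and required permissions (for example, Application.Read.All) depend on your environment and are assumptions here.

```csharp
// Hedged sketch: enumerate app registrations and service principals via Microsoft Graph.
// Assumes the Microsoft Graph .NET SDK v5 and Azure.Identity; adjust auth for your tenant.
using System;
using Azure.Identity;
using Microsoft.Graph;

var credential = new InteractiveBrowserCredential();
var graphClient = new GraphServiceClient(credential, new[] { "https://graph.microsoft.com/.default" });

// GetAsync returns the first page only; a PageIterator would be needed to walk all results.
var applications = await graphClient.Applications.GetAsync();
if (applications?.Value != null)
{
    foreach (var app in applications.Value)
    {
        Console.WriteLine($"App registration: {app.DisplayName}");
    }
}

var servicePrincipals = await graphClient.ServicePrincipals.GetAsync();
if (servicePrincipals?.Value != null)
{
    foreach (var sp in servicePrincipals.Value)
    {
        Console.WriteLine($"Service principal: {sp.DisplayName}");
    }
}
```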
-### Using manual processes
+### Manual discovery process
-Once you have taken the automated approaches described in this article, you have a good handle on your applications. However, you might consider doing the following to ensure you have good coverage across all user access areas:
+Once you've taken the automated approaches described in this article, you have a good handle on your applications. However, you might consider doing the following to ensure you have good coverage across all user access areas:
- Contact the various business owners in your organization to find the applications in use in your organization. - Run an HTTP inspection tool on your proxy server, or analyze proxy logs, to see where traffic is commonly routed. - Review weblogs from popular company portal sites to see what links users access the most.-- Reach out to executives or other key business members to ensure that you have covered the business-critical apps.
+- Reach out to executives or other key business members to ensure that you've covered the business-critical apps.
### Type of apps to migrate Once you find your apps, you identify these types of apps in your organization: -- Apps that use modern authentication protocols such as [Security Assertion Markup Language (SAML)](../fundamentals/auth-saml.md) and [OpenID Connect (OIDC)](../fundamentals/auth-oidc.md) already -- Apps that use legacy authentication such as [Kerberos](https://techcommunity.microsoft.com/t5/itops-talk-blog/deep-dive-how-azure-ad-kerberos-works/ba-p/3070889), [Header-based](application-proxy-configure-single-sign-on-with-headers.md), or NT LAN Manager (NTLM) protocols that you choose to modernize
+- Apps that use modern authentication protocols such as [Security Assertion Markup Language (SAML)](../fundamentals/auth-saml.md) or [OpenID Connect (OIDC)](../fundamentals/auth-oidc.md).
+- Apps that use legacy authentication such as [Kerberos](https://techcommunity.microsoft.com/t5/itops-talk-blog/deep-dive-how-azure-ad-kerberos-works/ba-p/3070889) or NT LAN Manager (NTLM) that you choose to modernize.
- Apps that use legacy authentication protocols that you choose NOT to modernize
+- New Line of Business (LoB) apps
### Apps that use modern authentication already The already modernized apps are the most likely to be moved to Azure AD. These apps already use modern authentication protocols such as SAML or OIDC and can be reconfigured to authenticate with Azure AD.
-We recommend you search and add applications from the [Azure AD app gallery](https://azuremarketplace.microsoft.com/marketplace/apps/category/azure-active-directory-apps). If you don't find them in the gallery, you can still add custom SAML or OIDC apps to Azure AD.
+We recommend you search and add applications from the [Azure AD app gallery](https://azuremarketplace.microsoft.com/marketplace/apps/category/azure-active-directory-apps). If you don't find them in the gallery, you can still onboard a custom application.
### Legacy apps that you choose to modernize
For certain apps using legacy authentication protocols, sometimes modernizing th
- Apps kept on-premises for compliance or control reasons. - Apps connected to an on-premises identity or federation provider that you do not want to change.-- Apps developed using on-premises authentication standards that you have no plans to move
+- Apps developed using on-premises authentication standards that you have no plans to move
Azure AD can bring great benefits to these legacy apps, as you can enable modern Azure AD security and governance features like [Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md), [Conditional Access](../conditional-access/overview.md), [Identity Protection](../identity-protection/index.yml), [Delegated Application Access](./access-panel-manage-self-service-access.md), and [Access Reviews](../governance/manage-user-access-with-access-reviews.md#create-and-perform-an-access-review) against these apps without touching the app at all!
-Start by extending these apps into the cloud through our [Secure Hybrid Access (SHA) partner integrations](secure-hybrid-access.md) with application delivery controllers that you might have deployed already.
+- Start by extending these apps into the cloud with [Azure AD Application Proxy](../app-proxy/application-proxy.md).
+- Or explore using one of our [Secure Hybrid Access (SHA) partner integrations](secure-hybrid-access.md) that you might have deployed already.
### New Line of Business (LoB) apps
-You usually develop LoB apps for your organizationΓÇÖs in-house use. If you have new apps in the pipeline, we recommend using the [Microsoft Identity Platform](../develop/v2-overview.md) to implement OpenID Connect.
+You usually develop LoB apps for your organization's in-house use. If you have new apps in the pipeline, we recommend using the [Microsoft Identity Platform](../develop/v2-overview.md) to implement OIDC.
### Apps to deprecate Apps without clear owners and clear maintenance and monitoring present a security risk for your organization. Consider deprecating applications when: - Their **functionality is highly redundant** with other systems-- There is **no business owner**-- There is clearly **no usage**
+- There's **no business owner**
+- There's clearly **no usage**
We recommend that you **do not deprecate high impact, business-critical applications**. In those cases, work with business owners to determine the right strategy.
We recommend that you **do not deprecate high impact, business-critical applicat
You are successful in this phase with: -- A good understanding of the systems in scope for your migration (that you can retire once you have moved to Azure AD)-- A list of apps that includes:-
- - What systems those apps connect to
- - From where and on what devices users access them
- - Whether they'll be migrated, deprecated, or connected with [Azure AD Connect](../hybrid/whatis-azure-ad-connect.md).
-
-> [!NOTE]
-> You can download the [Application Discovery Worksheet](https://download.microsoft.com/download/2/8/3/283F995C-5169-43A0-B81D-B0ED539FB3DD/Application%20Discovery%20worksheet.xlsx) to record the applications that you want to migrate to Azure AD authentication.
+- A good understanding of which applications are in scope for migration, which require modernization, which stay as-is, and which are candidates for deprecation.
## Phase 2: Classify apps and plan pilot
-Classifying the migration of your apps is an important exercise. Not every app needs to be migrated and transitioned at the same time. Once you have collected information about each of the apps, you can rationalize which apps should be migrated first and which may take added time.
+Classifying the migration of your apps is an important exercise. Not every app needs to be migrated and transitioned at the same time. Once you've collected information about each of the apps, you can rationalize which apps should be migrated first and which may take added time.
### Classify in-scope apps
Applications with **high usage numbers** should receive a higher value than apps
![A diagram of the spectrums of User Volume and User Breadth](media/migrate-apps-to-azure-ad/user-volume-breadth.png)
-Once you have determined values for business criticality and usage, you can then determine the **application lifespan**, and create a matrix of priority. See one such matrix below:
+Once you've determined values for business criticality and usage, you can then determine the **application lifespan** and create a matrix of priority. The following diagram shows one such matrix.
![A triangle diagram showing the relationships between Usage, Expected Lifespan, and Business Criticality](media/migrate-apps-to-azure-ad/triangular-diagram-showing-relationship.png)+ ### Prioritize apps for migration You can choose to begin the app migration with either the lowest priority apps or the highest priority apps based on your organization's needs.
-In a scenario where you may not have experience using Azure AD and Identity services, consider moving your **lowest priority apps** to Azure AD first. This minimizes your business impact, and you can build momentum. Once you have successfully moved these apps and have gained the stakeholderΓÇÖs confidence, you can continue to migrate the other apps.
+In a scenario where you may not have experience using Azure AD and Identity services, consider moving your **lowest priority apps** to Azure AD first. This minimizes your business impact, and you can build momentum. Once you've successfully moved these apps and have gained the stakeholders' confidence, you can continue to migrate the other apps.
-If there is no clear priority, you should consider moving the apps that are in the [Azure AD Gallery](https://azuremarketplace.microsoft.com/marketplace/apps/category/azure-active-directory-apps) first and support multiple identity providers because they are easier to integrate. It is likely that these apps are the **highest-priority apps** in your organization. To help integrate your SaaS applications with Azure AD, we have developed a collection of [tutorials](../saas-apps/tutorial-list.md) that walk you through configuration.
+If there's no clear priority, consider first moving the apps that are in the [Azure AD Gallery](https://azuremarketplace.microsoft.com/marketplace/apps/category/azure-active-directory-apps) and that support multiple identity providers, because they are easier to integrate. It is likely that these apps are the **highest-priority apps** in your organization. To help integrate your SaaS applications with Azure AD, we have developed a collection of [tutorials](../saas-apps/tutorial-list.md) that walk you through configuration.
-When you have a deadline to migrate the apps, these highest priority apps bucket takes the major workload. You can eventually select the lower priority apps as they won't change the cost even though you have moved the deadline.
+When you have a deadline to migrate the apps, the highest-priority bucket takes the major share of the work. You can eventually select the lower priority apps because they won't change the cost even if you move the deadline.
In addition to this classification and depending on the urgency of your migration, you should publish a **migration schedule** within which app owners must engage to have their apps migrated. At the end of this process, you should have a list of all applications in prioritized buckets for migration.
Information that is important to making your migration decision includes:
- **App type** – is it a third-party SaaS app? A custom line-of-business web app? An API? - **Business criticality** – is it high criticality? Low? Or somewhere in between? - **User access volume** – does everyone access this app or just a few people?
+- **User access type** – who needs to access the application: employees, business partners, customers, or all of them?
- **Planned lifespan** – how long will this app be around? Less than six months? More than two years?-- **Current identity provider** – what is the primary IdP for this app?
+- **Current identity provider** – what is the primary IdP for this app? AD FS, Active Directory, or PingFederate?
+- **Security requirements** - does the application require MFA or that users be on the corporate network to access the application?
- **Method of authentication** – does the app authenticate using open standards?-- **Security requirements** - must it be on a corporate network? Requires MFA or registered device? -- **User audience** – employees, partners or internal or external customers? - **Whether you plan to update the app code** – is the app under planned or active development? - **Whether you plan to keep the app on-premises** – do you want to keep the app in your datacenter long term? - **Whether the app depends on other apps or APIs** – does the app currently call into other apps or APIs?
Information that is important to making your migration decision includes:
Other data that helps you later, but that you do not need in order to make an immediate migration decision, includes: - **App URL** – where do users go to access the app?
+- **Application logo** – if you're migrating an application to Azure AD that isn't in the Azure AD app gallery, we recommend that you provide a descriptive logo.
- **App description** – what is a brief description of what the app does? - **App owner** – who in the business is the main POC for the app? - **General comments or notes** – any other general information about the app or business ownership
-Once you have classified your application and documented the details, then be sure to gain business owner buy-in to your planned migration strategy.
+Once you've classified your applications and documented the details, be sure to gain business owner buy-in to your planned migration strategy.
+
+### Application users
+
+There are two main categories of users of your apps and resources that Azure AD supports:
+
+- **Internal:** Employees, contractors, and vendors that have accounts within your identity provider. This might need further pivots with different rules for managers or leadership versus other employees.
+
+- **External:** Vendors, suppliers, distributors, or other business partners that interact with your organization in the regular course of business with [Azure AD B2B collaboration.](../external-identities/what-is-b2b.md)
+
+You can define groups for these users and populate them in diverse ways. You can require that an administrator manually adds members to a group, or you can enable self-service group membership. You can also establish rules that automatically add members to groups based on specified criteria by using [dynamic groups](../enterprise-users/groups-dynamic-membership.md), as in the sketch that follows.
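As a hedged sketch of the dynamic-group approach (the group name, mail nickname, and membership rule are hypothetical examples), you can create a dynamic security group through Microsoft Graph, for instance to gather all B2B guest users:

```azurecli-interactive
# Hypothetical group; members are added automatically by the membership rule (requires Azure AD Premium).
az rest --method POST --url "https://graph.microsoft.com/v1.0/groups" --body '{
  "displayName": "Migrated app - external partners",
  "mailEnabled": false,
  "mailNickname": "migrated-app-external-partners",
  "securityEnabled": true,
  "groupTypes": ["DynamicMembership"],
  "membershipRule": "(user.userType -eq \"Guest\")",
  "membershipRuleProcessingState": "On"
}'
```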
+
+External users may also refer to customers. [Azure AD B2C](../../active-directory-b2c/overview.md), a separate product, supports customer authentication. However, it is outside the scope of this paper.
### Plan a pilot
Don't forget about your external partners. Make sure that they participate in
While some apps are easy to migrate, others may take longer due to multiple servers or instances. For example, SharePoint migration may take longer due to custom sign-in pages.
-Many SaaS app vendors charge for changing the SSO connection. Check with them and plan for this.
+Many SaaS app vendors may not provide a self-service means to reconfigure the application and may charge for changing the SSO connection. Check with them and plan for this.
### App owner sign-off
-Business critical and universally used applications may need a group of pilot users to test the app in the pilot stage. Once you have tested an app in the pre-production or pilot environment, ensure that app business owners sign off on performance prior to the migration of the app and all users to production use of Azure AD for authentication.
+Business critical and universally used applications may need a group of pilot users to test the app in the pilot stage. Once you've tested an app in the pre-production or pilot environment, ensure that app business owners sign off on performance prior to the migration of the app and all users to production use of Azure AD for authentication.
### Plan the security posture
-Before you initiate the migration process, take time to fully consider the security posture you wish to develop for your corporate identity system. This is based on gathering these valuable sets of information: **Identities, devices, and locations that are accessing your data.**
+Before you initiate the migration process, take time to fully consider the security posture you wish to develop for your corporate identity system. This is based on gathering these valuable sets of information: **Identities, devices, and locations that are accessing your applications and data.**
### Identities and data
You can use this information to protect access to all services integrated with A
This also helps you implement the [five steps to securing your identity infrastructure](../../security/fundamentals/steps-secure-identity.md). Use the guidance as a starting point for your organization and adjust the policies to meet your organization's specific requirements.
-### Who is accessing your data?
-
-There are two main categories of users of your apps and resources that Azure AD supports:
--- **Internal:** Employees, contractors, and vendors that have accounts within your identity provider. This might need further pivots with different rules for managers or leadership versus other employees.--- **External:** Vendors, suppliers, distributors, or other business partners that interact with your organization in the regular course of business with [Azure AD B2B collaboration.](../external-identities/what-is-b2b.md)-
-You can define groups for these users and populate these groups in diverse ways. You may choose that an administrator must manually add members into a group, or you can enable self-service group membership. Rules can be established that automatically add members into groups based on the specified criteria using [dynamic groups](../enterprise-users/groups-dynamic-membership.md).
-
-External users may also refer to customers. [Azure AD B2C](../../active-directory-b2c/overview.md), a separate product supports customer authentication. However, it is outside the scope of this paper.
- ### Device/location used to access data The device and location that a user uses to access an app are also important. Devices physically connected to your corporate network are more secure. Connections from outside the network over VPN may need scrutiny.
With these aspects of resource, user, and device in mind, you may choose to use
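For example, you might pilot a report-only Conditional Access policy that requires MFA for a migrated app before enforcing it broadly. A minimal sketch through Microsoft Graph follows, assuming hypothetical pilot group and application IDs:

```azurecli-interactive
# Hypothetical IDs; the policy starts in report-only mode so you can review its impact before enforcing it.
az rest --method POST --url "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" --body '{
  "displayName": "Pilot - require MFA for migrated app",
  "state": "enabledForReportingButNotEnforced",
  "conditions": {
    "users": { "includeGroups": ["<pilot-group-object-id>"] },
    "applications": { "includeApplications": ["<app-client-id>"] },
    "clientAppTypes": ["all"]
  },
  "grantControls": { "operator": "OR", "builtInControls": ["mfa"] }
}'
```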
### Exit criteria
-You are successful in this phase when you:
+You are successful in this phase when you've:
+
+- Fully documented the apps you intend to migrate
+
+- Prioritized apps based on business criticality, usage volume, and lifespan
+
+- Selected apps that represent your requirements for a pilot
-- Know your apps
- - Have fully documented the apps you intend to migrate
- - Have prioritized apps based on business criticality, usage volume, and lifespan
+- Gained business-owner buy-in to your prioritization and strategy
-- Have selected apps that represent your requirements for a pilot-- Business-owner buy-in to your prioritization and strategy-- Understand your security posture needs and how to implement them
+- Understood your security posture needs and how to implement them
## Phase 3: Plan migration and testing
-Once you have gained business buy-in, the next step is to start migrating these apps to Azure AD authentication.
+Once you've gained business buy-in, the next step is to start migrating these apps to Azure AD authentication.
### Migration tools and guidance
-Use the tools and guidance below to follow the precise steps needed to migrate your applications to Azure AD:
+Use the tools and guidance provided to follow the precise steps needed to migrate your applications to Azure AD:
- **General migration guidance** – Use the whitepaper, tools, email templates, and applications questionnaire in the [Azure AD apps migration toolkit](./migration-resources.md) to discover, classify, and migrate your apps. - **SaaS applications** – See our list of [SaaS app tutorials](../saas-apps/tutorial-list.md) and the [Azure AD SSO deployment plan](plan-sso-deployment.md) to walk through the end-to-end process.-- **Applications running on-premises** – Learn all [about the Azure AD Application Proxy](../app-proxy/application-proxy.md) and use the complete [Azure AD Application Proxy deployment plan](https://aka.ms/AppProxyDPDownload) to get going quickly.
+- **Applications running on-premises** – Learn all [about the Azure AD Application Proxy](../app-proxy/application-proxy.md) and use the complete [Azure AD Application Proxy deployment plan](https://aka.ms/AppProxyDPDownload) to get going quickly, or consider our [Secure Hybrid Access partners](secure-hybrid-access.md), which you may already own.
- **Apps you're developing** – Read our step-by-step [integration](../develop/quickstart-register-app.md) and [registration](../develop/quickstart-register-app.md) guidance. After migration, you may choose to send communication informing the users of the successful deployment and remind them of any new steps that they need to take. ### Plan testing
-During the process of the migration, your app may already have a test environment used during regular deployments. You can continue to use this environment for migration testing. If a test environment is not currently available, you may be able to set one up using Azure App Service or Azure Virtual Machines, depending on the architecture of the application. You may choose to set up a separate test Azure AD tenant to use as you develop your app configurations. This tenant will start in a clean state and won't be configured to sync with any system.
+During the process of the migration, your app may already have a test environment used during regular deployments. You can continue to use this environment for migration testing. If a test environment is not currently available, you may be able to set one up using Azure App Service or Azure Virtual Machines, depending on the architecture of the application. You may choose to set up a separate test Azure AD tenant to use as you develop your app configurations. This tenant starts in a clean state and won't be configured to sync with any system.
-Once you have migrated the apps, go to the [Azure portal](https://portal.azure.com/) to test if the migration was a success. Follow the instructions below:
+Once you've migrated the apps, go to the [Azure portal](https://portal.azure.com/) to test if the migration was a success. Follow these instructions:
-- Browse to **Azure Active Directory** > **Enterprise Applications** > **All applications** and find your app from the list.-- Select **Users and groups** to assign at least one user or group to the app.-- Select **Conditional Access**. Review your list of policies and ensure that you are not blocking access to the application with a [conditional access policy](../conditional-access/overview.md).
+1. Select **Enterprise Applications > All applications** and find your app from the list.
+
+2. Select **Manage > Users and groups** to assign at least one user or group to the app.
+
+3. Select **Manage > Conditional Access**. Review your list of policies and ensure that you are not blocking access to the application with a conditional access policy.
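If you prefer to script the user or group assignment in step 2, a hedged Microsoft Graph sketch follows. The IDs are placeholders, and the all-zeros `appRoleId` grants the default access role:

```azurecli-interactive
# Placeholders: <sp-object-id> is the enterprise application's service principal object ID,
# <user-or-group-object-id> is the pilot user or group to assign.
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/servicePrincipals/<sp-object-id>/appRoleAssignedTo" \
  --body '{
    "principalId": "<user-or-group-object-id>",
    "resourceId": "<sp-object-id>",
    "appRoleId": "00000000-0000-0000-0000-000000000000"
  }'
```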
Depending on how you configure your app, verify that SSO works properly. | Authentication type | Testing | | | |
-| **OAuth / OpenID Connect** | Select **Enterprise applications &gt; Permissions** and ensure you have consented to the application to be used in your organization in the user settings for your app. |
+| **OAuth / OpenID Connect** | Select **Enterprise applications &gt; Permissions** and ensure you've consented to the application being used in your organization in the user settings for your app. |
| **SAML-based SSO** | Use the [Test SAML Settings](./debug-saml-sso-issues.md) button found under **Single Sign-On.** | | **Password-Based SSO** | Download and install the [MyApps Secure Sign-in Extension](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510#download-and-install-the-my-apps-secure-sign-in-extension). This extension helps you start any of your organization's cloud apps that require you to use an SSO process. | | **[Application Proxy](../app-proxy/application-proxy.md)** | Ensure your connector is running and assigned to your application. Visit the [Application Proxy troubleshooting guide](../app-proxy/application-proxy-troubleshoot.md) for further assistance. |
-You can test each app by logging in with a test user and make sure all functionality is the same as prior to the migration. If you determine during testing that users will need to update their [MFA](../authentication/howto-mfa-userstates.md) or [SSPR](../authentication/tutorial-enable-sspr.md)settings, or you are adding this functionality during the migration, be sure to add that to your end-user communication plan. See [MFA](https://aka.ms/mfatemplates) and [SSPR](https://aka.ms/ssprtemplates) end-user communication templates.
+You can test each app by logging in with a test user and making sure all functionality is the same as prior to the migration. If you determine during testing that users need to update their [MFA](../authentication/howto-mfa-userstates.md) or [SSPR](../authentication/tutorial-enable-sspr.md) settings, or you are adding this functionality during the migration, be sure to add that to your end-user communication plan. See the [MFA](https://aka.ms/mfatemplates) and [SSPR](https://aka.ms/ssprtemplates) end-user communication templates.
### Troubleshoot
If you run into problems, check out our [apps troubleshooting guide](../app-prov
If your migration fails, the best strategy is to roll back and test. Here are the steps that you can take to mitigate migration issues: - **Take screenshots** of the existing configuration of your app. You can look back if you must reconfigure the app once again.-- You might also consider **providing links for the application to use legacy authentication**, if there were issues with cloud authentication.
+- You might also consider **providing links for the application to use alternative authentication options (legacy or local authentication)**, in case there are issues with cloud authentication.
- Before you complete your migration, **do not change your existing configuration** with the existing identity provider.-- Consider migrating **the apps that support multiple IdPs**. If something goes wrong, you can always change to the preferred IdPΓÇÖs configuration.
+- Be aware of the **apps that support multiple IdPs** since they provide an easier rollback plan.
- Ensure that your app experience has a **Feedback button** or pointers to your **helpdesk** issues. ### Exit criteria
-You are successful in this phase when you have:
+You are successful in this phase when you've:
-- Determined how each app will be migrated
+- Determined how each app will be migrated
- Reviewed the migration tools - Planned your testing including test environments and groups - Planned rollback
We recommend taking the following actions as appropriate to your organization.
### Manage your users' app access
-Once you have migrated the apps, you can enrich your userΓÇÖs experience in many ways
--- Make apps discoverable-- Point your user to the [MyApps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510#download-and-install-the-my-apps-secure-sign-in-extension) portal experience. Here, they can access all cloud-based apps, apps you make available by using [Azure AD Connect](../hybrid/whatis-azure-ad-connect.md), and apps using [Application Proxy](../app-proxy/application-proxy.md) provided they have permissions to access those apps.-
-You can guide your users on how to discover their apps:
--- Use the [Existing Single Sign-on](./view-applications-portal.md) feature to **link your users to any app**-- Enable [Self-Service Application Access](./manage-self-service-access.md) to an app and **let users add apps that you curate**-- [Hide applications from end-users](./hide-application-from-user-portal.md) (default Microsoft apps or other apps) to make the apps they do need more discoverable-
-### Make apps accessible
-
-#### Let users access apps from their mobile devices
-
-Users can access the MyApps portal with Intune-managed browser on their [iOS](./hide-application-from-user-portal.md) 7.0 or later or [Android](./hide-application-from-user-portal.md) devices.
-
-Users can download an Intune-managed browser:
--- **For Android devices**, from the [Google play store](https://play.google.com/store/apps/details?id=com.microsoft.intune)-- **For Apple devices**, from the [Apple App Store](https://apps.apple.com/us/app/intune-company-portal/id719171358) or they can download the [My Apps mobile app for iOS ](https://appadvice.com/app/my-apps-azure-active-directory/824048653)-
-#### Let users open their apps from a browser extension
-
-Users can [download the MyApps Secure Sign-in Extension](https://www.microsoft.com/p/my-apps-secure-sign-in-extension/9pc9sckkzk84?rtc=1&activetab=pivot%3Aoverviewtab) in [Chrome,](https://chrome.google.com/webstore/detail/my-apps-secure-sign-in-ex/ggjhpefgjjfobnfoldnjipclpcfbgbhl) or [Microsoft Edge](https://www.microsoft.com/p/my-apps-secure-sign-in-extension/9pc9sckkzk84?rtc=1&activetab=pivot%3Aoverviewtab) and can launch apps right from their browser bar to:
+Once you've migrated the apps, you can enrich your users' experience in the following ways:
-- Search for their apps and have their most-recently-used apps appear-- Automatically convert internal URLs that you have configured in [Application Proxy](../app-proxy/application-proxy.md) to the appropriate external URLs. Your users can now work with the links they are familiar with no matter where they are.
+- Make apps discoverable by publishing them to the [Microsoft MyApplications portal](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510#download-and-install-the-my-apps-secure-sign-in-extension).
+- Add [app collections](access-panel-collections.md) so users can locate applications based on business function.
+- Let users add their own application bookmarks to the [MyApplications portal](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510#download-and-install-the-my-apps-secure-sign-in-extension).
+- Enable [self-service application access](manage-self-service-access.md) to an app and **let users add apps that you curate**.
+- Optionally [hide applications from end-users](./hide-application-from-user-portal.md).
+- Users can go to [Office.com](https://www.office.com) to **search for their apps and have their most-recently-used apps appear** for them right from where they do work.
+- Users can download the MyApps secure sign-in extension for Chrome or Microsoft Edge so they can launch applications directly from their browser without having to first navigate to MyApplications.
+- Users can access the MyApps portal with an Intune-managed browser on their [iOS 7.0](./hide-application-from-user-portal.md) or later or [Android](./hide-application-from-user-portal.md) devices.
-#### Let users open their apps from Office.com
+ - For **Android devices**, from the [Google play store](https://play.google.com/store/apps/details?id=com.microsoft.intune)
-Users can go to [Office.com](https://www.office.com/) to **search for their apps and have their most-recently-used apps appear** for them right from where they do work.
+ - For **Apple devices**, from the [Apple App Store](https://apps.apple.com/us/app/intune-company-portal/id719171358) or they can download the My Apps mobile app for [iOS](https://appadvice.com/app/my-apps-azure-active-directory/824048653).
### Secure app access
Azure AD provides a centralized access location to manage your migrated apps. Go
- **Automatic provisioning.** Set up [automatic provisioning of users](../app-provisioning/user-provisioning.md) with various third-party SaaS apps that users need to access. In addition to creating user identities, it includes the maintenance and removal of user identities as status or roles change. - **Delegate user access management**. As appropriate, enable self-service application access to your apps and *assign a business approver to approve access to those apps*. Use [Self-Service Group Management](../enterprise-users/groups-self-service-management.md) for groups assigned to collections of apps. - **Delegate admin access.** Use a **Directory Role** to assign an admin role (such as Application administrator, Cloud Application administrator, or Application developer) to your user.
+- **Add applications to Access Packages** to provide governance and attestation.
### Audit and gain insights of your apps You can also use the [Azure portal](https://portal.azure.com/) to audit all your apps from a centralized location, - **Audit your app** using **Enterprise Applications, Audit**, or access the same information from the [Azure AD Reporting API](../reports-monitoring/concept-reporting-api.md) to integrate into your favorite tools.-- **View the permissions for an app** using **Enterprise Applications, Permissions** for apps using OAuth / OpenID Connect.
+- **View the permissions for an app** using **Enterprise Applications, Permissions** for apps using OAuth/OpenID Connect.
- **Get sign-in insights** using **Enterprise Applications, Sign-Ins**. Access the same information from the [Azure AD Reporting API.](../reports-monitoring/concept-reporting-api.md) - **Visualize your app's usage** from the [Azure AD Power BI content pack](../reports-monitoring/howto-use-azure-monitor-workbooks.md)
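For example, a hedged sketch of pulling recent sign-ins for one migrated app from Microsoft Graph (the app display name is a placeholder, and the call assumes you hold an Azure AD Premium license and the required reporting permissions):

```azurecli-interactive
# Placeholder app name; lists the ten most recent sign-ins for that application.
az rest --method GET \
  --url "https://graph.microsoft.com/v1.0/auditLogs/signIns?\$filter=appDisplayName%20eq%20'Contoso%20HR%20portal'&\$top=10" \
  --query "value[].{user:userPrincipalName, app:appDisplayName, status:status.errorCode, time:createdDateTime}" -o table
```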
Many [deployment plans](../fundamentals/active-directory-deployment-plans.md) ar
Visit the following support links to create or track support ticket and monitor health. - **Azure Support:** You can call [Microsoft Support](https://azure.microsoft.com/support) and open a ticket for any Azure Identity deployment issue depending on your Enterprise Agreement with Microsoft.-- **FastTrack**: If you have purchased Enterprise Mobility and Security (EMS) or Azure AD Premium licenses, you are eligible to receive deployment assistance from the [FastTrack program.](/enterprise-mobility-security/solutions/enterprise-mobility-fasttrack-program)
+- **FastTrack**: If you've purchased Enterprise Mobility and Security (EMS) or Azure AD Premium licenses, you are eligible to receive deployment assistance from the [FastTrack program.](/enterprise-mobility-security/solutions/enterprise-mobility-fasttrack-program)
- **Engage the Product Engineering team:** If you are working on a major customer deployment with millions of users, you are entitled to support from the Microsoft account team or your Cloud Solutions Architect. Based on the project's deployment complexity, you can work directly with the [Azure Identity Product Engineering team.](https://portal.azure.com/#blade/Microsoft_Azure_Marketplace/MarketplaceOffersBlade/selectedMenuItemId/solutionProviders) - **Azure AD Identity blog:** Subscribe to the [Azure AD Identity blog](https://techcommunity.microsoft.com/t5/Azure-Active-Directory-Identity/bg-p/Identity) to stay up to date with all the latest product announcements, deep dives, and roadmap information provided directly by the Identity engineering team.
active-directory Bis Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bis-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). * An administrator account with BIS.
-* Region and Country should be passed as 2 or 3 letter code and not full name.
+* Country/region should be passed as a 2- or 3-letter code, not the full name.
* Make sure all existing accounts in BIS have data in sync with Azure AD to avoid duplicate account creation (for example, email in Azure AD should match the email in BIS). ## Step 1. Plan your provisioning deployment
active-directory Easy Metrics Auth0 Connector Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/easy-metrics-auth0-connector-tutorial.md
Complete the following steps to enable Azure AD single sign-on in the Azure port
1. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Identifier** textbox, type the value:
- `urn:auth0:easymetrics:ups-saml-sso`
+ a. In the **Identifier** textbox, type the value provided by [Easy Metrics Auth0 Connector support team](mailto:support@easymetrics.com).
- b. In the **Reply URL** textbox, type the URL:
- `https://easymetrics.auth0.com/login/callback?connection=ups-saml-sso&organization=org_T8ro1Kth3Gleygg5`
+ b. In the **Reply URL** textbox, type the value provided by [Easy Metrics Auth0 Connector support team](mailto:support@easymetrics.com).
- c. In the **Sign on URL** textbox, type the URL:
- `https://azureapp.gcp-easymetrics.com`
+ c. In the **Sign on URL** textbox, type the value provided by [Easy Metrics Auth0 Connector support team](mailto:support@easymetrics.com).
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer.
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure Easy Metrics Auth0 Connector you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
+Once you configure Easy Metrics Auth0 Connector, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Navan Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/navan-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Navan'
+description: Learn how to configure single sign-on between Azure Active Directory and Navan.
++++++++ Last updated : 04/03/2023+++
+# Tutorial: Azure AD SSO integration with Navan
+
+In this tutorial, you'll learn how to integrate Navan with Azure Active Directory (Azure AD). When you integrate Navan with Azure AD, you can:
+
+* Control in Azure AD who has access to Navan.
+* Enable your users to be automatically signed-in to Navan with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Navan single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Navan supports **SP and IDP** initiated SSO.
+* Navan supports **Just In Time** user provisioning.
+
+## Add Navan from the gallery
+
+To configure the integration of Navan into Azure AD, you need to add Navan from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Navan** in the search box.
+1. Select **Navan** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+ Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Azure AD SSO for Navan
+
+Configure and test Azure AD SSO with Navan using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Navan.
+
+To configure and test Azure AD SSO with Navan, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Navan SSO](#configure-navan-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Navan test user](#create-navan-test-user)** - to have a counterpart of B.Simon in Navan that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Navan** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://app.tripactions.com`
+
+1. Click **Save**.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/metadataxml.png)
+
+1. On the **Set up Navan** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
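If you prefer to script this step, a hedged Azure CLI sketch follows; the user principal name and password are placeholders you should replace with values for your own tenant:

```azurecli-interactive
# Placeholder values; create the B.Simon test user from the command line instead of the portal.
az ad user create \
  --display-name "B.Simon" \
  --user-principal-name "B.Simon@contoso.com" \
  --password "<choose-a-strong-initial-password>" \
  --force-change-password-next-sign-in true
```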
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Navan.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Navan**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Navan SSO
+
+To configure single sign-on on **Navan** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Navan support team](mailto:launches@tripactions.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Navan test user
+
+In this section, a user called B.Simon is created in Navan. Navan supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Navan, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to the Navan Sign-on URL where you can initiate the login flow.
+
+* Go to Navan Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Navan for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Navan tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Navan for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Navan, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Workload Identity Federation Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identity-federation-considerations.md
Previously updated : 03/07/2023 Last updated : 03/27/2023
For more information on the scenarios enabled by federated identity credentials,
## General federated identity credential considerations
-*Applies to: applications and user-assigned managed identities (public preview)*
+*Applies to: applications and user-assigned managed identities*
Anyone with permissions to create an app registration and add a secret or certificate can add a federated identity credential to an app. If the **Users can register applications** switch in the [User Settings](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/UserSettings) blade is set to **No**, however, you won't be able to create an app registration or configure the federated identity credential. Find an admin to configure the federated identity credential on your behalf, someone in the Application Administrator or Application Owner roles.
Federated identity credentials don't consume the Azure AD tenant service princip
## Unsupported regions (user-assigned managed identities)
-*Applies to: user-assigned managed identities (public preview)*
+*Applies to: user-assigned managed identities*
The creation of federated identity credentials is available on user-assigned managed identities created in most Azure regions. However, creation of federated identity credentials is **not supported** on user-assigned managed identities in the following regions:
Resources in these regions can still use federated identity credentials created
## Supported signing algorithms and issuers
-*Applies to: applications and user-assigned managed identities (public preview)*
+*Applies to: applications and user-assigned managed identities*
Only issuers that provide tokens signed using the RS256 algorithm are supported for token exchange using workload identity federation. Exchanging tokens signed with other algorithms may work, but hasn't been tested. ## Azure Active Directory issuers aren't supported
-*Applies to: applications and user-assigned managed identities (public preview)*
+*Applies to: applications and user-assigned managed identities*
Creating a federation between two Azure AD identities from the same or different tenants isn't supported. When creating a federated identity credential, configuring the *issuer* (the URL of the external identity provider) with the following values isn't supported:
While it's possible to create a federated identity credential with an Azure AD i
## Time for federated credential changes to propagate
-*Applies to: applications and user-assigned managed identities (public preview)*
+*Applies to: applications and user-assigned managed identities*
It takes time for the federated identity credential to be propagated throughout a region after being initially configured. A token request made several minutes after configuring the federated identity credential may fail because the cache is populated in the directory with old data. During this time window, an authorization request might fail with error message: `AADSTS70021: No matching federated identity record found for presented assertion.`
To avoid this issue, wait a short time after adding the federated identity crede
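For example, a hedged sketch of a retry loop around a federated-credential sign-in; the variables are placeholders for your application's client ID, tenant ID, and the externally issued token:

```azurecli-interactive
# Placeholders; retry while the newly added federated identity credential propagates.
for attempt in 1 2 3 4 5; do
  if az login --service-principal \
       --username "$APP_CLIENT_ID" \
       --tenant "$TENANT_ID" \
       --federated-token "$EXTERNAL_TOKEN"; then
    break
  fi
  echo "Federated credential not ready yet (attempt $attempt); waiting before retrying..."
  sleep 60
done
```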
## Concurrent updates aren't supported (user-assigned managed identities)
-*Applies to: user-assigned managed identities (public preview)*
+*Applies to: user-assigned managed identities*
Creating multiple federated identity credentials under the same user-assigned managed identity concurrently triggers concurrency detection logic, which causes requests to fail with a 409 Conflict HTTP status code.
You can also provision multiple new federated identity credentials sequentially
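For example, a hedged Azure CLI sketch that creates the credentials one at a time rather than in parallel; the resource names, issuer, and subjects are placeholders:

```azurecli-interactive
# Placeholders; create each federated identity credential sequentially to avoid 409 conflicts.
az identity federated-credential create \
  --name fic-github-main \
  --identity-name contoso-uami \
  --resource-group contoso-rg \
  --issuer "https://token.actions.githubusercontent.com" \
  --subject "repo:contoso/sample-app:ref:refs/heads/main" \
  --audiences "api://AzureADTokenExchange"

az identity federated-credential create \
  --name fic-github-release \
  --identity-name contoso-uami \
  --resource-group contoso-rg \
  --issuer "https://token.actions.githubusercontent.com" \
  --subject "repo:contoso/sample-app:ref:refs/heads/release" \
  --audiences "api://AzureADTokenExchange"
```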
## Azure policy
-*Applies to: applications and user-assigned managed identities (public preview)*
+*Applies to: applications and user-assigned managed identities*
It's possible to use a deny [Azure Policy](../../governance/policy/overview.md) as in the following ARM template example:
It's possible to use a deny [Azure Policy](../../governance/policy/overview.md)
*Applies to: user-assigned managed identities*
-The following table describes limits on requests to the user-assigned managed identities (public preview) REST APIS. If you exceed a throttling limit, you receive an HTTP 429 error.
+The following table describes limits on requests to the user-assigned managed identities REST APIs. If you exceed a throttling limit, you receive an HTTP 429 error.
| Operation | Requests-per-second per Azure AD tenant | Requests-per-second per subscription | Requests-per-second per resource | |-|-|-|-|
active-directory Workload Identity Federation Create Trust Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identity-federation-create-trust-gcp.md
- Title: Access Azure resources from Google Cloud without credentials
-description: Access Azure AD protected resources from a service running in Google Cloud without using secrets or certificates. Use workload identity federation to set up a trust relationship between an app in Azure AD and an identity in Google Cloud. The workload running in Google Cloud can get an access token from Microsoft identity platform and access Azure AD protected resources.
-------- Previously updated : 03/06/2023---
-#Customer intent: As an application developer, I want to create a trust relationship with a Google Cloud identity so my service in Google Cloud can access Azure AD protected resources without managing secrets.
--
-# Access Azure AD protected resources from an app in Google Cloud
-
-Software workloads running in Google Cloud need an Azure Active Directory (Azure AD) application to authenticate and access Azure AD protected resources. A common practice is to configure that application with credentials (a secret or certificate). The credentials are used by a Google Cloud workload to request an access token from Microsoft identity platform. These credentials pose a security risk and have to be stored securely and rotated regularly. You also run the risk of service downtime if the credentials expire.
-
-[Workload identity federation](workload-identity-federation.md) allows you to access Azure AD protected resources from services running in Google Cloud without needing to manage secrets. Instead, you can configure your Azure AD application to trust a token issued by Google and exchange it for an access token from Microsoft identity platform.
-
-## Create an app registration in Azure AD
-
-[Create an app registration](/azure/active-directory/develop/quickstart-register-app) in Azure AD.
-
-Take note of the *object ID* of the app (not the application (client) ID) which you need in the following steps. Go to the [list of registered applications](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps) in the Azure portal, select your app registration, and find the **Object ID** in **Overview**->**Essentials**.
-
-## Grant your app permissions to resources
-
-Grant your app the permissions necessary to access the Azure AD protected resources targeted by your software workload running in Google Cloud. For example, [assign the Storage Blob Data Contributor role](../../storage/blobs/assign-azure-role-data-access.md) to your app if your application needs to read, write, and delete blob data in [Azure Storage](../../storage/blobs/storage-blobs-introduction.md).
-
-## Set up an identity in Google Cloud
-
-You need an identity in Google Cloud that can be associated with your Azure AD application. A [service account](https://cloud.google.com/iam/docs/service-accounts), for example, used by an application or compute workload. You can either use the default service account of your Cloud project or create a dedicated service account.
-
-Each service account has a unique ID. When you visit the **IAM & Admin** page in the Google Cloud console, click on **Service Accounts**. Select the service account you plan to use, and copy its **Unique ID**.
--
-Tokens issued by Google to the service account will have this **Unique ID** as the *subject* claim.
-
-The *issuer* claim in the tokens will be `https://accounts.google.com`.
-
-You need these claim values to configure a trust relationship with an Azure AD application, which allows your application to trust tokens issued by Google to your service account.
-
-## Configure an Azure AD app to trust a Google Cloud identity
-
-Configure a federated identity credential on your Azure AD application to set up the trust relationship.
-
-The most important fields for creating the federated identity credential are:
--- *object ID*: the object ID of the app (not the application (client) ID) you previously registered in Azure AD.-- *subject*: must match the `sub` claim in the token issued by another identity provider, in this case Google. This is the Unique ID of the service account you plan to use.-- *issuer*: must match the `iss` claim in the token issued by the identity provider. A URL that complies with the [OIDC Discovery spec](https://openid.net/specs/openid-connect-discovery-1_0.html). Azure AD uses this issuer URL to fetch the keys that are necessary to validate the token. In the case of Google Cloud, the issuer is `https://accounts.google.com`.-- *audiences*: must match the `aud` claim in the token. For security reasons, you should pick a value that is unique for tokens meant for Azure AD. The Microsoft recommended value is `api://AzureADTokenExchange`.-
-The following command configures a federated identity credential:
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli-interactive
-az ad app federated-credential create --id 41be38fd-caac-4354-aa1e-1fdb20e43bfa --parameters credential.json
-("credential.json" contains the following content)
-{
- "name": "GcpFederation",
- "issuer": "https://accounts.google.com",
- "subject": "112633961854638529490",
- "description": "Test GCP federation",
- "audiences": [
- "api://AzureADTokenExchange"
- ]
-}
-```
-
-# [Azure PowerShell](#tab/azure-powershell)
-
-```azurepowershell-interactive
-New-AzADappfederatedidentitycredential -ApplicationObjectId $appObjectId -Audience api://AzureADTokenExchange -Issuer 'https://accounts.google.com' -name 'GcpFederation' -Subject '112633961854638529490'
-```
--
-For more information and examples, see [Create a federated identity credential](workload-identity-federation-create-trust.md).
-
-## Exchange a Google token for an access token
-
-Now that you have configured the Azure AD application to trust the Google service account, you are ready to get a token from Google and exchange it for an access token from Microsoft identity platform. This code runs in an application deployed to Google Cloud and running, for example, on [App Engine](https://cloud.google.com/appengine/docs/standard/).
-
-### Get an ID token for your Google service account
-
-As mentioned earlier, Google cloud resources such as App Engine automatically use the default service account of your Cloud project. You can also configure the app to use a different service account when you deploy your service. Your service can [request an ID token](https://cloud.google.com/compute/docs/instances/verifying-instance-identity#request_signature) for that service account from the metadata server that handles such requests. With this approach, you don't need any keys for your service account: these are all managed by Google.
-
-# [TypeScript](#tab/typescript)
-HereΓÇÖs an example in TypeScript of how to request an ID token from the Google metadata server:
-
-```typescript
-async function getGoogleIDToken() {
- const headers = new Headers();
-
- headers.append("Metadata-Flavor", "Google ");
-
- let aadAudience = "api://AzureADTokenExchange";
-
- const endpoint="http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience="+ aadAudience;
-
- const options = {
- method: "GET",
- headers: headers,
- };
-
- return fetch(endpoint, options);
-}
-```
-
-# [C#](#tab/csharp)
-HereΓÇÖs an example in C# of how to request an ID token from the Google metadata server:
-```csharp
-private string getGoogleIdToken()
-{
- const string endpoint = "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=api://AzureADTokenExchange";
-
- var httpWebRequest = (HttpWebRequest)WebRequest.Create(endpoint);
- //httpWebRequest.ContentType = "application/json";
- httpWebRequest.Accept = "*/*";
- httpWebRequest.Method = "GET";
- httpWebRequest.Headers.Add("Metadata-Flavor", "Google ");
-
- var httpResponse = (HttpWebResponse)httpWebRequest.GetResponse();
-
- using (var streamReader = new StreamReader(httpResponse.GetResponseStream()))
- {
- string result = streamReader.ReadToEnd();
- return result;
- }
-}
-```
-
-# [Java](#tab/java)
-HereΓÇÖs an example in Java of how to request an ID token from the Google metadata server:
-```java
-private String getGoogleIdToken() throws IOException {
- final String endpoint = "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=api://AzureADTokenExchange";
-
- URL url = new URL(endpoint);
- HttpURLConnection httpUrlConnection = (HttpURLConnection) url.openConnection();
-
- httpUrlConnection.setRequestMethod("GET");
- httpUrlConnection.setRequestProperty("Metadata-Flavor", "Google ");
-
- InputStream inputStream = httpUrlConnection.getInputStream();
- InputStreamReader inputStreamReader = new InputStreamReader(inputStream);
- BufferedReader bufferedReader = new BufferedReader(inputStreamReader);
- StringBuffer content = new StringBuffer();
- String inputLine;
-
- while ((inputLine = bufferedReader.readLine()) != null)
- content.append(inputLine);
-
- bufferedReader.close();
-
- return content.toString();
-}
-```
--
-> [!IMPORTANT]
-> The *audience* here needs to match the *audiences* value you configured on your Azure AD application when [creating the federated identity credential](#configure-an-azure-ad-app-to-trust-a-google-cloud-identity).
-
-### Exchange the identity token for an Azure AD access token
-
-Now that your app running in Google Cloud has an identity token from Google, exchange it for an access token from Microsoft identity platform. Use the [Microsoft Authentication Library (MSAL)](/azure/active-directory/develop/msal-overview) to pass the Google token as a client assertion. The following MSAL versions support client assertions:
-- [MSAL Go (Preview)](https://github.com/AzureAD/microsoft-authentication-library-for-go)-- [MSAL Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node)-- [MSAL.NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet)-- [MSAL Python](https://github.com/AzureAD/microsoft-authentication-library-for-python)-- [MSAL Java](https://github.com/AzureAD/microsoft-authentication-library-for-java)-
-Using MSAL, you write a token class (implementing the `TokenCredential` interface) exchange the ID token. The token class is used to with different client libraries to access Azure AD protected resources.
-
-# [TypeScript](#tab/typescript)
-The following TypeScript sample code snippet implements the `TokenCredential` interface, gets an ID token from Google (using the `getGoogleIDToken` method previously defined), and exchanges the ID token for an access token.
-
-```typescript
-const msal = require("@azure/msal-node");
-import {TokenCredential, GetTokenOptions, AccessToken} from "@azure/core-auth"
-
-class ClientAssertionCredential implements TokenCredential {
-
-    clientID: string;
-    tenantID: string;
-    aadAuthority: string;
-
-    constructor(clientID: string, tenantID: string, aadAuthority: string) {
-        this.clientID = clientID;
-        this.tenantID = tenantID;
-        this.aadAuthority = aadAuthority; // https://login.microsoftonline.com/
-    }
-
-    async getToken(scope: string | string[], _options?: GetTokenOptions): Promise<AccessToken> {
-
-        // Normalize the requested scope(s) into an array.
-        const scopes: string[] = typeof scope === "string" ? [scope] : scope;
-
-        // Get the ID token from Google using the getGoogleIDToken method previously defined.
-        // Calling it directly just for clarity; in a real app this should be a callback so a
-        // fresh token is requested whenever one is needed.
-        const response = await getGoogleIDToken();
-        const idToken = await response.text(); // the metadata server returns the raw JWT in the body
-
-        // Pass the Google ID token as a client assertion to the confidential client app.
-        const app = new msal.ConfidentialClientApplication({
-            auth: {
-                clientId: this.clientID,
-                authority: this.aadAuthority + this.tenantID,
-                clientAssertion: idToken,
-            },
-        });
-
-        const aadToken = await app.acquireTokenByClientCredential({ scopes: scopes });
-
-        // Return in the form expected by TokenCredential.getToken.
-        return {
-            token: aadToken.accessToken,
-            expiresOnTimestamp: aadToken.expiresOn.getTime(),
-        };
-    }
-}
-export default ClientAssertionCredential;
-```
-
-# [C#](#tab/csharp)
-
-The following C# sample code snippet implements the `TokenCredential` interface, gets an ID token from Google (using the `getGoogleIDToken` method previously defined), and exchanges the ID token for an access token.
-
-```csharp
-using System;
-using System.Threading.Tasks;
-using Microsoft.Identity.Client;
-using Azure.Core;
-using System.Threading;
-using System.Net;
-using System.IO;
-
-public class ClientAssertionCredential:TokenCredential
-{
- private readonly string clientID;
- private readonly string tenantID;
- private readonly string aadAuthority;
-
- public ClientAssertionCredential(string clientID, string tenantID, string aadAuthority)
- {
- this.clientID = clientID;
- this.tenantID = tenantID;
- this.aadAuthority = aadAuthority; // https://login.microsoftonline.com/
- }
-
- public override AccessToken GetToken(TokenRequestContext requestContext, CancellationToken cancellationToken = default) {
-
- return GetTokenImplAsync(false, requestContext, cancellationToken).GetAwaiter().GetResult();
- }
-
- public override async ValueTask<AccessToken> GetTokenAsync(TokenRequestContext requestContext, CancellationToken cancellationToken = default)
- {
- return await GetTokenImplAsync(true, requestContext, cancellationToken).ConfigureAwait(false);
- }
-
- private async ValueTask<AccessToken> GetTokenImplAsync(bool async, TokenRequestContext requestContext, CancellationToken cancellationToken)
- {
- // calling this directly just for clarity, this should be a callback
- string idToken = getGoogleIdToken();
-
- try
- {
- // pass token as a client assertion to the confidential client app
- var app = ConfidentialClientApplicationBuilder.Create(this.clientID)
- .WithClientAssertion(idToken)
- .Build();
-
-            // Await the token acquisition instead of blocking on .Result.
-            var authResult = await app.AcquireTokenForClient(requestContext.Scopes)
-                .WithAuthority(this.aadAuthority + this.tenantID)
-                .ExecuteAsync(cancellationToken)
-                .ConfigureAwait(false);
-
-            AccessToken token = new AccessToken(authResult.AccessToken, authResult.ExpiresOn);
-
-            return token;
- }
-        catch (Exception)
-        {
-            // Rethrow without losing the original stack trace.
-            throw;
-        }
- }
-}
-```
-
-# [Java](#tab/java)
-
-The following Java sample code snippet implements the `TokenCredential` interface, gets an ID token from Google (using the `getGoogleIDToken` method previously defined), and exchanges the ID token for an access token.
-
-```java
-import java.io.IOException;
-import java.time.Instant;
-import java.time.OffsetDateTime;
-import java.time.ZoneOffset;
-import java.util.HashSet;
-import java.util.Set;
-
-import com.azure.core.credential.AccessToken;
-import com.azure.core.credential.TokenCredential;
-import com.azure.core.credential.TokenRequestContext;
-import com.microsoft.aad.msal4j.ClientCredentialFactory;
-import com.microsoft.aad.msal4j.ClientCredentialParameters;
-import com.microsoft.aad.msal4j.ConfidentialClientApplication;
-import com.microsoft.aad.msal4j.IClientCredential;
-import com.microsoft.aad.msal4j.IAuthenticationResult;
-import reactor.core.publisher.Mono;
-
-public class ClientAssertionCredential implements TokenCredential {
- private String clientID;
- private String tenantID;
- private String aadAuthority;
-
- public ClientAssertionCredential(String clientID, String tenantID, String aadAuthority)
- {
- this.clientID = clientID;
- this.tenantID = tenantID;
- this.aadAuthority = aadAuthority; // https://login.microsoftonline.com/
- }
-
- @Override
- public Mono<AccessToken> getToken(TokenRequestContext requestContext) {
- try {
- // Get the ID token from Google
- String idToken = getGoogleIdToken(); // calling this directly just for clarity, this should be a callback
-
- IClientCredential clientCredential = ClientCredentialFactory.createFromClientAssertion(idToken);
- String authority = String.format("%s%s", aadAuthority, tenantID);
-
-            ConfidentialClientApplication app = ConfidentialClientApplication
-                .builder(clientID, clientCredential)
-                .authority(authority) // use the tenant-specific authority computed above
-                .build();
-
- Set<String> scopes = new HashSet<String>(requestContext.getScopes());
- ClientCredentialParameters clientCredentialParam = ClientCredentialParameters
- .builder(scopes)
- .build();
-
- IAuthenticationResult authResult = app.acquireToken(clientCredentialParam).get();
- Instant expiresOnInstant = authResult.expiresOnDate().toInstant();
- OffsetDateTime expiresOn = OffsetDateTime.ofInstant(expiresOnInstant, ZoneOffset.UTC);
-
- AccessToken accessToken = new AccessToken(authResult.accessToken(), expiresOn);
-
- return Mono.just(accessToken);
- } catch (Exception ex) {
- return Mono.error(ex);
- }
- }
-}
-```
---
-## Access Azure AD protected resources
-
-Your application running in Google Cloud now has an access token issued by Microsoft identity platform. Use the access token to access the Azure AD protected resources that your Azure AD app has permissions to access. As an example, here's how you can access Azure Blob storage using the `ClientAssertionCredential` token class and the Azure Blob Storage client library. When you make requests to the `BlobServiceClient` to access storage, the `BlobServiceClient` calls the `getToken` method on the `ClientAssertionCredential` object to get a fresh ID token and exchange it for an access token.
-
-# [TypeScript](#tab/typescript)
-
-The following TypeScript example initializes a new `ClientAssertionCredential` object and then creates a new `BlobServiceClient` object.
-
-```typescript
-const { BlobServiceClient } = require("@azure/storage-blob");
-
-var storageUrl = "https://<storageaccount>.blob.core.windows.net";
-var clientID:any = "<client-id>";
-var tenantID:any = "<tenant-id>";
-var aadAuthority:any = "https://login.microsoftonline.com/";
-var credential = new ClientAssertionCredential(clientID,
- tenantID,
- aadAuthority);
-
-const blobServiceClient = new BlobServiceClient(storageUrl, credential);
-
-// write code to access Blob storage
-```
-
-# [C#](#tab/csharp)
-
-```csharp
-string clientID = "<client-id>";
-string tenantID = "<tenant-id>";
-string authority = "https://login.microsoftonline.com/";
-string storageUrl = "https://<storageaccount>.blob.core.windows.net";
-
-var credential = new ClientAssertionCredential(clientID,
- tenantID,
- authority);
-
-BlobServiceClient blobServiceClient = new BlobServiceClient(new Uri(storageUrl), credential);
-
-// write code to access Blob storage
-```
-
-# [Java](#tab/java)
-
-```java
-String clientID = "<client-id>";
-String tenantID = "<tenant-id>";
-String authority = "https://login.microsoftonline.com/";
-String storageUrl = "https://<storageaccount>.blob.core.windows.net";
-
-ClientAssertionCredential credential = new ClientAssertionCredential(clientID, tenantID, authority);
-
-BlobServiceClient blobServiceClient = new BlobServiceClientBuilder()
- .endpoint(storageUrl)
- .credential(credential)
- .buildClient();
-
-// write code to access Blob storage
-```
---
-## Next steps
-
-Learn more about [workload identity federation](workload-identity-federation.md).
active-directory Workload Identity Federation Create Trust User Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identity-federation-create-trust-user-assigned-managed-identity.md
Previously updated : 03/06/2023 Last updated : 03/27/2023
zone_pivot_groups: identity-wif-mi-methods
#Customer intent: As an application developer, I want to configure a federated credential on a user-assigned managed identity so I can create a trust relationship with an external identity provider and use workload identity federation to access Azure AD protected resources without managing secrets.
-# Configure a user-assigned managed identity to trust an external identity provider (preview)
+# Configure a user-assigned managed identity to trust an external identity provider
This article describes how to manage a federated identity credential on a user-assigned managed identity in Azure Active Directory (Azure AD). The federated identity credential creates a trust relationship between a user-assigned managed identity and an external identity provider (IdP). Configuring a federated identity credential on a system-assigned managed identity isn't supported.
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-cluster.md
The Azure portal also highlights all the deprecated APIs between your current ve
+## Stop cluster upgrades automatically on API breaking changes (Preview)
++
+To stay within a supported Kubernetes version, you usually have to upgrade your cluster at least once per year and prepare for all possible disruptions. These disruptions include ones caused by API breaking changes, deprecations, and dependencies such as Helm and CSI. It can be difficult to anticipate these disruptions and migrate critical workloads without experiencing any downtime.
+
+AKS now automatically stops upgrade operations consisting of a minor version change if deprecated APIs are detected. This feature alerts you with an error message if it detects usage of APIs that are deprecated in the targeted version.
+
+All of the following criteria must be met in order for the stop to occur:
+
+* The upgrade operation is a Kubernetes minor version change for the cluster control plane
+
+* The Kubernetes version you are upgrading to is 1.26 or later
+
+* If performed via REST, the upgrade operation uses a preview API version of `2023-01-02-preview` or later
+
+* If performed via Azure CLI, the `aks-preview` CLI extension 0.5.134 or later must be installed
+
+* The last seen usage of deprecated APIs for the targeted version you're upgrading to must occur within 12 hours before the upgrade operation. AKS records usage hourly, so usage of deprecated APIs within the most recent hour isn't guaranteed to appear in the detection.
+
+If all of these criteria are true when you attempt an upgrade, you'll receive an error message similar to the following example:
+
+```output
+Bad Request({
+ "code": "ValidationError",
+ "message": "Control Plane upgrade is blocked due to recent usage of a Kubernetes API deprecated in the specified version. Please refer to https://kubernetes.io/docs/reference/using-api/deprecation-guide to migrate the usage. To bypass this error, set IgnoreKubernetesDeprecations in upgradeSettings.overrideSettings. Bypassing this error without migrating usage will result in the deprecated Kubernetes API calls failing. Usage details: 1 error occurred:\n\t* usage has been detected on API flowcontrol.apiserver.k8s.io.prioritylevelconfigurations.v1beta1, and was recently seen at: 2023-03-23 20:57:18 +0000 UTC, which will be removed in 1.26\n\n",
+ "subcode": "UpgradeBlockedOnDeprecatedAPIUsage"
+})
+```
+
+### Mitigating stopped upgrade operations
+
+After receiving the error message, you have two options to mitigate the issue:
+
+#### Remove usage of deprecated APIs (recommended)
+
+To remove usage of deprecated APIs, follow these steps:
+
+1. Remove the deprecated API listed in the error message. To check past usage, enable [container insights][container-insights] and explore the kube audit logs (see the example query after these steps).
+
+2. Wait 12 hours from the time the last deprecated API usage was seen.
+
+3. Retry your cluster upgrade.
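
If you're not sure where the deprecated API calls are coming from, the following is a rough sketch only: it assumes kube-audit diagnostic logs already flow to a Log Analytics workspace, and the workspace GUID and the `flowcontrol.apiserver.k8s.io/v1beta1` filter are placeholders you'd replace with your own values (the exact table and columns can differ by diagnostic setting).

```azurecli-interactive
# Sketch: query the last 12 hours of kube-audit logs for calls to the deprecated API group.
az monitor log-analytics query \
    --workspace "00000000-0000-0000-0000-000000000000" \
    --timespan "PT12H" \
    --analytics-query 'AzureDiagnostics
        | where Category == "kube-audit"
        | where log_s has "flowcontrol.apiserver.k8s.io/v1beta1"
        | project TimeGenerated, log_s
        | take 20'
```
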
+
+#### Bypass validation to ignore API changes
+
+To bypass the validation and ignore API breaking changes, set the `upgrade-settings` property to `IgnoreKubernetesDeprecations`. You need the `aks-preview` Azure CLI extension version 0.5.134 or later. This method isn't recommended, because deprecated APIs in the targeted Kubernetes version may not work at all long term. Remove them as soon as possible after the upgrade completes.
+
+```azurecli-interactive
+az aks update --name myAKSCluster --resource-group myResourceGroup --upgrade-settings IgnoreKubernetesDeprecations --upgrade-override-until 2023-04-01T13:00:00Z
+```
+
+The `upgrade-override-until` property defines the end of the window during which validation is bypassed. If no value is set, the window defaults to three days from the current time. The date and time you specify must be in the future.
+
+> [!NOTE]
+> `Z` is the zone designator for the zero UTC/GMT offset, also known as 'Zulu' time. This example sets the end of the window to `13:00:00` GMT. For more information, see [Combined date and time representations](https://wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations).
+
+After a successful override, performing an upgrade operation will ignore any deprecated API usage for the targeted version.
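
For example (cluster name, resource group, and target version are placeholders), the upgrade can then be retried:

```azurecli-interactive
# Retry the previously blocked minor version upgrade while the override window is active.
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.26.3
```
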
+ ## Customize node surge upgrade > [!IMPORTANT]
This article showed you how to upgrade an existing AKS cluster. To learn more ab
[release-tracker]: release-tracker.md [specific-nodepool]: node-image-upgrade.md#upgrade-a-specific-node-pool [k8s-deprecation]: https://kubernetes.io/blog/2022/11/18/upcoming-changes-in-kubernetes-1-26/#:~:text=A%20deprecated%20API%20is%20one%20that%20has%20been,point%20you%20must%20migrate%20to%20using%20the%20replacement
+[container-insights]:/azure/azure-monitor/containers/container-insights-log-query#resource-logs
aks Use Oidc Issuer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-oidc-issuer.md
Title: Create an OpenID Connect provider for your Azure Kubernetes Service (AKS) cluster description: Learn how to configure the OpenID Connect (OIDC) provider for a cluster in Azure Kubernetes Service (AKS) Previously updated : 02/21/2023 Last updated : 04/04/2023 # Create an OpenID Connect provider on Azure Kubernetes Service (AKS)
Last updated 02/21/2023
AKS rotates the key automatically and periodically. If you don't want to wait, you can rotate the key manually and immediately. The maximum lifetime of the token issued by the OIDC provider is one day. > [!WARNING]
-> Enable or disable OIDC Issuer changes the current service account token issuer to a new value, which can cause down time and restarts the API server. If your application pods using a service token remain in a failed state after you enable or disable the OIDC Issuer, we recommend you manually restart the pods.
+> Enabling the OIDC Issuer on an existing cluster changes the current service account token issuer to a new value, which can cause downtime and restarts the API server. If your application pods that use a service token remain in a failed state after you enable the OIDC Issuer, we recommend that you manually restart the pods.
In this article, you learn how to create, update, and manage the OIDC Issuer for your cluster.
+> [!Important]
+> After the OIDC issuer is enabled on the cluster, disabling it isn't supported.
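
With those caveats in mind, a minimal sketch of enabling the issuer on an existing cluster and reading back the issuer URL (cluster and resource group names are placeholders) looks like this:

```azurecli-interactive
# Enable the OIDC Issuer on an existing cluster; this changes the service account token issuer.
az aks update --resource-group myResourceGroup --name myAKSCluster --enable-oidc-issuer

# Retrieve the issuer URL once the update completes.
az aks show --resource-group myResourceGroup --name myAKSCluster --query "oidcIssuerProfile.issuerUrl" --output tsv
```
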
+ ## Prerequisites * The Azure CLI version 2.42.0 or higher. Run `az --version` to find your version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
az aks oidc-issuer rotate-signing-keys -n myAKSCluster -g myResourceGroup
[az-aks-show]: /cli/azure/aks#az-aks-show [az-aks-oidc-issuer]: /cli/azure/aks/oidc-issuer [azure-ad-workload-identity-overview]: workload-identity-overview.md
-[secure-pod-network-traffic]: use-network-policies.md
+[secure-pod-network-traffic]: use-network-policies.md
app-service Tutorial Java Tomcat Connect Managed Identity Postgresql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-tomcat-connect-managed-identity-postgresql-database.md
Follow these steps to create an Azure Database for Postgres in your subscription
--location $LOCATION \ --admin-user $POSTGRESQL_ADMIN_USER \ --admin-password $POSTGRESQL_ADMIN_PASSWORD \
- --public-network-access 0.0.0.0 \
+ --public-access 0.0.0.0 \
--sku-name Standard_D2s_v3 ```
application-gateway Key Vault Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/key-vault-certs.md
Application Gateway uses a secret identifier in Key Vault to reference the certi
The Azure portal supports only Key Vault certificates, not secrets. Application Gateway still supports referencing secrets from Key Vault, but only through non-portal resources like PowerShell, the Azure CLI, APIs, and Azure Resource Manager templates (ARM templates).
-References to Key Vaults in other Azure subscriptions is supported, but must be configured via ARM Template, Azure PowerShell, CLI, Bicep, etc. Cross-subscription key vault configuration is not supported by Application Gateway via Azure Portal today.
+References to Key Vaults in other Azure subscriptions are supported, but must be configured via ARM Template, Azure PowerShell, CLI, Bicep, etc. Cross-subscription key vault configuration is not supported by Application Gateway via Azure portal today.
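
For example, a hedged sketch of adding a Key Vault certificate reference with the Azure CLI (gateway name, resource group, and the secret identifier are placeholders); the same command applies when the vault lives in another subscription that your identity can access:

```azurecli-interactive
# Reference the certificate by its Key Vault secret identifier (versionless, so rotation is picked up).
az network application-gateway ssl-cert create \
    --resource-group MyResourceGroup \
    --gateway-name MyAppGateway \
    --name MyKeyVaultCert \
    --key-vault-secret-id "https://<vault-name>.vault.azure.net/secrets/<certificate-name>"
```
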
## Certificate settings in Key Vault
When you're using a restricted Key Vault, use the following steps to configure A
1. In the Azure portal, in your Key Vault, select **Networking**. 1. On the **Firewalls and virtual networks** tab, select **Selected networks**.
-1. For **Virtual networks**, select **+ Add existing virtual networks**, and then add the virtual network and subnet for your Application Gateway instance. During the process, also configure the `Microsoft.KeyVault` service endpoint by selecting its checkbox.
+1. For **Virtual networks**, select **+ Add existing virtual networks**, and then add the virtual network and subnet for your Application Gateway instance. If prompted, clear the _Do not configure 'Microsoft.KeyVault' service endpoint(s) at this time_ checkbox so that the `Microsoft.KeyVault` service endpoint is enabled on the subnet.
1. Select **Yes** to allow trusted services to bypass the Key Vault's firewall. ![Screenshot that shows selections for configuring Application Gateway to use firewalls and virtual networks.](media/key-vault-certs/key-vault-firewall.png)
application-gateway Overview V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview-v2.md
Previously updated : 03/13/2023 Last updated : 04/03/2023
The Standard_v2 and WAF_v2 SKU is not currently available in the following regio
- UK North - UK South2-- South Africa West - China East - China North - US DOD East - US DOD Central-- US Gov Central-- Qatar Central ## Pricing
applied-ai-services Resource Customer Stories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/resource-customer-stories.md
The following customers and partners have adopted Form Recognizer across a wide
||-|-| | **Acumatica** | [**Acumatica**](https://www.acumatica.com/) is a technology provider that develops cloud and browser-based enterprise resource planning (ERP) software for small and medium-sized businesses (SMBs). To bring expense claims into the modern age, Acumatica incorporated Form Recognizer into its native application. The Form Recognizer's prebuilt-receipt API and machine learning capabilities are used to automatically extract data from receipts. Acumatica's customers can file multiple, error-free claims in a matter of seconds, freeing up more time to focus on other important tasks. | | | **Air Canada** | In September 2021, [**Air Canada**](https://www.aircanada.com/) was tasked with verifying the COVID-19 vaccination status of thousands of worldwide employees in only two months. After realizing manual verification would be too costly and complex within the time constraint, Air Canada turned to its internal AI team for an automated solution. The AI team partnered with Microsoft and used Form Recognizer to roll out a fully functional, accurate solution within weeks. This partnership met the government mandate on time and saved thousands of hours of manual work. | [Customer story](https://customers.microsoft.com/story/1505667713938806113-air-canada-travel-transportation-azure-form-recognizer)|
-|**Arkas Logistics** | [**Arkas Logistics**](http://www.arkaslojistik.com.tr/) is operates under the umbrella of Arkas Holding, T├╝rkiye's leading holding institution and operating in 23 countries. During the COVID-19 crisis, the company has been able to provide outstanding, complete logistical services thanks to its focus on contactless operation and digitalization steps. Form Recognizer powers a solution that maintains the continuity of the supply chain and allows for uninterrupted service. ||
+|**Arkas Logistics** | [**Arkas Logistics**](http://www.arkaslojistik.com.tr/) operates under the umbrella of Arkas Holding, Türkiye's leading holding institution, and operates in 23 countries/regions. During the COVID-19 crisis, the company has been able to provide outstanding, complete logistical services thanks to its focus on contactless operation and digitalization steps. Form Recognizer powers a solution that maintains the continuity of the supply chain and allows for uninterrupted service. ||
|**Automation Anywhere**| [**Automation Anywhere**](https://www.automationanywhere.com/) is on a singular and unwavering mission to democratize automation by liberating teams from mundane, repetitive tasks, and allowing more time for innovation and creativity with cloud-native robotic process automation (RPA)software. To protect the citizens of the United Kingdom, healthcare providers must process tens of thousands of COVID-19 tests daily, each one accompanied by a form for the World Health Organization (WHO). Manually completing and processing these forms would potentially slow testing and divert resources away from patient care. In response, Automation Anywhere built an AI-powered bot to help a healthcare provider automatically process and submit the COVID-19 test forms at scale. | | |**AvidXchange**| [**AvidXchange**](https://www.avidxchange.com/) has developed an accounts payable automation solution applying Form Recognizer. AvidXchange partners with Azure Cognitive Services to deliver an accounts payable automation solution for the middle market. Customers benefit from faster invoice processing times and increased accuracy to ensure their suppliers are paid the right amount, at the right time. || |**Blue Prism**| [**Blue Prism**](https://www.blueprism.com/) Decipher is an AI-powered document processing capability that's directly embedded into the company's connected-RPA platform. Decipher works with Form Recognizer to help organizations process forms faster and with less human effort. One of Blue Prism's customers has been testing the solution to automate invoice handling as part of its procurement process. ||
applied-ai-services Anomaly Feedback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/anomaly-feedback.md
-+ Last updated 11/24/2020
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/whats-new.md
-+ Last updated 12/16/2022
azure-cache-for-redis Cache Best Practices Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-scale.md
For more information on scaling and memory, depending on your tier see either:
If preserving the data in the cache isn't a requirement, consider flushing the data prior to scaling. Flushing the cache helps the scaling operation complete more quickly so the new capacity is available sooner.
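
When you're ready to scale, a minimal sketch of a scale operation from the Azure CLI (cache name, resource group, and target size are placeholders):

```azurecli-interactive
# Scale an existing cache to a different tier or size, for example Premium P1.
az redis update --name myCache --resource-group myResourceGroup --sku Premium --vm-size p1
```
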
+## Scaling Enterprise tier caches
+
+Because the _Enterprise_ and _Enterprise Flash_ tiers are built on Redis Enterprise rather than open-source Redis, there are some differences in scaling best practices. See [Best Practices for the Enterprise and Enterprise Flash tiers](cache-best-practices-enterprise-tiers.md) for more information.
+ ## Next steps - [Configure your maxmemory-reserved setting](cache-best-practices-memory-management.md#configure-your-maxmemory-reserved-setting)
azure-cache-for-redis Cache Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-overview.md
Azure Cache for Redis provides an in-memory data store based on the [Redis](http
Azure Cache for Redis offers both the Redis open-source (OSS Redis) and a commercial product from Redis Inc. (Redis Enterprise) as a managed service. It provides secure and dedicated Redis server instances and full Redis API compatibility. The service is operated by Microsoft, hosted on Azure, and usable by any application within or outside of Azure.
-Azure Cache for Redis can be used as a distributed data or content cache, a session store, a message broker, and more. It can be deployed as a standalone. Or, it can be deployed along with other Azure database services, such as Azure SQL or Azure Cosmos DB.
+Azure Cache for Redis can be used as a distributed data or content cache, a session store, a message broker, and more. It can be deployed standalone. Or, it can be deployed along with other Azure database services, such as Azure SQL or Azure Cosmos DB.
## Key scenarios
The [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/
| [Service Level Agreement (SLA)](https://azure.microsoft.com/support/legal/sla/cache/v1_0/) |-|✔|✔|✔|✔|
| Data encryption in transit |✔|✔|✔|✔|✔|
| [Network isolation](cache-private-link.md) |✔|✔|✔|✔|✔|
-| [Scaling](cache-how-to-scale.md) |✔|✔|✔|-|-|
+| [Scaling](cache-how-to-scale.md) |✔|✔|✔|Preview|Preview|
| [OSS clustering](cache-how-to-premium-clustering.md) |-|-|✔|✔|✔|
| [Data persistence](cache-how-to-premium-persistence.md) |-|-|✔|Preview|Preview|
| [Zone redundancy](cache-how-to-zone-redundancy.md) |-|-|✔|✔|✔|
-| [Geo-replication](cache-how-to-geo-replication.md) |-|-|✔|✔|✔|
+| [Geo-replication](cache-how-to-geo-replication.md) |-|-|✔ (Passive) |✔ (Active) |✔ (Active) |
+| [Connection audit logs](cache-monitor-diagnostic-settings.md) |-|-|✔ (Poll-based)|✔ (Event-based)|✔ (Event-based)|
| [Redis Modules](cache-redis-modules.md) |-|-|-|✔|Preview|
| [Import/Export](cache-how-to-import-export-data.md) |-|-|✔|✔|✔|
| [Reboot](cache-administration.md#reboot) |✔|✔|✔|-|-|
| [Scheduled updates](cache-administration.md#schedule-updates) |✔|✔|✔|-|-|

> [!NOTE]
-> The Enterprise Flash tier currently supports only the RedisJSON and RediSearch modules in preview.
+> The Enterprise Flash tier currently supports only the RediSearch module (in preview) and the RedisJSON module.
### Choosing the right tier
azure-maps Azure Maps Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-authentication.md
# Authentication with Azure Maps
-Azure Maps supports three ways to authenticate requests: Shared Key authentication, [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) authentication, and Shared Access Signature (SAS) Token authentication. This article explains authentication methods to help guide your implementation of Azure Maps services. The article also describes other account controls such as disabling local authentication for Azure Policy and Cross-Origin Resource Sharing (CORS).
+Azure Maps supports three ways to authenticate requests: Shared Key authentication, [Azure Active Directory (Azure AD)] authentication, and Shared Access Signature (SAS) Token authentication. This article explains authentication methods to help guide your implementation of Azure Maps services. The article also describes other account controls such as disabling local authentication for Azure Policy and Cross-Origin Resource Sharing (CORS).
> [!NOTE]
-> To improve secure communication with Azure Maps, we now support Transport Layer Security (TLS) 1.2, and we're retiring support for TLS 1.0 and 1.1. If you currently use TLS 1.x, evaluate your TLS 1.2 readiness and develop a migration plan with the testing described in [Solving the TLS 1.0 Problem](/security/solving-tls1-problem).
+> To improve secure communication with Azure Maps, we now support Transport Layer Security (TLS) 1.2, and we're retiring support for TLS 1.0 and 1.1. If you currently use TLS 1.x, evaluate your TLS 1.2 readiness and develop a migration plan with the testing described in [Solving the TLS 1.0 Problem].
## Shared Key authentication
-For information about viewing your keys in the Azure portal, see [Manage authentication](./how-to-manage-authentication.md#view-authentication-details).
+For information about viewing your keys in the Azure portal, see [View authentication details].
Primary and secondary keys are generated after the Azure Maps account is created. You're encouraged to use the primary key as the subscription key when calling Azure Maps with shared key authentication. Shared Key authentication passes a key generated by an Azure Maps account to an Azure Maps service. For each request to Azure Maps services, add the _subscription key_ as a parameter to the URL. The secondary key can be used in scenarios like rolling key changes.
https://atlas.microsoft.com/mapData/upload?api-version=1.0&dataFormat=zip&subscr
``` > [!IMPORTANT]
-> Primary and Secondary keys should be treated as sensitive data. The shared key is used to authenticate all Azure Maps REST APIs. Users who use a shared key should abstract the API key away, either through environment variables or secure secret storage, where it can be managed centrally.
+> Primary and Secondary keys should be treated as sensitive data. The shared key is used to authenticate all Azure Maps REST API. Users who use a shared key should abstract the API key away, either through environment variables or secure secret storage, where it can be managed centrally.
## Azure AD authentication
Azure Maps accepts **OAuth 2.0** access tokens for Azure AD tenants associated w
- Partner applications that use permissions delegated by users - Managed identities for Azure resources
-Azure Maps generates a _unique identifier_ (client ID) for each Azure Maps account. You can request tokens from Azure AD when you combine this client ID with additional parameters.
+Azure Maps generates a _unique identifier_ (client ID) for each Azure Maps account. You can request tokens from Azure AD when you combine this client ID with other parameters.
-For more information about how to configure Azure AD and request tokens for Azure Maps, see [Manage authentication in Azure Maps](./how-to-manage-authentication.md).
+For more information about how to configure Azure AD and request tokens for Azure Maps, see [Manage authentication in Azure Maps].
-For general information about authenticating with Azure AD, see [Authentication vs. authorization](../active-directory/develop/authentication-vs-authorization.md).
+For general information about authenticating with Azure AD, see [Authentication vs. authorization].
## Managed identities for Azure resources and Azure Maps
-[Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) provide Azure services with an automatically managed application based security principal that can authenticate with Azure AD. With Azure role-based access control (Azure RBAC), the managed identity security principal can be authorized to access Azure Maps services. Some examples of managed identities include: Azure App Service, Azure Functions, and Azure Virtual Machines. For a list of managed identities, see [Azure services that can use managed identities to access other services](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md). For more information on managed identities, see [Manage authentication in Azure Maps](./how-to-manage-authentication.md).
+[Managed identities for Azure resources] provide Azure services with an automatically managed application based security principal that can authenticate with Azure AD. With Azure role-based access control (Azure RBAC), the managed identity security principal can be authorized to access Azure Maps services. Some examples of managed identities include: Azure App Service, Azure Functions, and Azure Virtual Machines. For a list of managed identities, see [Azure services that can use managed identities to access other services]. For more information on managed identities, see [Manage authentication in Azure Maps].
### Configure application Azure AD authentication
-Applications will authenticate with the Azure AD tenant using one or more supported scenarios provided by Azure AD. Each Azure AD application scenario represents different requirements based on business needs. Some applications may require user sign-in experiences and other applications may require an application sign-in experience. For more information, see [Authentication flows and application scenarios](../active-directory/develop/authentication-flows-app-scenarios.md).
+Applications authenticate with the Azure AD tenant using one or more supported scenarios provided by Azure AD. Each Azure AD application scenario represents different requirements based on business needs. Some applications may require user sign-in experiences and other applications may require an application sign-in experience. For more information, see [Authentication flows and application scenarios].
After the application receives an access token, the SDK and/or application sends an HTTPS request with the following set of required HTTP headers in addition to other REST API HTTP headers: | Header Name | Value | | :- | : |
-| x-ms-client-id | 30d7cc….9f55 |
-| Authorization | Bearer eyJ0e….HNIVN |
+| x-ms-client-id | 30d7cc…9f55 |
+| Authorization | Bearer eyJ0e…HNIVN |
-> [!NOTE]
+> [!NOTE]
> `x-ms-client-id` is the Azure Maps account-based GUID that appears on the Azure Maps authentication page. Here's an example of an Azure Maps route request that uses an Azure AD OAuth Bearer token:
Here's an example of an Azure Maps route request that uses an Azure AD OAuth Bea
GET /route/directions/json?api-version=1.0&query=52.50931,13.42936:52.50274,13.43872 Host: atlas.microsoft.com x-ms-client-id: 30d7cc….9f55
-Authorization: Bearer eyJ0e….HNIVN
+Authorization: Bearer eyJ0e…HNIVN
```
-For information about viewing your client ID, see [View authentication details](./how-to-manage-authentication.md#view-authentication-details).
+For information about viewing your client ID, see [View authentication details].
> [!TIP] >
For information about viewing your client ID, see [View authentication details](
### Prerequisites
-If you're new to Azure RBAC, [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) overview provides Principal types are granted a set of permissions, also known as a role definition. A role definition provides permissions to REST API actions. Azure Maps supports access to all principal types for [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) including: individual Azure AD users, groups, applications, Azure resources, and Azure managed identities. Applying access to one or more Azure Maps accounts is known as a scope. A role assignment is created when a principal, role definition, and scope are applied.
+If you're new to Azure RBAC, the [Azure role-based access control (Azure RBAC)] overview is a good starting point. Principal types are granted a set of permissions, also known as a role definition. A role definition provides permissions to REST API actions. Azure Maps supports access to all principal types for [Azure role-based access control (Azure RBAC)], including individual Azure AD users, groups, applications, Azure resources, and Azure managed identities. Applying access to one or more Azure Maps accounts is known as a scope. A role assignment is created when a principal, role definition, and scope are applied.
### Overview

The next sections discuss concepts and components of Azure Maps integration with Azure RBAC. As part of the process to set up your Azure Maps account, an Azure AD directory is associated with the Azure subscription in which the Azure Maps account resides.
-When you configure Azure RBAC, you choose a security principal and apply it to a role assignment. To learn how to add role assignments on the Azure portal, see [Assign Azure roles](../role-based-access-control/role-assignments-portal.md).
+When you configure Azure RBAC, you choose a security principal and apply it to a role assignment. To learn how to add role assignments on the Azure portal, see [Assign Azure roles using the Azure portal].
### Picking a role definition
The following role definition types exist to support application scenarios.
| : | :- | | Azure Maps Search and Render Data Reader | Provides access to only search and render Azure Maps REST APIs to limit access to basic web browser use cases. | | Azure Maps Data Reader | Provides access to immutable Azure Maps REST APIs. |
-| Azure Maps Data Contributor | Provides access to mutable Azure Maps REST APIs. Mutability is defined by the actions: write and delete. |
+| Azure Maps Data Contributor | Provides access to mutable Azure Maps REST APIs. Mutability is defined by the write and delete actions. |
| Custom Role Definition | Create a crafted role to enable flexible restricted access to Azure Maps REST APIs. |

Some Azure Maps services may require elevated privileges to perform write or delete actions on Azure Maps REST APIs. The Azure Maps Data Contributor role is required for services that provide write or delete actions. The following table describes which services require the Azure Maps Data Contributor role when using write or delete actions. When only read actions are required, the Azure Maps Data Reader role can be used in place of the Azure Maps Data Contributor role.
-| Azure Maps service | Azure Maps Role Definition |
-| : | :-- |
-| [Data](/rest/api/maps/data) | Azure Maps Data Contributor |
-| [Creator](/rest/api/maps-creator/) | Azure Maps Data Contributor |
-| [Spatial](/rest/api/maps/spatial) | Azure Maps Data Contributor |
-| Batch [Search](/rest/api/maps/search) and [Route](/rest/api/maps/route) | Azure Maps Data Contributor |
+| Azure Maps service | Azure Maps Role Definition |
+| :--| :-- |
+| [Data] | Azure Maps Data Contributor |
+| [Creator] | Azure Maps Data Contributor |
+| [Spatial] | Azure Maps Data Contributor |
+| Batch [Search] and [Route] | Azure Maps Data Contributor |
-For information about viewing your Azure RBAC settings, see [How to configure Azure RBAC for Azure Maps](./how-to-manage-authentication.md).
+For information about viewing your Azure RBAC settings, see [How to configure Azure RBAC for Azure Maps].
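
As an illustration only (the principal object ID, subscription, resource group, and account name are placeholders), one of the built-in roles can be assigned at the account scope with the Azure CLI:

```azurecli-interactive
# Assign the built-in Azure Maps Data Reader role, scoped to a single Azure Maps account.
az role assignment create \
    --assignee "<principal-object-id>" \
    --role "Azure Maps Data Reader" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Maps/accounts/<account-name>"
```
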
#### Custom role definitions
-One aspect of application security is the principle of least privilege, the practice of limiting access rights to only those needed to do the job at hand. To accomplish this, create custom role definitions that support use cases, which require further granularity to access control. To create a custom role definition, select specific data actions to include or exclude for the definition.
+One aspect of application security is the principle of least privilege: the practice of limiting access rights to only those required for the job at hand. You can limit access rights by creating custom role definitions for use cases that need more granular access control. To create a custom role definition, select the specific data actions to include or exclude in the definition.
-The custom role definition can then be used in a role assignment for any security principal. To learn more about Azure custom role definitions, see [Azure custom roles](../role-based-access-control/custom-roles.md).
+The custom role definition can then be used in a role assignment for any security principal. To learn more about Azure custom role definitions, see [Azure custom roles].
Here are some example scenarios where custom roles can improve application security.
-| Scenario | Custom Role Data Action(s) |
-| :- | : |
-| A public facing or interactive sign-in web page with base map tiles and no other REST APIs. | `Microsoft.Maps/accounts/services/render/read` |
-| An application, which only requires reverse geocoding and no other REST APIs. | `Microsoft.Maps/accounts/services/search/read` |
-| A role for a security principal, which requests a reading of Azure Maps Creator based map data and base map tile REST APIs. | `Microsoft.Maps/accounts/services/data/read`, `Microsoft.Maps/accounts/services/render/read` |
-| A role for a security principal, which requires reading, writing, and deleting of Creator based map data. This can be defined as a map data editor role, but doesn't allow access to other REST APIs like base map tiles. | `Microsoft.Maps/accounts/services/data/read`, `Microsoft.Maps/accounts/services/data/write`, `Microsoft.Maps/accounts/services/data/delete` |
+| Scenario | Custom Role Data Action(s) |
+| :-- | :- |
+| A public facing or interactive sign-in web page with base map tiles and no other REST APIs.| `Microsoft.Maps/accounts/services/render/read` |
+| An application, which only requires reverse geocoding and no other REST APIs. | `Microsoft.Maps/accounts/services/search/read` |
+| A role for a security principal, which requests a reading of Azure Maps Creator based map data and base map tile REST APIs. | `Microsoft.Maps/accounts/services/data/read`, `Microsoft.Maps/accounts/services/render/read` |
+| A role for a security principal, which requires reading, writing, and deleting of Creator based map data. This can be defined as a map data editor role, but it doesn't allow access to other REST APIs like base map tiles. | `Microsoft.Maps/accounts/services/data/read`, `Microsoft.Maps/accounts/services/data/write`, `Microsoft.Maps/accounts/services/data/delete` |
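
Building on the first scenario above, here's a hedged sketch of creating such a custom role with the Azure CLI; the role name, description, and subscription ID are placeholders, and only the data action comes from the scenarios table:

```azurecli-interactive
# Sketch of a custom role limited to rendering base map tiles.
cat > maps-render-reader.json <<'EOF'
{
  "Name": "Azure Maps Render Reader (custom)",
  "IsCustom": true,
  "Description": "Read access to Azure Maps render APIs only.",
  "Actions": [],
  "NotActions": [],
  "DataActions": [ "Microsoft.Maps/accounts/services/render/read" ],
  "NotDataActions": [],
  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
}
EOF

az role definition create --role-definition @maps-render-reader.json
```
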
### Understand scope
-When creating a role assignment, it's defined within the Azure resource hierarchy. At the top of the hierarchy is a [management group](../governance/management-groups/overview.md) and the lowest is an Azure resource, like an Azure Maps account.
+A role assignment is defined within the Azure resource hierarchy. The top of the hierarchy is a [management group] and the lowest is an Azure resource, like an Azure Maps account.
Assigning a role assignment to a resource group can enable access to multiple Azure Maps accounts or resources in the group. > [!TIP]
Assigning a role assignment to a resource group can enable access to multiple Az
## Disable local authentication
-Azure Maps accounts support the standard Azure property in the [Management API](/rest/api/maps-management/) for `Microsoft.Maps/accounts` called `disableLocalAuth`. When `true`, all authentication to the Azure Maps data-plane REST API is disabled, except [Azure AD authentication](./azure-maps-authentication.md#azure-ad-authentication). This is configured using Azure Policy to control distribution and management of shared keys and SAS tokens. For more information, see [What is Azure Policy?](../governance/policy/overview.md).
+Azure Maps accounts support the standard Azure property in the [Management API] for `Microsoft.Maps/accounts` called `disableLocalAuth`. When `true`, all authentication to the Azure Maps data-plane REST API is disabled, except [Azure AD authentication]. This is configured using Azure Policy to control distribution and management of shared keys and SAS tokens. For more information, see [What is Azure Policy?].
-Disabling local authentication doesn't take effect immediately. Allow a few minutes for the service to block future authentication requests. To re-enable local authentication, set the property to `false` and after a few minutes local authentication will resume.
+Disabling local authentication doesn't take effect immediately. Allow a few minutes for the service to block future authentication requests. To re-enable local authentication, set the property to `false` and after a few minutes local authentication resumes.
```json {
Disabling local authentication doesn't take effect immediately. Allow a few minu
Shared Access Signature token authentication is in preview.
-Shared access signature (SAS) tokens are authentication tokens created using the JSON Web token (JWT) format and are cryptographically signed to prove authentication for an application to the Azure Maps REST API. A SAS token is created by first integrating a [user-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md) with an Azure Maps account in your Azure subscription. The user-assigned managed identity is given authorization to the Azure Maps account through Azure RBAC using one of the built-in or custom role definitions.
+Shared access signature (SAS) tokens are authentication tokens created using the JSON Web token (JWT) format and are cryptographically signed to prove authentication for an application to the Azure Maps REST API. A SAS token is created by integrating a [user-assigned managed identity] with an Azure Maps account in your Azure subscription. The user-assigned managed identity is given authorization to the Azure Maps account through Azure RBAC using either built-in or custom role definitions.
Functional key differences of SAS token from Azure AD Access tokens:
Functional key differences of SAS token from Azure AD Access tokens:
- Private keys of the token are the primary and secondary keys of an Azure Maps account resource. - Service Principal object for authorization is supplied by a user-assigned managed identity.
-SAS tokens are immutable. This means that once a token is created, the SAS token is valid until the expiry has been met and the configuration of the allowed regions, rate limits, and user-assigned managed identity can't be changed. Read more below on [understanding access control](./azure-maps-authentication.md#understand-sas-token-access-control) for SAS token revocation and changes to access control.
+SAS tokens are immutable: once a token is created, it's valid until its expiry, and the configuration of the allowed regions, rate limits, and user-assigned managed identity can't be changed. For SAS token revocation and changes to access control, see [understanding access control].
### Understand SAS token rate limits #### SAS token maximum rate limit can control billing for an Azure Maps resource
-By specifying a maximum rate limit on the token (`maxRatePerSecond`), the excess rate won't be billed to the account allowing you to set an upper limit of billable transactions for the account, when using the token. However, the application will receive client error responses with `429 (TooManyRequests)` for all transactions once that limit it reached. It's the responsibility of the application to manage retry and distribution of SAS tokens. There's no limit on how many SAS tokens can be created for an account. To allow for an increase or decrease in an existing token's limit; a new SAS token must be created. The old SAS token is still valid until its expiration.
+When you specify a maximum rate limit on the token (`maxRatePerSecond`), requests above that rate aren't billed to the account, allowing you to set an upper limit of billable transactions for the account when using the token. However, the application receives client error responses with `429 (TooManyRequests)` for all transactions once that limit is reached. It's the responsibility of the application to manage retry and distribution of SAS tokens. There's no limit on how many SAS tokens can be created for an account. To increase or decrease an existing token's limit, a new SAS token must be created. The old SAS token is still valid until its expiration.
Estimated Example:
Estimated Example:
| :- | : | : | :-- | | 10 | 20 | 600 | 6,000 |
-These are estimates, actual rate limits vary slightly based on Azure Maps ability to enforce consistency within a span of time. However, this allows for preventive control of billing cost.
+Actual rate limits vary based on the ability of Azure Maps to enforce consistency within a span of time, so treat these values as estimates. However, this still allows for preventive control of billing costs.
#### Rate limits are enforced per Azure location, not globally or geographically
As described in [Azure Maps rate limits](./azure-maps-qps-rate-limits.md), indiv
Consider the case of **Search service - Non-Batch Reverse**, with its limit of 250 queries per second (QPS) for the following tables. Each table represents estimated total successful transactions from example usage.
-The first table shows one token that has a maximum request per second of 500, and then actual usage of the application was 500 request per second for a duration of 60 seconds. **Search service - Non-Batch Reverse** has a rate limit of 250, meaning of the total 30,000 requests made in the 60 seconds; 15,000 of those requests will be billable transactions. The remaining requests will result in status code `429 (TooManyRequests)`.
+The first table shows one token that has a maximum rate of 500 requests per second, and actual application usage of 500 requests per second for a duration of 60 seconds. **Search service - Non-Batch Reverse** has a rate limit of 250, meaning that of the total 30,000 requests made in the 60 seconds, 15,000 of those requests are billable transactions. The remaining requests result in status code `429 (TooManyRequests)`.
| Name | Approximate Maximum Rate Per Second | Actual Rate Per Second | Duration of sustained rate in seconds | Approximate total successful transactions | | :- | :- | : | : | :- |
-| token | 500 | 500 | 60 | ~15,000 |
+| token | 500 | 500 | 60 | ~15,000 |
For example, if two SAS tokens are created in, and used in, the same location as an Azure Maps account, each token shares the default rate limit of 250 QPS. If each token is used at the same time with the same throughput, token 1 and token 2 would each successfully grant 7,500 transactions.
For example, if two SAS tokens are created in, and use the same location as an A
### Understand SAS token access control
-SAS tokens use RBAC to control access to the REST API. When you create a SAS token, the prerequisite managed identity on the Map Account is assigned an Azure RBAC role that grants access to specific REST API actions. See [Picking a role definition](./azure-maps-authentication.md#picking-a-role-definition) to determine which API should be allowed by the application.
+SAS tokens use RBAC to control access to the REST API. When you create a SAS token, the prerequisite managed identity on the Map Account is assigned an Azure RBAC role that grants access to specific REST API actions. See [Picking a role definition](./azure-maps-authentication.md#picking-a-role-definition) to determine which APIs the application allows.
-If you want to assign temporary access and remove access for before the SAS token expires, you'll want to revoke the token. Other reasons to revoke access may be if the token is distributed with `Azure Maps Data Contributor` role assignment unintentionally and anyone with the SAS token may be able to read and write data to Azure Maps REST APIs that may expose sensitive data or unexpected financial cost from usage.
+If you want to assign temporary access and remove that access before the SAS token expires, revoke the token. Another reason to revoke access is if the token is unintentionally distributed with the `Azure Maps Data Contributor` role assignment; anyone with that SAS token may be able to read and write data to Azure Maps REST APIs, which could expose sensitive data or result in unexpected financial cost from usage.
There are two options to revoke access for SAS token(s):
there are two options to revoke access for SAS token(s):
> 1. Create a SAS token using `secondaryKey` as the `signingKey` and distribute the new SAS token to the application. > 1. Regenerate the primary key, remove the managed identity from the account, and remove the role assignment for the managed identity. - ### Create SAS tokens To create SAS tokens, you must have `Contributor` role access to both manage Azure Maps accounts and user-assigned identities in the Azure subscription.
To create SAS tokens, you must have `Contributor` role access to both manage Azu
> [!IMPORTANT] > Existing Azure Maps accounts created in the Azure location `global` don't support managed identities.
-First, you should [Create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity) in the same location as the Azure Maps account.
+First, you should [Create a user-assigned managed identity] in the same location as the Azure Maps account.
> [!TIP] > You should use the same location for both the managed identity and the Azure Maps account.
-Once a managed identity is created, you can create or update the Azure Maps account and attach it. See [Manage your Azure Maps account](./how-to-manage-account-keys.md) for more information.
+Once a managed identity is created, you can create or update the Azure Maps account and attach it. For more information, see [Manage your Azure Maps account].
-After the account has been successfully created or updated with the managed identity; assign role-based access control for the managed identity to an Azure Maps data role at the account scope. This enables the managed identity to be given access to the Azure Maps REST API for your map account.
+Once the account is successfully created or updated with the managed identity, assign role-based access control for the managed identity to an Azure Maps data role at the account scope. This enables the managed identity to be given access to the Azure Maps REST API for your map account.
-Next, you'll need to create a SAS token using the Azure Management SDK tooling, List SAS operation on Account Management API, or the Azure portal Shared Access Signature page of the Map account resource.
+Next, create a SAS token using the Azure Management SDK tooling, List SAS operation on Account Management API, or the Azure portal Shared Access Signature page of the Map account resource.
SAS token parameters:
-| Parameter Name | Example Value | Description |
-| : | :-- | :- |
-| signingKey | `primaryKey` | Required, the string enum value for the signingKey either `primaryKey` or `secondaryKey` is used to create the signature of the SAS. |
-| principalId | `<GUID>` | Required, the principalId is the Object (principal) ID of the user-assigned managed identity attached to the map account. |
-| regions | `[ "eastus", "westus2", "westcentralus" ]` | Optional, the default value is `null`. The regions control which regions the SAS token can be used in the Azure Maps REST [data-plane](../azure-resource-manager/management/control-plane-and-data-plane.md) API. Omitting regions parameter will allow the SAS token to be used without any constraints. When used in combination with an Azure Maps data-plane geographic endpoint like `us.atlas.microsoft.com` and `eu.atlas.microsoft.com` will allow the application to control usage with-in the specified geography. This allows prevention of usage in other geographies. |
-| maxRatePerSecond | 500 | Required, the specified approximate maximum request per second which the SAS token is granted. Once the limit is reached, additional throughput will be rate limited with HTTP status code `429 (TooManyRequests)`. |
-| start | `2021-05-24T10:42:03.1567373Z` | Required, a UTC date that specifies the date and time the token becomes active. |
-| expiry | `2021-05-24T11:42:03.1567373Z` | Required, a UTC date that specifies the date and time the token expires. The duration between start and expiry can't be more than 365 days. |
+| Parameter Name | Example Value | Description |
+| : | :-- | :- |
+| signingKey | `primaryKey` | Required. The string enum value, either `primaryKey` or `secondaryKey`, used to create the signature of the SAS. |
+| principalId | `<GUID>` | Required. The principalId is the object (principal) ID of the user-assigned managed identity attached to the map account. |
+| regions | `[ "eastus", "westus2", "westcentralus" ]` | Optional, the default value is `null`. The regions control where the SAS token can be used in the Azure Maps REST [data-plane] API. Omitting the regions parameter allows the SAS token to be used without any constraints. When used with an Azure Maps data-plane geographic endpoint like `us.atlas.microsoft.com` or `eu.atlas.microsoft.com`, the regions value lets the application control usage within the specified geography and prevent usage in other geographies. |
+| maxRatePerSecond | 500 | Required. The approximate maximum number of requests per second that the SAS token is granted. Once the limit is reached, additional throughput is rate limited with HTTP status code `429 (TooManyRequests)`. |
+| start | `2021-05-24T10:42:03.1567373Z` | Required, a UTC date that specifies the date and time the token becomes active. |
+| expiry | `2021-05-24T11:42:03.1567373Z` | Required, a UTC date that specifies the date and time the token expires. The duration between start and expiry can't be more than 365 days. |
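For illustration, a List SAS request body built from the parameters above might look like the following sketch. The principal ID is a placeholder, and the exact endpoint and API version come from the Account Management API reference.

```json
{
  "signingKey": "primaryKey",
  "principalId": "<object-id-of-the-user-assigned-managed-identity>",
  "regions": [ "eastus", "westus2", "westcentralus" ],
  "maxRatePerSecond": 500,
  "start": "2021-05-24T10:42:03.1567373Z",
  "expiry": "2021-05-24T11:42:03.1567373Z"
}
```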
### Configuring application with SAS token
Cross Origin Resource Sharing (CORS) is in preview.
To prevent malicious code execution on the client, modern browsers block requests from web applications to resources running in a separate domain.
-- If you're unfamiliar with CORS, see [Cross-origin resource sharing (CORS)](https://developer.mozilla.org/docs/Web/HTTP/CORS), it lets an `Access-Control-Allow-Origin` header declare which origins are allowed to call endpoints of an Azure Maps account. CORS protocol isn't specific to Azure Maps.
+- If you're unfamiliar with CORS, see [Cross-origin resource sharing (CORS)]. CORS lets an `Access-Control-Allow-Origin` header declare which origins are allowed to call endpoints of an Azure Maps account. The CORS protocol isn't specific to Azure Maps.
### Account CORS
-[CORS](https://fetch.spec.whatwg.org/#http-cors-protocol) is an HTTP protocol that enables a web application running under one domain to access resources in another domain. Web browsers implement a security restriction known as [same-origin policy](https://www.w3.org/Security/wiki/Same_Origin_Policy) that prevents a web page from calling APIs in a different domain; CORS provides a secure way to allow one domain (the origin domain) to call APIs in another domain. Using the Azure Maps account resource, you can configure which origins are allowed to access the Azure Maps REST API from your applications.
+[CORS] is an HTTP protocol that enables a web application running under one domain to access resources in another domain. Web browsers implement a security restriction known as [same-origin policy] that prevents a web page from calling APIs in a different domain; CORS provides a secure way to allow one domain (the origin domain) to call APIs in another domain. Using the Azure Maps account resource, you can configure which origins are allowed to access the Azure Maps REST API from your applications.
> [!IMPORTANT] > CORS is not an authorization mechanism. Any request made to a map account using REST API, when CORS is enabled, also needs a valid map account authentication scheme such as Shared Key, Azure AD, or SAS token.
A CORS request from an origin domain may consist of two separate requests:
### Preflight request
-The preflight request is done not only as a security measure to ensure that the server understands the method and headers that will be sent in the actual request and that the server knows and trusts the source of the request, but it also queries the CORS restrictions that have been established for the map account. The web browser (or other user agent) sends an OPTIONS request that includes the request headers, method and origin domain. The map account service tries to fetch any CORS rules if account authentication is possible through the CORS preflight protocol.
+The preflight request is done not only as a security measure to ensure that the server understands the method and headers that are sent in the actual request and that the server knows and trusts the source of the request, but it also queries the CORS restrictions that have been established for the map account. The web browser (or other user agent) sends an OPTIONS request that includes the request headers, method and origin domain. The map account service tries to fetch any CORS rules if account authentication is possible through the CORS preflight protocol.
-If authentication isn't possible, the maps service evaluates pre-configured set of CORS rules that specify which origin domains, request methods, and request headers may be specified on an actual request against the maps service. By default, a maps account is configured to allow all origins to enable seamless integration into web browsers.
+If authentication isn't possible, the maps service evaluates a preconfigured set of CORS rules that specify which origin domains, request methods, and request headers may be specified on an actual request against the maps service. By default, a maps account is configured to allow all origins to enable seamless integration into web browsers.
-The service will respond to the preflight request with the required Access-Control headers if the following criteria are met:
+The service responds to the preflight request with the required Access-Control headers if the following criteria are met:
1. The OPTIONS request contains the required CORS headers (the Origin and Access-Control-Request-Method headers)
1. Authentication was successful and a CORS rule is enabled for the account that matches the preflight request.
The service will respond to the preflight request with the required Access-Contr
When the preflight request is successful, the service responds with status code `200 (OK)` and includes the required Access-Control headers in the response.
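For example, a preflight exchange uses only standard CORS headers; the request path and origin shown here are illustrative placeholders.

```
OPTIONS /search/address/json?api-version=1.0 HTTP/1.1
Host: atlas.microsoft.com
Origin: https://contoso.com
Access-Control-Request-Method: GET

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://contoso.com
Access-Control-Allow-Methods: GET
```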
-The service will reject preflight requests if the following conditions occur:
+The service rejects preflight requests if the following conditions occur:
-1. If the OPTIONS request doesn't contain the required CORS headers (the Origin and Access-Control-Request-Method headers), the service will respond with status code `400 (Bad request)`.
-1. If authentication was successful on preflight request and no CORS rule matches the preflight request, the service will respond with status code `403 (Forbidden)`. This may occur if the CORS rule is configured to accept an origin that doesn't match the current browser client origin request header.
+1. If the OPTIONS request doesn't contain the required CORS headers (the Origin and Access-Control-Request-Method headers), the service responds with status code `400 (Bad request)`.
+1. If authentication was successful on preflight request and no CORS rule matches the preflight request, the service responds with status code `403 (Forbidden)`. This may occur if the CORS rule is configured to accept an origin that doesn't match the current browser client origin request header.
> [!NOTE] > A preflight request is evaluated against the service and not against the requested resource. The account owner must have enabled CORS by setting the appropriate account properties in order for the request to succeed. ### Actual request
-Once the preflight request is accepted and the response is returned, the browser will dispatch the actual request against the map service. The browser will deny the actual request immediately if the preflight request is rejected.
+Once the preflight request is accepted and the response is returned, the browser dispatches the actual request against the map service. The browser denies the actual request immediately if the preflight request is rejected.
-The actual request is treated as a normal request against the map service. The presence of the `Origin` header indicates that the request is a CORS request and the service will then validate against the CORS rules. If a match is found, the Access-Control headers are added to the response and sent back to the client. If a match isn't found, the response will return a `403 (Forbidden)` indicating a CORS origin error.
+The actual request is treated as a normal request against the map service. The presence of the `Origin` header indicates that the request is a CORS request and the service then validates against the CORS rules. If a match is found, the Access-Control headers are added to the response and sent back to the client. If a match isn't found, the response returns a `403 (Forbidden)` indicating a CORS origin error.
### Enable CORS policy
-When creating or updating an existing Map account, the Map account properties can specify the allowed origins to be configured. You can set a CORS rule on the Azure Maps account properties through Azure Maps Management SDK, Azure Maps Management REST API, and portal. Once you set the CORS rule for the service, then a properly authorized request made to the service from a different domain will be evaluated to determine whether it's allowed according to the rule you've specified. See an example below:
+When a Map account is created or updated, its properties specify the allowed origins to be configured. You can set a CORS rule on the Azure Maps account properties through the Azure Maps Management SDK, the Azure Maps Management REST API, or the portal. Once you set the CORS rule for the service, a properly authorized request made to the service from a different domain is evaluated to determine whether it's allowed according to the rule you specified. For example:
```json {
Only one CORS rule with its list of allowed origins can be specified. Each origi
### Remove CORS policy
-You can remove CORS manually in the Azure portal, or programmatically using the Azure Maps SDK, Azure Maps management REST API or an [ARM template](../azure-resource-manager/templates/overview.md).
+You can remove CORS:
+
+- Manually in the Azure portal
+- Programmatically using:
+ - The Azure Maps SDK
+ - Azure Maps management REST API
+ - An [ARM template]
> [!TIP] > If you use the Azure Maps management REST API, use `PUT` or `PATCH` with an empty `corsRule` list in the request body.
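As a sketch, assuming the account's CORS configuration is expressed as a `corsRules` array under the `cors` property, the request body to clear the rule might look like this:

```json
{
  "properties": {
    "cors": {
      "corsRules": []
    }
  }
}
```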
Azure Maps doesn't count billing transactions for:
- 429 (TooManyRequests)
- CORS preflight requests
-See [Azure Maps pricing](https://azure.microsoft.com/pricing/details/azure-maps) for additional information on billing transactions and other Azure Maps pricing information.
+For more information on billing transactions and other Azure Maps pricing information, see [Azure Maps pricing].
## Next steps To learn more about security best practices, see: > [!div class="nextstepaction"]
-> [Authentication and authorization best practices](authentication-best-practices.md)
+> [Authentication and authorization best practices]
To learn more about authenticating an application with Azure AD and Azure Maps, see: > [!div class="nextstepaction"]
-> [Manage authentication in Azure Maps](./how-to-manage-authentication.md)
+> [Manage authentication in Azure Maps]
-To learn more about authenticating the Azure Maps Map Control with Azure AD, see:
+To learn more about authenticating the Azure Maps Control with Azure AD, see:
> [!div class="nextstepaction"]
-> [Use the Azure Maps Map Control](./how-to-use-map-control.md)
+> [Use the Azure Maps Map Control]
+
+[Azure Active Directory (Azure AD)]: ../active-directory/fundamentals/active-directory-whatis.md
+[Solving the TLS 1.0 Problem]: /security/solving-tls1-problem
+[View authentication details]: how-to-manage-authentication.md#view-authentication-details
+[Manage authentication in Azure Maps]: how-to-manage-authentication.md
+[Authentication vs. authorization]: ../active-directory/develop/authentication-vs-authorization.md
+[Managed identities for Azure resources]: ../active-directory/managed-identities-azure-resources/overview.md
+[Azure services that can use managed identities to access other services]: ../active-directory/managed-identities-azure-resources/managed-identities-status.md
+[Authentication flows and application scenarios]: ../active-directory/develop/authentication-flows-app-scenarios.md
+[Azure role-based access control (Azure RBAC)]: ../role-based-access-control/overview.md
+[Assign Azure roles using the Azure portal]: ../role-based-access-control/role-assignments-portal.md
+
+[Data]: /rest/api/maps/data
+[Creator]: /rest/api/maps-creator/
+[Spatial]: /rest/api/maps/spatial
+[Search]: /rest/api/maps/search
+[Route]: /rest/api/maps/route
+
+[How to configure Azure RBAC for Azure Maps]: how-to-manage-authentication.md
+[Azure custom roles]: ../role-based-access-control/custom-roles.md
+[management group]: ../governance/management-groups/overview.md
+[Management API]: /rest/api/maps-management/
+[Azure AD authentication]: #azure-ad-authentication
+[What is Azure Policy?]: ../governance/policy/overview.md
+[user-assigned managed identity]: ../active-directory/managed-identities-azure-resources/overview.md
+[understanding access control]: #understand-sas-token-access-control
+[Create a user-assigned managed identity]: ../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity
+[Manage your Azure Maps account]: how-to-manage-account-keys.md
+[data-plane]: ../azure-resource-manager/management/control-plane-and-data-plane.md
+[Cross-origin resource sharing (CORS)]: https://developer.mozilla.org/docs/Web/HTTP/CORS
+[same-origin policy]: https://www.w3.org/Security/wiki/Same_Origin_Policy
+[CORS]: https://fetch.spec.whatwg.org/#http-cors-protocol
+[ARM template]: ../azure-resource-manager/templates/overview.md
+[Azure Maps pricing]: https://azure.microsoft.com/pricing/details/azure-maps
+[Authentication and authorization best practices]: authentication-best-practices.md
+[Use the Azure Maps Map Control]: how-to-use-map-control.md
azure-maps Azure Maps Qps Rate Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-qps-rate-limits.md
# Azure Maps QPS rate limits
-Azure Maps does not have any maximum daily limits on the number of requests that can be made, however there are limits to the maximum number of queries per second (QPS).
+Azure Maps doesn't have any maximum daily limits on the number of requests that can be made; however, there are limits to the maximum number of queries per second (QPS).
-Below are the QPS usage limits for each Azure Maps service by Pricing Tier.
+The following list shows the QPS usage limits for each Azure Maps service by Pricing Tier.
| Azure Maps service | QPS Limit: Gen 2 Pricing Tier | QPS Limit: Gen 1 S1 Pricing Tier | QPS Limit: Gen 1 S0 Pricing Tier |
| -- | :--: | :--: | :--: |
Below are the QPS usage limits for each Azure Maps service by Pricing Tier.
| Traffic service | 50 | 50 | 50 |
| Weather service | 50 | 50 | 50 |
-When QPS limits are reached, an HTTP 429 error will be returned. If you are using the Gen 2 or Gen 1 S1 pricing tiers, you can create an Azure Maps *Technical* Support Request in the [Azure portal](https://portal.azure.com/) to increase a specific QPS limit if needed. QPS limits for the Gen 1 S0 pricing tier cannot be increased.
+When QPS limits are reached, an HTTP 429 error is returned. If you're using the Gen 2 or Gen 1 S1 pricing tiers, you can create an Azure Maps *Technical* Support Request in the [Azure portal] to increase a specific QPS limit if needed. QPS limits for the Gen 1 S0 pricing tier can't be increased.
+
+[Azure portal]: https://portal.azure.com/
azure-maps Choose Map Style https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/choose-map-style.md
# Change the style of the map
-The map control supports several different map [style options](/javascript/api/azure-maps-control/atlas.styleoptions) and [base map styles](supported-map-styles.md). All styles can be set when the map control is being initialized. Or, you can set styles by using the map control's `setStyle` function. This article shows you how to use these style options to customize the map's appearance. Also, you'll learn how to implement the style picker control in your map. The style picker control allows the user to toggle between different base styles.
+The map control supports several different map [style options] and [base map styles]. All styles can be set when the map control is being initialized. Or, you can set styles by using the map control's `setStyle` function. This article shows you how to use these style options to customize the map's appearance and how to implement the style picker control in your map. The style picker control allows the user to toggle between different base styles.
## Set map style options
-Style options can be set during web control initialization. Or, you can update style options by calling the map control's `setStyle` function. To see all available style options, see [style options](/javascript/api/azure-maps-control/atlas.styleoptions).
+Style options can be set during web control initialization. Or, you can update style options by calling the map control's `setStyle` function. To see all available style options, see [style options].
```javascript //Set the style options when creating the map.
The following tool shows how the different style options change how the map is r
## Set a base map style
-You can also initialize the map control with one of the [base map styles](supported-map-styles.md) that are available in the Web SDK. You can then use the `setStyle` function to update the base style with a different map style.
+You can also initialize the map control with one of the [base map styles] that are available in the Web SDK. You can then use the `setStyle` function to update the base style with a different map style.
### Set a base map style on initialization
-Base styles of the map control can be set during initialization. In the following code, the `style` option of the map control is set to the [`grayscale_dark` base map style](supported-map-styles.md#grayscale_dark).
+Base styles of the map control can be set during initialization. In the following code, the `style` option of the map control is set to the [`grayscale_dark` base map style].
```javascript var map = new atlas.Map('map', {
var map = new atlas.Map('map', {
### Update the base map style
-The base map style can be updated by using the `setStyle` function and setting the `style` option to either change to a different base map style or add additional style options.
+The base map style can be updated by using the `setStyle` function and setting the `style` option to either change to a different base map style or add more style options.
```javascript map.setStyle({ style: 'satellite' }); ```
-In the following code, after a map instance is loaded, the map style is updated from `grayscale_dark` to `satellite` using the [setStyle](/javascript/api/azure-maps-control/atlas.map#setstyle-styleoptions-) function.
+In the following code, after a map instance is loaded, the map style is updated from `grayscale_dark` to `satellite` using the [setStyle] function.
<br/>
In the following code, after a map instance is loaded, the map style is updated
The style picker control provides an easy-to-use button with a flyout panel that the end user can use to switch between base styles.
-The style picker has two different layout options: `icon` and `list`. Also, the style picker allows you to choose two different style picker control `style` options: `light` and `dark`. In this example, the style picker uses the `icon` layout and displays a select list of base map styles in the form of icons. The style control picker includes the following base set of styles: `["road", "grayscale_light", "grayscale_dark", "night", "road_shaded_relief"]`. For more information on style picker control options, see [Style Control Options](/javascript/api/azure-maps-control/atlas.stylecontroloptions).
+The style picker has two different layout options: `icon` and `list`. Also, the style picker allows you to choose two different style picker control `style` options: `light` and `dark`. In this example, the style picker uses the `icon` layout and displays a select list of base map styles in the form of icons. The style control picker includes the following base set of styles: `["road", "grayscale_light", "grayscale_dark", "night", "road_shaded_relief"]`. For more information on style picker control options, see [Style Control Options].
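A minimal sketch of adding the style picker control, assuming a map instance named `map`; the layout and style values described above can be passed to the `StyleControl` constructor through its options.

```javascript
//Wait until the map resources are ready, then add the style picker control to the top-right corner of the map.
map.events.add('ready', function () {
    map.controls.add(new atlas.control.StyleControl(), {
        position: 'top-right'
    });
});
```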
-The image below shows the style picker control displayed in `icon` layout.
+The following image shows the style picker control displayed in `icon` layout.
:::image type="content" source="./media/choose-map-style/style-picker-icon-layout.png" alt-text="Style picker icon layout":::
-The image below shows the style picker control displayed in `list` layout.
+The following image shows the style picker control displayed in `list` layout.
:::image type="content" source="./media/choose-map-style/style-picker-list-layout.png" alt-text="Style picker list layout"::: > [!IMPORTANT] > By default the style picker control lists all the styles available under the S0 pricing tier of Azure Maps. If you want to reduce the number of styles in this list, pass an array of the styles you want to appear in the list into the `mapStyles` option of the style picker. If you are using Gen 1 (S1) or Gen 2 pricing tier and want to show all available styles, set the `mapStyles` option of the style picker to `"all"`.
-The following code shows you how to override the default `mapStyles` base style list. In this example, we're setting the `mapStyles` option to list which base styles we want to be displayed by the style picker control.
+The following code shows you how to override the default `mapStyles` base style list. In this example, we're setting the `mapStyles` option to list the base styles to display in the style picker control.
<br/>
The following code shows you how to override the default `mapStyles` base style
To learn more about the classes and methods used in this article: > [!div class="nextstepaction"]
-> [Map](/javascript/api/azure-maps-control/atlas.map)
+> [Map]
> [!div class="nextstepaction"]
-> [StyleOptions](/javascript/api/azure-maps-control/atlas.styleoptions)
+> [StyleOptions]
> [!div class="nextstepaction"]
-> [StyleControl](/javascript/api/azure-maps-control/atlas.control.stylecontrol)
+> [StyleControl]
> [!div class="nextstepaction"]
-> [StyleControlOptions](/javascript/api/azure-maps-control/atlas.stylecontroloptions)
+> [StyleControlOptions]
See the following articles for more code samples to add to your maps: > [!div class="nextstepaction"]
-> [Add map controls](map-add-controls.md)
+> [Add map controls]
> [!div class="nextstepaction"]
-> [Add a symbol layer](map-add-pin.md)
+> [Add a symbol layer]
> [!div class="nextstepaction"]
-> [Add a bubble layer](map-add-bubble-layer.md)
+> [Add a bubble layer]
+
+[style options]: /javascript/api/azure-maps-control/atlas.styleoptions
+[base map styles]: supported-map-styles.md
+[`grayscale_dark` base map style]: supported-map-styles.md#grayscale_dark
+[setStyle]: /javascript/api/azure-maps-control/atlas.map#setstyle-styleoptions-
+[Style Control Options]: /javascript/api/azure-maps-control/atlas.stylecontroloptions
+[Map]: /javascript/api/azure-maps-control/atlas.map
+[StyleOptions]: /javascript/api/azure-maps-control/atlas.styleoptions
+[StyleControl]: /javascript/api/azure-maps-control/atlas.control.stylecontrol
+[StyleControlOptions]: /javascript/api/azure-maps-control/atlas.stylecontroloptions
+[Add map controls]: map-add-controls.md
+[Add a symbol layer]: map-add-pin.md
+[Add a bubble layer]: map-add-bubble-layer.md
azure-maps Choose Pricing Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/choose-pricing-tier.md
# Choose the right pricing tier in Azure Maps
-Azure Maps now offers two pricing tiers: Gen 1 and Gen 2. The new Gen 2 pricing tier contains all Azure Maps capabilities with increased QPS (Queries Per Second) limits over Gen 1, and it allows you to achieve cost savings as Azure Maps transactions increases. The purpose of this article is to help you choose the right pricing tier for your needs.
+Azure Maps now offers two pricing tiers: Gen 1 and Gen 2. The new Gen 2 pricing tier contains all Azure Maps capabilities included in the Gen 1 tier, but with increased QPS (Queries Per Second) limits, allowing you to achieve cost savings as Azure Maps transactions increase. This article helps you determine the right pricing tier for your needs.
## Pricing tier targeted customers
-See the **pricing tier targeted customers** table below for a better understanding of Gen 1 and Gen 2 pricing tiers. For more information, see [Azure Maps pricing](https://aka.ms/CreatorPricing). If you're a current Azure Maps customer, you can learn how to change from Gen 1 to Gen 2 pricing in the [Manage pricing tier](how-to-manage-pricing-tier.md) article.
+The following **pricing tier targeted customers** table shows the Gen 1 and Gen 2 pricing tiers. For more information, see [Azure Maps pricing]. If you're a current Azure Maps customer, you can learn how to change from Gen 1 to Gen 2 pricing in the [Manage pricing tier] article.
| Pricing tier | SKU | Targeted Customers| ||::| | |**Gen 1**|S0| The S0 pricing tier works for applications in all stages of production: from proof-of-concept development and early stage testing to application production and deployment. However, this tier is designed for small-scale development, or customers with low concurrent users, or both. S0 has a restriction of 50 QPS for all services combined. | |S1| The S1 pricing tier is for customers with large-scale enterprise applications, mission-critical applications, or high volumes of concurrent users. It's also for those customers who require advanced geospatial services.
-| **Gen 2** | Maps/Location Insights | Gen 2 pricing is for new and current Azure Maps customers. Gen 2 comes with a free monthly tier of transactions to be used to test and build on Azure maps. Maps and Location Insights SKU's contain all Azure Maps capabilities. It allows you to achieve cost savings as Azure Maps transactions increases. Additionally, it has higher QPS limits than Gen 1. The Gen 2 pricing tier is required when using [Creator for indoor maps](creator-indoor-maps.md).
-| | |
+| **Gen 2** | Maps/Location Insights | Gen 2 pricing is for new and current Azure Maps customers. Gen 2 comes with a free monthly tier of transactions to be used to test and build on Azure Maps. Maps and Location Insights SKUs contain all Azure Maps capabilities. It allows you to achieve cost savings as Azure Maps transactions increase. Additionally, it has higher QPS limits than Gen 1. The Gen 2 pricing tier is required when using [Creator for indoor maps].
-For more information on QPS limits, please refer to [Azure Maps QPS rate limits](azure-maps-qps-rate-limits.md).
+For more information on QPS limits, see [Azure Maps QPS rate limits].
-For additional pricing information on [Creator for indoor maps](creator-indoor-maps.md), see the *Creator* section in [Azure Maps pricing](https://aka.ms/CreatorPricing).
+For pricing information on [Creator for indoor maps], see the *Creator* section in [Azure Maps pricing].
## Next steps
Learn more about how to view and change pricing tiers:
> [!div class="nextstepaction"] > [Manage a pricing tier](how-to-manage-pricing-tier.md)+
+[Azure Maps pricing]: https://aka.ms/CreatorPricing
+[Manage pricing tier]: how-to-manage-pricing-tier.md
+[Creator for indoor maps]: creator-indoor-maps.md
+[Azure Maps QPS rate limits]: azure-maps-qps-rate-limits.md
azure-maps Clustering Point Data Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/clustering-point-data-web-sdk.md
# Clustering point data in the Web SDK
-When visualizing many data points on the map, data points may overlap over each other. The overlap may cause the map may become unreadable and difficult to use. Clustering point data is the process of combining point data that are near each other and representing them on the map as a single clustered data point. As the user zooms into the map, the clusters break apart into their individual data points. When you work with large number of data points, use the clustering processes to improve your user experience.
+When there are many data points on the map, some may overlap each other. The overlap can make the map unreadable and difficult to use. Clustering point data is the process of combining point data that are near each other and representing them on the map as a single clustered data point. As the user zooms into the map, the clusters break apart into their individual data points. When you work with a large number of data points, the clustering process can improve the user experience.
</br>
When visualizing many data points on the map, data points may overlap over each
## Enabling clustering on a data source
-Enable clustering in the `DataSource` class by setting the `cluster` option to `true`. Set `clusterRadius` to select nearby points and combines them into a cluster. The value of `clusterRadius` is in pixels. Use `clusterMaxZoom` to specify a zoom level at which to disable the clustering logic. Here is an example of how to enable clustering in a data source.
+Enable clustering in the `DataSource` class by setting the `cluster` option to `true`. Set `clusterRadius` to select nearby points and combine them into a cluster. The value of `clusterRadius` is in pixels. Use `clusterMaxZoom` to specify a zoom level at which to disable the clustering logic. Here's an example of how to enable clustering in a data source.
```javascript //Create a data source and enable clustering.
The `DataSource` class provides the following methods related to clustering as w
| Method | Return type | Description | |--|-|-|
-| getClusterChildren(clusterId: number) | Promise&lt;Array&lt;Feature&lt;Geometry, any&gt; \| Shape&gt;&gt; | Retrieves the children of the given cluster on the next zoom level. These children may be a combination of shapes and subclusters. The subclusters will be features with properties matching ClusteredProperties. |
-| getClusterExpansionZoom(clusterId: number) | Promise&lt;number&gt; | Calculates a zoom level at which the cluster will start expanding or break apart. |
+| getClusterChildren(clusterId: number) | Promise&lt;Array&lt;Feature&lt;Geometry, any&gt; \| Shape&gt;&gt; | Retrieves the children of the given cluster on the next zoom level. These children may be a combination of shapes and subclusters. The subclusters are features with properties matching ClusteredProperties. |
+| getClusterExpansionZoom(clusterId: number) | Promise&lt;number&gt; | Calculates a zoom level at which the cluster starts expanding or breaking apart. |
| getClusterLeaves(clusterId: number, limit: number, offset: number) | Promise&lt;Array&lt;Feature&lt;Geometry, any&gt; \| Shape&gt;&gt; | Retrieves the points in a cluster. By default the first 10 points are returned. To page through the points, use `limit` to specify the number of points to return, and `offset` to step through the index of points. To return all points, set `limit` to `Infinity` and don't set `offset`. | ## Display clusters using a bubble layer
To display the size of the cluster on top of the bubble, use a symbol layer with
## Display clusters using a symbol layer
-When visualizing data points, the symbol layer automatically hides symbols that overlap each other to ensure a cleaner user interface. This default behavior might be undesirable if you want to show the data points density on the map. However, these settings can be changed. To display all symbols, set the `allowOverlap` option of the Symbol layers `iconOptions` property to `true`.
+When visualizing data points, the symbol layer automatically hides symbols that overlap each other to ensure a cleaner user interface. This default behavior might be undesirable if you want to show the density of data points on the map. However, these settings can be changed. To display all symbols, set the `allowOverlap` option of the symbol layer's `iconOptions` property to `true`.
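A minimal sketch of that option, assuming a `DataSource` instance named `datasource` has already been added to the map:

```javascript
//Create a symbol layer that renders every symbol, even when symbols overlap each other.
map.layers.add(new atlas.layer.SymbolLayer(datasource, null, {
    iconOptions: {
        allowOverlap: true
    }
}));
```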
-Use clustering to show the data points density while keeping a clean user interface. The sample below shows you how to add custom symbols and represent clusters and individual data points using the symbol layer.
+Use clustering to show the data points density while keeping a clean user interface. The following sample shows you how to add custom symbols and represent clusters and individual data points using the symbol layer.
<br/>
Use clustering to show the data points density while keeping a clean user interf
## Clustering and the heat maps layer
-Heat maps are a great way to display the density of data on the map. This visualization method can handle a large number of data points on its own. If the data points are clustered and the cluster size is used as the weight of the heat map, then the heat map can handle even more data. To achieve this option, set the `weight` option of the heat map layer to `['get', 'point_count']`. When the cluster radius is small, the heat map will look nearly identical to a heat map using the unclustered data points, but it will perform much better. However, the smaller the cluster radius, the more accurate the heat map will be, but with fewer performance benefits.
+Heat maps are a great way to display the density of data on the map. This visualization method can handle a large number of data points on its own. If the data points are clustered and the cluster size is used as the weight of the heat map, then the heat map can handle even more data. To achieve this option, set the `weight` option of the heat map layer to `['get', 'point_count']`. When the cluster radius is small, the heat map looks nearly identical to a heat map using the unclustered data points, but it performs better. However, the smaller the cluster radius, the more accurate the heat map is, but with fewer performance benefits.
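A sketch of the layer option described above, assuming a clustered `DataSource` named `datasource`; the radius value is arbitrary.

```javascript
//Use the cluster size (point_count) as the weight so each cluster contributes proportionally to the heat map.
map.layers.add(new atlas.layer.HeatMapLayer(datasource, null, {
    weight: ['get', 'point_count'],
    radius: 20
}));
```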
<br/>
Heat maps are a great way to display the density of data on the map. This visual
## Mouse events on clustered data points
-When mouse events occur on a layer that contains clustered data points, the clustered data point return to the event as a GeoJSON point feature object. This point feature will have the following properties:
+When mouse events occur on a layer that contains clustered data points, the clustered data point is returned to the event as a GeoJSON point feature object. This point feature has the following properties:
| Property name | Type | Description | ||||
This example takes a bubble layer that renders cluster points and adds a click e
(<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
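A sketch of such a click handler, assuming a bubble layer named `clusterBubbleLayer`, a clustered `DataSource` named `datasource`, and the standard `cluster` and `cluster_id` properties on clustered features:

```javascript
//When a cluster is clicked, zoom the map in to the level at which the cluster starts to break apart.
map.events.add('click', clusterBubbleLayer, function (e) {
    if (e.shapes && e.shapes.length > 0 && e.shapes[0].properties.cluster) {
        datasource.getClusterExpansionZoom(e.shapes[0].properties.cluster_id).then(function (zoom) {
            map.setCamera({
                center: e.shapes[0].geometry.coordinates,
                zoom: zoom,
                type: 'ease'
            });
        });
    }
});
```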
-## Display cluster area
+## Display cluster area
-The point data that a cluster represents is spread over an area. In this sample when the mouse is hovered over a cluster, two main behaviors occur. First, the individual data points contained in the cluster will be used to calculate a convex hull. Then, the convex hull will be displayed on the map to show an area. A convex hull is a polygon that wraps a set of points like an elastic band and can be calculated using the `atlas.math.getConvexHull` method. All points contained in a cluster can be retrieved from the data source using the `getClusterLeaves` method.
+The point data that a cluster represents is spread over an area. In this sample when the mouse is hovered over a cluster, two main behaviors occur. First, the individual data points contained in the cluster are used to calculate a convex hull. Then, the convex hull is displayed on the map to show an area. A convex hull is a polygon that wraps a set of points like an elastic band and can be calculated using the `atlas.math.getConvexHull` method. All points contained in a cluster can be retrieved from the data source using the `getClusterLeaves` method.
<br/>
The point data that a cluster represents is spread over an area. In this sample
## Aggregating data in clusters
-Often clusters are represented using a symbol with the number of points that are within the cluster. But, sometimes it's desirable to customize the style of clusters with additional metrics. With cluster aggregates, custom properties can be created and populated using an [aggregate expression](data-driven-style-expressions-web-sdk.md#aggregate-expression) calculation. Cluster aggregates can be defined in `clusterProperties` option of the `DataSource`.
+Often clusters are represented using a symbol with the number of points that are within the cluster. But, sometimes it's desirable to customize the style of clusters with more metrics. With cluster aggregates, custom properties can be created and populated using an [aggregate expression] calculation. Cluster aggregates can be defined in `clusterProperties` option of the `DataSource`.
-The following sample uses an aggregate expression. The code calculates a count based on the entity type property of each data point in a cluster. When a user clicks on a cluster, a popup shows with additional information about the cluster.
+The following sample uses an aggregate expression. The code calculates a count based on the entity type property of each data point in a cluster. When a user selects a cluster, a popup shows with additional information about the cluster.
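As a rough sketch of how such an aggregate might be defined, assuming each point carries an `EntityType` property (the property name and value are illustrative):

```javascript
//Count how many points in each cluster have the EntityType value 'Coffee Shop'.
var datasource = new atlas.source.DataSource(null, {
    cluster: true,
    clusterRadius: 50,
    clusterProperties: {
        coffeeShopCount: ['+', ['case', ['==', ['get', 'EntityType'], 'Coffee Shop'], 1, 0]]
    }
});
```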
<iframe height="500" scrolling="no" title="Cluster aggregates" src="//codepen.io/azuremaps/embed/jgYyRL/?height=500&theme-id=0&default-tab=js,result&editable=true" frameborder="no" allowtransparency="true" allowfullscreen="true"> See the Pen <a href='https://codepen.io/azuremaps/pen/jgYyRL/'>Cluster aggregates</a> by Azure Maps
See code examples to add functionality to your app:
> [!div class="nextstepaction"] > [Add a heat map layer](map-add-heat-map-layer.md)+
+[aggregate expression]: data-driven-style-expressions-web-sdk.md#aggregate-expression
azure-maps Consumption Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/consumption-model.md
Depending on the value of **vehicleEngineType**, two principal Consumption Model
In both Consumption Models, there are some dependencies when specifying parameters. That is, explicitly specifying some parameters may require specifying others. Here are the dependencies to be aware of:
-* All parameters require **constantSpeedConsumption** to be specified by the user. It is an error to specify any other consumption model parameter, if **constantSpeedConsumption** is not specified. The **vehicleWeight** parameter is an exception for this requirement.
+* All parameters require **constantSpeedConsumption** to be specified by the user. It's an error to specify any other consumption model parameter, if **constantSpeedConsumption** isn't specified. The **vehicleWeight** parameter is an exception for this requirement.
* **accelerationEfficiency** and **decelerationEfficiency** must always be specified as a pair (that is, both or none).
* If **accelerationEfficiency** and **decelerationEfficiency** are specified, the product of their values must not be greater than 1 (to prevent perpetual motion).
* **uphillEfficiency** and **downhillEfficiency** must always be specified as a pair (that is, both or none).
In both Consumption Models, there are some dependencies when specifying paramete
## Combustion consumption model The Combustion Consumption Model is used when **vehicleEngineType** is set to _combustion_.
-The list of parameters that belong to this model are below. Refer to the Parameters section for detailed description.
+The following parameters belong to this model. See the Parameters section for detailed descriptions.
* constantSpeedConsumptionInLitersPerHundredkm * vehicleWeight
The list of parameters that belong to this model are below. Refer to the Paramet
## Electric consumption model The Electric Consumption Model is used when **vehicleEngineType** is set to _electric_.
-The list of parameters that belong to this model are below. Refer to the Parameters section for detailed description.
+The following parameters belong to this model. See the Parameters section for detailed descriptions.
* constantSpeedConsumptionInkWhPerHundredkm * vehicleWeight
The list of parameters that belong to this model are below. Refer to the Paramet
## Sensible values of consumption parameters
-A particular set of consumption parameters can be rejected, even though the set might fulfill all the explicit requirements. It happens when the value of a specific parameter, or a combination of values of several parameters, is considered to lead to unreasonable magnitudes of consumption values. If that happens, it most likely indicates an input error, as proper care is taken to accommodate all sensible values of consumption parameters. In case a particular set of consumption parameters is rejected, the accompanying error message will contain a textual explanation of the reason(s).
+A particular set of consumption parameters can be rejected, even though the set might fulfill all the explicit requirements. It happens when the value of a specific parameter, or a combination of values of several parameters, is considered to lead to unreasonable magnitudes of consumption values. If that happens, it most likely indicates an input error, as proper care is taken to accommodate all sensible values of consumption parameters. In case a particular set of consumption parameters is rejected, the accompanying error message contains a textual explanation of the reason(s).
The detailed descriptions of the parameters have examples of sensible values for both models.
azure-maps Migrate From Bing Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-services.md
Geocoding is the process of converting an address (like `"1 Microsoft way, Redmo
Azure Maps provides several methods for geocoding addresses: * [Free-form address geocoding]: Specify a single address string (like `"1 Microsoft way, Redmond, WA"`) and process the request immediately. This service is recommended if you need to geocode individual addresses quickly.
-* [Structured address geocoding]: Specify the parts of a single address, such as the street name, city, country, and postal code and process the request immediately. This service is recommended if you need to geocode individual addresses quickly and the data is already parsed into its individual address parts.
+* [Structured address geocoding]: Specify the parts of a single address, such as the street name, city, country/region, and postal code and process the request immediately. This service is recommended if you need to geocode individual addresses quickly and the data is already parsed into its individual address parts.
* [Batch address geocoding]: Create a request containing up to 10,000 addresses and have them processed over a period of time. All the addresses are geocoded in parallel on the server and when completed the full result set can be downloaded. This service is recommended for geocoding large data sets. * [Fuzzy search]: This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and process the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox. * [Fuzzy batch search]: Create a request containing up to 10,000 addresses, places, landmarks, or point of interests and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
azure-maps Traffic Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/traffic-coverage.md
The following tables provide information about what kind of traffic information
|-|:-:|:-:|
| Australia | ✓ | ✓ |
| Brunei | ✓ | ✓ |
-| Hong Kong | ✓ | ✓ |
+| Hong Kong SAR | ✓ | ✓ |
| India | ✓ | ✓ |
| Indonesia | ✓ | ✓ |
| Kazakhstan | ✓ | ✓ |
-| Macao | ✓ | ✓ |
+| Macao SAR | ✓ | ✓ |
| Malaysia | ✓ | ✓ |
| New Zealand | ✓ | ✓ |
| Philippines | ✓ | ✓ |
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
Previously updated : 2/22/2023 Last updated : 4/3/2023 # Customer intent: As an IT manager, I want to understand how I should move from using legacy agents to Azure Monitor Agent.
In addition to consolidating and improving upon legacy Log Analytics agents, Azu
2. Service (legacy Solutions) requirements - The legacy Log Analytics agents are used by various Azure services to collect required data. If you're not using any additional Azure service, you may skip this step altogether. - Use the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper) to **discover solutions enabled** on your workspace(s) that use the legacy agents, including the **per-solution migration recommendation<sup>1</sup>** shown under `Workspace overview` tab. - If you use Microsoft Sentinel, see [Gap analysis for Microsoft Sentinel](../../sentinel/ama-migrate.md#gap-analysis-between-agents) for a comparison of the extra data collected by Microsoft Sentinel.
-3. **Agent coexistence:**
- - If you're setting up a *new environment* with resources, such as deployment scripts and onboarding templates, install Azure Monitor Agent together with a legacy agent in your new environment to decrease the migration effort later.
- - Azure Monitor Agent **can run alongside the legacy Log Analytics agents on the same machine** so that you can continue to use existing functionality during evaluation or migration. You can begin the transition, but ensure you understand the **limitations below**:
- - Be careful when you collect duplicate data from the same machine, as this could skew query results, affect downstream features like alerts, dashboards, workbooks and generate more charges for data ingestion and retention. To avoid data duplication, ensure the agents are *collecting data from different machines* or *sending the data to different destinations*. Additionally,
- - For **Defender for Cloud**, you will only be [billed once per machine](../../defender-for-cloud/auto-deploy-azure-monitoring-agent.md#impact-of-running-with-both-the-log-analytics-and-azure-monitor-agents) when running both agents
- - For **Sentinel**, you can easily [disable the legacy connector](../../sentinel/ama-migrate.md#recommended-migration-plan) to stop ingestion of logs from legacy agents.
- - Running two telemetry agents on the same machine consumes double the resources, including but not limited to CPU, memory, storage space, and network bandwidth.
+3. **Agent coexistence:** If you're setting up a *new environment* with resources, such as deployment scripts and onboarding templates, install Azure Monitor Agent together with a legacy agent in your new environment to decrease the migration effort later.
+ - Be careful when you collect **duplicate data** from the same machine, as this could skew query results, affect downstream features like alerts, dashboards, and workbooks, and generate **additional cost/charges** for data ingestion and retention. Here are some things that can help:
+ - If possible, configure the agents to *send the data to different destinations*, that is, either different workspaces or different tables in the same workspace
+ - If not needed, disable any duplicate data collection from legacy agents by [removing the workspace configurations](./agent-data-sources.md#configure-data-sources)
+ - For **Defender for Cloud**, the experiences natively deduplicate data when both agents are used. Also, you're only [billed once per machine](../../defender-for-cloud/auto-deploy-azure-monitoring-agent.md#impact-of-running-with-both-the-log-analytics-and-azure-monitor-agents) when running both agents
+ - For **Sentinel**, you can easily [disable the legacy connector](../../sentinel/ama-migrate.md#recommended-migration-plan) to stop ingestion of logs from legacy agents.
+ - Running two telemetry agents on the same machine **consumes double the resources**, including but not limited to CPU, memory, storage space, and network bandwidth.
<sup>1</sup> Start testing your scenarios during the preview phase. This will save time, avoid surprises later and ensure you're ready to deploy to production as soon as the service becomes generally available. Moreover you benefit from added security and reduced cost immediately.
In addition to consolidating and improving upon legacy Log Analytics agents, Azu
1. **[Create data collection rules](./data-collection-rule-azure-monitor-agent.md#create-a-data-collection-rule)**. You can use the [DCR generator](./azure-monitor-agent-migration-tools.md#installing-and-using-dcr-config-generator)<sup>1</sup> to **automatically convert your legacy agent configuration into data collection rule templates**. Review the generated rules before you create them, to leverage benefits like filtering, granular targeting (per machine), and other optimizations. 2. Deploy extensions and DCR-associations:
- 1. **Test first** by deploying extensions<sup>2</sup> and DCR-Associations on a few non-production machines. You can also deploy side-by-side on machines running legacy agents (see the section above for agent coexistence
+ 1. **Test first** by deploying extensions<sup>2</sup> and DCR-Associations on a few non-production machines. You can also deploy side-by-side on machines running legacy agents (see [agent coexistence](#before-you-begin) section above)
2. Once data starts flowing via Azure Monitor agent, **compare it with legacy agent data** to ensure there are no gaps. You can do this by joining with the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table which indicates 'Azure Monitor Agent' for the new data collection.
- 3. Post testing, you can **roll out broadly**<sup>3</sup> using [built-in policies]() for at-scale deployment of extensions and DCR-associations. **Using policy will also ensure automatic deployment of extensions and DCR-associations for any new machines in future.**
- 4. Use the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper) to **monitor the at-scale migration** across your machines
+ 3. If you are required to run both agents and wish to **avoid double ingestion**, you can disable data collection from legacy agents without uninstalling them yet, by simply [removing the workspace configurations for legacy agents](./agent-data-sources.md#configure-data-sources)
+ 4. Post testing, you can **roll out broadly** using [built-in policies](./azure-monitor-agent-manage.md#use-azure-policy) for at-scale deployment of extensions and DCR-associations. **Using policy will also ensure automatic deployment of extensions and DCR-associations for any new machines in future.**
+ 5. Throughout this process, use the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper) to **monitor the at-scale migration** across your machines
3. **Validate** that Azure Monitor Agent is collecting data as expected and all **downstream dependencies**, such as dashboards, alerts, and workbooks, function properly. You can do this by checking the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table, which indicates 'Azure Monitor Agent' vs 'Direct Agent' (for legacy); a sample query is shown at the end of this section. 4. Clean up: After you confirm that Azure Monitor Agent is collecting data properly, you may **choose to either disable or uninstall the legacy Log Analytics agents**
- 1. If you have migrated to Azure Monitor agent for selected features/solutions and you need to continue using the legacy Log Analytics for others, you can
+ 1. If you need to continue using both agents, skip uninstallation and only [disable the legacy data collection](./agent-data-sources.md#configure-data-sources), as described above.
2. If you've migrated to Azure Monitor agent for all your requirements, you may [uninstall the Log Analytics agent](./agent-manage.md#uninstall-agent) from monitored resources. Clean up any configuration files, workspace keys, or certificates that were used previously by the Log Analytics agent. 3. Don't uninstall the legacy agent if you need to use it for uploading data to System Center Operations Manager. <sup>1</sup> The DCR generator only converts the configurations for Windows event logs, Linux syslog and performance counters. Support for additional features and solutions will be available soon <sup>2</sup> In addition to the Azure Monitor agent extension, you need to deploy additional extensions required for specific solutions. See [other extensions to be installed here](./agents-overview.md#supported-services-and-features)
-<sup>3</sup> Before you deploy a large number of agents, consider [configuring the workspace](agent-data-sources.md) to disable data collection for the Log Analytics agent. If you leave data collection for the Log Analytics agent enabled, you might collect duplicate data and increase your costs. You might choose to collect duplicate data for a short period during migration until you verify that you've deployed and configured Azure Monitor Agent correctly.
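For reference, a query along the following lines can show which agent each machine is reporting through; this is a sketch and the 24-hour window is arbitrary.

```kusto
// Compare machines reporting through the legacy agent vs. Azure Monitor Agent.
Heartbeat
| where TimeGenerated > ago(24h)
| summarize LastHeartbeat = max(TimeGenerated) by Computer, Category
| order by Computer asc
```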
azure-monitor Analyze Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/analyze-usage.md
The following example is a [log alert rule](../alerts/alerts-unified-log.md) tha
| **Scope** | | | Target scope | Select your Log Analytics workspace. | | **Condition** | |
-| Query | `Usage \| where IsBillable \| summarize DataGB = sum(Quantity / 1000.)` |
+| Query | `Usage | where IsBillable | summarize DataGB = sum(Quantity / 1000)` |
| Measurement | Measure: *DataGB*<br>Aggregation type: Total<br>Aggregation granularity: 1 day | | Alert Logic | Operator: Greater than<br>Threshold value: 50<br>Frequency of evaluation: 1 day | | Actions | Select or add an [action group](../alerts/action-groups.md) to notify you when the threshold is exceeded. |
azure-monitor Prefer Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/prefer-options.md
# Prefer options
-The API supports setting some request options using the `Prefer` header. This section describes how to set each preference and their values.
+The API supports setting some request and response options using the `Prefer` header. This section describes how to set each preference and their values.
## Visualization information
-In the query language, you can specify different render options. By default, the API does not return information about the type of visualization. To include a specific visualization, include this header:
+In the query language, you can specify different render options. By default, the API doesn't return information about the type of visualization. To include a specific visualization, include this header:
``` Prefer: include-render=true
To get information about query statistics, include this header:
``` The header includes a `statistics` property in the response that describes various performance statistics such as query execution time and resource usage.+
+## Query timeout
+The default query timeout is 3 minutes. To adjust the query timeout, set the `wait` property, as documented [here](timeouts.md).
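For example, assuming the `wait` value is expressed in seconds as described in the timeouts article, the header might look like:

```
 Prefer: wait=600
```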
+
+## Query data sources
+To get information about the query data sources - regions, workspaces, clusters and tables, include this header:
+
+```
+ Prefer: include-dataSources=true
+```
azure-monitor Log Analytics Workspace Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-health.md
Last updated 02/07/2023
# Monitor Log Analytics workspace health
-[Azure Service Health](../../service-health/overview.md) monitors the health of your cloud resources, including Log Analytics workspaces. When a Log Analytics workspace is healthy, data you collect from resources in your IT environment is available for querying and analysis in a relatively short period of time, known as [latency](../logs/data-ingestion-time.md). This article explains how to view the health status of your Log Analytics workspace and set up alerts to track Log Analytics workspace health status changes.
+[Azure Service Health](../../service-health/overview.md) monitors the health of your cloud resources, including Log Analytics workspaces. When a Log Analytics workspace is healthy, data you collect from resources in your IT environment is available for querying and analysis in a relatively short period of time, known as [latency](../logs/data-ingestion-time.md). This article explains how to view the health status of your Log Analytics workspace, set up workspace health status alerts, and view workspace health metrics.
Azure Service Health monitors:
To view your Log Analytics workspace health and set up health status alerts:
1. To set up health status alerts, you can either [enable recommended out-of-the-box alert rules](../alerts/alerts-overview.md#recommended-alert-rules), or manually create new alert rules.
    - To enable the recommended alert rules:
- 1. select **Alerts**, then select **Enable recommended alert rules**. The **Enable recommended alert rules** pane opens with a list of recommended alert rules based on your type of resource.
- 1. In the **Alert me if** section, select all of the rules you want to enable. The rules are populated with the default values for the rule condition, you can change the default values if you would like.
- 1. In the **Notify me by** section, select the way you want to be notified if an alert is fired.
+ 1. Select **Alerts** > **Enable recommended alert rules**.
+
+ The **Enable recommended alert rules** pane opens with a list of recommended alert rules for your Log Analytics workspace.
+
+ :::image type="content" source="../alerts/media/alerts-managing-alert-instances/alerts-enable-recommended-alert-rule-pane.png" alt-text="Screenshot of recommended alert rules pane.":::
+
+ 1. In the **Alert me if** section, select all of the rules you want to enable.
+ 1. In the **Notify me by** section, select the way you want to be notified if an alert is triggered.
 1. Select **Use an existing action group**, and enter the details of the existing action group if you want to use an action group that already exists.
 1. Select **Enable**.
- :::image type="content" source="../alerts/media/alerts-managing-alert-instances/alerts-enable-recommended-alert-rule-pane.png" alt-text="Screenshot of recommended alert rules pane.":::
- - To create a new alert rule: 1. Select **Add resource health alert**.
- The **Create alert rule** wizard opens, with the **Scope** and **Condition** panes pre-populated. By default, the rule triggers alerts all status changes in all Log Analytics workspaces in the subscription. If necessary, you can edit and modify the scope and condition at this stage.
+    The **Create alert rule** wizard opens, with the **Scope** and **Condition** panes prepopulated. By default, the rule triggers alerts for all status changes in all Log Analytics workspaces in the subscription. If necessary, you can edit and modify the scope and condition at this stage.
    :::image type="content" source="media/data-ingestion-time/log-analytics-workspace-latency-alert-rule.png" lightbox="media/data-ingestion-time/log-analytics-workspace-latency-alert-rule.png" alt-text="Screenshot that shows the Create alert rule wizard for Log Analytics workspace latency issues.":::
    1. Follow the rest of the steps in [Create a new alert rule in the Azure portal](../alerts/alerts-create-new-alert-rule.md#create-a-new-alert-rule-in-the-azure-portal).
+## View Log Analytics workspace health metrics
+
+Azure Monitor exposes a set of metrics that provide insight into Log Analytics workspace health.
+
+To view Log Analytics workspace health metrics:
+
+1. Select **Metrics** from the Log Analytics workspace menu. This opens [Metrics Explorer](../essentials/metrics-charts.md) in the context of your Log Analytics workspace.
+1. In the **Metric** field, select one of the Log Analytics workspace health metrics:
+
+ | Metric name | Description |
+ | - | - |
+ | Query count | Total number of user queries in the Log Analytics workspace within the selected time range.<br>This number includes only user-initiated queries, and doesn't include queries initiated by Sentinel rules and alert-related queries. |
+ | Query failure count | Total number of failed user queries in the Log Analytics workspace within the selected time range.<br>This number includes all queries that return 5XX response codes - except 504 *Gateway Timeout* - which indicate an error related to the application gateway or the backend server.|
+ | Query success rate | Total number of successful user queries in the Log Analytics workspace within the selected time range.<br>This number includes all queries that return 2XX, 4XX, and 504 response codes; in other words, all user queries that don't result in a service error. |
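If you prefer to pull these health metrics programmatically rather than through Metrics Explorer, a sketch along the following lines could work with the `azure-monitor-query` client library. The workspace resource ID is a placeholder, and the metric names are taken from the display names in the table above; the exact metric IDs exposed by the API may differ, so treat them as assumptions to verify.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

# Placeholder: the full Azure resource ID of your Log Analytics workspace.
workspace_resource_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
    "Microsoft.OperationalInsights/workspaces/<workspace-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())
response = client.query_resource(
    workspace_resource_id,
    metric_names=["Query count", "Query failure count"],  # assumed names from the table above
    timespan=timedelta(days=1),
    granularity=timedelta(hours=1),
    aggregations=["Total"],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(metric.name, point.timestamp, point.total)
```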
+ ## Investigate Log Analytics workspace health issues
To investigate Log Analytics workspace health issues:
- Query the data in your Log Analytics workspace to [understand which factors are contributing to greater than expected latency in your workspace](../logs/data-ingestion-time.md).
- [Use the `_LogOperation` function to view and set up alerts about operational issues](../logs/monitor-workspace.md) logged in your Log Analytics workspace.
+
## Next steps

Learn more about:
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Title: "What's new in Azure Monitor documentation" description: "What's new in Azure Monitor documentation" Previously updated : 01/06/2023 Last updated : 04/04/2023 # What's new in Azure Monitor documentation
-This article lists significant changes to Azure Monitor documentation.
+This article lists significant changes to Azure Monitor documentation.
+
+## March 2023
+
+|Subservice| Article | Description |
+||||
+Alerts|[Manage your alert rules](alerts/alerts-manage-alert-rules.md)|Updated article to reflect that the user can duplicate an existing alert rule.|
+Alerts|[Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Azure portal](../aks/learn/quick-kubernetes-deploy-portal.md)|You can enable recommended alert rules while creating an AKS cluster in the Azure portal. |
+Alerts|[Monitor Log Analytics workspace health](logs/log-analytics-workspace-health.md)|If you have a Log Analytics workspace without any configured alert rules, you can enable recommended alert rules from the Alerts page of a Log Analytics Workspace.|
Alerts|[Connect Azure to ITSM tools by using IT Service Management](alerts/itsmc-definition.md)|Updated the workflow for creating ServiceNow ITSM tickets from an Azure Monitor alert. The article now specifies separate workflows for ITSM actions, incidents, and events.|
+Alerts|[Manage your alert rules](alerts/alerts-manage-alert-rules.md)|Recommended alert rules are now enabled for all customers and are no longer public preview.|
+Alerts|[Create a new alert rule](alerts/alerts-create-new-alert-rule.md)|The documentation was updated to reflect the updated "Create new alert rule" UI. The alert rule creation wizard clearly indicates the most commonly used resources and signals for their alerts to help users more easily create alert rules.|
+Alerts|[Understanding Azure Active Directory Application Proxy Complex application scenario (Preview)](../active-directory/app-proxy/application-proxy-configure-complex-application.md)|The documentation for the common schema used in the alerts payload has been updated to contain the detailed information about the fields in the payload of each alert type. |
+Alerts|[Supported resources for metric alerts in Azure Monitor](alerts/alerts-metric-near-real-time.md)|Updated list of metrics supported by metric alert rules.|
+Alerts|[Create and manage action groups in the Azure portal](alerts/action-groups.md)|Updated the documentation explaining the retry logic used in action groups that use webhooks.|
+Alerts|[Create and manage action groups in the Azure portal](alerts/action-groups.md)|Added list of countries supported by voice notifications.|
+Alerts|[Connect ServiceNow to Azure Monitor](alerts/itsmc-secure-webhook-connections-servicenow.md)|Added Tokyo to list of supported ServiceNow webhook integrations.|
+Application-Insights|[Application Insights SDK support guidance](app/sdk-support-guidance.md)|Release notes are now available for each SDK.|
+Application-Insights|[What is distributed tracing and telemetry correlation?](app/distributed-tracing-telemetry-correlation.md)|We've merged our documents related to distributed tracing and telemetry correlation.|
+Application-Insights|[Application Insights availability tests](app/availability-overview.md)|We've separated and called out the two Classic Tests, which are older versions of availability tests.|
+Application-Insights|[Microsoft Azure Monitor Application Insights JavaScript SDK advanced topics](app/javascript-sdk-advanced.md)|JavaScript SDK advanced topics now include npm setup, cookie configuration and management, source map un-minify support, and tree shaking optimized code.|
+Application-Insights|[Microsoft Azure Monitor Application Insights JavaScript SDK](app/javascript-sdk.md)|Our introductory article to the JavaScript SDK now provides only the fast and easy code snippet method of getting started.|
+Application-Insights|[Geolocation and IP address handling](app/ip-collection.md)|Code samples have been updated for .NET 6/7.|
+Application-Insights|[Application Insights logging with .NET](app/ilogger.md)|Code samples have been updated for .NET 6/7.|
+Application-Insights|[Azure Monitor overview](overview.md)|Azure Monitor overview graphics updated along with related content|
+Containers|[Metric alert rules in Container insights (preview)](containers/container-insights-metric-alerts.md)|Updated to indicate deprecation of metric alerts.|
+Containers|[Azure Monitor Container Insights for Azure Arc-enabled Kubernetes clusters](containers/container-insights-enable-arc-enabled-clusters.md)|Added option for Azure Monitor Private Link Scope (AMPLS) + Proxy.|
+Essentials|[Collect Prometheus metrics from an AKS cluster (preview)](essentials/prometheus-metrics-enable.md)|Enable Windows metric collection with the metrics add-on.|
+Essentials|[Query Prometheus metrics using the API and PromQL](essentials/prometheus-api-promql.md)|New Article: Query Azure Monitor workspaces using REST and PromQL|
+Essentials|[Configure remote write for Azure Monitor managed service for Prometheus using Azure Active Directory authentication (preview)](essentials/prometheus-remote-write-active-directory.md)|Added Prometheus remote write Active Directory relabel configuration.|
+Essentials|[Built-in policies for Azure Monitor](essentials/diagnostics-settings-policies-deployifnotexists.md)|New built-in policies to create diagnostic settings in Azure Monitor with deploy-if-not-exists defaults.|
+Logs|[Logs Ingestion API in Azure Monitor](logs/logs-ingestion-api-overview.md)|Updated to include client libraries.|
+Logs|[Tutorial: Send data to Azure Monitor using Logs ingestion API (Resource Manager templates)](logs/tutorial-logs-ingestion-api.md)|Rewritten to be more consistent with related tutorial.|
+Logs|[Sample code to send data to Azure Monitor using Logs ingestion API](logs/tutorial-logs-ingestion-code.md)|New article with sample code using Logs ingestion API, including new client ingestion libraries for Python, .NET, Java, and JavaScript.|
+Logs|[Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Azure portal)](logs/tutorial-logs-ingestion-portal.md)|Rewritten to be more consistent with related tutorial.|
+Snapshot-Debugger|[Enable Profiler for ASP.NET Core web applications hosted in Linux on App Services](profiler/profiler-aspnetcore-linux.md)|Update code snippets from .NET 5 to .NET 6|
+Snapshot-Debugger|[Enable Snapshot Debugger for .NET apps in Azure Service Fabric, Cloud Service, and Virtual Machines](snapshot-debugger/snapshot-debugger-vm.md)|Update code snippets from .NET 5 to .NET 6|
+ ## February 2023
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
The following diagram demonstrates how customer-managed keys work with Azure Net
* Customer-managed keys can only be configured on new volumes. You can't migrate existing volumes to customer-managed key encryption.
* To create a volume using customer-managed keys, you must select the *Standard* network features. You can't use customer-managed key volumes with volumes configured using Basic network features. Follow the instructions to [Set the Network Features option](configure-network-features.md#set-the-network-features-option) on the volume creation page.
+* Customer-managed keys private endpoints do not support the **Disable public access** option. You must choose one of the **Allow public access** options.
* Switching from user-assigned identity to the system-assigned identity isn't currently supported.
* MSI automatic certificate renewal isn't currently supported.
* The MSI certificate has a lifetime of 90 days. It becomes eligible for renewal after 46 days. **After 90 days, the certificate is no longer valid and the customer-managed key volumes under the NetApp account will go offline.**
azure-resource-manager Bicep Config Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-linter.md
The following example shows the rules that are available for configuration.
"use-resource-id-functions": { "level": "warning" },
+ "use-resource-symbol-reference": {
+ "level": "warning"
+ },
"use-stable-resource-identifiers": { "level": "warning" },
azure-resource-manager Linter Rule Use Resource Symbol Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-use-resource-symbol-reference.md
+
+ Title: Linter rule - use resource symbol reference
+description: Linter rule - use resource symbol reference
+ Last updated : 03/30/2023++
+# Linter rule - use resource symbol reference
+
+This rule detects suboptimal uses of the [`reference`](./bicep-functions-resource.md#reference) and [`list`](./bicep-functions-resource.md#list) functions. Referencing the resource symbol directly instead of invoking these functions simplifies the syntax and allows Bicep to better understand your deployment dependency graph.
+
+## Linter rule code
+
+Use the following value in the [Bicep configuration file](bicep-config-linter.md) to customize rule settings:
+
+`use-resource-symbol-reference`
+
+## Solution
+
+The following example fails this test because it uses the `reference` and `listKeys` functions:
+
+```bicep
+@description('The name of the HDInsight cluster to create.')
+param clusterName string
+
+@description('These credentials can be used to submit jobs to the cluster and to log into cluster dashboards.')
+param clusterLoginUserName string
+
+@description('The password must be at least 10 characters in length and must contain at least one digit, one upper case letter, one lower case letter, and one non-alphanumeric character except (single-quote, double-quote, backslash, right-bracket, full-stop). Also, the password must not contain 3 consecutive characters from the cluster username or SSH username.')
+@minLength(10)
+@secure()
+param clusterLoginPassword string
+
+@description('Location for all resources.')
+param location string = resourceGroup().location
+
+param storageAccountName string = uniqueString(resourceGroup().id)
+
+resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' existing = {
+ name: storageAccountName
+}
+
+resource cluster 'Microsoft.HDInsight/clusters@2021-06-01' = {
+ name: clusterName
+ location: location
+ properties: {
+ clusterVersion: '4.0'
+ osType: 'Linux'
+ clusterDefinition: {
+ kind: 'hbase'
+ configurations: {
+ gateway: {
+ 'restAuthCredential.isEnabled': true
+ 'restAuthCredential.username': clusterLoginUserName
+ 'restAuthCredential.password': clusterLoginPassword
+ }
+ }
+ }
+ storageProfile: {
+ storageaccounts: [
+ {
+ name: replace(replace(reference(storageAccount.id, '2022-09-01').primaryEndpoints.blob, 'https://', ''), '/', '')
+ isDefault: true
+ container: clusterName
+ key: listKeys(storageAccount.id, '2022-09-01').keys[0].value
+ }
+ ]
+ }
+ }
+}
+```
+
+You can fix the problem by using resource reference:
+
+```bicep
+@description('The name of the HDInsight cluster to create.')
+param clusterName string
+
+@description('These credentials can be used to submit jobs to the cluster and to log into cluster dashboards.')
+param clusterLoginUserName string
+
+@description('The password must be at least 10 characters in length and must contain at least one digit, one upper case letter, one lower case letter, and one non-alphanumeric character except (single-quote, double-quote, backslash, right-bracket, full-stop). Also, the password must not contain 3 consecutive characters from the cluster username or SSH username.')
+@minLength(10)
+@secure()
+param clusterLoginPassword string
+
+@description('Location for all resources.')
+param location string = resourceGroup().location
+
+param storageAccountName string = uniqueString(resourceGroup().id)
+
+resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' existing = {
+ name: storageAccountName
+}
+
+resource cluster 'Microsoft.HDInsight/clusters@2021-06-01' = {
+ name: clusterName
+ location: location
+ properties: {
+ clusterVersion: '4.0'
+ osType: 'Linux'
+ clusterDefinition: {
+ kind: 'hbase'
+ configurations: {
+ gateway: {
+ 'restAuthCredential.isEnabled': true
+ 'restAuthCredential.username': clusterLoginUserName
+ 'restAuthCredential.password': clusterLoginPassword
+ }
+ }
+ }
+ storageProfile: {
+ storageaccounts: [
+ {
+ name: replace(replace(storageAccount.properties.primaryEndpoints.blob, 'https://', ''), '/', '')
+ isDefault: true
+ container: clusterName
+ key: storageAccount.listKeys().keys[0].value
+ }
+ ]
+ }
+ }
+}
+```
+
+You can fix the issue automatically by selecting **Quick Fix**, as shown in the following screenshot:
++
+## Next steps
+
+For more information about the linter, see [Use Bicep linter](./linter.md).
azure-resource-manager Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter.md
The default set of linter rules is minimal and taken from [arm-ttk test cases](.
- [use-protectedsettings-for-commandtoexecute-secrets](./linter-rule-use-protectedsettings-for-commandtoexecute-secrets.md)
- [use-recent-api-versions](./linter-rule-use-recent-api-versions.md)
- [use-resource-id-functions](./linter-rule-use-resource-id-functions.md)
- [use-protectedsettings-for-commandtoexecute-secrets](./linter-rule-use-protectedsettings-for-commandtoexecute-secrets.md) - [use-recent-api-versions](./linter-rule-use-recent-api-versions.md) - [use-resource-id-functions](./linter-rule-use-resource-id-functions.md)
+- [use-resource-symbol-reference](./linter-rule-use-resource-symbol-reference.md)
- [use-stable-resource-identifiers](./linter-rule-use-stable-resource-identifier.md)
- [use-stable-vm-image](./linter-rule-use-stable-vm-image.md)
azure-vmware Azure Vmware Solution Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-known-issues.md
description: This article provides details about the known issues of Azure VMwar
Previously updated : 3/20/2023 Last updated : 4/4/2023 # Known issues: Azure VMware Solution
Refer to the table below to find details about resolution dates or possible work
|Issue | Date discovered | Workaround | Date resolved |
| :- | : | :- | :- |
| [VMSA-2021-002 ESXiArgs](https://www.vmware.com/security/advisories/VMSA-2021-0002.html) OpenSLP vulnerability publicized in February 2023 | 2021 | [Disable OpenSLP service](https://kb.vmware.com/s/article/76372) | February 2021 - Resolved in [ESXi 7.0 U3c](concepts-private-clouds-clusters.md#vmware-software-versions) |
-| After my private cloud NSX-T Data Center upgrade to version [3.2.2](https://docs.vmware.com/en/VMware-NSX/3.2.2/rn/vmware-nsxt-data-center-322-release-notes/https://docsupdatetracker.net/index.html), the NSX-T Manager **DNS Forwarder Upstream Server Timeout** alarm is raised | February 2023 | [Enable private cloud internet Access](concepts-design-public-internet-access.md), alarm is raised because NSX-T Manager cannot access the configured CloudFlare DNS server. Otherwise, [change the default DNS zone to point to a valid and reachable DNS server.](configure-dns-azure-vmware-solution.md) | February 2023 |
+| After my private cloud NSX-T Data Center upgrade to version [3.2.2](https://docs.vmware.com/en/VMware-NSX/3.2.2/rn/vmware-nsxt-data-center-322-release-notes/https://docsupdatetracker.net/index.html), the NSX-T Manager **DNS - Forwarder Upstream Server Timeout** alarm is raised | February 2023 | [Enable private cloud internet Access](concepts-design-public-internet-access.md), alarm is raised because NSX-T Manager cannot access the configured CloudFlare DNS server. Otherwise, [change the default DNS zone to point to a valid and reachable DNS server.](configure-dns-azure-vmware-solution.md) | February 2023 |
+| When first logging into the vSphere Client, the **Cluster-n: vSAN health alarms are suppressed** alert is active | 2021 | This should be considered an informational message, since Microsoft manages the service. Select the **Reset to Green** link to clear it. | 2021 |
In this article, you learned about the current known issues with the Azure VMware Solution.
backup Backup Azure Vms Enhanced Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-enhanced-policy.md
Title: Back up Azure VMs with Enhanced policy description: Learn how to configure Enhanced policy to back up VMs. Previously updated : 03/15/2023- Last updated : 04/05/2023+
Azure Backup now supports _Enhanced policy_ that's needed to support new Azure o
>[!Important] >- [Default policy](./backup-during-vm-creation.md#create-a-vm-with-backup-configured) will not support protecting newer Azure offerings, such as [Trusted Launch VM](backup-support-matrix-iaas.md#tvm-backup), [Ultra SSD](backup-support-matrix-iaas.md#vm-storage-support), [Shared disk](backup-support-matrix-iaas.md#vm-storage-support), and Confidential Azure VMs.
->- Enhanced policy currently doesn't support protecting Ultra SSD. You can use [selective disk backup (preview)](selective-disk-backup-restore.md) to exclude these disks, and then configure backup.
+>- Enhanced policy now supports protecting Ultra SSD (preview). To enroll your subscription for this feature, [fill this form](https://forms.office.com/r/1GLRnNCntU).
>- Backups for VMs that have [data access authentication enabled disks](../virtual-machines/windows/download-vhd.md?tabs=azure-portal#secure-downloads-and-uploads-with-azure-ad) will fail. You must enable backup of Trusted Launch VMs through Enhanced policy only.

Enhanced policy provides the following features:
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backups description: Get a summary of support settings and limitations for backing up Azure VMs by using the Azure Backup service. Previously updated : 02/27/2023 Last updated : 04/05/2023 -+
Restore files from network-restricted storage accounts | Not supported.
Restore files on VMs by using Windows Storage Spaces | Not supported on the same VM.<br/><br/> Instead, restore the files on a compatible VM.
Restore files on a Linux VM by using LVM or RAID arrays | Not supported on the same VM.<br/><br/> Restore on a compatible VM.
Restore files with special network settings | Not supported on the same VM. <br/><br/> Restore on a compatible VM.
+Restore files from an ultra disk | Supported. <br/><br/>See [Azure VM storage support](#vm-storage-support).
Restore files from a shared disk, temporary drive, deduplicated disk, ultra disk, or disk with a write accelerator enabled | Not supported. <br/><br/>See [Azure VM storage support](#vm-storage-support).

## Support for VM management
Adding a disk to a protected VM | Supported.
Resizing a disk on a protected VM | Supported.
Shared storage| Backing up VMs by using Cluster Shared Volumes (CSV) or Scale-Out File Server isn't supported. CSV writers are likely to fail during backup. On restore, disks that contain CSV volumes might not come up.
[Shared disks](../virtual-machines/disks-shared-enable.md) | Not supported.
-Ultra SSD disks | Not supported. For more information, see [these limitations](selective-disk-backup-restore.md#limitations).
+<a name="ultra-disk-backup">Ultra SSD disks</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). The support is currently in preview. <br><br> To enroll your subscription for this feature, [fill this form](https://forms.office.com/r/1GLRnNCntU).
[Temporary disks](../virtual-machines/managed-disks-overview.md#temporary-disk) | Azure Backup doesn't back up temporary disks.
NVMe/[ephemeral disks](../virtual-machines/ephemeral-os-disks.md) | Not supported.
[Resilient File System (ReFS)](/windows-server/storage/refs/refs-overview) restore | Supported. Volume Shadow Copy Service (VSS) supports app-consistent backups on ReFS.
backup Selective Disk Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/selective-disk-backup-restore.md
Title: Selective disk backup and restore for Azure virtual machines description: In this article, learn about selective disk backup and restore using the Azure virtual machine backup solution. Previously updated : 03/15/2023 Last updated : 04/05/2023
Azure Backup supports backing up all the disks (operating system and data) in a
This is supported for both Enhanced policy (preview) and Standard policy. This provides an efficient and cost-effective solution for your backup and restore needs. Each recovery point contains only the disks that are included in the backup operation. This also allows you to restore a subset of disks from the given recovery point during the restore operation. This applies to both restore from snapshots and from the vault.
+>[!Important]
+> [Enhanced policy](backup-azure-vms-enhanced-policy.md) now supports protecting Ultra SSD (preview). To enroll your subscription for this feature, [fill this form](https://forms.office.com/r/1GLRnNCntU).
>[!Note]
>- This is supported for both backup policies - [Enhanced policy](backup-azure-vms-enhanced-policy.md) and [Standard policy](backup-during-vm-creation.md#create-a-vm-with-backup-configured).
>- The *Selective disk backup and restore in Enhanced policy (preview)* is available in public Azure regions only.
This solution is useful particularly in the following scenarios:
1. If you have critical data to be backed up in only one disk, or a subset of the disks, and don't want to back up the rest of the disks attached to a VM, to minimize the backup storage costs.
2. If you have other backup solutions for part of your VM or data. For example, if you back up your databases or data using a different workload backup solution and you want to use Azure VM level backup for the rest of the data or disks to build an efficient and robust system using the best capabilities available.
-3. If you're using [Enhanced policy](backup-azure-vms-enhanced-policy.md), you can use this solution to exclude unsupported disks (Ultra Disks, Shared Disks) and configure a VM for backup.
+3. If you're using [Enhanced policy](backup-azure-vms-enhanced-policy.md), you can use this solution to exclude unsupported disks (Shared Disks) and configure a VM for backup.
Using PowerShell, Azure CLI, or the Azure portal, you can configure selective disk backup of the Azure VM. Using a script, you can include or exclude data disks using their *LUN numbers*. The ability to configure selective disk backup via the Azure portal is limited to the *Backup OS Disk* only for the Standard policy, but can be configured for all data disks for Enhanced policy.
Selective disks backup functionality for Standard policy isn't supported for cla
The restore options to **Create new VM** and **Replace existing** aren't supported for the VM for which selective disks backup functionality is enabled.
-Currently, Azure VM backup doesn't support VMs with ultra-disks or shared disks attached to them. Selective disk backup for Standard policy can't be used to in such cases, which exclude the disk and backup the VM. You can use selective disk backup with Enhanced policy to exclude these disks and configure backup.
+Currently, Azure VM backup doesn't support VMs with shared disks attached to them. Selective disk backup for Standard policy can't be used in such cases to exclude the disk and back up the VM. You can use selective disk backup with Enhanced policy to exclude these disks and configure backup.
If you use disk exclusion or selective disks while backing up Azure VM, _[stop protection and retain backup data](backup-azure-manage-vms.md#stop-protection-and-retain-backup-data)_. When resuming backup for this resource, you need to set up disk exclusion settings again.
If you're using standard policy, the Selective disk backup features let you save
If you're using Enhanced policy, the snapshot is taken only for the OS disk and the data disks that you've included.
-### I can't configure backup for the Azure virtual machine by excluding ultra disk or shared disks attached to the VM
+### I can't configure backup for the Azure virtual machine by excluding shared disks attached to the VM
-If you're using Standard policy, Azure VM backup doesn't support VMs with ultra-disk or shared disk attached to them and it is not possible to exclude them with selective disk backup and then configure backup.
+If you're using Standard policy, Azure VM backup doesn't support VMs with shared disk attached to them and it is not possible to exclude them with selective disk backup and then configure backup.
If you're using Enhanced policy, you can exclude the unsupported disks from the backup via selective disk backup (in the Azure portal, CLI, PowerShell, and so on), and configure backup for the VM.
batch Batch Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-customer-managed-key.md
Title: Configure customer-managed keys for your Azure Batch account with Azure Key Vault and Managed Identity description: Learn how to encrypt Batch data using customer-managed keys. Previously updated : 02/27/2023 Last updated : 04/03/2023 ms.devlang: csharp
customer-managed keys at Batch account creation, as shown next.
If you don't need a separate user-assigned managed identity, you can enable system-assigned managed identity when you create your Batch account.
+> [!IMPORTANT]
+> A system-assigned managed identity created for a Batch account for customer data encryption as described in this document
+> cannot be used as a [user-assigned managed identity on a Batch pool](managed-identity-pools.md). If you wish to use the
+> same managed identity on both the Batch account and Batch pool, then use a common user-assigned managed identity instead.
+ ### Azure portal In the [Azure portal](https://portal.azure.com/), when you create Batch accounts, pick **System assigned** in the identity type under the **Advanced** tab.
az batch account set \
- **Can I disable customer-managed keys?** You can set the encryption type of the Batch Account back to "Microsoft managed key" at any time. You're free to delete or change the key afterwards.
- **How can I rotate my keys?** Customer-managed keys aren't automatically rotated unless the [key is versionless with an appropriate key rotation policy set within Key Vault](../key-vault/keys/how-to-configure-key-rotation.md). To manually rotate the key, update the Key Identifier that the account is associated with.
- **After I restore access, how long will it take for the Batch account to work again?** It can take up to 10 minutes for the account to be accessible again once access is restored.
+- **While the Batch Account is unavailable, what happens to my resources?** Any pools that are active when Batch access to the customer-managed key is lost will continue to run. However, the nodes in these pools will transition into an unavailable state, and tasks will stop running (and be requeued). Once access is restored, nodes become available again, and tasks are restarted.
- **Does this encryption mechanism apply to VM disks in a Batch pool?** No. For Cloud Services Configuration pools (which are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/)), no encryption is applied for the OS and temporary disk. For Virtual Machine Configuration pools, the OS and any specified data disks are encrypted with a Microsoft platform managed key by default. Currently, you can't specify your own key for these disks. To encrypt the temporary disk of VMs for a Batch pool with a Microsoft platform managed key, you must enable the [diskEncryptionConfiguration](/rest/api/batchservice/pool/add#diskencryptionconfiguration) property in your [Virtual Machine Configuration](/rest/api/batchservice/pool/add#virtualmachineconfiguration) Pool. For highly sensitive environments, we recommend enabling temporary disk encryption and avoiding storing sensitive data on OS and data disks. For more information, see [Create a pool with disk encryption enabled](./disk-encryption.md)
- **Is the system-assigned managed identity on the Batch account available on the compute nodes?** No. The system-assigned managed identity is currently used only for accessing the Azure Key Vault for the customer-managed key. To use a user-assigned managed identity on compute nodes, see [Configure managed identities in Batch pools](managed-identity-pools.md).
batch Managed Identity Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/managed-identity-pools.md
Title: Configure managed identities in Batch pools description: Learn how to enable user-assigned managed identities on Batch pools and how to use managed identities within the nodes. Previously updated : 04/18/2022 Last updated : 04/03/2023 ms.devlang: csharp # Configure managed identities in Batch pools
-[Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) eliminate the need for developers having to manage credentials by providing an identity for the Azure resource in Azure Active Directory (Azure AD) and using it to obtain Azure Active Directory (Azure AD) tokens.
+[Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) eliminate
+complicated identity and credential management by providing an identity for the Azure resource in Azure Active Directory
+(Azure AD). This identity is used to obtain Azure Active Directory (Azure AD) tokens to authenticate with target
+resources in Azure.
This topic explains how to enable user-assigned managed identities on Batch pools and how to use managed identities within the nodes.
This topic explains how to enable user-assigned managed identities on Batch pool
> > Creating pools with managed identities can be done by using the [Batch .NET management library](/dotnet/api/overview/azure/batch#management-library), but is not currently supported with the [Batch .NET client library](/dotnet/api/overview/azure/batch#client-library).
-## Create a user-assigned identity
+## Create a user-assigned managed identity
-First, [create your user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) in the same tenant as your Batch account. You can create the identity using the Azure portal, the Azure Command-Line Interface (Azure CLI), PowerShell, Azure Resource Manager, or the Azure REST API. This managed identity does not need to be in the same resource group or even in the same subscription.
+First, [create your user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) in the same tenant as your Batch account. You can create the identity using the Azure portal, the Azure Command-Line Interface (Azure CLI), PowerShell, Azure Resource Manager, or the Azure REST API. This managed identity doesn't need to be in the same resource group or even in the same subscription.
> [!IMPORTANT]
-> Identities must be configured as user-assigned managed identities. The system-assigned managed identity is available for retrieving [customer-managed keys from Azure KeyVault](batch-customer-managed-key.md), but these are not supported in batch pools.
+> A system-assigned managed identity created for a Batch account for [customer data encryption](batch-customer-managed-key.md)
+> cannot be used as a user-assigned managed identity on a Batch pool as described in this document. If you wish to use the same
+> managed identity on both the Batch account and Batch pool, then use a common user-assigned managed identity instead.
## Create a Batch pool with user-assigned managed identities
To create a Batch pool with a user-assigned managed identity through the Azure p
1. In the search bar, enter and select **Batch accounts**. 1. On the **Batch accounts** page, select the Batch account where you want to create a Batch pool. 1. In the menu for the Batch account, under **Features**, select **Pools**.
-1. In the **Pools** menu, select **Add** to add a new Batch pool.
+1. In the **Pools** menu, select **Add** to add a new Batch pool.
1. For **Pool ID**, enter an identifier for your pool.
1. For **Identity**, change the setting to **User assigned**.
1. Under **User assigned managed identity**, select **Add**.
To create a Batch pool with a user-assigned managed identity with the [Batch .NE
```csharp var poolParameters = new Pool(name: "yourPoolName") {
- VmSize = "standard_d1_v2",
+ VmSize = "standard_d2_v3",
ScaleSettings = new ScaleSettings { FixedScale = new FixedScaleSettings
var poolParameters = new Pool(name: "yourPoolName")
VirtualMachineConfiguration = new VirtualMachineConfiguration( new ImageReference( "Canonical",
- "UbuntuServer",
- "18.04-LTS",
+ "0001-com-ubuntu-server-jammy",
+ "22_04-lts",
"latest"),
- "batch.node.ubuntu 18.04")
+ "batch.node.ubuntu 22.04")
}, Identity = new BatchPoolIdentity {
var pool = await managementClient.Pool.CreateWithHttpMessagesAsync(
resourceGroupName: "yourResourceGroupName", accountName: "yourAccountName", parameters: poolParameters,
- cancellationToken: default(CancellationToken)).ConfigureAwait(false);
+ cancellationToken: default(CancellationToken)).ConfigureAwait(false);
``` ## Use user-assigned managed identities in Batch nodes
-Many Azure Batch technologies which access other Azure resources, such as Azure Storage or Azure Container Registry, support managed identities. For more information on using managed identities with Azure Batch, see the following links:
+Many Azure Batch functions that access other Azure resources directly on the compute nodes, such as Azure Storage or
+Azure Container Registry, support managed identities. For more information on using managed identities with Azure Batch,
+see the following links:
- [Resource files](resource-files.md) - [Output files](batch-task-output-files.md#specify-output-files-using-managed-identity)
Within the Batch nodes, you can get managed identity tokens and use them to auth
For Windows, the PowerShell script to get an access token to authenticate is:

```powershell
-$Response = Invoke-RestMethod -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource={Resource App Id Url}' -Method GET -Headers @{Metadata="true"}
+$Response = Invoke-RestMethod -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource={Resource App Id Url}' -Method GET -Headers @{Metadata="true"}
```

For Linux, the Bash script is:
cdn Cdn Pop Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-pop-locations.md
This article lists current metros containing point-of-presence (POP) locations,
| Africa | Johannesburg, South Africa <br/> Nairobi, Kenya | South Africa |
| Middle East | Muscat, Oman<br />Fujirah, United Arab Emirates | Qatar<br />United Arab Emirates |
| India | Bengaluru (Bangalore), India<br />Chennai, India<br />Mumbai, India<br />New Delhi, India<br /> | India |
-| Asia | Hong Kong<br />Jakarta, Indonesia<br />Osaka, Japan<br />Tokyo, Japan<br />Singapore<br />Kaohsiung, Taiwan<br />Taipei, Taiwan <br />Manila, Philippines | Hong Kong<br />Indonesia<br />Israel<br />Japan<br />Macau<br />Malaysia<br />Philippines<br />Singapore<br />South Korea<br />Taiwan<br />Thailand<br />T├╝rkiye<br />Vietnam |
+| Asia | Hong Kong<br />Jakarta, Indonesia<br />Osaka, Japan<br />Tokyo, Japan<br />Singapore<br />Kaohsiung, Taiwan<br />Taipei, Taiwan <br />Manila, Philippines | Hong Kong<br />Indonesia<br />Israel<br />Japan<br />Macao<br />Malaysia<br />Philippines<br />Singapore<br />South Korea<br />Taiwan<br />Thailand<br />T├╝rkiye<br />Vietnam |
| Australia and New Zealand | Melbourne, Australia<br />Sydney, Australia<br />Auckland, New Zealand | Australia<br />New Zealand | ## Next steps
cognitive-services Prepare Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/How-to/prepare-data.md
-+ Last updated 11/01/2022
cognitive-services Streaming Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/How-to/streaming-inference.md
-+ Last updated 11/01/2022
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/How-to/train-model.md
-+ Last updated 11/01/2022 # Train a Multivariate Anomaly Detection model
-To test out Multivariate Anomaly Detection quickly, try the [Code Sample](https://github.com/Azure-Samples/AnomalyDetector)! For more instructions on how to run a jupyter notebook, please refer to [Install and Run a Jupyter Notebook](https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/install.html#).
+To test out Multivariate Anomaly Detection quickly, try the [Code Sample](https://github.com/Azure-Samples/AnomalyDetector)! For more instructions on how to run a Jupyter notebook, please refer to [Install and Run a Jupyter Notebook](https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/install.html#).
## API Overview
There are 7 APIs provided in Multivariate Anomaly Detection:
| | - | -- | |
|**Train Model**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models | Create and train a model |
|**Get Model Status**| GET | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}` | Get model status and model metadata with `modelId` |
-|**Batch Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`: detect-batch | Trigger an asynchronous inference with `modelId` which works in a batch scenario |
+|**Batch Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`: detect-batch | Trigger an asynchronous inference with `modelId`, which works in a batch scenario |
|**Get Batch Inference Results**| GET | `{endpoint}`/anomalydetector/v1.1/multivariate/detect-batch/`{resultId}` | Get batch inference results with `resultId` |
-|**Streaming Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`: detect-last | Trigger a synchronous inference with `modelId` which works in a streaming scenario |
+|**Streaming Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`: detect-last | Trigger a synchronous inference with `modelId`, which works in a streaming scenario |
|**List Model**| GET | `{endpoint}`/anomalydetector/v1.1/multivariate/models | List all models |
|**Delete Model**| DELETE | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}` | Delete model with `modelId` |
Other parameters for training API are optional:
| `Zero` | Fill `nan` values with 0. |
| `Fixed` | Fill `nan` values with a specified valid value that should be provided in `paddingValue`. |
-* **paddingValue**: Padding value is used to fill `nan` when `fillNAMethod` is `Fixed` and must be provided in that case. In other cases it's optional.
+* **paddingValue**: Padding value is used to fill `nan` when `fillNAMethod` is `Fixed` and must be provided in that case. In other cases, it's optional.
* **displayName**: This is an optional parameter, which is used to identify models. For example, you can use it to mark parameters, data sources, and any other metadata about the model and its input data. The default value is an empty string.
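To make the training call concrete, here is a rough Python sketch that POSTs to the Train Model endpoint listed in the table above with `fillNAMethod`, `paddingValue`, and `displayName` set. The endpoint, key, data source URL, and time range are placeholders, and the exact request body schema (for example, whether `fillNAMethod` sits inside an `alignPolicy` block) is an assumption to verify against the API reference.

```python
import requests

# Placeholders: substitute your Anomaly Detector resource endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-key>"

# Illustrative request body; field names are assumptions - confirm against the API reference.
body = {
    "dataSource": "<blob-SAS-url-to-your-training-data>",
    "startTime": "2021-01-01T00:00:00Z",
    "endTime": "2021-01-02T12:00:00Z",
    "slidingWindow": 200,
    "alignPolicy": {
        "alignMode": "Outer",
        "fillNAMethod": "Fixed",
        "paddingValue": 0,
    },
    "displayName": "sample-model",
}

response = requests.post(
    f"{ENDPOINT}/anomalydetector/v1.1/multivariate/models",
    headers={"Ocp-Apim-Subscription-Key": API_KEY, "Content-Type": "application/json"},
    json=body,
)

# On success the service typically returns 201 with a Location header pointing at the new modelId.
print(response.status_code, response.headers.get("Location"))
```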
The response contains four fields, `models`, `currentCount`, `maxCount`, and `ne
## Next steps
-* [Best practices of multivariate anomaly detection](../concepts/best-practices-multivariate.md)
+* [Best practices of multivariate anomaly detection](../concepts/best-practices-multivariate.md)
cognitive-services Anomaly Detection Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/concepts/anomaly-detection-best-practices.md
-+ Last updated 01/22/2021
cognitive-services Best Practices Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/concepts/best-practices-multivariate.md
-+ Last updated 06/07/2022
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/whats-new.md
Title: What's New - Anomaly Detector
description: This article is regularly updated with news about the Azure Cognitive Services Anomaly Detector. -+ Last updated 12/15/2022
Last updated 12/15/2022
Learn what's new in the service. These items include release notes, videos, blog posts, papers, and other types of information. Bookmark this page to keep up to date with the service.
-We've also added links to some user-generated content. Those items will be marked with **[UGC]** tag. Some of them are hosted on websites that are external to Microsoft and Microsoft is not responsible for the content there. Use discretion when you refer to these resources. Contact AnomalyDetector@microsoft.com or raise an issue on GitHub if you'd like us to remove the content.
+We have also added links to some user-generated content. Those items will be marked with **[UGC]** tag. Some of them are hosted on websites that are external to Microsoft and Microsoft isn't responsible for the content there. Use discretion when you refer to these resources. Contact AnomalyDetector@microsoft.com or raise an issue on GitHub if you'd like us to remove the content.
## Release notes ### Jan 2023
-* Multivariate Anomaly Detection will begin charging as of January 10th, 2023. For pricing details see the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/anomaly-detector/).
+* Multivariate Anomaly Detection will begin charging as of January 10th, 2023. For pricing details, see the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/anomaly-detector/).
### Dec 2022 * Multivariate Anomaly Detection SDK is updated to match with GA API for four languages.
We've also added links to some user-generated content. Those items will be marke
### June 2022
-* New blog released: [4 sets of best practices to use Multivariate Anomaly Detector when monitoring your equipment](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/4-sets-of-best-practices-to-use-multivariate-anomaly-detector/ba-p/3490848#footerContent).
+* New blog released: [Four sets of best practices to use Multivariate Anomaly Detector when monitoring your equipment](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/4-sets-of-best-practices-to-use-multivariate-anomaly-detector/ba-p/3490848#footerContent).
### May 2022
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
description: Learn about the different model capabilities that are available with Azure OpenAI. Previously updated : 03/21/2023 Last updated : 03/31/2023 --++ recommendations: false keywords:
Cushman is powerful, yet fast. While Davinci is stronger when it comes to analyz
## Embeddings models
-Currently, we offer three families of Embeddings models for different functionalities:
+> [!IMPORTANT]
+> We strongly recommend using `text-embedding-ada-002 (Version 2)`. This model/version provides parity with OpenAI's `text-embedding-ada-002`. To learn more about the improvements offered by this model, please refer to [OpenAI's blog post](https://openai.com/blog/new-and-improved-embedding-model). Even if you are currently using Version 1 you should migrate to Version 2 to take advantage of the latest weights/updated token limit. Version 1 and Version 2 are not interchangeable, so document embedding and document search must be done using the same version of the model.
+
+Currently, we offer three families of Embeddings models for different functionalities:
- [Similarity](#similarity-embedding) - [Text search](#text-search-embedding)
These models can only be used with Completions API requests.
These models can only be used with Embedding API requests.
+> [!NOTE]
+> We strongly recommend using `text-embedding-ada-002 (Version 2)`. This model/version provides parity with OpenAI's `text-embedding-ada-002`. To learn more about the improvements offered by this model, please refer to [OpenAI's blog post](https://openai.com/blog/new-and-improved-embedding-model). Even if you are currently using Version 1 you should migrate to Version 2 to take advantage of the latest weights/updated token limit. Version 1 and Version 2 are not interchangeable, so document embedding and document search must be done using the same version of the model.
| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
| | | | | |
-| text-embedding-ada-002 | East US, South Central US, West Europe | N/A |2,046 | Sep 2021 |
+| text-embedding-ada-002 (version 2) | East US, South Central US | N/A |8,191 | Sep 2021 |
+| text-embedding-ada-002 (version 1) | East US, South Central US, West Europe | N/A |4,095 | Sep 2021 |
| text-similarity-ada-001| East US, South Central US, West Europe | N/A | 2,046 | Aug 2020 |
| text-similarity-babbage-001 | South Central US, West Europe | N/A | 2,046 | Aug 2020 |
| text-similarity-curie-001 | East US, South Central US, West Europe | N/A | 2,046 | Aug 2020 |
cognitive-services Understand Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/understand-embeddings.md
Previously updated : 12/06/2022 Last updated : 03/22/2023 recommendations: false
Embeddings make it easier to do machine learning on large inputs representing wo
One method of identifying similar documents is to count the number of common words between documents. Unfortunately, this approach doesn't scale since an expansion in document size is likely to lead to a greater number of common words detected even among completely disparate topics. For this reason, cosine similarity can offer a more effective alternative.
-From a mathematic perspective, cosine similarity measures the cosine of the angle between two vectors projected in a multi-dimensional space. This is beneficial because if two documents are far apart by Euclidean distance because of size, they could still have a smaller angle between them and therefore higher cosine similarity.
+From a mathematical perspective, cosine similarity measures the cosine of the angle between two vectors projected in a multi-dimensional space. This is beneficial because if two documents are far apart by Euclidean distance because of size, they could still have a smaller angle between them and therefore higher cosine similarity. For more information on cosine similarity, see the [underlying formula](https://en.wikipedia.org/wiki/Cosine_similarity).
Azure OpenAI embeddings rely on cosine similarity to compute similarity between documents and a query.
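To make that idea concrete, here is a minimal sketch of the computation using NumPy. The two toy vectors stand in for a document embedding and a query embedding; they point in the same direction but have different lengths, so their Euclidean distance is nonzero while their cosine similarity is still 1.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: a.b / (|a| |b|)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for a document embedding and a query embedding.
doc = np.array([0.2, 0.1, 0.7])
query = np.array([0.4, 0.2, 1.4])  # same direction, twice the magnitude

print(np.linalg.norm(doc - query))    # Euclidean distance is nonzero
print(cosine_similarity(doc, query))  # cosine similarity is 1.0
```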
cognitive-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/tutorials/embeddings.md
Previously updated : 01/10/2022- Last updated : 03/31/2023+ recommendations: false
In this tutorial, you learn how to:
> * Install Azure OpenAI and other dependent Python libraries. > * Download the BillSum dataset and prepare it for analysis. > * Create environment variables for your resources endpoint and API key.
-> * Use the **text-search-curie-doc-001** and **text-search-curie-query-001** models.
+> * Use the **text-embedding-ada-002 (Version 2)** model
> * Use [cosine similarity](../concepts/understand-embeddings.md) to rank search results.
+> [!Important]
+> We strongly recommend using `text-embedding-ada-002 (Version 2)`. This model/version provides parity with OpenAI's `text-embedding-ada-002`. To learn more about the improvements offered by this model, please refer to [OpenAI's blog post](https://openai.com/blog/new-and-improved-embedding-model). Even if you are currently using Version 1 you should migrate to Version 2 to take advantage of the latest weights/updated token limit. Version 1 and Version 2 are not interchangeable, so document embedding and document search must be done using the same version of the model.
## Prerequisites

* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true)
* Access granted to Azure OpenAI in the desired Azure subscription. Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
* <a href="https://www.python.org/" target="_blank">Python 3.7.1 or later version</a>
-* The following Python libraries: openai, num2words, matplotlib, plotly, scipy, scikit-learn,pandas, transformers.
-* An Azure OpenAI resource with **text-search-curie-doc-001** and **text-search-curie-query-001** models deployed. These models are currently only available in [certain regions](../concepts/models.md#model-summary-table-and-region-availability). If you don't have a resource the process is documented in our [resource deployment guide](../how-to/create-resource.md).
-
-> [!NOTE]
-> If you have never worked with the Hugging Face transformers library it has its own specific [prerequisites](https://huggingface.co/docs/transformers/installation) that are required before you can successfully run `pip install transformers`.
+* The following Python libraries: openai, num2words, matplotlib, plotly, scipy, scikit-learn, pandas, tiktoken.
+* [Jupyter Notebooks](https://jupyter.org/)
+* An Azure OpenAI resource with the **text-embedding-ada-002 (Version 2)** model deployed. This model is currently only available in [certain regions](../concepts/models.md#model-summary-table-and-region-availability). If you don't have a resource the process of creating one is documented in our [resource deployment guide](../how-to/create-resource.md).
## Set up
In this tutorial, you learn how to:
If you haven't already, you need to install the following libraries:

```cmd
-pip install openai num2words matplotlib plotly scipy scikit-learn pandas transformers
+pip install openai num2words matplotlib plotly scipy scikit-learn pandas tiktoken
```
-Alternatively, you can use our [requirements.txt file](https://github.com/Azure-Samples/Azure-OpenAI-Docs-Samples/blob/main/Samples/Tutorials/Embeddings/requirements.txt).
+<!--Alternatively, you can use our [requirements.txt file](https://github.com/Azure-Samples/Azure-OpenAI-Docs-Samples/blob/main/Samples/Tutorials/Embeddings/requirements.txt).-->
### Download the BillSum dataset
To successfully make a call against Azure OpenAI, you'll need an **endpoint** an
|Variable name | Value |
|--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively, you can find the value in **Azure OpenAI Studio** > **Playground** > **Code View**. An example endpoint is: `https://docs-test-001.openai.azure.com/`.|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively, you can find the value in **Azure OpenAI Studio** > **Playground** > **Code View**. An example endpoint is: `https://docs-test-001.openai.azure.com`.|
| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either `KEY1` or `KEY2`.| Go to your resource in the Azure portal. The **Endpoint and Keys** can be found in the **Resource Management** section. Copy your endpoint and access key as you'll need both for authenticating your API calls. You can use either `KEY1` or `KEY2`. Always having two keys allows you to securely rotate and regenerate keys without causing a service disruption.
echo export AZURE_OPENAI_ENDPOINT="REPLACE_WITH_YOUR_ENDPOINT_HERE" >> /etc/envi
-After setting the environment variables you may need to close and reopen Jupyter notebooks or whatever IDE you are using in order for the environment variables to be accessible.
+After setting the environment variables, you may need to close and reopen Jupyter notebooks or whatever IDE you're using in order for the environment variables to be accessible. While we strongly recommend using Jupyter Notebooks, if for some reason you can't, you'll need to modify any code that returns a pandas DataFrame by using `print(dataframe_name)` rather than just calling `dataframe_name` directly, as is often done at the end of a code block.
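As a quick illustration of that difference (the `df_example` DataFrame here is a made-up placeholder, not part of the tutorial's dataset):

```python
import pandas as pd

df_example = pd.DataFrame({"bill_id": [1, 2], "text": ["first bill", "second bill"]})

# In a Jupyter notebook, ending a cell with the DataFrame name renders it automatically:
df_example

# In a plain Python script, wrap the DataFrame in print() to see the output:
print(df_example)
```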
Run the following code in your preferred Python IDE:
-If you wish to view the Jupyter notebook that corresponds to this tutorial you can download the tutorial from our [samples repo](https://github.com/Azure-Samples/Azure-OpenAI-Docs-Samples/blob/main/Samples/Tutorials/Embeddings/embedding_billsum.ipynb).
+<!--If you wish to view the Jupyter notebook that corresponds to this tutorial you can download the tutorial from our [samples repo](https://github.com/Azure-Samples/Azure-OpenAI-Docs-Samples/blob/main/Samples/Tutorials/Embeddings/embedding_billsum.ipynb).-->
## Import libraries and list models ```python import openai
+import os
import re import requests import sys
import os
import pandas as pd import numpy as np from openai.embeddings_utils import get_embedding, cosine_similarity
-from transformers import GPT2TokenizerFast
+import tiktoken
API_KEY = os.getenv("AZURE_OPENAI_API_KEY") RESOURCE_ENDPOINT = os.getenv("AZURE_OPENAI_ENDPOINT")
openai.api_key = API_KEY
openai.api_base = RESOURCE_ENDPOINT openai.api_version = "2022-12-01"
-url = openai.api_base + "/openai/deployments?api-version=2022-12-01"
+url = openai.api_base + "/openai/deployments?api-version=2022-12-01"
r = requests.get(url, headers={"api-key": API_KEY})
print(r.text)
"scale_settings": { "scale_type": "standard" },
- "model": "text-davinci-002",
+ "model": "text-embedding-ada-002",
"owner": "organization-owner",
- "id": "text-davinci-002",
+ "id": "text-embedding-ada-002",
"status": "succeeded", "created_at": 1657572678, "updated_at": 1657572678,
print(r.text)
} ```
-The output of this command will vary based on the number and type of models you've deployed. In this case, we need to confirm that we have entries for both **text-search-curie-doc-001** and **text-search-curie-query-001**. If you find that you're missing one of these models, you'll need to [deploy the models](../how-to/create-resource.md#deploy-a-model) to your resource before proceeding.
+The output of this command will vary based on the number and type of models you've deployed. In this case, we need to confirm that we have an entry for **text-embedding-ada-002**. If you find that you're missing this model, you'll need to [deploy the model](../how-to/create-resource.md#deploy-a-model) to your resource before proceeding.
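If you'd rather check for the embedding model programmatically than scan the raw JSON, a small sketch like the following works against the `r` response object returned by the `requests.get` call above. It assumes the deployment list is returned under a top-level `data` array, so adjust the key if your response differs:

```python
import json

# Reuses the `r` response from the deployments request above.
deployments = json.loads(r.text).get("data", [])
deployed_models = {d.get("model") for d in deployments}

if "text-embedding-ada-002" in deployed_models:
    print("Found a text-embedding-ada-002 deployment.")
else:
    print("text-embedding-ada-002 isn't deployed yet - deploy it before continuing.")
```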
-Now we need read our csv file and create a pandas DataFrame. After the initial DataFrame is created, we can view the contents of the table by running `df`.
+Now we need to read our csv file and create a pandas DataFrame. After the initial DataFrame is created, we can view the contents of the table by running `df`.
```python
-df = pd.read_csv("INSERT LOCAL PATH TO BILL_SUM_DATA.CSV") # example: df = pd.read_csv("c:\\test\\bill_sum_data.csv")df
+df=pd.read_csv(os.path.join(os.getcwd(),'bill_sum_data.csv')) # This assumes that you have placed the bill_sum_data.csv in the same directory you are running Jupyter Notebooks
df ```
df_bills
Next we'll perform some light data cleaning by removing redundant whitespace and cleaning up the punctuation to prepare the data for tokenization. ```python
+pd.options.mode.chained_assignment = None #https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#evaluation-order-matters
# s is input text def normalize_text(s, sep_token = " \n "):
def normalize_text(s, sep_token = " \n "):
return s
-df_bills['text'] = df_bills["text"].apply(lambda x : normalize_text(x))
+df_bills['text']= df_bills["text"].apply(lambda x : normalize_text(x))
```
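The body of `normalize_text` is elided in this excerpt. Purely as an illustration of the kind of light cleanup described above (collapsing redundant whitespace and tidying punctuation), a sketch might look like the following; it isn't necessarily the exact implementation used in the tutorial:

```python
import re

def normalize_text(s, sep_token=" \n "):
    # sep_token matches the signature shown above; this sketch doesn't use it
    # collapse runs of whitespace (spaces, tabs, newlines) into single spaces
    s = re.sub(r"\s+", " ", s).strip()
    # tidy common punctuation artifacts such as doubled periods
    s = s.replace("..", ".")
    s = s.replace(". .", ".")
    return s.strip()
```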
-> [!Note]
-> If you receive a warning stating *`A value is trying to be set on a copy of a slice from a DataFrame.Try using .loc[row_indexer,col_indexer] = value instead` you can safely ignore this message.
-
-Now we need to remove any bills that are too long for the token limit (~2000 tokens).
+Now we need to remove any bills that are too long for the token limit (8192 tokens).
```python
-tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
+tokenizer = tiktoken.get_encoding("cl100k_base")
df_bills['n_tokens'] = df_bills["text"].apply(lambda x: len(tokenizer.encode(x)))
-df_bills = df_bills[df_bills.n_tokens<2000]
+df_bills = df_bills[df_bills.n_tokens<8192]
len(df_bills) ``` **Output:** ```cmd
-12
+20
```
- > [!Note]
-> You can ignore the message:`Token indices sequence length is longer than the specified maximum sequence length for this model (1480 > 1024). Running this sequence through the model will result in indexing errors. A value is trying to be set on a copy of a slice from a DataFrame.Try using .loc[row_indexer,col_indexer] = value instead.`
+>[!NOTE]
+>In this case, all bills are under the embedding model's input token limit, but you can use the technique above to remove entries that would otherwise cause embedding to fail. When faced with content that exceeds the embedding limit, you can also chunk the content into smaller pieces and then embed those one at a time.
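If you do need to chunk longer documents rather than drop them, a minimal token-based splitter built on tiktoken might look like the following. This is an illustrative sketch only; the 8,192-token limit and the overlap size are assumptions you should tune for your own data:

```python
import tiktoken

def chunk_by_tokens(text, max_tokens=8192, overlap=100, encoding_name="cl100k_base"):
    """Split text into token chunks no longer than max_tokens, with a small overlap."""
    enc = tiktoken.get_encoding(encoding_name)
    tokens = enc.encode(text)
    chunks = []
    start = 0
    while start < len(tokens):
        end = min(start + max_tokens, len(tokens))
        chunks.append(enc.decode(tokens[start:end]))
        if end == len(tokens):
            break
        start = end - overlap  # overlap chunks slightly so sentences aren't cut mid-thought
    return chunks

# chunks = chunk_by_tokens(long_bill_text)  # hypothetical usage
```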
-We'll once again examine **df_bills**. Note that as expected, now only 12 results are returned though they retain their original index in the first column, and we've now added a column called n_tokens.
+We'll once again examine **df_bills**.
```python df_bills
df_bills
:::image type="content" source="../media/tutorials/tokens-dataframe.png" alt-text="Screenshot of the DataFrame with a new column called n_tokens." lightbox="../media/tutorials/tokens-dataframe.png":::
-To understand the n_tokens column a little more as well how the text is tokenized, it can be helpful to run the following code:
+To understand the `n_tokens` column a little more, as well as how the text is ultimately tokenized, it can be helpful to run the following code:
```python
-understand_tokenization = tokenizer.tokenize(df_bills.text[0])
-understand_tokenization
+sample_encode = tokenizer.encode(df_bills.text[0])
+decode = tokenizer.decode_tokens_bytes(sample_encode)
+decode
```
-For our docs we're intentionally truncating the output, but running this command in your environment will return the full text from first index tokenized into chunks.
+For our docs, we're intentionally truncating the output, but running this command in your environment will return the full text from index zero tokenized into chunks. You can see that in some cases an entire word is represented with a single token, whereas in others, parts of words are split across multiple tokens.
**Output:** ```cmd
-['S',
- 'ECTION',
- '─á1',
- '.',
- '─áSH',
- 'ORT',
- '─áTIT',
- 'LE',
- '.',
- '─áThis',
- '─áAct',
- '─ámay',
- '─ábe',
- '─ácited',
- '─áas',
- '─áthe',
- '─á``',
- 'National',
- '─áScience',
- '─áEducation',
- '─áTax',
- '─áIn',
- 'cent',
- 'ive',
- '─áfor',
- '─áBusiness',
-...
+[b'SECTION',
+ b' ',
+ b'1',
+ b'.',
+ b' SHORT',
+ b' TITLE',
+ b'.',
+ b' This',
+ b' Act',
+ b' may',
+ b' be',
+ b' cited',
+ b' as',
+ b' the',
+ b' ``',
+ b'National',
+ b' Science',
+ b' Education',
+ b' Tax',
+ b' In',
+ b'cent',
+ b'ive',
+ b' for',
+ b' Businesses',
+ b' Act',
+ b' of',
+ b' ',
+ b'200',
+ b'7',
+ b"''.",
+ b' SEC',
+ b'.',
+ b' ',
+ b'2',
+ b'.',
+ b' C',
+ b'RED',
+ b'ITS',
+ b' FOR',
+ b' CERT',
+ b'AIN',
+ b' CONTRIBUT',
+ b'IONS',
+ b' BEN',
+ b'EF',
+ b'IT',
+ b'ING',
+ b' SC',
```
-If you then check the length of the `understand_tokenization` variable, you'll find it matches the first number in the n_tokens column.
+If you then check the length of the `decode` variable, you'll find it matches the first number in the n_tokens column.
```python
-len(understand_tokenization)
+len(decode)
``` **Output:** ```cmd
-1480
+1466
```
-Now that we understand more about how tokenization works we can move on to embedding. Before searching, we'll embed the text documents and save the corresponding embedding. We embed each chunk using a **doc model**, in this case `text-search-curie-doc-001`. These embeddings can be stored locally or in an Azure DB. As a result, each tech document has its corresponding embedding vector in the new curie search column on the right side of the DataFrame.
+Now that we understand more about how tokenization works, we can move on to embedding. It's important to note that we haven't actually tokenized the documents yet. The `n_tokens` column is simply a way of making sure none of the data we pass to the model for tokenization and embedding exceeds the input token limit of 8,192. When we pass the documents to the embeddings model, it will break the documents into tokens similar (though not necessarily identical) to the examples above and then convert the tokens to a series of floating point numbers that will be accessible via vector search. These embeddings can be stored locally or in an Azure database. As a result, each bill will have its own corresponding embedding vector in the new `ada_v2` column on the right side of the DataFrame.
```python
-df_bills['curie_search'] = df_bills["text"].apply(lambda x : get_embedding(x, engine = 'text-search-curie-doc-001'))
+df_bills['ada_v2'] = df_bills["text"].apply(lambda x : get_embedding(x, engine = 'text-embedding-ada-002')) # engine should be set to the deployment name you chose when you deployed the text-embedding-ada-002 (Version 2) model
``` ```python
df_bills
:::image type="content" source="../media/tutorials/embed-text-documents.png" alt-text="Screenshot of the formatted results from df_bills command." lightbox="../media/tutorials/embed-text-documents.png":::
-At the time of search (live compute), we'll embed the search query using the corresponding **query model** (`text-search-query-001`). Next find the closest embedding in the database, ranked by [cosine similarity](../concepts/understand-embeddings.md).
-
-In our example, the user provides the query "can I get information on cable company tax revenue". The query is passed through a function that embeds the query with the corresponding **query model** and finds the embedding closest to it from the previously embedded documents in the previous step.
+As we run the search code block below, we'll embed the search query *"Can I get information on cable company tax revenue?"* with the same **text-embedding-ada-002 (Version 2)** model. Next, we'll find the bill embeddings closest to the newly embedded query text, ranked by [cosine similarity](../concepts/understand-embeddings.md).
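As a reminder of what that ranking computes: cosine similarity is the dot product of two embedding vectors divided by the product of their norms. The tutorial uses the `cosine_similarity` helper from `openai.embeddings_utils`; the snippet below is only an illustrative numpy sketch of the same idea:

```python
import numpy as np

def cosine_similarity_sketch(a, b):
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors pointing in similar directions score close to 1.0
print(cosine_similarity_sketch([1.0, 2.0, 3.0], [2.0, 4.0, 6.5]))
```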
```python # search through the reviews for a specific product def search_docs(df, user_query, top_n=3, to_print=True): embedding = get_embedding( user_query,
- engine="text-search-curie-query-001"
+ engine="text-embedding-ada-002" # engine should be set to the deployment name you chose when you deployed the text-embedding-ada-002 (Version 2) model
)
- df["similarities"] = df.curie_search.apply(lambda x: cosine_similarity(x, embedding))
+ df["similarities"] = df.ada_v2.apply(lambda x: cosine_similarity(x, embedding))
res = ( df.sort_values("similarities", ascending=False)
def search_docs(df, user_query, top_n=3, to_print=True):
return res
-res = search_docs(df_bills, "can i get information on cable company tax revenue", top_n=4)
+res = search_docs(df_bills, "Can I get information on cable company tax revenue?", top_n=4)
``` **Output**:
res["summary"][9]
Using this approach, you can use embeddings as a search mechanism across documents in a knowledge base. The user can then take the top search result and use it for the downstream task that prompted their initial query.
-## Video
-
-There is video walkthrough of this tutorial including the pre-requisite steps which can viewed on this [community YouTube post](https://www.youtube.com/watch?v=PSLO-yM6eFY).
- ## Clean up resources If you created an OpenAI resource solely for completing this tutorial and want to clean up and remove an OpenAI resource, you'll need to delete your deployed models, and then delete the resource or associated resource group if it's dedicated to your test resource. Deleting the resource group also deletes any other resources associated with it.
communication-services Sms Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/insights/sms-insights.md
The Overview section provides an overall performance of sent messages along with
Great to help answer general questions like: - How many SMS have I sent through my resource? - Are my messages being blocked or failing at a glance?-- What is my message distribution by country?
+- What is my message distribution by country/region?
#### Top metrics :::image type="content" source="..\media\workbooks\sms-insights\sms-insights-overview.png" alt-text="Screenshot of SMS insights overview graphs.":::
-#### SMS by country
+#### SMS by country/region
### Message delivery rates section The Message Delivery Rates section provides insights into SMS performance and delivery rate per day. The user can select a specific date in the graph to drill into logs.
communication-services Call Automation Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation-teams-interop.md
The following list presents the set of features that are currently available in
## Next steps > [!div class="nextstepaction"]
-> [Get started with Adding a Microsoft Teams user to an ongoing call using Call Automation](./../../quickstarts/call-automation/Callflows-for-customer-interactions.md)
+> [Get started with Adding a Microsoft Teams user to an ongoing call using Call Automation](./../../how-tos/call-automation/teams-interop-call-automation.md)
-Here are some articles of interest to you:
-- Understand how your resource is [charged for various calling use cases](../pricing.md) with examples.
+Here are some articles of interest to you:
+- Learn more about [Call Automation](../../concepts/call-automation/call-automation.md) and its features.
+- Learn about [Play action](../../concepts/call-automation/play-Action.md) to play audio in a call.
+- Learn how to build a [call workflow](../../quickstarts/call-automation/callflows-for-customer-interactions.md) for a customer support scenario.
+- Understand how your resource is [charged for various calling use cases](../pricing.md) with examples.
communication-services Sub Eligibility Number Capability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/sub-eligibility-number-capability.md
Title: Country availability of telephone numbers and subscription eligibility
+ Title: Country/regional availability of telephone numbers and subscription eligibility
-description: Learn about Country Availability, Subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Communication Services.
+description: Learn about Country/Regional Availability, Subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Communication Services.
-# Country availability of telephone numbers and subscription eligibility
+# Country/regional availability of telephone numbers and subscription eligibility
Numbers can be purchased on eligible Azure subscriptions and in geographies where Communication Services is legally eligible to provide them.
More details on eligible subscription types are as follows:
## Number capabilities and availability
-The capabilities and numbers that are available to you depend on the country that you're operating within, your use case, and the phone number type that you've selected. These capabilities vary by country due to regulatory requirements.
+The capabilities and numbers that are available to you depend on the country/region that you're operating within, your use case, and the phone number type that you've selected. These capabilities vary by country/region due to regulatory requirements.
The following tables summarize current availability:
The following tables summarize current availability:
| Canada | Local | - | - | General Availability | General Availability\* | | UK | Toll-Free | - | - | General Availability | General Availability\* | | UK | Local | - | - | General Availability | General Availability\* |-
+| Denmark | Toll-Free | - | - | Public Preview | Public Preview\* |
+| Denmark | Local | - | - | Public Preview | Public Preview\* |
+| Italy | Toll-Free** | - | - | General Availability | General Availability\* |
+| Italy | Local** | - | - | General Availability | General Availability\* |
+| Sweden | Toll-Free | - | - | General Availability | General Availability\* |
+| Sweden | Local | - | - | General Availability | General Availability\* |
\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
The following tables summarize current availability:
| Canada | Local | - | - | General Availability | General Availability\* | | UK | Toll-Free | - | - | General Availability | General Availability\* | | UK | Local | - | - | General Availability | General Availability\* |
+| Italy | Toll-Free** | - | - | General Availability | General Availability\* |
+| Italy | Local** | - | - | General Availability | General Availability\* |
+| Sweden | Toll-Free | - | - | General Availability | General Availability\* |
+| Sweden | Local | - | - | General Availability | General Availability\* |
+| Ireland | Toll-Free | - | - | General Availability | General Availability\* |
+| Ireland | Local | - | - | General Availability | General Availability\* |
\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
The following tables summarize current availability:
| Canada | Local | - | - | General Availability | General Availability\* | | UK | Toll-Free | - | - | General Availability | General Availability\* | | UK | Local | - | - | General Availability | General Availability\* |
+| Sweden | Toll-Free | - | - | General Availability | General Availability\* |
+| Sweden | Local | - | - | General Availability | General Availability\* |
+| Ireland | Toll-Free | - | - | General Availability | General Availability\* |
+| Ireland | Local | - | - | General Availability | General Availability\* |
+| Denmark | Toll-Free | - | - | Public Preview | Public Preview\* |
+| Denmark | Local | - | - | Public Preview | Public Preview\* |
\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
The following tables summarize current availability:
| Canada | Local | - | - | General Availability | General Availability\* | | USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* | | USA & Puerto Rico | Local | - | - | General Availability | General Availability\* |
+| Ireland | Toll-Free | - | - | General Availability | General Availability\* |
+| Ireland | Local | - | - | General Availability | General Availability\* |
+| Denmark | Toll-Free | - | - | Public Preview | Public Preview\* |
+| Denmark | Local | - | - | Public Preview | Public Preview\* |
+| Italy | Toll-Free** | - | - | General Availability | General Availability\* |
+| Italy | Local** | - | - | General Availability | General Availability\* |
\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
The following tables summarize current availability:
| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | | :- | :-- | :- | :- | :- | : | | Norway | Local** | - | - | Public Preview | Public Preview\* |
+| Norway | Toll-Free | - | - | Public Preview | Public Preview\* |
\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
communication-services Pstn Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pstn-pricing.md
Numbers are billed on a per month basis, and pricing differs based on the type of a number and the source (country) of the number. Once a number is purchased, Customers can make / receive calls using that number and are billed on a per minute basis. PSTN call pricing is based on the type of number and location in which a call is terminated (destination), with few scenarios having rates based on origination location.
-In most cases, customers with Azure subscriptions locations that match the country of the Number offer will be able to buy the Number. However, US and UK numbers may be purchased by customers with Azure subscription locations in other countries. Please see here for details on [in-country and cross-country purchases](../concepts/numbers/sub-eligibility-number-capability.md).
+In most cases, customers with Azure subscription locations that match the country of the Number offer are able to buy the Number. See here for details on [in-country and cross-country purchases](../concepts/numbers/sub-eligibility-number-capability.md).
All prices shown below are in USD.
All prices shown below are in USD.
|Geographic |Starting at USD 0.0130/min |USD 0.0085/min | |Toll-free |Starting at USD 0.0130/min | USD 0.0220/min |
-\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
## United Kingdom telephony offers
All prices shown below are in USD.
|Geographic |Starting at USD 0.0150/min |USD 0.0090/min | |Toll-free |Starting at USD 0.0150/min |Starting at USD 0.0290/min |
-\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
## Denmark telephony offers
All prices shown below are in USD.
|Geographic |Starting at USD 0.0190/min |USD 0.0100/min | |Toll-free |Starting at USD 0.0190/min |Starting at USD 0.0343/min |
-\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
## Canada telephony offers
All prices shown below are in USD.
|Geographic |Starting at USD 0.0130/min |USD 0.0085/min | |Toll-free |Starting at USD 0.0130/min |USD 0.0220/min |
-\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
## Ireland telephony offers
All prices shown below are in USD.
|Geographic |Starting at USD 0.0160/min |USD 0.0100/min | |Toll-free |Starting at USD 0.0160/min |Starting at USD 0.0448/min |
-\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
## Italy telephony offers
All prices shown below are in USD.
|Geographic |Starting at USD 0.0160/min |USD 0.0100/min | |Toll-free |Starting at USD 0.0160/min |USD 0.3415/min |
-\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
## Sweden telephony offers
All prices shown below are in USD.
|Geographic |Starting at USD 0.0160/min |USD 0.0080/min | |Toll-free |Starting at USD 0.0160/min |USD 0.1138/min |
-\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
## France telephony offers
All prices shown below are in USD.
|--|--|| |Geographic |Starting at USD 0.0160/min |USD 0.0100/min |
-\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
## Spain telephony offers
All prices shown below are in USD.
|Geographic |Starting at USD 0.165/min |USD 0.0072/min | |Toll-free |Starting at USD 0.165/min | USD 0.2200/min |
-\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
## Switzerland telephony offers
All prices shown below are in USD.
|--|--|| |Geographic |Starting at USD 0.0234/min |USD 0.0100/min |
-\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
## Belgium telephony offers
All prices shown below are in USD.
|Geographic |Starting at USD 0.1300/min |USD 0.0100/min | |Toll-free |Starting at USD 0.1300/min |Starting at USD 0.0505/min |
-\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
## Luxembourg telephony offers
All prices shown below are in USD.
|--|--|| |Geographic |Starting at USD 0.2300/min |USD 0.0100/min |
-\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
## Austria telephony offers
All prices shown below are in USD.
|Geographic |Starting at USD 0.1550/min |USD 0.0100/min | |Toll-free |Starting at USD 0.1550/min |Starting at USD 0.0897/min |
-\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
## Portugal telephony offers
All prices shown below are in USD.
|Geographic |Starting at USD 0.0130/min |USD 0.0100/min | |Toll-free |Starting at USD 0.0130/min | USD 0.0601/min |
-\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
## Slovakia telephony offers
All prices shown below are in USD.
|--|--|| |Geographic |Starting at USD 0.0270/min |USD 0.0100/min |
-\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
## Norway telephony offers
All prices shown below are in USD.
|Number type |Monthly fee | |--|--| |Geographic |USD 5.00/mo |
+|Toll-Free |USD 20.00/mo |
### Usage charges |Number type |To make calls* |To receive calls| |--|--|| |Geographic |Starting at USD 0.0200/min |USD 0.0300/min |
+|Toll-free |Starting at USD 0.0200/min | USD 0.1500/min |
-\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
## Netherlands telephony offers
All prices shown below are in USD.
|--|--|| |Geographic |Starting at USD 0.3500/min |USD 0.0100/min |
-\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
## Germany telephony offers
All prices shown below are in USD.
|Geographic |Starting at USD 0.0150/min |USD 0.0100/min | |Toll-free |Starting at USD 0.0150/min | USD 0.1750/min |
-\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
***
Note: Pricing for all countries is subject to change as pricing is market-based
*** ## Direct routing pricing
-For Azure Communication Services direct routing there is a flat rate regardless of the geography:
+For Azure Communication Services direct routing, there is a flat rate regardless of the geography:
|Number type |To make calls |To receive calls| |--|--||
communication-services Outbound Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/outbound-calling.md
# Toll-Free telephone numbers and outbound calling
-Outbound calling capability with Toll-Free telephone numbers is available in many countries where Azure Communication Services is available. However, there can be some limitations when trying to place outbound calls with toll-free telephone numbers.
+Outbound calling capability with Toll-Free telephone numbers is available in many countries/regions where Azure Communication Services is available. However, there can be some limitations when trying to place outbound calls with toll-free telephone numbers.
**Why might outbound calls from Toll-Free numbers not work?**
-Microsoft provides Toll-Free telephone numbers that have outbound calling capabilities, but it's important to note that this feature is only provided on a "best-effort" basis. In some countries and regions, toll-free numbers are considered as an "inbound only" service from regulatory perspective. This means, that in some scenarios, the receiving carrier may not allow incoming calls from toll-free telephone numbers. Since Microsoft and our carrier-partners don't have control over other carrier networks, we can't guarantee that outbound calls will reach all possible destinations.
+Microsoft provides Toll-Free telephone numbers that have outbound calling capabilities, but it's important to note that this feature is only provided on a "best-effort" basis. In some countries/regions, toll-free numbers are considered an "inbound only" service from a regulatory perspective. This means that, in some scenarios, the receiving carrier may not allow incoming calls from toll-free telephone numbers. Since Microsoft and our carrier-partners don't have control over other carrier networks, we can't guarantee that outbound calls will reach all possible destinations.
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-recording.md
An Event Grid notification `Microsoft.Communication.RecordingFileStatusUpdated`
## Regulatory and privacy concerns
-Many countries and states have laws and regulations that apply to call recording. PSTN, voice, and video calls often require that users consent to the recording of their communications. It is your responsibility to use the call recording capabilities in compliance with the law. You must obtain consent from the parties of recorded communications in a manner that complies with the laws applicable to each participant.
+Many countries/regions and states have laws and regulations that apply to call recording. PSTN, voice, and video calls often require that users consent to the recording of their communications. It is your responsibility to use the call recording capabilities in compliance with the law. You must obtain consent from the parties of recorded communications in a manner that complies with the laws applicable to each participant.
Regulations around the maintenance of personal data require the ability to export user data. In order to support these requirements, recording metadata files include the participantId for each call participant in the `participants` array. You can cross-reference the MRIs in the `participants` array with your internal user identities to identify participants in a call.
communication-services Teams Interop Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/teams-interop-call-automation.md
+
+ Title: Azure Communication Services Call Automation how-to for adding Microsoft Teams User into an existing call
+
+description: Provides a how-to for adding a Microsoft Teams user to a call with Call Automation.
++++ Last updated : 03/28/2023+++++
+# Add a Microsoft Teams user to an existing call using Call Automation
++
+In this quickstart, we use the Azure Communication Services Call Automation APIs to add and remove a Teams user, and to transfer a call to a Teams user.
+
+You need to be part of the Azure Communication Services TAP program. It's likely that you're already part of this program, and if you aren't, sign up using https://aka.ms/acs-tap-invite. To access the specific Teams Interop functionality for Call Automation, submit your Teams Tenant IDs and Azure Communication Services Resource IDs by filling out this form: https://aka.ms/acs-ca-teams-tap. You need to fill out the form every time you need a new tenant ID and new resource ID allow-listed.
+
+## Prerequisites
+
+- An Azure account with an active subscription.
+- A Microsoft Teams tenant with administrative privileges.
+- A deployed [Communication Service resource](../../quickstarts/create-communication-resource.md) and valid connection string found by selecting Keys in left side menu on Azure portal.
+- [Acquire a PSTN phone number from the Communication Service resource](../../quickstarts/telephony/get-phone-number.md). Note the phone number you acquired to use in this quickstart.
+- An Azure Event Grid subscription to receive the `IncomingCall` event.
+- The latest [Azure Communication Service Call Automation API library](https://dev.azure.com/azure-sdk/public/_artifacts/feed/azure-sdk-for-net/NuGet/Azure.Communication.CallAutomation/versions/) for your operating system.
+- A web service that implements the Call Automation API library. To create one, follow [this tutorial](../../quickstarts/call-automation/callflows-for-customer-interactions.md).
+
+## Step 1: Authorization for your Azure Communication Services Resource to enable calling to Microsoft Teams users
+
+To enable calling through Call Automation APIs, a [Microsoft Teams Administrator](/azure/active-directory/roles/permissions-reference#teams-administrator) or [Global Administrator](/azure/active-directory/roles/permissions-reference#global-administrator) must explicitly grant the ACS resource(s) access to their tenant to allow calling.
+
+[Set-CsTeamsAcsFederationConfiguration (MicrosoftTeamsPowerShell)](/powershell/module/teams/set-csteamsacsfederationconfiguration)
+Tenant-level setting that enables or disables federation between their tenant and specific ACS resources.
+
+[Set-CsExternalAccessPolicy (SkypeForBusiness)](/powershell/module/skype/set-csexternalaccesspolicy)
+User policy that allows the admin to further control which users in their organization can participate in federated communications with ACS users.
+
+## Step 2: Use the Graph API to get Azure AD object ID for Teams users and optionally check their presence
+A Teams user's Azure Active Directory (Azure AD) object ID (OID) is required to add them to an ACS call or to transfer a call to them. The OID can be retrieved through 1) the Office portal, 2) the Azure AD portal, 3) Azure AD Connect, or 4) the Graph API. The example below uses the Graph API.
+
+Consent must be granted by an Azure AD admin before Graph can be used to search for users; learn more in the [Microsoft Graph Security API overview](/graph/security-concept-overview) document. The OID can be retrieved using the list users API to search for users. The following shows a search by display name, but other properties can be searched as well:
+
+[List users using Microsoft Graph v1.0](/graph/api/user-list):
+```rest
+Request:
+ https://graph.microsoft.com/v1.0/users?$search="displayName:Art Anderson"
+Permissions:
+ Application and delegated. Refer to documentation.
+Response:
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#users",
+ "value": [
+ {
+ "displayName": "Art Anderson",
+ "mail": "artanderson@contoso.com",
+ "id": "fc4ccb5f-8046-4812-803f-6c344a5d1560"
+ }
+```
+Optionally, Presence for a user can be retrieved using the get presence API and the user ObjectId. Learn more on the [Microsoft Graph v1.0 documentation](/graph/api/presence-get).
+```rest
+Request:
+https://graph.microsoft.com/v1.0/users/fc4ccb5f-8046-4812-803f-6c344a5d1560/presence
+Permissions:
+Delegated only. Application not supported. Refer to documentation.
+Response:
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#users('fc4ccb5f-8046-4812-803f-6c344a5d1560')/presence/$entity",
+ "id": "fc4ccb5f-8046-4812-803f-6c344a5d1560",
+ "availability": "Offline",
+ "activity": "Offline"
+
+```
+
+## Step 3: Add a Teams user to an existing ACS call controlled by Call Automation APIs
+You need to complete the prerequisite step and have a web service app to control an ACS call. Using the callConnection object, add a participant to the call.
+
+```csharp
+CallAutomationClient client = new CallAutomationClient('<Connection_String>');
+AnswerCallResult answer = await client.AnswerCallAsync(incomingCallContext, new Uri('<Callback_URI>'));
+await answer.Value.CallConnection.AddParticipantAsync(
+ new CallInvite(new MicrosoftTeamsUserIdentifier('<Teams_User_Guid>'))
+ {
+ SourceDisplayName = "Jack (Contoso Tech Support)"
+ });
+```
+On the Microsoft Teams desktop client, Jack's call will be sent to the Microsoft Teams user through an incoming call toast notification.
+
+![Screenshot of Microsoft Teams desktop client, Jack's call is sent to the Microsoft Teams user through an incoming call toast notification.](./media/incoming-call-toast-notification-teams-user.png)
+
+After the Microsoft Teams user accepts the call, the in-call experience for the Microsoft Teams user will have all the participants displayed on the Microsoft Teams roster.
+![Screenshot of Microsoft Teams user accepting the call and entering the in-call experience for the Microsoft Teams user.](./media/active-call-teams-user.png)
+
+## Step 4: Remove a Teams user from an existing ACS call controlled by Call Automation APIs
+```csharp
+await answer.Value.CallConnection.RemoveParticipantAsync(new MicrosoftTeamsUserIdentifier('<Teams_User_Guid>'));
+```
+
+### Optional feature: Transfer to a Teams user from an existing ACS call controlled by Call Automation APIs
+```csharp
+await answer.Value.CallConnection.TransferCallToParticipantAsync(new CallInvite(new MicrosoftTeamsUserIdentifier('<Teams_User_Guid>')));
+```
+### How to tell if your Tenant isn't enabled for this preview?
+![Screenshot showing the error during Step 1.](./media/teams-federation-error.png)
+
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../quickstarts/create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+- Learn more about [Call Automation](../../concepts/call-automation/call-automation.md) and its features.
+- Learn more about capabilities of [Teams Interoperability support with ACS Call Automation](../../concepts/call-automation/call-automation-teams-interop.md)
+- Learn about [Play action](../../concepts/call-automation/play-Action.md) to play audio in a call.
+- Learn how to build a [call workflow](../../quickstarts/call-automation/callflows-for-customer-interactions.md) for a customer support scenario.
cosmos-db Analytical Store Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-change-data-capture.md
Previously updated : 03/23/2023 Last updated : 04/03/2023 # Change Data Capture in Azure Cosmos DB analytical store (preview)
Change data capture (CDC) in [Azure Cosmos DB analytical store](analytical-store
> [!IMPORTANT] > This feature is currently in preview.
-The change data capture feature in Azure Cosmos DB analytical store can write to a variety of sinks using an Azure Synapse or Azure Data Factory data flow.
+The change data capture feature in Azure Cosmos DB analytical store can write to various sinks using an Azure Synapse or Azure Data Factory data flow.
For more information on supported sink types in a mapping data flow, see [data flow supported sink types](../data-factory/data-flow-sink.md#supported-sinks).
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/introduction.md
The API for Gremlin has added benefits of being built on Azure Cosmos DB:
- **Elastically scalable throughput and storage**: Graphs in the real world need to scale beyond the capacity of a single server. Azure Cosmos DB supports horizontally scalable graph databases that can have an unlimited size in terms of storage and provisioned throughput. As the graph database scale grows, the data is automatically distributed using [graph partitioning](./partitioning.md). -- **Multi-region replication**: Azure Cosmos DB can automatically replicate your graph data to any Azure region worldwide. Global replication simplifies the development of applications that require global access to data. In addition to minimizing read and write latency anywhere around the world, Azure Cosmos DB provides automatic regional failover mechanism. This mechanism can ensure the continuity of your application in the rare case of a service interruption in a region.
+- **Multi-region replication**: Azure Cosmos DB can automatically replicate your graph data to any Azure region worldwide. Global replication simplifies the development of applications that require global access to data. In addition to minimizing read and write latency anywhere around the world, Azure Cosmos DB provides a service-managed regional failover mechanism. This mechanism can ensure the continuity of your application in the rare case of a service interruption in a region.
- **Fast queries and traversals with the most widely adopted graph query standard**: Store heterogeneous vertices and edges and query them through a familiar Gremlin syntax. Gremlin is an imperative, functional query language that provides a rich interface to implement common graph algorithms. The API for Gremlin enables rich real-time queries and traversals without the need to specify schema hints, secondary indexes, or views. For more information, see [query graphs by using Gremlin](tutorial-query.md).
cosmos-db Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/support.md
The following table lists the TinkerPop features that are implemented by Azure C
| Category | Azure Cosmos DB implementation | Notes | | | | | | Graph features | Provides Persistence and ConcurrentAccess. Designed to support Transactions | Computer methods can be implemented via the Spark connector. |
-| Variable features | Supports Boolean, Integer, Byte, Double, Float, Integer, Long, String | Supports primitive types, is compatible with complex types via data model |
+| Variable features | Supports Boolean, Integer, Byte, Double, Float, Long, String | Supports primitive types, is compatible with complex types via data model |
| Vertex features | Supports RemoveVertices, MetaProperties, AddVertices, MultiProperties, StringIds, UserSuppliedIds, AddProperty, RemoveProperty | Supports creating, modifying, and deleting vertices | | Vertex property features | StringIds, UserSuppliedIds, AddProperty, RemoveProperty, BooleanValues, ByteValues, DoubleValues, FloatValues, IntegerValues, LongValues, StringValues | Supports creating, modifying, and deleting vertex properties | | Edge features | AddEdges, RemoveEdges, StringIds, UserSuppliedIds, AddProperty, RemoveProperty | Supports creating, modifying, and deleting edges |
cosmos-db High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/high-availability.md
Single-region accounts might lose availability after a regional outage. To ensur
Service-managed failover allows Azure Cosmos DB to fail over the write region of a multiple-region account in order to preserve availability at the cost of data loss, as described earlier in the [Durability](#durability) section. Regional failovers are detected and handled in the Azure Cosmos DB client. They don't require any changes from the application. For instructions on how to enable multiple read regions and service-managed failover, see [Manage an Azure Cosmos DB account using the Azure portal](./how-to-manage-database-account.md). > [!IMPORTANT]
-> We strongly recommend that you configure the Azure Cosmos DB accounts used for production workloads to *enable service-managed failover*. This configuration enables Azure Cosmos DB to fail over the account databases to available regions automatically.
+> We strongly recommend that you configure the Azure Cosmos DB accounts used for production workloads to *enable service-managed failover*. This configuration enables Azure Cosmos DB to fail over the account databases to available regions.
> > In the absence of this configuration, the account will experience loss of write availability for the whole duration of the write region outage. Manual failover won't succeed because of a lack of region connectivity.
Multiple-region accounts experience different behaviors depending on the followi
### Additional information on read region outages
-* The affected region is automatically disconnected and marked offline. The [Azure Cosmos DB SDKs](nosql/sdk-dotnet-v3.md) redirect read calls to the next available region in the preferred region list.
+* The affected region is disconnected and marked offline. The [Azure Cosmos DB SDKs](nosql/sdk-dotnet-v3.md) redirect read calls to the next available region in the preferred region list.
* If none of the regions in the preferred region list are available, calls automatically fall back to the current write region.
-* No changes are required in your application code to handle read region outages. When the affected read region is back online, it automatically syncs with the current write region and is available again to serve read requests.
+* No changes are required in your application code to handle read region outages. When the affected read region is back online, it syncs with the current write region and is available again to serve read requests after it has fully caught up.
* Subsequent reads are redirected to the recovered region without requiring any changes to your application code. During both failover and rejoining of a previously failed region, Azure Cosmos DB continues to honor read consistency guarantees.
Multiple-region accounts experience different behaviors depending on the followi
### Additional information on write region outages
-* During a write region outage, the Azure Cosmos DB account automatically promotes a secondary region to be the new primary write region when *automatic (service-managed) failover* is configured on the Azure Cosmos DB account. The failover occurs to another region in the order of region priority that you specify.
+* During a write region outage, the Azure Cosmos DB account promotes a secondary region to be the new primary write region when *service-managed failover* is configured on the Azure Cosmos DB account. The failover occurs to another region in the order of region priority that you specify.
* Manual failover shouldn't be triggered and won't succeed in the presence of an outage of the source or destination region. The reason is that the failover procedure includes a consistency check that requires connectivity between the regions. * When the previously affected region is back online, any write data that wasn't replicated when the region failed is made available through the [conflict feed](how-to-manage-conflicts.md#read-from-conflict-feed). Applications can read the conflict feed, resolve the conflicts based on the application-specific logic, and write the updated data back to the Azure Cosmos DB container as appropriate.
-* After the previously affected write region recovers, it becomes automatically available as a read region. You can switch back to the recovered region as the write region by using [PowerShell, the Azure CLI, or the Azure portal](how-to-manage-database-account.md#manual-failover). There is *no data or availability loss* before, while, or after you switch the write region. Your application continues to be highly available.
+* After the previously affected write region recovers, it will become available as a read region. You can switch back to the recovered region as the write region by using [PowerShell, the Azure CLI, or the Azure portal](how-to-manage-database-account.md#manual-failover). There is *no data or availability loss* before, while, or after you switch the write region. Your application continues to be highly available.
## SLAs
For single-region accounts, clients experience a loss of read and write availabi
| Write regions | Service-managed failover | What to expect | What to do | | -- | -- | -- | -- |
-| Single write region | Not enabled | If there's an outage in a read region when you're not using strong consistency, all clients redirect to other regions. There's no read or write availability loss, and there's no data loss. When you use strong consistency, an outage in a read region can affect write availability if fewer than two read regions remain.<br/><br/> If there's an outage in the write region, clients experience write availability loss. If you didn't select strong consistency, the service might not replicate some data to the remaining active regions. This replication depends on the selected [consistency level](consistency-levels.md#rto). If the affected region suffers permanent data loss, you might lose unreplicated data. <br/><br/> Azure Cosmos DB restores write availability automatically when the outage ends. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <br/><br/> *Don't* trigger a manual failover during the outage, because it can't succeed. <br/><br/> When the outage is over, readjust provisioned RUs as appropriate. |
-| Single write region | Enabled | If there's an outage in a read region when you're not using strong consistency, all clients redirect to other regions. There's no read or write availability loss, and there's no data loss. When you're using strong consistency, the outage of a read region can affect write availability if fewer than two read regions remain.<br/><br/> If there's an outage in the write region, clients experience write availability loss until Azure Cosmos DB automatically elects a new region as the new write region according to your preferences. If you didn't select strong consistency, the service might not replicate some data to the remaining active regions. This replication depends on the selected [consistency level](consistency-levels.md#rto). If the affected region suffers permanent data loss, you might lose unreplicated data. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <br/><br/> *Don't* trigger a manual failover during the outage, because it can't succeed. <br/><br/> When the outage is over, you can move the write region back to the original region and readjust provisioned RUs as appropriate. Accounts that use the API for NoSQL can also recover the unreplicated data in the failed region from your [conflict feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
-| Multiple write regions | Not applicable | Recently updated data in the failed region might be unavailable in the remaining active regions. Eventual, consistent prefix, and session consistency levels guarantee a staleness of less than 15 minutes. Bounded staleness guarantees fewer than *K* updates or *T* seconds, depending on the configuration. If the affected region suffers permanent data loss, you might lose unreplicated data. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support more traffic. <br/><br/> When the outage is over, you can readjust provisioned RUs as appropriate. If possible, Azure Cosmos DB automatically recovers unreplicated data in the failed region. This automatic recovery uses the conflict resolution method that you configure for accounts that use the API for NoSQL. For accounts that use other APIs, this automatic recovery uses *last write wins*. |
+| Single write region | Not enabled | If there's an outage in a read region when you're not using strong consistency, all clients redirect to other regions. There's no read or write availability loss, and there's no data loss. When you use strong consistency, an outage in a read region can affect write availability if fewer than two read regions remain.<br/><br/> If there's an outage in the write region, clients experience write availability loss. If you didn't select strong consistency, the service might not replicate some data to the remaining active regions. This replication depends on the selected [consistency level](consistency-levels.md#rto). If the affected region suffers permanent data loss, you might lose unreplicated data. <br/><br/> Azure Cosmos DB restores write availability when the outage ends. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <br/><br/> *Don't* trigger a manual failover during the outage, because it can't succeed. <br/><br/> When the outage is over, readjust provisioned RUs as appropriate. |
+| Single write region | Enabled | If there's an outage in a read region when you're not using strong consistency, all clients redirect to other regions. There's no read or write availability loss, and there's no data loss. When you're using strong consistency, the outage of a read region can affect write availability if fewer than two read regions remain.<br/><br/> If there's an outage in the write region, clients experience write availability loss until Azure Cosmos DB elects a new region as the new write region according to your preferences. If you didn't select strong consistency, the service might not replicate some data to the remaining active regions. This replication depends on the selected [consistency level](consistency-levels.md#rto). If the affected region suffers permanent data loss, you might lose unreplicated data. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <br/><br/> *Don't* trigger a manual failover during the outage, because it can't succeed. <br/><br/> When the outage is over, you can move the write region back to the original region and readjust provisioned RUs as appropriate. Accounts that use the API for NoSQL can also recover the unreplicated data in the failed region from your [conflict feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
+| Multiple write regions | Not applicable | Recently updated data in the failed region might be unavailable in the remaining active regions. Eventual, consistent prefix, and session consistency levels guarantee a staleness of less than 15 minutes. Bounded staleness guarantees fewer than *K* updates or *T* seconds, depending on the configuration. If the affected region suffers permanent data loss, you might lose unreplicated data. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support more traffic. <br/><br/> When the outage is over, you can readjust provisioned RUs as appropriate. If possible, Azure Cosmos DB recovers unreplicated data in the failed region. This recovery uses the conflict resolution method that you configure for accounts that use the API for NoSQL. For accounts that use other APIs, this recovery uses *last write wins*. |
## Next steps
cosmos-db How To Manage Database Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-manage-database-account.md
After an Azure Cosmos DB account is configured for service-managed failover, the
:::image type="content" source="./media/how-to-manage-database-account/replicate-data-globally.png" alt-text="Screenshot showing the Replicate data globally menu.":::
-1. On the **Service-Managed Failover** pane, make sure that **Enable Automatic Failover** is set to **ON**.
+1. On the **Service-Managed Failover** pane, make sure that **Enable Service-Managed Failover** is set to **ON**.
1. To modify the failover priority, drag the read regions via the three dots on the left side of the row that appear when you hover over them.
cosmos-db Limit Total Account Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/limit-total-account-throughput.md
Set this property to `-1` to disable the limit.
#### Are there situations where the total provisioned throughput can exceed the limit?
-Azure Cosmos DB enforces a minimum throughput of 10 RU/s per GB of data stored. If you're ingesting data while already being at that minimum, the throughput provisioned on your resources will automatically increase to honor 10 RU/s per GB. In this case, and this case only, your total provisioned throughput may exceed the limit you've set.
+Azure Cosmos DB enforces a minimum throughput of 1 RU/s per GB of data stored. If you're ingesting data while already being at that minimum, the throughput provisioned on your resources will automatically increase to honor 1 RU/s per GB. In this case, and this case only, your total provisioned throughput may exceed the limit you've set.
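To make the floor concrete, here's a minimal PowerShell sketch of the arithmetic; the storage size and limit values are hypothetical.

```powershell
# Hypothetical numbers illustrating the 1 RU/s-per-GB floor described above.
$storedGb   = 2048          # assume 2 TB of data stored in the account
$limitRus   = 1000          # assume an account-level total throughput limit of 1,000 RU/s
$minimumRus = $storedGb * 1 # enforced floor: 1 RU/s per GB of stored data

if ($minimumRus -gt $limitRus) {
    Write-Output "Provisioned throughput rises to $minimumRus RU/s, exceeding the $limitRus RU/s limit."
}
```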
## Next steps
cosmos-db Local Emulator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/local-emulator.md
Start emulator from an administrator [command prompt](emulator-command-line-para
### API for Gremlin
-Start emulator from an administrator [command prompt](emulator-command-line-parameters.md)with "/EnableGremlinEndpoint". Alternatively you can also set the environment variable `AZURE_COSMOS_EMULATOR_GREMLIN_ENDPOINT=true`
+Start the emulator from an administrator [command prompt](emulator-command-line-parameters.md) with `/EnableGremlinEndpoint`. Alternatively, set the environment variable `AZURE_COSMOS_EMULATOR_GREMLIN_ENDPOINT=true`.
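For example, here's a rough PowerShell sketch of both options; the emulator install path below is an assumption, so adjust it to your installation.

```powershell
# Option 1: start the emulator with the Gremlin endpoint switch.
# The install path is an assumption; adjust it for your machine.
& "$env:ProgramFiles\Azure Cosmos DB Emulator\Microsoft.Azure.Cosmos.Emulator.exe" /EnableGremlinEndpoint

# Option 2: set the environment variable for the current session before starting the emulator.
$env:AZURE_COSMOS_EMULATOR_GREMLIN_ENDPOINT = "true"
```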
-1. [Install apache-tinkerpop-gremlin-console-3.6.0](https://archive.apache.org/dist/tinkerpop/3.6.0).
+1. The Apache TinkerPop Gremlin console **3.6.2** is compatible with Java 8 or Java 11. For more information, see [OpenJDK 11](/java/openjdk/download#openjdk-11).
+
+1. Extract [apache-tinkerpop-gremlin-console-3.6.2](https://tinkerpop.apache.org/download.html) to a folder on your machine.
+
+ > [!NOTE]
+ > For the remainder of these steps, we will assume that you installed the console to the `%ProgramFiles%\gremlin` folder.
1. From the emulator's data explorer, create a database "db1" and a collection "coll1". For the partition key, choose "/name".
1. Run the following commands in a regular command prompt window:
- ```bash
- cd /d C:\sdk\apache-tinkerpop-gremlin-console-3.6.0-bin\apache-tinkerpop-gremlin-console-3.6.0
-
- copy /y conf\remote.yaml conf\remote-localcompute.yaml
- notepad.exe conf\remote-localcompute.yaml
- hosts: [localhost]
- port: 8901
- username: /dbs/db1/colls/coll1
- password: C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==
- connectionPool: {
- enableSsl: false}
- serializer: { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0,
- config: { serializeResultToString: true }}
-
- bin\gremlin.bat
- ```
+ ```cmd
+    cd /d "%ProgramFiles%\gremlin"
+ ```
+
+ ```cmd
+ copy /y conf\remote.yaml conf\remote-localcompute.yaml
+ ```
+
+1. Open the `conf\remote-localcompute.yaml` file in Notepad.
+
+ ```cmd
+ notepad.exe conf\remote-localcompute.yaml
+ ```
+
+1. Replace the contents of the YAML file with this configuration and **Save** the file.
+
+ ```yaml
+ hosts: [localhost]
+ port: 8901
+ username: /dbs/db1/colls/coll1
+ password: C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==
+ connectionPool: { enableSsl: false }
+ serializer: { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0, config: { serializeResultToString: true }}
+ ```
+
+1. Run the Gremlin console.
+
+ ```cmd
+ bin\gremlin.bat
+ ```
1. In the Gremlin shell, run the following commands to connect to the Gremlin endpoint:
- ```bash
- :remote connect tinkerpop.server conf/remote-localcompute.yaml
- :remote console
- :> g.V()
- :> g.addV('person1').property(id, '1').property('name', 'somename1')
- :> g.addV('person2').property(id, '2').property('name', 'somename2')
- :> g.V()
- ```
+ ```gremlin
+ :remote connect tinkerpop.server conf/remote-localcompute.yaml
+ :remote console
+ ```
+
+1. Run the following commands to try various operations on the Gremlin endpoint:
+
+ ```gremlin
+ g.V()
+ g.addV('person1').property(id, '1').property('name', 'somename1')
+ g.addV('person2').property(id, '2').property('name', 'somename2')
+ g.V()
+ ```
## <a id="uninstall"></a>Uninstall the local emulator
cosmos-db Best Practice Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/best-practice-dotnet.md
Watch the video below to learn more about using the .NET SDK from an Azure Cosmo
||||
|<input type="checkbox"/> | SDK Version | Always use the [latest version](sdk-dotnet-v3.md) of the Azure Cosmos DB SDK available for optimal performance. |
| <input type="checkbox"/> | Singleton Client | Use a [single instance](/dotnet/api/microsoft.azure.cosmos.cosmosclient?view=azure-dotnet&preserve-view=true) of `CosmosClient` for the lifetime of your application for [better performance](performance-tips-dotnet-sdk-v3.md#sdk-usage). |
-| <input type="checkbox"/> | Regions | Make sure to run your application in the same [Azure region](../distribute-data-globally.md) as your Azure Cosmos DB account, whenever possible to reduce latency. Enable 2-4 regions and replicate your accounts in multiple regions for [best availability](../distribute-data-globally.md). For production workloads, enable [automatic failover](../how-to-manage-database-account.md#configure-multiple-write-regions). In the absence of this configuration, the account will experience loss of write availability for all the duration of the write region outage, as manual failover won't succeed due to lack of region connectivity. To learn how to add multiple regions using the .NET SDK visit [here](tutorial-global-distribution.md) |
+| <input type="checkbox"/> | Regions | Make sure to run your application in the same [Azure region](../distribute-data-globally.md) as your Azure Cosmos DB account whenever possible, to reduce latency. Enable 2-4 regions and replicate your accounts in multiple regions for [best availability](../distribute-data-globally.md). For production workloads, enable [service-managed failover](../how-to-manage-database-account.md#configure-multiple-write-regions). In the absence of this configuration, the account will experience loss of write availability for the entire duration of the write region outage, as manual failover won't succeed due to lack of region connectivity. To learn how to add multiple regions using the .NET SDK, see [this tutorial](tutorial-global-distribution.md). |
| <input type="checkbox"/> | Availability and Failovers | Set the [ApplicationPreferredRegions](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationpreferredregions?view=azure-dotnet&preserve-view=true) or [ApplicationRegion](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationregion?view=azure-dotnet&preserve-view=true) in the v3 SDK, and the [PreferredLocations](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.preferredlocations?view=azure-dotnet&preserve-view=true) in the v2 SDK using the [preferred regions list](./tutorial-global-distribution.md?tabs=dotnetv3%2capi-async#preferred-locations). During failovers, write operations are sent to the current write region and all reads are sent to the first region within your preferred regions list. For more information about regional failover mechanics, see the [availability troubleshooting guide](troubleshoot-sdk-availability.md). |
| <input type="checkbox"/> | CPU | You may run into connectivity/availability issues due to lack of resources on your client machine. Monitor your CPU utilization on nodes running the Azure Cosmos DB client, and scale up/out if usage is high. |
| <input type="checkbox"/> | Hosting | Use [Windows 64-bit host](performance-tips-query-sdk.md#use-local-query-plan-generation) processing for best performance, whenever possible. For Direct mode latency-sensitive production workloads, we highly recommend using at least 4-cores and 8-GB memory VMs whenever possible.
cosmos-db Best Practice Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/best-practice-java.md
This article walks through the best practices for using the Azure Cosmos DB Java
||||
|<input type="checkbox"/> | SDK Version | Always use the [latest version](sdk-java-v4.md) of the Azure Cosmos DB SDK available for optimal performance. |
| <input type="checkbox"/> | Singleton Client | Use a [single instance](/jav#sdk-usage). |
-| <input type="checkbox"/> | Regions | Make sure to run your application in the same [Azure region](../distribute-data-globally.md) as your Azure Cosmos DB account, whenever possible to reduce latency. Enable 2-4 regions and replicate your accounts in multiple regions for [best availability](../distribute-data-globally.md). For production workloads, enable [automatic failover](../how-to-manage-database-account.md#configure-multiple-write-regions). In the absence of this configuration, the account will experience loss of write availability for all the duration of the write region outage, as manual failover won't succeed due to lack of region connectivity. To learn how to add multiple regions using the Java SDK [visit here](tutorial-global-distribution.md) |
+| <input type="checkbox"/> | Regions | Make sure to run your application in the same [Azure region](../distribute-data-globally.md) as your Azure Cosmos DB account whenever possible, to reduce latency. Enable 2-4 regions and replicate your accounts in multiple regions for [best availability](../distribute-data-globally.md). For production workloads, enable [service-managed failover](../how-to-manage-database-account.md#configure-multiple-write-regions). In the absence of this configuration, the account will experience loss of write availability for the entire duration of the write region outage, as manual failover won't succeed due to lack of region connectivity. To learn how to add multiple regions using the Java SDK, see [this tutorial](tutorial-global-distribution.md). |
| <input type="checkbox"/> | Availability and Failovers | Set the [preferredRegions](/jav). |
| <input type="checkbox"/> | CPU | You may run into connectivity/availability issues due to lack of resources on your client machine. Monitor your CPU utilization on nodes running the Azure Cosmos DB client, and scale up/out if usage is very high. |
| <input type="checkbox"/> | Hosting | For most common cases of production workloads, we highly recommend using at least 4-cores and 8-GB memory VMs whenever possible. |
cosmos-db Manage With Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/manage-with-powershell.md
New-AzCosmosDBAccountKey `
### <a id="enable-automatic-failover"></a> Enable service-managed failover
-The following command sets an Azure Cosmos DB account to fail over automatically to its secondary region should the primary region become unavailable.
+The following command configures an Azure Cosmos DB account to perform a service-managed failover to its secondary region if the primary region becomes unavailable.
```azurepowershell-interactive
$resourceGroupName = "myResourceGroup"
```
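The snippet above is truncated in this digest; a minimal sketch of the full command might look like the following. The account name is a placeholder, and the `-EnableAutomaticFailover` parameter name assumes the Az.CosmosDB module's `Update-AzCosmosDBAccount` cmdlet, so verify it against your installed module version.

```azurepowershell-interactive
# Sketch only: resource group and account names are placeholders.
$resourceGroupName = "myResourceGroup"
$accountName = "mycosmosaccount"

Update-AzCosmosDBAccount `
    -ResourceGroupName $resourceGroupName `
    -Name $accountName `
    -EnableAutomaticFailover $true
```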
cost-management-billing Calculate Ea Savings Plan Savings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/calculate-ea-savings-plan-savings.md
+
+ Title: Calculate Enterprise Agreement (EA) savings plan cost savings
+
+description: Learn how Enterprise Agreement users manually calculate their savings plan savings.
+++++ Last updated : 04/03/2023+++
+# Calculate EA savings plan cost savings
+
+This article helps Enterprise Agreement (EA) users manually calculate their savings plan savings. You download your amortized usage and charges file, prepare an Excel worksheet, and then do some calculations to determine your savings. There are several steps involved, and we walk you through the process. Although the example process shown in this article uses Excel, you can use the spreadsheet application of your choice.
+
+> [!NOTE]
+> The prices shown in this article are for example purposes only.
+
+This article is specific to EA users. Microsoft Customer Agreement (MCA) users can use similar steps to calculate their savings plan savings through invoices. However, the MCA amortized usage file doesn't contain UnitPrice (on-demand pricing) for savings plans. Other resources in the file do. For more information, see [Download usage for your Microsoft Customer Agreement](../savings-plan/utilization-cost-reports.md).
+
+## Required permissions
+
+To view and download usage data as an EA customer, you must be an Enterprise Administrator, Account Owner, or Department Admin with the view charges policy enabled.
+
+## Download all usage amortized charges
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Search for _Cost Management + Billing_.
+ :::image type="content" source="./media/calculate-ea-savings-plan-savings/search-cost-management.png" alt-text="Screenshot showing search for cost management." lightbox="./media/calculate-ea-savings-plan-savings/search-cost-management.png" :::
+3. If you have access to multiple billing accounts, select the billing scope for your EA billing account.
+4. Select **Usage + charges**.
+5. For the month you want to download, select **Download**.
+ :::image type="content" source="./media/calculate-ea-savings-plan-savings/download-usage-ea.png" alt-text="Screenshot showing Usage + charges download." lightbox="./media/calculate-ea-savings-plan-savings/download-usage-ea.png" :::
+6. On the Download Usage + Charges page, under Usage Details, select **Amortized charges (usage and purchases)**.
+ :::image type="content" source="./media/calculate-ea-savings-plan-savings/select-usage-detail-charge-type-small.png" alt-text="Screenshot showing the Download usage + charges window." lightbox="./media/calculate-ea-savings-plan-savings/select-usage-detail-charge-type.png" :::
+7. Select **Prepare document**.
+8. It could take a while for Azure to prepare your download, depending on your monthly usage. When it's ready for download, select **Download csv**.
+
+## Prepare data and calculate savings
+
+Because Azure usage files are in CSV format, you need to prepare the data for use in Excel. Then you calculate your savings.
+
+1. Open the amortized cost file in Excel and save it as an Excel workbook.
+2. The data resembles the following example.
+ :::image type="content" source="./media/calculate-ea-savings-plan-savings/unformatted-data.png" alt-text="Example screenshot of the unformatted amortized usage file." lightbox="./media/calculate-ea-savings-plan-savings/unformatted-data.png" :::
+3. In the Home ribbon, select **Format as Table**.
+4. In the Create Table window, select **My table has headers**.
+5. In the **benefitName** column, set a filter to clear **Blanks**.
+ :::image type="content" source="./media/calculate-ea-savings-plan-savings/savings-plan-name-clear-blanks.png" alt-text="Screenshot showing clear Blanks data." lightbox="./media/calculate-ea-savings-plan-savings/savings-plan-name-clear-blanks.png" :::
+6. Find the **ChargeType** column and then to the right of the column name, select the sort and filter symbol (the down arrow).
+7. For the **ChargeType** column, set a filter on it to select only **Usage**. Clear any other selections.
+ :::image type="content" source="./media/calculate-ea-savings-plan-savings/charge-type-selection-small.png" alt-text="Screenshot showing ChargeType selection." lightbox="./media/calculate-ea-savings-plan-savings/charge-type-selection.png" :::
+8. To the right of **UnitPrice**, insert a column and label it with a title like **TotalUsedSavings**.
+9. In the first cell under **TotalUsedSavings**, create a formula that calculates _(UnitPrice - EffectivePrice) \* Quantity_.
+ :::image type="content" source="./media/calculate-ea-savings-plan-savings/total-used-savings-formula.png" alt-text="Screenshot showing the TotalUsedSavings formula." lightbox="./media/calculate-ea-savings-plan-savings/total-used-savings-formula.png" :::
+10. Copy the formula to all the other empty **TotalUsedSavings** cells.
+11. At the bottom of the **TotalUsedSavings** column, sum the column's values.
+ :::image type="content" source="./media/calculate-ea-savings-plan-savings/total-used-savings-summed.png" alt-text="Screenshot showing the summed values." lightbox="./media/calculate-ea-savings-plan-savings/total-used-savings-summed.png" :::
+12. Somewhere under your data, create a cell named _TotalUsedSavingsValue_. Next to it, copy the summed **TotalUsedSavings** cell and paste it as **Values**. This step is important because the next step changes the applied filter, which affects the summed total.
+ :::image type="content" source="./media/calculate-ea-savings-plan-savings/paste-value-used.png" alt-text="Screenshot showing pasting the TotalUsedSavings cell as Values." lightbox="./media/calculate-ea-savings-plan-savings/paste-value-used.png" :::
+13. For the **ChargeType** column, set a filter on it to select only **UnusedSavingsPlan**. Clear any other selections.
+14. To the right of the **TotalUsedSavings** column, insert a column and label it with a title like **TotalUnused**.
+15. In the first cell under **TotalUnused**, create a formula that calculates _EffectivePrice \* Quantity_.
+ :::image type="content" source="./media/calculate-ea-savings-plan-savings/total-unused-formula.png" alt-text="Screenshot showing the TotalUnused formula." lightbox="./media/calculate-ea-savings-plan-savings/total-unused-formula.png" :::
+16. At the bottom of the TotalUnused column, sum the column's values.
+17. Somewhere under your data, create a cell named _TotalUnusedValue_. Next to it, copy the TotalUnused cell and paste it as **Values**.
+18. Under the TotalUsedSavingsValue and TotalUnusedValue cells, create a cell named _SavingsPlanSavings_. Next to it, subtract TotalUnusedValue from TotalUsedSavingsValue. The calculation result is your savings plan savings.
+ :::image type="content" source="./media/calculate-ea-savings-plan-savings/savings-plan-savings.png" alt-text="Screenshot showing the SavingsPlanSavings calculation and final savings." lightbox="./media/calculate-ea-savings-plan-savings/savings-plan-savings.png" :::
+
+If you see a negative savings value, you likely have unused savings plans. Review your savings plan usage. For more information, see [View savings plan utilization after purchase](view-utilization.md).
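If you'd rather script the same calculation than work through it in Excel, here's a rough PowerShell sketch over the amortized usage CSV. The file name is a placeholder and the column names follow the steps above; treat it as an illustration rather than an official tool.

```powershell
# Rough sketch of the savings calculation described above.
# 'amortized-charges.csv' is a placeholder file name.
$rows = Import-Csv -Path 'amortized-charges.csv' |
    Where-Object { $_.benefitName }                      # keep only savings plan rows

$used = $rows | Where-Object { $_.ChargeType -eq 'Usage' } |
    ForEach-Object { ([double]$_.UnitPrice - [double]$_.EffectivePrice) * [double]$_.Quantity } |
    Measure-Object -Sum

$unused = $rows | Where-Object { $_.ChargeType -eq 'UnusedSavingsPlan' } |
    ForEach-Object { [double]$_.EffectivePrice * [double]$_.Quantity } |
    Measure-Object -Sum

'Savings plan savings: {0:N2}' -f ($used.Sum - $unused.Sum)
```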
+
+## Other ways to get data and see savings
+
+Using the preceding steps, you can repeat the process for any number of months. Doing so allows you to see your savings over a longer period.
+
+Instead of downloading usage files, one per month, you can get all your usage data for a specific date range using exports from Cost Management and output the data to Azure Storage. Doing so allows you to see your savings over a longer period. For more information about creating an export, see [Create and manage exported data](../costs/tutorial-export-acm-data.md).
+
+## Next steps
+
+- If you have any unused savings plans, read [View savings plan utilization after purchase](view-utilization.md).
+- Learn more about creating an export at [Create and manage exported data](../costs/tutorial-export-acm-data.md).
data-factory Concept Managed Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concept-managed-airflow.md
Azure Data Factory offers serverless pipelines for data process orchestration, data movement with 100+ managed connectors, and visual transformations with the mapping data flow.
-Managed Airflow in Azure Data Factory is a managed orchestration service for [Apache Airflow](https://airflow.apache.org/) that simplifies the creation and management of Airflow environments on which you can operate end-to-end data pipelines at scale. Apache Airflow is an open-source tool used to programmatically author, schedule, and monitor sequences of processes and tasks referred to as "workflows." With Managed Airflow in Azure Data Factory, you can use Airflow and Python to create data workflows without managing the underlying infrastructure for scalability, availability, and security.
+Azure Data Factory's Managed Airflow service is a simple and efficient way to create and manage [Apache Airflow](https://airflow.apache.org) environments, enabling you to run data pipelines at scale with ease.
+[Apache Airflow](https://airflow.apache.org) is an open-source platform used to programmatically create, schedule, and monitor complex data workflows. It allows you to define a set of tasks, called operators, that can be combined into directed acyclic graphs (DAGs) to represent data pipelines. Airflow enables you to execute these DAGs on a schedule or in response to an event, monitor the progress of workflows, and provide visibility into the state of each task. It is widely used in data engineering and data science to orchestrate data pipelines, and is known for its flexibility, extensibility, and ease of use.
:::image type="content" source="media/concept-managed-airflow/data-integration.png" alt-text="Screenshot shows data integration.":::
With Managed Airflow, Azure Data Factory now offers multi-orchestration capabili
## Features
-- **Automatic Airflow setup** – Quickly set up Apache Airflow by choosing an [Apache Airflow version](concept-managed-airflow.md#supported-apache-airflow-versions) when you create a Managed Airflow environment. ADF Managed Airflow sets up Apache Airflow for you using the same Apache Airflow user interface and open-source code you can download on the Internet.
-- **Automatic scaling** – Automatically scale Apache Airflow Workers by setting the minimum and maximum number of Workers that run in your environment. ADF Managed Airflow monitors the Workers in your environment. It uses its autoscaling component to add Workers to meet demand until it reaches the maximum number of Workers you defined.
-- **Built-in authentication** – Enable Azure Active Directory (Azure AD) role-based authentication and authorization for your Airflow Web server by defining Azure AD RBAC's access control policies.
-- **Built-in security** – Metadata is also automatically encrypted by Azure-managed keys, so your environment is secure by default. Additionally, it supports double encryption with a Customer-Managed Key (CMK).
-- **Streamlined upgrades and patches** – Azure Data Factory Managed Airflow provide new versions of Apache Airflow periodically. The ADF Managed Airflow team will auto-update and patch the minor versions.
-- **Workflow monitoring** – View Airflow logs and Airflow metrics in Azure Monitor to identify Airflow task delays or workflow errors without needing additional third-party tools. Managed Airflow automatically sends environment metrics, and if enabled, Airflow logs to Azure Monitor.
-- **Azure integration** – Azure Data Factory Managed Airflow supports open-source integrations with Azure Data Factory pipelines, Azure Batch, Azure Cosmos DB, Azure Key Vault, ACI, ADLS Gen2, Azure Kusto, as well as hundreds of built-in and community-created operators and sensors.
+Managed Airflow in Azure Data Factory offers a range of powerful features, including:
+
+- **Fast and simple deployment** – You can quickly and easily set up Apache Airflow by selecting an [Apache Airflow version](concept-managed-airflow.md#supported-apache-airflow-versions) when you create a Managed Airflow.
+- **Cloud scale** – Managed Airflow automatically scales Apache Airflow nodes when required based on range specification (min, max).
+- **Azure Active Directory integration** – You can enable [Azure AD RBAC](concepts-roles-permissions.md) against your Airflow environment for a single sign-on experience that is secured by Azure Active Directory.
+- **Managed Virtual Network integration** (coming soon) – You can access data sources behind private endpoints or on-premises by using the ADF Managed Virtual Network, which provides extra network isolation.
+- **Metadata encryption** – Managed Airflow automatically encrypts metadata using Azure-managed keys to ensure your environment is secure by default. It also supports double encryption with a [Customer-Managed Key (CMK)](enable-customer-managed-key.md).
+- **Azure Monitoring and alerting** – All the logs generated by Managed Airflow are exported to Azure Monitor. Managed Airflow also provides metrics to track critical conditions and to notify you when needed.
## Architecture

:::image type="content" source="media/concept-managed-airflow/architecture.png" lightbox="media/concept-managed-airflow/architecture.png" alt-text="Screenshot shows architecture in Managed Airflow.":::

## Region availability (public preview)
-* EastUs
-* SouthCentralUs
-* WestUs
-* UKSouth
-* NorthEurope
-* WestEurope
-* SouthEastAsia
-* EastUS2 (coming soon)
-* WestUS2 (coming soon)
-* GermanyWestCentral (coming soon)
+* East US
+* South Central US
+* West US
+* UK South
+* North Europe
+* West Europe
+* Southeast Asia
+* East US 2 (coming soon)
+* West US 2 (coming soon)
+* Germany West Central (coming soon)
* Australia East (coming soon)

> [!NOTE]
data-factory How Does Managed Airflow Work https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-does-managed-airflow-work.md
Last updated 01/20/2023
> [!NOTE] > Managed Airflow for Azure Data Factory relies on the open source Apache Airflow application. Documentation and more tutorials for Airflow can be found on the Apache Airflow [Documentation](https://airflow.apache.org/docs/) or [Community](https://airflow.apache.org/community/) pages.
-Azure Data Factory Managed Airflow orchestrates your workflows using Directed Acyclic Graphs (DAGs) written in Python. You must provide your DAGs and plugins in Azure Blob Storage. Airflow requirements or library dependencies can be installed during the creation of the new Managed Airflow environment or by editing an existing Managed Airflow environment. Then run and monitor your DAGs by launching the Airflow UI from ADF using a command line interface (CLI) or a software development kit (SDK).
+Managed Airflow in Azure Data Factory uses Python-based Directed Acyclic Graphs (DAGs) to run your orchestration workflows.
+To use this feature, you need to provide your DAGs and plugins in Azure Blob Storage. You can launch the Airflow UI from ADF using a command line interface (CLI) or a software development kit (SDK) to manage your DAGs.
## Create a Managed Airflow environment

The following steps set up and configure your Managed Airflow environment.
If you're using Airflow version 1.x, delete DAGs that are deployed on any Airflo
* [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md)
* [Refresh a Power BI dataset with Managed Airflow](tutorial-refresh-power-bi-dataset-with-airflow.md)
* [Managed Airflow pricing](airflow-pricing.md)
-* [How to change the password for Managed Airflow environments](password-change-airflow.md)
+* [How to change the password for Managed Airflow environments](password-change-airflow.md)
ddos-protection Manage Ddos Ip Protection Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-ip-protection-cli.md
Previously updated : 03/09/2023 Last updated : 04/04/2023 # Customer intent As an IT admin, I want to learn how to enable DDoS IP Protection on my public IP address.
You can disable DDoS IP Protection on an existing public IP address.
``` >[!Note]
->When changing DDoS IP protection from **Enabled** to **Disabled**, telemetry for the public IP resource will not be available.
+> When changing DDoS IP protection from **Enabled** to **Disabled**, telemetry for the public IP resource will no longer be active.
## Validate and test
ddos-protection Manage Ddos Ip Protection Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-ip-protection-portal.md
Previously updated : 03/16/2023 Last updated : 04/04/2023 # Customer intent As an IT admin, I want to learn how to enable DDoS IP Protection on my public IP address.
In this quickstart, you'll enable DDoS IP protection and link it to a public IP
| | |
| Subscription | Select your subscription. |
| Resource group | Select **Create new**, enter **MyResourceGroup**. </br> Select **OK**. |
- | Region | Select your region. In this example, we selected **(US) East US 2**. |
- | Name | Enter your resource name. In this example, we selected **mystandardpublicip**. |
+ | Region | Select your region. In this example, we selected **(US) East US**. |
+ | Name | Enter your resource name. In this example, we selected **myStandardPublicIP**. |
| IP Version | Select IPv4 or IPv6. In this example, we selected **IPv4**. |
| SKU | Select **Standard**. DDoS IP Protection is enabled only on Public IP Standard SKU. |
| Availability Zone | You can specify an availability zone in which to deploy your public IP address. In this example, we selected **zone-redundant**. |
In this quickstart, you'll enable DDoS IP protection and link it to a public IP
1. Select your Public IP address. In this example, select **myStandardPublicIP**.
1. In the **Overview** pane, select the **Properties** tab, then select **DDoS protection**.
- :::image type="content" source="./media/ddos-protection-quickstarts/ddos-protection-view-status.png" alt-text="Screenshot showing view of Public IP Properties.":::
+ :::image type="content" source="./media/ddos-protection-quickstarts/ddos-protection-view-status.png" alt-text="Screenshot showing view of Public IP Properties." lightbox="./media/ddos-protection-quickstarts/ddos-protection-view-status.png":::
1. In the **Configure DDoS protection** pane, under **Protection type**, select **IP**, then select **Save**.
In this quickstart, you'll enable DDoS IP protection and link it to a public IP
:::image type="content" source="./media/ddos-protection-quickstarts/ddos-protection-disable-status.png" alt-text="Screenshot of disabling IP Protection in Public IP Properties.":::

> [!NOTE]
-> When changing DDoS IP protection from **Enabled** to **Disabled**, telemetry for the public IP resource will not be available.
+> When changing DDoS IP protection from **Enabled** to **Disabled**, telemetry for the public IP resource will no longer be active.
## Validate and test
First, check the details of your public IP address:
1. In the **Overview** pane, select the **Properties** tab in the middle of the page, then select **DDoS protection**.
1. View **Protection status** and verify your public IP is protected.
- :::image type="content" source="./media/ddos-protection-quickstarts/ddos-protection-protected-status.png" alt-text="Screenshot of status of IP Protection in Public IP Properties.":::
+ :::image type="content" source="./media/ddos-protection-quickstarts/ddos-protection-protected-status.png" alt-text="Screenshot of status of IP Protection in Public IP Properties." lightbox="./media/ddos-protection-quickstarts/ddos-protection-protected-status.png":::
## Clean up resources
ddos-protection Manage Ddos Protection Powershell Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-powershell-ip.md
Previously updated : 02/27/2023 Last updated : 04/04/2023
$publicIp.DdosSettings.ProtectionMode = 'Disabled'
Set-AzPublicIpAddress -PublicIpAddress $publicIp
```

> [!NOTE]
-> When changing DDoS IP protection from **Enabled** to **Disabled**, telemetry for the public IP resource will not be available.
+> When changing DDoS IP protection from **Enabled** to **Disabled**, telemetry for the public IP resource will no longer be active.
## Clean up resources
defender-for-cloud Attack Path Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/attack-path-reference.md
This section lists all of the cloud security graph components (connections and
| Insight | Description | Supported entities |
|--|--|--|
-| Exposed to the internet | Indicates that a resource is exposed to the internet. Supports port filtering | Azure virtual machine, AWS EC2, Azure storage account, Azure SQL server, Azure Cosmos DB, AWS S3, Kubernetes pod, Azure SQL Managed Instance, Azure MySQL Single Server, Azure MySQL Flexible Server, Azure PostgreSQL Single Server, Azure PostgreSQL Flexible Server, Azure MariaDB Single Server, Synapse Workspace, RDS Instance |
+| Exposed to the internet | Indicates that a resource is exposed to the internet. Supports port filtering. [Learn more](concept-data-security-posture-prepare.md#exposed-to-the-internetallows-public-access) | Azure virtual machine, AWS EC2, Azure storage account, Azure SQL server, Azure Cosmos DB, AWS S3, Kubernetes pod, Azure SQL Managed Instance, Azure MySQL Single Server, Azure MySQL Flexible Server, Azure PostgreSQL Single Server, Azure PostgreSQL Flexible Server, Azure MariaDB Single Server, Synapse Workspace, RDS Instance |
| Allows basic authentication (Preview) | Indicates that a resource allows basic (local user/password or key-based) authentication | Azure SQL Server, RDS Instance |
| Contains sensitive data (Preview) <br/> <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender for CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). | Indicates that a resource contains sensitive data. | Azure Storage Account, Azure Storage Account Container, AWS S3 bucket, Azure SQL Server, Azure SQL Database, Azure Data Lake Storage Gen2, Azure Database for PostgreSQL, Azure Database for MySQL, Azure Synapse Analytics, Azure Cosmos DB accounts |
| Moves data to (Preview) | Indicates that a resource transfers its data to another resource | Storage account container, AWS S3, AWS RDS instance, AWS RDS cluster |
| Gets data from (Preview) | Indicates that a resource gets its data from another resource | Storage account container, AWS S3, AWS RDS instance, AWS RDS cluster |
| Has tags | Lists the resource tags of the cloud resource | All Azure and AWS resources |
| Installed software | Lists all software installed on the machine. This insight is applicable only for VMs that have threat and vulnerability management integration with Defender for Cloud enabled and are connected to Defender for Cloud. | Azure virtual machine, AWS EC2 |
-| Allows public access | Indicates that a public read access is allowed to the resource with no authorization required | Azure storage account, AWS S3 bucket, GitHub repository |
+| Allows public access | Indicates that a public read access is allowed to the resource with no authorization required. [Learn more](concept-data-security-posture-prepare.md#exposed-to-the-internetallows-public-access) | Azure storage account, AWS S3 bucket, GitHub repository |
| Doesn't have MFA enabled | Indicates that the user account does not have a multi-factor authentication solution enabled | Azure AD User account, IAM user |
| Is external user | Indicates that the user account is outside the organization's domain | Azure AD User account |
| Is managed | Indicates that an identity is managed by the cloud provider | Azure Managed Identity |
defender-for-cloud Concept Data Security Posture Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture-prepare.md
The table summarizes support for data-aware posture management.
**Support** | **Details** |
-What Azure data resources can I scan? | Azure storage accounts v1, v2<br/><br/> Azure Data Lake Storage Gen1/Gen2<br/><br/>Accounts are supported behind private networks but not behind private endpoints.<br/><br/> Defender for Cloud can discover data encrypted by KMB or a customer-managed key. <br/><br/>Page blobs aren't scanned.
-What AWS data resources can I scan? | AWS S3 buckets<br/><br/> Defender for Cloud can scan encrypted data, but not data encrypted with a customer-managed key.
-What permissions do I need for scanning? | Storage account: Subscription Owner or Microsoft.Storage/storageaccounts/{read/write} and Microsoft.Authorization/roleAssignments/{read/write/delete}<br/><br/> Amazon S3 buckets: AWS account permission to run Cloud Formation (to create a role).
+What Azure data resources can I discover? | [Block blob](../storage/blobs/storage-blobs-introduction.md) storage accounts in Azure Storage v1/v2<br/><br/> Azure Data Lake Storage Gen2<br/><br/>Storage accounts behind private networks are supported.<br/><br/> Storage accounts encrypted with a customer-managed server-side key are supported.<br/><br/> Accounts aren't supported if any of these settings are enabled: [Public network access is disabled](../storage/common/storage-network-security.md#change-the-default-network-access-rule); Storage account is defined as [Azure DNS Zone](https://techcommunity.microsoft.com/t5/azure-storage-blog/public-preview-create-additional-5000-azure-storage-accounts/ba-p/3465466); The storage account endpoint has a [custom domain mapped to it](../storage/blobs/storage-custom-domain-name.md).
+What AWS data resources can I discover? | AWS S3 buckets<br/><br/> Defender for Cloud can discover KMS-encrypted data, but not data encrypted with a customer-managed key.
+What permissions do I need for discovery? | Storage account: Subscription Owner or Microsoft.Storage/storageaccounts/{read/write} and Microsoft.Authorization/roleAssignments/{read/write/delete}<br/><br/> Amazon S3 buckets: AWS account permission to run Cloud Formation (to create a role).
What file types are supported for sensitive data discovery? | Supported file types (you can't select a subset) - .doc, .docm, .docx, .dot, .odp, .ods, .odt, .pdf, .pot, .pps, .ppsx, .ppt, .pptm, .pptx, .xlc, .xls, .xlsb, .xlsm, .xlsx, .xlt, .csv, .json, .psv, .ssv, .tsv, .txt, .xml, .parquet, .avro, .orc.
-What Azure regions are supported? | You can scan Azure storage accounts in:<br/><br/> Australia Central; Australia Central 2; Australia East; Australia Southeast; Brazil South; Canada Central; Canada East; Central India; Central US; East Asia; East US; East US 2; France Central; Germany West Central; Japan East; Japan West: Jio India West: North Central US; North Europe; Norway East; South Africa North: South Central US; South India; Sweden Central; Switzerland North; UAE North; UK South; UK West: West Central US; West Europe; West US, West US3.<br/><br/> Scanning is done locally in the region.
-What AWS regions are supported? | Asia Pacific (Mumbai); Asia Pacific (Singapore); Asia Pacific (Sydney); Asia Pacific (Tokyo); Canada (Central); Europe (Frankfurt); Europe (Ireland); Europe (London); Europe (Paris); South America (São Paulo); US East (Ohio); US East (N. Virginia); US West (N. California): US West (Oregon).<br/><br/> Scanning is done locally in the region.
-Do I need to install an agent? | No, scanning is agentless.
+What Azure regions are supported? | You can discover Azure storage accounts in:<br/><br/> Australia Central; Australia Central 2; Australia East; Australia Southeast; Brazil South; Canada Central; Canada East; Central India; Central US; East Asia; East US; East US 2; France Central; Germany West Central; Japan East; Japan West; Jio India West; North Central US; North Europe; Norway East; South Africa North; South Central US; South India; Sweden Central; Switzerland North; UAE North; UK South; UK West; West Central US; West Europe; West US; West US 3.<br/><br/> Discovery is done locally in the region.
+What AWS regions are supported? | Asia Pacific (Mumbai); Asia Pacific (Singapore); Asia Pacific (Sydney); Asia Pacific (Tokyo); Canada (Central); Europe (Frankfurt); Europe (Ireland); Europe (London); Europe (Paris); South America (São Paulo); US East (Ohio); US East (N. Virginia); US West (N. California); US West (Oregon).<br/><br/> Discovery is done locally in the region.
+Do I need to install an agent? | No, discovery is agentless.
What's the cost? | The feature is included with the Defender CSPM and Defender for Storage plans, and doesn't include other costs except for the respective plan costs.
-## Scanning
-
-- It takes up to 24 hours to see the results for a first scan.
-- Refreshed results for a resource that's previously been scanned take up to eight days.
-- A new Azure storage account that's added to an already scanned subscription is scanned within 24 hours or less.
-- A new AWS S3 bucket that's added to an already scanned AWS account is scanned within 48 hours or less.
-
+What permissions do I need to edit data sensitivity settings? | You need one of these permissions: Global Administrator, Compliance Administrator, Compliance Data Administrator, Security Administrator, Security Operator.
## Configuring data sensitivity settings
The main steps for configuring data sensitivity settings include:
- [Import custom sensitive info types/labels from Microsoft Purview compliance portal](data-sensitivity-settings.md#import-custom-sensitive-info-typeslabels-from-microsoft-purview-compliance-portal)
+- [Import custom sensitive info types/labels from Microsoft Purview compliance portal](data-sensitivity-settings.md#import-custom-sensitive-info-typeslabels)
- [Customize sensitive data categories/types](data-sensitivity-settings.md#customize-sensitive-data-categoriestypes)
- [Set the threshold for sensitivity labels](data-sensitivity-settings.md#set-the-threshold-for-sensitive-data-labels)

[Learn more](/microsoft-365/compliance/create-sensitivity-labels) about sensitivity labels in Microsoft Purview.
-## Discovery and scanning
+## Discovery
-Defender for Cloud starts discovering and scanning data immediately after enabling a plan, or after turning on the feature in plans that are already running.
+Defender for Cloud starts discovering data immediately after enabling a plan, or after turning on the feature in plans that are already running.
-- After you onboard the feature, results appear in the Defender for Cloud portal within 24 hours.
-- After files are updated in the scanned resources, data is refreshed within eight days.
+- It takes up to 24 hours to see the results for a first-time discovery.
+- After files are updated in the discovered resources, data is refreshed within eight days.
+- A new Azure storage account that's added to an already discovered subscription is discovered within 24 hours or less.
+- A new AWS S3 bucket that's added to an already discovered AWS account is discovered within 48 hours or less.
-## Scanning AWS storage
+### Discovering AWS storage
-In order to protect AWS resources in Defender for Cloud, you set up an AWS connector, using a CloudFormation template to onboard the AWS account.
+In order to protect AWS resources in Defender for Cloud, you set up an AWS connector, using a CloudFormation template to onboard the AWS account.
-- To scan AWS data resources, Defender for Cloud updates the CloudFormation template.
+- To discover AWS data resources, Defender for Cloud updates the CloudFormation template.
- The CloudFormation template creates a new role in AWS IAM, to allow permission for the Defender for Cloud scanner to access data in the S3 buckets.
- To connect AWS accounts, you need Administrator permissions on the account.
- The role allows these permissions: S3 read only; KMS decrypt.
+## Exposed to the internet/allows public access
+Defender CSPM attack paths and cloud security graph insights include information about storage resources that are exposed to the internet and allow public access. The following table provides more details.
+**State** | **Azure storage accounts** | **AWS S3 Buckets**
+ | |
+**Exposed to the internet** | An Azure storage account is considered exposed to the internet if either of these settings is enabled:<br/><br/> Storage_account_name > **Networking** > **Public network access** > **Enabled from all networks**<br/><br/> or<br/><br/> Storage_account_name > **Networking** > **Public network access** > **Enabled from selected virtual networks and IP addresses**. | An AWS S3 bucket is considered exposed to the internet if the AWS account/AWS S3 bucket policies don't have a condition set for IP addresses.
+**Allows public access** | An Azure storage account container is considered to allow public access if this setting is enabled on the storage account:<br/><br/> Storage_account_name > **Configuration** > **Allow blob public access** > **Enabled**<br/><br/> and **either** of these settings:<br/><br/> Storage_account_name > **Containers** > container_name > **Public access level** set to **Blob (anonymous read access for blobs only)**<br/><br/> Or, storage_account_name > **Containers** > container_name > **Public access level** set to **Container (anonymous read access for containers and blobs)**. | An AWS S3 bucket is considered to allow public access if both the AWS account and the AWS S3 bucket have **Block all public access** set to **Off**, and **either** of these settings is set:<br/><br/> In the policy, **RestrictPublicBuckets** isn't enabled, and the **Principal** setting is set to * and **Effect** is set to **Allow**.<br/><br/> Or, in the access control list, **IgnorePublicAcl** isn't enabled, and permission is allowed for **Everyone**, or for **Authenticated users**.
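If you want to spot-check the Azure-side settings in the table outside the portal, a sketch along these lines may help. The resource group, account, and property names assume the Az.Storage module (property availability can vary by module version), so treat it as an illustration.

```azurepowershell-interactive
# Sketch: inspect the settings referenced in the table above (names are placeholders).
$account = Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageaccount"

# Internet exposure: the account's public network access setting.
$account.PublicNetworkAccess

# Public access: whether anonymous blob access is allowed at the account level.
$account.AllowBlobPublicAccess

# Container-level public access (Blob, Container, or Off) for each container.
Get-AzStorageContainer -Context $account.Context | Select-Object Name, PublicAccess
```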
## Next steps
defender-for-cloud Concept Data Security Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture.md
Last updated 03/09/2023
# About data-aware security posture (preview)
-As digital transformation accelerates, organizations move data to the cloud at an exponential rate using multiple data stores such as object stores and managed/hosted databases. The dynamic and complex nature of the cloud has increased data threat surfaces and risk. This causes challenges for security teams around data visibility and protecting the cloud data estate.
+As digital transformation accelerates, organizations move data to the cloud at an exponential rate using multiple data stores such as object stores and managed/hosted databases. The dynamic and complex nature of the cloud has increased data threat surfaces and risks. This causes challenges for security teams around data visibility and protecting the cloud data estate.
Data-aware security in Microsoft Defender for Cloud helps you to reduce data risk, and respond to data breaches. Using data-aware security posture you can:
Data-aware security in Microsoft Defender for Cloud helps you to reduce data ris
Data-aware security posture automatically and continuously discovers managed and shadow data resources across clouds, including different types of object stores and databases.
-- You can discover sensitive data using the sensitive data discovery extension that's included in the Defender Cloud Security Posture Management (CSPM) and Defender for Storage plans.
-- Discovery of hosted databases and data flows is available in Cloud Security Explorer and Attack Paths. This functionality is available in the Defender for CSPM plan, and isn't dependent on the extension.
+- Discover sensitive data using the sensitive data discovery extension that's included in the Defender Cloud Security Posture Management (CSPM) and Defender for Storage plans.
+- In addition, you can discover hosted databases and data flows in Cloud Security Explorer and Attack Paths. This functionality is available in the Defender CSPM plan, and isn't dependent on the sensitive data discovery extension.
+
+## Smart sampling
+
+Defender for Cloud uses smart sampling to discover a selected number of files in your cloud datastores. Smart sampling surfaces evidence of sensitive data issues, while saving on discovery costs and time.
## Data security in Defender CSPM
Cloud Security Explorer helps you identify security risks in your cloud environm
You can leverage Cloud Security Explorer query templates, or build your own queries, to find insights about misconfigured data resources that are publicly accessible and contain sensitive data, across multicloud environments. You can run queries to examine security issues, and to get environment context into your asset inventory, exposure to internet, access controls, data flows, and more. Review [cloud graph insights](attack-path-reference.md#cloud-security-graph-components-list).

## Data security in Defender for Storage

Defender for Storage monitors Azure storage accounts with advanced threat detection capabilities. It detects potential data breaches by identifying harmful attempts to access or exploit data, and by identifying suspicious configuration changes that could lead to a breach. When early suspicious signs are detected, Defender for Storage generates security alerts, allowing security teams to quickly respond and mitigate.
-By applying sensitivity information types and Microsoft Purview sensitivity labels on storage resources, you can easily prioritize the alerts and recommendations that focus on sensitive data.
--
-## Scanning with smart sampling
+By applying sensitivity information types and Microsoft Purview sensitivity labels on storage resources, you can easily prioritize the alerts and recommendations that focus on sensitive data.
-Defender for Cloud uses smart sampling to scan a selected number of files in your cloud datastores. The sampling results discover evidence of sensitive data issues, while saving on scanning costs and time.
+[Learn more about sensitive data discovery](defender-for-storage-data-sensitivity.md) in Defender for Storage.
## Data sensitivity settings
Data sensitivity settings define what's considered sensitive data in your organi
- **Custom information types/labels**: You can optionally import custom sensitive information types and [labels](/microsoft-365/compliance/sensitivity-labels) that you've defined in the Microsoft Purview compliance portal.
- **Sensitive data thresholds**: In Defender for Cloud you can set the threshold for sensitive data labels. The threshold determines the minimum confidence level for a label to be marked as sensitive in Defender for Cloud. Thresholds make it easier to explore sensitive data.
-When scanning resources for data sensitivity, scan results are based on these settings.
+When discovering resources for data sensitivity, results are based on these settings.
When you enable data-aware security capabilities with the sensitive data discovery component in the Defender CSPM or Defender for Storage plans, Defender for Cloud uses algorithms to identify storage resources that appear to contain sensitive data. Resources are labeled in accordance with data sensitivity settings.
-Changes in sensitivity settings take effect the next time that resources are scanned.
+Changes in sensitivity settings take effect the next time that resources are discovered.
## Next steps
defender-for-cloud Data Security Posture Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-security-posture-enable.md
This article describes how to enable [data-aware security posture](data-security
- Before you enable data-aware security posture, [review support and prerequisites](concept-data-security-posture-prepare.md).
- When you enable Defender CSPM or Defender for Storage plans, the sensitive data discovery extension is automatically enabled. You can disable this setting if you don't want to use data-aware security posture, but we recommend that you use the feature to get the most value from Defender for Cloud.
- Sensitive data is identified based on the data sensitivity settings in Defender for Cloud. You can [customize the data sensitivity settings](data-sensitivity-settings.md) to identify the data that your organization considers sensitive.
-- It takes up to 24 hours to see the results of a first scan after enabling the feature.
+- It takes up to 24 hours to see the results of a first discovery after enabling the feature.
## Enable in Defender CSPM (Azure)
Follow these steps to enable data-aware security posture. Don't forget to review
### Before you start
-- Don't forget to: [review the requirements](concept-data-security-posture-prepare.md#scanning-aws-storage) for AWS scanning, and [required permissions](concept-data-security-posture-prepare.md#whats-supported).
+- Don't forget to: [review the requirements](concept-data-security-posture-prepare.md#discovering-aws-storage) for AWS discovery, and [required permissions](concept-data-security-posture-prepare.md#whats-supported).
- Check that there's no policy that blocks the connection to your Amazon S3 buckets.

### Enable for AWS resources
Automatic discovery of S3 buckets in the AWS account starts automatically. The D
If the enable process didn't work because of a blocked policy, check the following:
-- Make sure that the S3 bucket policy doesn't block the connection. In the AWS S3 bucket, select the **Permissions** tab > Bucket policy. Check the policy details to make sure the MDC scanner service running in the Microsoft account in AWS isn't blocked.
+- Make sure that the S3 bucket policy doesn't block the connection. In the AWS S3 bucket, select the **Permissions** tab > Bucket policy. Check the policy details to make sure the Microsoft Defender for Cloud scanner service running in the Microsoft account in AWS isn't blocked.
- Make sure that there's no SCP policy that blocks the connection to the S3 bucket. For example, your SCP policy might block read API calls to the AWS Region where your S3 bucket is hosted.
defender-for-cloud Data Security Review Risks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-security-review-risks.md
Other examples of attack paths for sensitive data include:
Explore data risks and exposure in cloud security graph insights using a query template, or by defining a manual query. 1. In Defender for Cloud, open **Cloud Security Explorer**.
-1. Select a query template, or build your own query. Here's an example:
+1. You can build your own query, or select one of the sensitive data query templates > **Open query**, and modify it as needed. Here's an example:
:::image type="content" source="./media/data-security-review-risks/query.png" alt-text="Screenshot that shows an Insights data query.":::
+### Use query templates
+
+As an alternative to creating your own query, you can use predefined query templates. A number of sensitive data query templates are available. For example:
+
+- Internet exposed storage containers with sensitive data that allow public access.
+- Internet exposed S3 buckets with sensitive data that allow public access
+
+When you open a predefined query it's populated automatically and can be tweaked as needed. For example, here are the prepopulated fields for "Internet exposed storage containers with sensitive data that allow public access".
++ ## Explore sensitive data security alerts When sensitive data discovery is enabled in the Defender for Storage plan, you can prioritize and focus on the alerts that affect resources with sensitive data. [Learn more](defender-for-storage-data-sensitivity.md) about monitoring data security alerts in Defender for Storage.
defender-for-cloud Data Sensitivity Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-sensitivity-settings.md
Last updated 03/22/2023
# Customize data sensitivity settings
-This article describes how to customize data sensitivity settings in Microsoft Defender for Cloud.
+This article describes how to customize data sensitivity settings in Microsoft Defender for Cloud.
Data sensitivity settings are used to identify and focus on managing the critical sensitive data in your organization.
This configuration helps you focus on your critical sensitive resources and impr
## Before you start
-You need one of these permissions in order to sign in and edit sensitivity settings: Global Administrator, Compliance Administrator, Compliance Data Administrator, Security Administrator, Security Operator.
--- [Review the prerequisites](concept-data-security-posture-prepare.md#configuring-data-sensitivity-settings) for customizing data sensitivity settings.
+- Make sure that you [review the prerequisites and requirements](concept-data-security-posture-prepare.md#configuring-data-sensitivity-settings) for customizing data sensitivity settings.
- In Defender for Cloud, enable sensitive data discovery capabilities in the [Defender CSPM](data-security-posture-enable.md) and/or [Defender for Storage](defender-for-storage-data-sensitivity.md) plans.
-Changes in sensitivity settings take effect the next time that resources are scanned.
+Changes in sensitivity settings take effect the next time that resources are discovered.
-## Import custom sensitive info types/labels from Microsoft Purview compliance portal
+## Import custom sensitive info types/labels
-Defender for Cloud uses built-in sensitive info types. You can optionally import your own custom sensitive info types and labels from Microsoft Purview compliance portal to align with your organization's needs.
+Defender for Cloud uses built-in sensitive info types. You can optionally import your own custom sensitive info types and labels from Microsoft Purview compliance portal to align with your organization's needs.
Import as follows (you only need to import once):
defender-for-iot How To Investigate Sensor Detections In A Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-sensor-detections-in-a-device-inventory.md
For example, if you merge two devices, each with an IP address, both IP addresse
**To merge devices from the device inventory:**
-In the device inventory grid, select the devices you want to merge, and then select **Merge** in the toolbar at the top of the page.
+1. In the **Device inventory** page, select the devices you want to merge, and then select **Merge** in the toolbar at the top of the page.
+
+1. At the prompt, select **Confirm** to confirm that you want to merge the devices.
The devices are merged, and a confirmation message appears at the top right.
You can delete a device when it's been inactive for more than 10 minutes.
1. In the **Device inventory** page, select the device or devices you want to delete, and then select **Delete** :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/delete-device.png" border="false"::: in the toolbar at the top of the page.
-1. At the prompt, select **Confirm** to confirm that you want to delete the device from Defender for IoT.
+1. At the prompt, select **Confirm** to confirm that you want to delete the device or devices from Defender for IoT.
-A confirmation message appears at the top right.
+The device or devices are deleted, and a confirmation message appears at the top right.
**To delete all inactive devices**:
defender-for-iot How To Work With The Sensor Device Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-the-sensor-device-map.md
You can only merge [authorized devices](device-inventory.md#unauthorized-devices
1. Select the authorized devices you want to merge by using the SHIFT key to select more than one device, and then right-click and select **Merge**.
+1. At the prompt, select **Confirm** to confirm that you want to merge the devices.
+ The devices are merged, and a confirmation message appears at the top right. Merge events are listed in the OT sensor's event timeline. ## Manage device notifications
frontdoor Front Door Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-overview.md
Whether you're delivering content and files or building global apps and APIs, Azure Front Door can help you deliver higher availability, lower latency, greater scale, and more secure experiences to your users wherever they are.
-Azure Front Door is Microsoft's modern cloud Content Delivery Network (CDN) that provides fast, reliable, and secure access between your users and your applications' static and dynamic web content across the globe. Azure Front Door delivers your content using the Microsoft's global edge network with hundreds of [global and local points of presence (PoPs)](edge-locations-by-region.md) distributed around the world close to both your enterprise and consumer end users.
+Azure Front Door is Microsoft's modern cloud Content Delivery Network (CDN) that provides fast, reliable, and secure access between your users and your applications' static and dynamic web content across the globe. Azure Front Door delivers your content using Microsoft's global edge network with hundreds of [global and local points of presence (PoPs)](edge-locations-by-region.md) distributed around the world close to both your enterprise and consumer end users.
:::image type="content" source="./media/overview/front-door-overview.png" alt-text="Diagram of Azure Front Door routing user traffic to endpoints." lightbox="./media/overview/front-door-overview-expanded.png":::
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/overview.md
for communication to the machine configuration service. Apply tag with the name
applied before or after machine configuration policy definitions are applied to the machine.
+> [!IMPORTANT]
+> In order to communicate over private link for custom packages, the link to the location of the package must be added to the list of allowed URLs.
+ Traffic is routed using the Azure [virtual public IP address](../../virtual-network/what-is-ip-address-168-63-129-16.md) to establish a secure, authenticated channel with Azure platform resources.
healthcare-apis Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/frequently-asked-questions.md
Previously updated : 03/28/2023 Last updated : 04/03/2023
The MedTech service supports the [HL7 FHIR&#174; R4](https://www.hl7.org/impleme
The MedTech service requires device and FHIR destination mappings to perform normalization and transformation processes on device message data. To learn how the MedTech service transforms device message data into [FHIR Observations](https://www.hl7.org/fhir/observation.html), see [Understand the MedTech service device message processing stages](understand-service.md).
+### Is JsonPathContent still supported by the MedTech service device mapping?
+
+Yes. JsonPathContent can be used as a template type within [CollectionContent](overview-of-device-mapping.md#collectioncontent). However, we recommend using [CalculatedContent](how-to-use-calculatedcontent-mappings.md) because it supports all of the features of JsonPathContent plus more advanced capabilities.
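The following sketch shows JsonPathContent used inside CollectionContent; the type name, expressions, and value entries are illustrative only:

```json
{
  "templateType": "CollectionContent",
  "template": [
    {
      "templateType": "JsonPathContent",
      "template": {
        "typeName": "heartrate",
        "typeMatchExpression": "$..[?(@heartRate)]",
        "deviceIdExpression": "$.deviceId",
        "timestampExpression": "$.endDate",
        "values": [
          {
            "required": true,
            "valueExpression": "$.heartRate",
            "valueName": "hr"
          }
        ]
      }
    }
  ]
}
```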
+ ### How long does it take for device message data to show up in the FHIR service? The MedTech service buffers [FHIR Observations](https://www.hl7.org/fhir/observation.html) created during the transformation stage and provides near real-time processing. However, this buffer can potentially delay the persistence of FHIR Observations to the FHIR service up to ~five minutes. To learn how the MedTech service transforms device message data into FHIR Observations, see [Understand the MedTech service device message processing stages](understand-service.md).
For an overview of the MedTech service, see
To learn about the MedTech service device message data transformation, see > [!div class="nextstepaction"]
-> [Understand the MedTech service device message data transformation](overview.md)
+> [Understand the MedTech service device message processing stages](overview.md)
To learn about methods for deploying the MedTech service, see > [!div class="nextstepaction"] > [Choose a deployment method for the MedTech service](deploy-new-choose.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Configure Device Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-configure-device-mappings.md
- Title: How to configure device mappings in MedTech service - Azure Health Data Services
-description: This article describes how to configure device mappings in the Azure Health Data Services MedTech service.
---- Previously updated : 02/09/2023---
-# How to configure device mappings
-
-This article provides an overview and describes how to configure the MedTech service device mappings.
-
-The MedTech service requires two types of JSON-based mappings. The first type, **device mappings**, is responsible for mapping the device payloads sent to the MedTech service device message event hub endpoint. The device mapping extracts types, device identifiers, measurement date time, and the measurement value(s).
-
-The second type, **Fast Healthcare Interoperability Resources (FHIR&#174;) destination mappings**, controls the mapping for FHIR resource. The FHIR destination mappings allow configuration of the length of the observation period, FHIR data type used to store the values, and terminology code(s).
-
-> [!NOTE]
-> Device and FHIR destination mappings are stored in an underlying blob storage and loaded from blob per compute execution. Once updated they should take effect immediately.
-
-The two types of mappings are composed into a JSON document based on their type. These JSON documents are then added to your MedTech service through the Azure portal. The device mapping is added through the **Device mapping** page and the FHIR destination mapping through the **Destination** page.
-
-## Device mappings overview
-
-Device mappings provide functionality to extract device message content into a common format for further evaluation. Each device message received is evaluated against all device mapping templates. A single inbound device message can be separated into multiple outbound messages that are later mapped to different observations in the FHIR service. The result is a normalized data object representing the value or values parsed by the device mapping templates.
-
-The normalized data model has a few required properties that must be found and extracted:
-
-|Property|Description|
-|--|--|
-|**Type**|The name/type to classify the measurement. This value is used to bind to the required FHIR destination mapping. Multiple mappings can output to the same type allowing you to map different representations across multiple devices to a single common output.|
-|**OccurenceTimeUtc**|The time the measurement occurred.|
-|**DeviceId**|The identifier for the device. This value should match an identifier on the device resource that exists on the destination FHIR service.|
-|**Properties**|Extract at least one property so the value can be saved in the Observation resource created. Properties are a collection of key value pairs extracted during normalization.|
-
-> [!IMPORTANT]
-> The full normalized model is defined by the [IMeasurement](https://github.com/microsoft/iomt-fhir/blob/master/src/lib/Microsoft.Health.Fhir.Ingest.Schema/IMeasurement.cs) interface.
-
-Below is an example of what happens during normalization and transformation process within the MedTech service. For the purposes of the device mapping, we'll be focusing on the **Normalized data** process:
--
-The content payload itself is an Azure Event Hubs message, which is composed of three parts: Body, Properties, and SystemProperties. The `Body` is a byte array representing an UTF-8 encoded string. During template evaluation, the byte array is automatically converted into the string value. `Properties` is a key value collection for use by the message creator. `SystemProperties` is also a key value collection reserved by the Azure Event Hubs framework with entries automatically populated by it.
-
-```json
-{
- "Body": {
- "content": "value"
- },
- "Properties": {
- "key1": "value1",
- "key2": "value2"
- },
- "SystemProperties": {
- "x-opt-sequence-number": 1,
- "x-opt-enqueued-time": "2021-02-01T22:46:01.8750000Z",
- "x-opt-offset": 1,
- "x-opt-partition-key": "1"
- }
-}
-```
-
-## Device mappings validations
-
-The validation process validates the device mappings before allowing them to be saved for use. These elements are required in the device mapping templates.
-
-**Device mappings**
-
-|Element|Required|
-|:-|:|
-|TypeName|True|
-|TypeMatchExpression|True|
-|DeviceIdExpression|True|
-|TimestampExpression|True|
-|Values[].ValueName|True|
-|Values[].ValueExpression|True|
-
-> [!NOTE]
-> `Values[].ValueName and Values[].ValueExpression` elements are only required if you have a value entry in the array. It's valid to have no values mapped. This is used when the telemetry being sent is an event.
->
-> For example:
->
-> Some IoMT scenarios may require creating an Observation Resource in the FHIR service that does not contain a value.
-
-## CollectionContentTemplate
-
-The CollectionContentTemplate is the **root** template type used by the MedTech service device mappings template and represents a list of all templates that will be used during the normalization process.
-
-### Example
-
-```json
-{
- "templateType": "CollectionContent",
- "template": [
- {
- "templateType": "CalculatedContent",
- "template": {
- "typeName": "heartrate",
- "typeMatchExpression": "$..[?(@heartRate)]",
- "deviceIdExpression": "$.matchedToken.deviceId",
- "timestampExpression": "$.matchedToken.endDate",
- "values": [
- {
- "required": "true",
- "valueExpression": "$.matchedToken.heartRate",
- "valueName": "hr"
- }
- ]
- }
- },
- {
- "templateType": "CalculatedContent",
- "template": {
- "typeName": "stepcount",
- "typeMatchExpression": "$..[?(@steps)]",
- "deviceIdExpression": "$.matchedToken.deviceId",
- "timestampExpression": "$.matchedToken.endDate",
- "values": [
- {
- "required": "true",
- "valueExpression": "$.matchedToken.steps",
- "valueName": "steps"
- }
- ]
- }
- }
- ]
-}
-```
-
-## Mapping with JSONPath
-
-The device mapping content types supported by the MedTech service rely on JSONPath to both match the required mapping and extracted values. More information on JSONPath can be found [here](https://goessner.net/articles/JsonPath/). All template types use the [JSON .NET implementation](https://www.newtonsoft.com/json/help/html/QueryJsonSelectTokenJsonPath.htm) for resolving JSONPath expressions.
-
-### Example
-
-**Heart rate**
-
-*A device message from the Azure Event Hubs event hub received by the MedTech service*
-
-```json
-{
- "Body": {
- "heartRate": "78",
- "endDate": "2021-02-01T22:46:01.8750000Z",
- "deviceId": "device123"
- },
- "Properties": {},
- "SystemProperties": {}
-}
-```
-
-*A conforming MedTech service device mapping template that could be used during the normalization process with the example device message*
-
-```json
-{
- "templateType": "CollectionContent",
- "template": [
- {
- "templateType": "JsonPathContent",
- "template": {
- "typeName": "heartrate",
- "typeMatchExpression": "$..[?(@heartRate)]",
- "deviceIdExpression": "$.deviceId",
- "timestampExpression": "$.endDate",
- "values": [
- {
- "required": "true",
- "valueExpression": "$.heartRate",
- "valueName": "hr"
- }
- ]
- }
- }
- ]
-}
-```
-JSONPath allows matching on and extracting values from a device message.
-
-|Property|Description|Example|
-|--|--|-|
-|TypeName|The type to associate with measurements that match the template|`heartrate`|
-|TypeMatchExpression|The JSONPath expression that is evaluated against the EventData payload. If a matching JToken is found, the template is considered a match. All later expressions are evaluated against the extracted JToken matched here.|`$..[?(@heartRate)]`|
-|DeviceIdExpression|The JSONPath expression to extract the device identifier.|`$.matchedToken.deviceId`|
-|TimestampExpression|The JSONPath expression to extract the timestamp value for the measurement's OccurrenceTimeUtc.|`$.matchedToken.endDate`|
-|PatientIdExpression|*Required* when IdentityResolution is in **Create** mode and *Optional* when IdentityResolution is in **Lookup** mode. The expression to extract the patient identifier.|`$.matchedToken.patientId`|
-|EncounterIdExpression|*Optional*: The expression to extract the encounter identifier.|`$.matchedToken.encounterId`|
-|CorrelationIdExpression|*Optional*: The expression to extract the correlation identifier. This output can be used to group values into a single observation in the FHIR destination mappings.|`$.matchedToken.correlationId`|
-|Values[].ValueName|The name to associate with the value extracted by the next expression. Used to bind the wanted value/component in the FHIR destination mapping template.|`hr`|
-|Values[].ValueExpression|The JSONPath expression to extract the wanted value.|`$.matchedToken.heartRate`|
-|Values[].Required|Will require the value to be present in the payload. If not found, a measurement won't be generated, and an InvalidOperationException will be created.|`true`|
-
-## Other supported template types
-
-You can define one or more templates within the MedTech service device mapping. Each device message received is evaluated against all device mapping templates.
-
-|Template Type|Description|
-|-|--|
-|[CalculatedContent](how-to-use-calculatedcontent-mappings.md)|A template that supports writing expressions using one of several expression languages. Supports data transformation via the use of JMESPath functions.|
-|[IotJsonPathContentTemplate](how-to-use-iot-jsonpath-content-mappings.md)|A template that supports messages sent from Azure Iot Hub or the Legacy Export Data feature of Azure Iot Central.
-
-> [!TIP]
-> See the MedTech service article [Troubleshoot MedTech service errors](troubleshoot-errors.md) for assistance fixing common MedTech service errors.
-
-## Next steps
-
-In this article, you learned how to configure device mappings.
-
-To learn how to configure FHIR destination mappings, see
-
-> [!div class="nextstepaction"]
-> [How to configure FHIR destination mappings](how-to-configure-fhir-mappings.md)
-
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Overview Of Device Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-device-mapping.md
+
+ Title: Overview of the MedTech service device mapping - Azure Health Data Services
+description: This article provides an overview of the MedTech service device mapping.
++++ Last updated : 04/03/2023+++
+# Overview of the MedTech service device mapping
+
+> [!NOTE]
+> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
+
+This article provides an overview of the MedTech service device mapping.
+
+The MedTech service requires two types of [JSON](https://www.json.org/) mappings that are added to your MedTech service through the Azure portal or Azure Resource Manager API. The device mapping is the first type and controls mapping values in the device message data sent to the MedTech service to an internal, normalized data object. The device mapping contains expressions that the MedTech service uses to extract types, device identifiers, measurement date time, and measurement value(s). The [FHIR destination mapping](how-to-configure-fhir-mappings.md) is the second type and controls the mapping for [FHIR Observations](https://www.hl7.org/fhir/observation.html).
+
+> [!NOTE]
+> The device and FHIR destination mappings are re-evaluated each time a message is processed. Any updates to either mapping will take effect immediately.
+
+## Device mapping basics
+
+The device mapping contains collections of expression templates used to extract device message data into an internal, normalized format for further evaluation. Each device message received is evaluated against **all** expression templates in the collection. This evaluation means that a single device message can be separated into multiple outbound messages that can be mapped to multiple FHIR Observations in the FHIR service.
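As an illustrative sketch of that behavior (the type names and expressions are examples only), a device mapping with two templates could split a single device message that reports both heart rate and step count into two normalized messages, one per matching template:

```json
{
  "templateType": "CollectionContent",
  "template": [
    {
      "templateType": "CalculatedContent",
      "template": {
        "typeName": "heartrate",
        "typeMatchExpression": "$..[?(@heartRate)]",
        "deviceIdExpression": "$.matchedToken.deviceId",
        "timestampExpression": "$.matchedToken.endDate",
        "values": [
          { "required": true, "valueExpression": "$.matchedToken.heartRate", "valueName": "hr" }
        ]
      }
    },
    {
      "templateType": "CalculatedContent",
      "template": {
        "typeName": "stepcount",
        "typeMatchExpression": "$..[?(@steps)]",
        "deviceIdExpression": "$.matchedToken.deviceId",
        "timestampExpression": "$.matchedToken.endDate",
        "values": [
          { "required": true, "valueExpression": "$.matchedToken.steps", "valueName": "steps" }
        ]
      }
    }
  ]
}
```

A device message that contains only one of these values matches only one template and produces a single normalized message.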
+
+> [!TIP]
+> For more information about how the MedTech service processes device message data into FHIR Observations for persistence on the FHIR service, see [Understand the MedTech service device message processing stages](understand-service.md).
+
+This diagram provides an illustration of what happens during the normalization stage within the MedTech service.
++
+## Device mapping validations
+
+The MedTech service validates the device mapping before allowing it to be saved for use. These elements are required in the device mapping templates.
+
+**Device mapping**
+
+|Element |Required in CalculatedContent|Required in IotJsonPathContent|
+|:--|:-|:--|
+|typeName |True |True |
+|typeMatchExpression |True |True |
+|deviceIdExpression |True |False and ignored completely. |
+|timestampExpression |True |False and ignored completely. |
+|patientIdExpression |True when the MedTech services's **Resolution type** is set to **Create**; False when the MedTech service's **Resolution type** is set to **Lookup**.|True when the MedTech service's **Resolution type** is set to **Create**; False when the MedTech service's **Resolution type** is set to **Lookup**.|
+|encounterIdExpression |False |False |
+|correlationIdExpression |False |False |
+|values[].valueName |True |True |
+|values[].valueExpression|True |True |
+|values[].required |True |True |
+
+> [!NOTE]
+> The `values[].valueName`, `values[].valueExpression`, and `values[].required` elements are only required if you have a value entry in the array. It's valid to have no values mapped. These elements are used when the telemetry being sent is an event.
+>
+> For example, some scenarios may require creating a FHIR Observation in the FHIR service that does not contain a value.
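To show where the optional identifier elements from the table sit, here's a hedged fragment of a single CalculatedContent template entry (it would go inside the CollectionContent `template` array); the expressions are illustrative and assume the MedTech service's **Resolution type** is set to **Create**:

```json
{
  "templateType": "CalculatedContent",
  "template": {
    "typeName": "heartrate",
    "typeMatchExpression": "$..[?(@heartRate)]",
    "deviceIdExpression": "$.matchedToken.deviceId",
    "timestampExpression": "$.matchedToken.endDate",
    "patientIdExpression": "$.matchedToken.patientId",
    "encounterIdExpression": "$.matchedToken.encounterId",
    "correlationIdExpression": "$.matchedToken.correlationId",
    "values": [
      {
        "required": true,
        "valueExpression": "$.matchedToken.heartRate",
        "valueName": "hr"
      }
    ]
  }
}
```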
+
+## CollectionContent
+
+CollectionContent is the root template type used by the MedTech service device mapping. CollectionContent is a list of all templates that are used during the normalization stage. You can define one or more templates within CollectionContent, with each device message received by the MedTech service being evaluated against all templates.
+
+You can use these template types within CollectionContent depending on your use case:
+
+- [CalculatedContent](how-to-use-calculatedcontent-mappings.md) for device messages sent directly to your MedTech service event hub. CalculatedContent supports [JSONPath](https://goessner.net/articles/JsonPath/), [JMESPath](https://jmespath.org/), [JMESPath functions](https://jmespath.org/specification.html#built-in-functions), and the MedTech service [custom functions](how-to-use-custom-functions.md).
+
+and/or
+
+- [IotJsonPathContent](how-to-use-iotjsonpathcontenttemplate-mappings.md) for device messages being routed through [Azure IoT Hub](/azure/iot-hub/iot-concepts-and-iot-hub) to your MedTech service event hub. IotJsonPathContent supports [JSONPath](https://goessner.net/articles/JsonPath/).
++
+### Example
+
+> [!TIP]
+> You can use the MedTech service [Mapping debugger](how-to-use-mapping-debugger.md) for assistance creating, updating, and troubleshooting the MedTech service device and FHIR destination mappings. The Mapping debugger enables you to easily view and make inline adjustments in real-time, without ever having to leave the Azure portal. The Mapping debugger can also be used for uploading test device messages to see how they'll look after being processed into normalized messages and transformed into FHIR Observations.
+
+In this example, we're using a device message that is capturing `heartRate` data:
+
+```json
+{
+ "heartRate": "78",
+ "endDate": "2023-03-13T22:46:01.8750000",
+ "deviceId": "device01"
+}
+```
+
+We're using this device mapping for the normalization stage:
+
+```json
+{
+ "templateType": "CollectionContent",
+ "template": [
+ {
+ "templateType": "CalculatedContent",
+ "template": {
+ "typeName": "heartrate",
+ "typeMatchExpression": "$..[?(@heartRate)]",
+ "deviceIdExpression": "$.matchedToken.deviceId",
+ "timestampExpression": "$.matchedToken.endDate",
+ "values": [
+ {
+ "required": true,
+ "valueExpression": "$.matchedToken.heartRate",
+ "valueName": "hr"
+ }
+ ]
+ }
+ }
+ ]
+}
+```
+
+The resulting normalized message will look like this after the normalization stage:
+
+```json
+[
+ {
+ "type": "heartrate",
+ "occurrenceTimeUtc": "2023-03-13T22:46:01.875Z",
+ "deviceId": "device01",
+ "properties": [
+ {
+ "name": "hr",
+ "value": "78"
+ }
+ ]
+ }
+]
+```
+When the MedTech service is processing the device message, the templates in the CollectionContent are used to evaluate the message. The `typeMatchExpression` is used to determine whether or not the template should be used to create a normalized message from the device message. If the `typeMatchExpression` evaluates to true, then the `deviceIdExpression`, `timestampExpression`, and `valueExpression` values are used to locate and extract the JSON values from the device message and create a normalized message. In this example, all expressions are written in JSONPath; however, it would be equally valid to write them in JMESPath. It's up to the template author to determine which expression language is most appropriate.
+
+> [!TIP]
+> See [Troubleshoot MedTech service deployment errors](troubleshoot-errors-deployment.md) for assistance fixing common MedTech service deployment errors.
+>
+> See [Troubleshoot errors using the MedTech service logs](troubleshoot-errors-logs.md) for assistance fixing MedTech service errors.
+
+## Next steps
+
+In this article, you've been provided an overview of the MedTech service device mapping.
+
+To learn how to use CalculatedContent with the MedTech service device mapping, see
+
+> [!div class="nextstepaction"]
+> [How to use CalculatedContent with the MedTech service device mapping](how-to-use-calculatedcontent-mappings.md)
+
+To learn how to use IotJsonPathContent with the MedTech service device mapping, see
+
+> [!div class="nextstepaction"]
+> [How to use IotJsonPathContent with the MedTech service device mapping](how-to-use-iotjsonpathcontenttemplate-mappings.md)
+
+To learn how to use custom functions with the MedTech service device mapping, see
+
+> [!div class="nextstepaction"]
+> [How to use custom functions with the MedTech service device mapping](how-to-use-custom-functions.md)
+
+To get an overview of the MedTech service FHIR destination mapping, see
+
+> [!div class="nextstepaction"]
+> [Overview of the MedTech service FHIR destination mapping](how-to-configure-fhir-mappings.md)
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
iot-edge Iot Edge For Linux On Windows Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-updates.md
To migrate between EFLOW 1.1LTS to EFLOW 1.4LTS, use the following steps.
Confirm-EflowMigration -updateMsiPath "<path-to-folder>\AzureIoTEdge_LTS_Update_1.4.2.12122_X64.msi" ```
-If for any reason the migration fails, the EFLOW VM will be restored to its original 1.1LTS version.
-If you want to cancel the migration, you can use the following cmdlets `Start-EflowMigration` and then `Restore-EflowPriorToMigration`
+>[!WARNING]
+> If for any reason the migration fails, the EFLOW VM will be restored to its original 1.1LTS version.
+> If you want to cancel the migration or manually restore the EFLOW VM to prior state, you can use the following cmdlets `Start-EflowMigration` and then `Restore-EflowPriorToMigration`.
For more information, check `Start-EflowMigration`, `Confirm-EflowMigration` and `Restore-EflowPriorToMigration` cmdlet documentation by using the `Get-Help <cmdlet> -full` command.
iot-edge Tutorial Develop For Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-develop-for-linux.md
Each module can have multiple *input* and *output* queues declared in their code
The sample C# code that comes with the project template uses the [ModuleClient Class](/dotnet/api/microsoft.azure.devices.client.moduleclient) from the IoT Hub SDK for .NET.
-1. In the Visual Studio Code explorer, open **modules** > **CSharpModule** > **ModuleBackgroundService.cs**.
+1. In the Visual Studio Code explorer, open **modules** > **filtermodule** > **ModuleBackgroundService.cs**.
-1. At the top of the **CSharpModule** namespace, add three **using** statements for types that are used later:
+1. Before the **filtermodule** namespace, add three **using** statements for types that are used later:
```csharp using System.Collections.Generic; // For KeyValuePair<>
The sample C# code that comes with the project template uses the [ModuleClient C
} ```
-1. Find the **Init** function. This function creates and configures a **ModuleClient** object, which allows the module to connect to the local Azure IoT Edge runtime to send and receive messages. After creating the **ModuleClient**, the code reads the **temperatureThreshold** value from the module twin's desired properties. The code registers a callback to receive messages from an IoT Edge hub via an endpoint called **input1**.
+1. Find the **ExecuteAsync** function. This function creates and configures a **ModuleClient** object, which allows the module to connect to the local Azure IoT Edge runtime to send and receive messages. After creating the **ModuleClient**, the code reads the **temperatureThreshold** value from the module twin's desired properties. The code registers a callback to receive messages from an IoT Edge hub via an endpoint called **input1**.
- Replace the **SetInputMessageHandlerAsync** method with a new one that updates the name of the endpoint and the method that's called when input arrives. Also, add a **SetDesiredPropertyUpdateCallbackAsync** method for updates to the desired properties. To make this change, replace the last line of the **Init** method with the following code:
+ Replace the call to the **ProcessMessageAsync** method with a new one that updates the name of the endpoint and the method that's called when input arrives. Also, add a **SetDesiredPropertyUpdateCallbackAsync** method for updates to the desired properties. To make this change, replace the last line of the **ExecuteAsync** method with the following code:
```csharp // Register a callback for messages that are received by the module.
The sample C# code that comes with the project template uses the [ModuleClient C
await ioTHubModuleClient.SetInputMessageHandlerAsync("inputFromSensor", FilterMessages, ioTHubModuleClient); ```
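Messages only arrive at the **inputFromSensor** endpoint if a route in the deployment manifest delivers them there. As a hedged sketch (the sensor module name, output name, and route key below are assumptions based on the simulated temperature sensor commonly used in this tutorial series; match them to your own modules), a route fragment under the `$edgeHub` desired properties might look like this:

```json
{
  "routes": {
    "exampleSensorToFilterRoute": "FROM /messages/modules/SimulatedTemperatureSensor/outputs/temperatureOutput INTO BrokeredEndpoint(\"/modules/filtermodule/inputs/inputFromSensor\")"
  }
}
```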
-1. Add the **onDesiredPropertiesUpdate** method to the **Program** class. This method receives updates on the desired properties from the module twin, and updates the **temperatureThreshold** variable to match. All modules have their own module twin, which lets you configure the code that's running inside a module directly from the cloud.
+1. Add the **onDesiredPropertiesUpdate** method to the **ModuleBackgroundService** class. This method receives updates on the desired properties from the module twin, and updates the **temperatureThreshold** variable to match. All modules have their own module twin, which lets you configure the code that's running inside a module directly from the cloud.
```csharp static Task OnDesiredPropertiesUpdate(TwinCollection desiredProperties, object userContext)
The sample C# code that comes with the project template uses the [ModuleClient C
} ```
-1. Replace the **PipeMessage** method with the **FilterMessages** method. This method is called whenever the module receives a message from the IoT Edge hub. It filters out messages that report temperatures below the temperature threshold set via the module twin. It also adds the **MessageType** property to the message with the value set to **Alert**.
+1. Add the **FilterMessages** method. This method is called whenever the module receives a message from the IoT Edge hub. It filters out messages that report temperatures below the temperature threshold set via the module twin. It also adds the **MessageType** property to the message with the value set to **Alert**.
```csharp static async Task<MessageResponse> FilterMessages(Message message, object userContext)
lab-services Classroom Labs Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-concepts.md
The following conceptual diagram shows how the different Azure Lab Services comp
In Azure Lab Services, a lab plan is an Azure resource and serves as a collection of configurations and settings that apply to all the labs created from it. For example, lab plans specify the networking setup, the list of available VM images and VM sizes, and if [Canvas integration](lab-services-within-canvas-overview.md) can be used for a lab. Learn more about [planning your lab plan settings](./lab-plan-setup-guide.md#plan-your-lab-plan-settings).
-A lab plan can contain zero or more [labs](#lab). Each lab uses the configuration settings from the lab plan. Azure Lab Services uses Azure AD roles to grant permissions for creating labs. Learn more about [Azure Lab Services built-in roles](./administrator-guide.md#rbac-roles).
+You can associate a lab plan with zero or more [labs](#lab). Each lab uses the configuration settings from the lab plan. Azure Lab Services uses Azure RBAC roles to grant permissions for creating labs. Learn more about [Azure Lab Services built-in roles](./administrator-guide.md#rbac-roles).
## Lab
You can further configure the lab behavior by creating [lab schedules](#schedule
When you publish a lab, Azure Lab Services provisions the lab VMs. All lab VMs for a lab share the same configuration and are identical.
-To create labs in Azure Lab Services, your Azure account needs to have the Lab Creator Azure AD role, or you need to be the owner of the corresponding lab plan. Learn more about [Azure Lab Services built-in roles](./administrator-guide.md#rbac-roles).
+To create labs in Azure Lab Services, your Azure account needs to have the Lab Creator Azure RBAC role, or you need to be the owner of the corresponding lab plan. Learn more about [Azure Lab Services built-in roles](./administrator-guide.md#rbac-roles).
You use the Azure Lab Services website (https://labs.azure.com) to create labs for a lab plan. Alternately, you can also [configure Microsoft Teams integration](./how-to-configure-teams-for-lab-plans.md) or [Canvas integration](./how-to-configure-canvas-for-lab-plans.md) with Azure Lab Services to create labs directly in Microsoft Teams or Canvas.
Learn how to [attach or detach an Azure compute gallery](./how-to-attach-detach-
## Template virtual machine
-You can choose to create a customizable lab, which enables you to modify the base image for the [lab VMs](#lab-virtual-machine). For example, to install extra software components or modify operating system settings. In this case, Azure Lab Services creates a lab template VM, which you can connect to and customize.
+You can choose to create a customizable lab, which enables you to modify the base image for the [lab virtual machines](#lab-virtual-machine). In this case, Azure Lab Services creates a lab template VM, which you can connect to and customize. For example, you might install extra software components, such as Visual Studio, or configure the operating system to disable the web server process.
When you [publish the lab](./tutorial-setup-lab.md#publish-lab), Azure Lab Services creates the lab VMs, based on the template VM image. If you modify the template VM at a later stage, when you republish the template VM, all lab VMs are updated to match the new template. When you republish a template VM, Azure Lab Services reimages the lab VMs and removes all changes and data on the VM.
lab-services Classroom Labs Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-scenarios.md
Previously updated : 01/17/2023 Last updated : 04/04/2023 # Use labs for trainings
The following table shows the corresponding mapping of organization roles to Azu
| Student | | Students don't need an Azure AD role. Educators [grant students access](./how-to-configure-student-usage.md) in the lab configuration or students are automatically granted access, for example when using [Teams](./how-to-manage-labs-within-teams.md#manage-lab-user-lists-in-teams) or [Canvas](./how-to-manage-labs-within-canvas.md#manage-lab-user-lists-in-canvas). | | Others | Lab Services Reader | Optionally, provide access to see all lab plans and labs without permission to modify. |
-## Create the lab plan as a lab plan administrator
-
-The first step in using Azure Lab Services is to create a lab plan in the Azure portal. After a lab plan administrator creates the lab plan, the admin adds the Lab Creator role to users who want to create labs, such as educators.
-
-The lab creator can then create labs with virtual machines for students to do exercises for the course they're teaching. For details, see [Create and manage lab plan](how-to-manage-lab-plans.md).
-
-## Create and manage labs
-
-If you have the Lab Creator role for a lab plan, you can create one or more labs in the lab plan. You create and configure a template VM with all the required software for doing exercises in your course. You select a ready-made image from the available images for creating a lab and then optionally customize it by installing the software required for the lab. For details, see [Create and manage labs](how-to-manage-labs.md).
-
-## Set up and publish a template VM
-
-A template VM in a lab is a base virtual machine image from which all usersΓÇÖ VMs are created. Set up the template VM so that it's configured with exactly what you want to provide to the training attendees. You can provide a name and description of the template that the lab users see.
-
-Then, you publish the template to make instances of the template VM available to your lab users. When you publish a template, Azure Lab Services creates VMs in the lab by using the template. The number of VMs created in this process is the same as the maximum number of users allowed into the lab, which you can set in the usage policy of the lab. All virtual machines have the same configuration as the template. For details, see [Set up and publish template virtual machines](how-to-create-manage-template.md).
-
-## Configure usage settings and policies
-
-The lab creator can add or remove users to the lab, get a registration link to invite lab users, set up policies such as setting individual quotas per user, update the number of VMs available in the lab, and more. For details, see [Configure usage settings and policies](how-to-configure-student-usage.md).
-
-When you use Azure Lab Services with [Microsoft Teams](./how-to-manage-labs-within-teams.md) or [Canvas](./how-to-manage-labs-within-canvas.md), Azure Lab Services automatically synchronizes the lab user list with the membership in Teams or Canvas.
-
-## Create and manage schedules
-
-Schedules allow you to configure a lab such that VMs in the lab automatically start and shut down at a specified time. You can define a one-time schedule or a recurring schedule. For details, see [Create and manage schedules for labs](how-to-create-schedules.md).
-
-## Use VMs in the lab
-
-A student or training attendee registers to the lab by using the registration link they received from the lab creator. They can then connect to the VM to do the exercises for the course. For details, see [How to access a lab](how-to-use-lab.md).
- ## Next steps
+- Learn more about [setting up example class types](./class-types.md).
- Get started by following the steps in the tutorial [Set up a lab for classroom training](./tutorial-setup-lab.md).
lab-services Lab Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/lab-services-overview.md
Title: Introduction to Azure Lab Services-
+ Title: What is Azure Lab Services?
description: Learn how Azure Lab Services can make it easy to create, manage, and secure labs with VMs for educators and students.++++ Previously updated : 02/09/2023- Last updated : 04/03/2023 # What is Azure Lab Services?
+Azure Lab Services lets you create labs whose infrastructure is fully managed by Azure. The service handles all the infrastructure management, from spinning up virtual machines (VMs) to handling errors and scaling the infrastructure. For example, configure labs for specific class types, such as data science or general programming, and quickly assign lab users their dedicated lab virtual machine.
+
+To create, manage, and access labs in Azure Lab Services, use the dedicated Azure Lab Services website, or directly integrate labs in [Microsoft Teams](./lab-services-within-teams-overview.md) or the [Canvas Learning Management System (LMS)](./lab-services-within-canvas-overview.md).
+
+Azure Lab Services is designed with three major personas in mind: administrators, educators, and students. Take advantage of Azure Role-Based Access Control (RBAC) to grant the right access to the different personas in your organization. Learn more about these personas and how to [use Azure Lab Services for conducting classes](./classroom-labs-scenarios.md).
+ [!INCLUDE [preview note](./includes/lab-services-new-update-note.md)]
-Azure Lab Services lets you create labs whose infrastructure is managed by Azure. The service itself handles all the infrastructure management, from spinning up virtual machines (VMs) to handling errors and scaling the infrastructure. Azure Lab Services was designed with three major personas in mind: administrators, educators, and students. After an IT administrator creates a lab plan, an educator can quickly set up a lab for the class. Educators specify the number and type of VMs needed, configure the template VM, and add users to the class. Once a user registers to the class, the user can access the VM to do exercises for the class.
+## Lab creation process
-To get started with Azure Lab Services, you need to [create a lab plan Azure resource](./quick-create-resources.md) for your organization first. The lab plan serves as a collection of configurations and settings that apply to the labs created from it.
+The following diagram shows the different steps involved in creating and accessing labs with Azure Lab Services.
:::image type="content" source="./media/lab-services-overview/lab-services-process-overview.png" alt-text="Diagram that shows the steps involved in creating a lab with Azure Lab Services.":::
-To learn more about lab plans, labs, or other concepts, see the [key concepts for Azure Lab Services](./classroom-labs-concepts.md).
+To get started with Azure Lab Services, you [*create a lab plan*](./quick-create-resources.md). A lab plan is an Azure resource that serves as the collection of configuration settings that apply to all labs associated with the lab plan. Optionally, you can *assign lab creator* permissions through Azure RBAC to allow others to create labs.
+
+Next, [*create a lab*](./quick-create-connect-lab.md) for conducting a specific class or running a hackathon, based on Azure Marketplace images or your own custom virtual machine images. You can further *configure the lab* settings with lab schedules, usage quota, or automatic startup and shutdown.
+
+Optionally, *customize the [lab template](./classroom-labs-concepts.md#template-virtual-machine)* to match the specific needs of the class. For example, install extra software such as Visual Studio Code, or enable specific operating system services.
-The service creates and manages resources in a subscription managed by Microsoft. Resources aren't created in your own Azure subscription. The [advanced networking](how-to-connect-vnet-injection.md) option is an exception as there are a few resources saved in your subscription. Virtual machines are always hosted in the Microsoft managed subscription. The service keeps track of usage of these resources in internal Microsoft subscriptions. This usage is [billed back to your Azure subscription](cost-management-guide.md) that contains the lab plan.
+After you *publish the lab*, you can add lab virtual machines, and *assign lab users* to the lab. After they *register* for the lab, lab users can then *remotely connect* to their individual lab virtual machine to perform their exercises. If you use Azure Lab Services with Microsoft Teams or Canvas, lab users are automatically registered for their lab.
+
+To learn about lab plans, labs, and more, see the [key concepts for Azure Lab Services](./classroom-labs-concepts.md).
## Key capabilities Azure Lab Services supports the following key capabilities and features: -- **Fast and flexible setup of a lab**. Lab owners can quickly [set up a lab](./quick-create-connect-lab.md) for their needs. Azure Lab Services takes care of all Azure infrastructure including built-in scaling and resiliency of infrastructure for labs.
+- **Automatic management of Azure infrastructure and scale**. Azure Lab Services is a fully managed service, which automatically handles the provisioning and management of a lab's underlying infrastructure. Focus on preparing the lab experience for the lab users, and quickly scale the lab across hundreds of lab virtual machines.
+
+- **Fast and flexible setup of a lab**. Quickly [set up a lab](./quick-create-connect-lab.md) by using an Azure Marketplace image or by applying a custom image from an Azure compute gallery. Choose between Windows or Linux operating systems, and select the compute family that best matches the needs for your lab. Flexibly configure the lab by installing additional software components or making operating system changes.
+
+- **Simplified experience for lab users**. Lab users can easily [register for a lab](how-to-use-lab.md), and get immediate access without the need for an Azure subscription. Use the Azure Lab Services website, or use the [Microsoft Teams](./lab-services-within-teams-overview.md) or [Canvas LMS](./lab-services-within-canvas-overview.md) integration, to view the list of labs and remotely connect to a lab virtual machine.
+
+- **Separate responsibilities with role-based access**. Azure Lab Services uses Azure role-based access control (Azure RBAC) to manage access. Using Azure RBAC lets you clearly separate roles and responsibilities for creating and managing labs across different teams and people in your organization.
-- **Simplified experience for lab users**. Students who are invited to a lab get immediate access to the resources you give them inside your labs. They just need to sign in to see the full list of virtual machines for all labs that they can access. They can select a single button to connect to the virtual machines and start working. Users don't need Azure subscriptions to use the service. [Lab users can register](how-to-use-lab.md) to a lab with a registration code and can access the lab anytime to use the lab's resources.
+- **Advanced virtual networking support**. [Configure advanced networking](./tutorial-create-lab-with-advanced-networking.md) for your labs to apply network traffic control, network ports management, or access resources in a virtual or internal network. For example, your labs might have to connect to an on-premises licensing server.
-- **Cost optimization and analysis**. [Keep your budget in check](cost-management-guide.md) by controlling exactly how many hours your lab users can use the virtual machines. Set up [schedules](how-to-create-schedules.md) in the lab to allow users to use the virtual machines only during designated time slots. Set up [auto-shutdown policies](how-to-configure-auto-shutdown-lab-plans.md) to avoid unneeded VM usage. Keep track of [individual users' usage](how-to-manage-classroom-labs.md) and [set limits](how-to-configure-student-usage.md#set-quotas-for-users).
+- **Cost optimization and analysis**. Azure Lab Services uses a consumption-based [cost model](cost-management-guide.md) and you pay only for lab virtual machines when they're running. Further optimize your costs for running labs by [automatically shutting down lab virtual machines](./how-to-configure-auto-shutdown-lab-plans.md), and by configuring [schedules](./how-to-create-schedules.md) and [usage quotas](./how-to-configure-student-usage.md#set-quotas-for-users) to limit the number of hours the labs can be used.
+
+## Use cases
-- **Automatic management of Azure infrastructure and scale** Azure Lab Services is a managed service, which means that provisioning and management of a lab's underlying infrastructure is handled automatically by the service. You can just focus on preparing the right lab experience for your users. Let the service handle the rest and roll out your lab's virtual machines to your audience. Scale your lab to hundreds of virtual machines with a single action.
+You can use the Azure Lab Services managed labs in different scenarios:
-Here are some of the **use cases for managed labs**:
+- Provide preconfigured virtual machines to attendees of a [classroom or virtual training](./classroom-labs-scenarios.md) for completing homework or exercises. Limit the number of hours that lab users have access to their virtual machine. See the [example class types on Azure Lab Services](class-types.md) article for examples of classes for which you can set up labs with Azure Lab Services.
-- Provide students with a lab of virtual machines configured with exactly what's needed for a class. Give each student a limited number of hours for using the VMs for homework or personal projects.-- Set up a pool of high-performance compute VMs to perform compute-intensive or graphics-intensive research. Run the VMs as needed, and clean up the machines once you're done.-- Move your school's physical computer lab into the cloud. Automatically scale the number of VMs only to the maximum usage and cost threshold that you set on the lab. -- Quickly create a lab of virtual machines for hosting a hackathon. Delete the lab with a single action once you're done.
+- Set up a pool of high-performance compute virtual machines to perform compute-intensive or graphics-intensive research or training. For example, to train machine learning models, or to teach about data science or game design. Run the virtual machines only when you need them, and clean up the machines once you're done.
-## Example class types
+- Move your school's physical computer lab into the cloud. Automatically scale the number of virtual machines only to the maximum usage and cost threshold that you set on the lab.
-You can set up labs for several types of classes with Azure Lab Services. See the [Example class types on Azure Lab Services](class-types.md) article for a few example types of classes for which you can set up labs with Azure Lab Services.
+- Quickly create a lab of virtual machines for [hosting a hackathon](./hackathon-labs.md). Delete the lab with a single action once you're done.
-## Region availability
+- Teach advanced courses using nested virtualization or lab-to-lab communication.
-Visit the [Azure Global Infrastructure products by region](https://azure.microsoft.com/global-infrastructure/services/?products=lab-services) page to learn where Azure Lab Services is available.
+## Privacy and compliance
-[Azure Lab Services August 2022 Update](lab-services-whats-new.md)) doesn't move or store customer data outside the region it's deployed in. However, accessing Azure Lab Services resources through the Azure Lab Services portal may cause customer data to cross regions.
+### Data residency
-There are no guarantees customer data stays in the region it's deployed to when using Azure Lab Services prior to the August 2022 Update.
+[Azure Lab Services August 2022 Update](lab-services-whats-new.md) doesn't move or store customer data outside the region it's deployed in. However, if you access Azure Lab Services resources through the Azure Lab Services website (https://labs.azure.com), customer data might cross regions.
-## Data at rest
+There are no guarantees that customer data stays in the region it's deployed to when using Azure Lab Services prior to the August 2022 Update.
+### Data at rest
Azure Lab Services encrypts all content using a Microsoft-managed encryption key. ## Next steps
lab-services Tutorial Setup Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-setup-lab.md
Follow these steps to add a lab to the lab plan you created earlier:
:::image type="content" source="./media/tutorial-setup-lab/lab-template.png" alt-text="Screenshot of Template page for a lab.":::
+## Add a lab schedule
+
+Instead of each lab user starting their lab VM manually, you can optionally create a lab schedule to automatically start and stop the lab VM according to your training calendar. Azure Lab Services supports one-time events or recurring schedules.
+
+Alternately, you can also use [quota](./classroom-labs-concepts.md#quota) to manage the number of hours that lab users can run their lab virtual machine.
+
+Follow these steps to add a recurring schedule to your lab:
+
+1. On the **Schedule** page for the lab, select **Add scheduled event** on the toolbar.
+
+ :::image type="content" source="./media/tutorial-setup-lab/add-schedule-button.png" alt-text="Screenshot of the Add scheduled event button on the Schedule page, highlighting the Schedule menu and Add scheduled event button.":::
+
+1. On the **Add scheduled event** page, enter the following information:
+
+ | Field | Value |
+ | -- | -- |
+ | **Event type** | *Standard* |
+ | **Start date** | Enter a start date for the classroom training. |
+ | **Start time** | Enter a start time for the classroom training. |
+ | **Stop time** | Enter an end time for the classroom training. |
+ | **Time zone** | Select your time zone. |
+ | **Repeat** | Keep the default value, which is a weekly recurrence for four months. |
+ | **Notes** | Optionally enter a description for the schedule. |
+
+1. Select **Save** to confirm the lab schedule.
+
+ :::image type="content" source="./media/tutorial-setup-lab/add-schedule-page-weekly.png" alt-text="Screenshot of the Add scheduled event window.":::
+
+1. In the calendar view, confirm that the scheduled event is present.
+
+ :::image type="content" source="./media/tutorial-setup-lab/schedule-calendar.png" alt-text="Screenshot of the Schedule page for Azure Lab Services. Repeating schedule, Monday through Friday shown in the calendar.":::
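If you manage labs programmatically, you can create the same recurring schedule through the Azure Lab Services REST API instead of the portal. The following Python sketch is illustrative only: it assumes the Microsoft.LabServices `2022-08-01` API version and schedule resource shape, and the subscription, resource group, lab, and schedule names are placeholders.

```python
# Illustrative sketch: create a weekly lab schedule via the Azure Lab Services REST API.
# Assumes API version 2022-08-01 and placeholder resource names; adjust to your environment.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"      # placeholder
resource_group = "<resource-group>"        # placeholder
lab_name = "<lab-name>"                    # placeholder
schedule_name = "weekly-class"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.LabServices"
    f"/labs/{lab_name}/schedules/{schedule_name}?api-version=2022-08-01"
)

schedule = {
    "properties": {
        "startAt": "2023-05-01T09:00:00",           # class start time
        "stopAt": "2023-05-01T11:00:00",            # class stop time
        "timeZoneId": "Pacific Standard Time",
        "recurrencePattern": {                       # weekly recurrence with an end date
            "frequency": "Weekly",
            "weekDays": ["Monday", "Wednesday"],
            "interval": 1,
            "expirationDate": "2023-09-01",
        },
        "notes": "Automatically start and stop lab VMs for the weekly class.",
    }
}

response = requests.put(url, json=schedule, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print(response.json())
```

Schedules and quota complement each other: a schedule controls when lab VMs start and stop for the whole class, while quota limits the number of hours each lab user can run their VM on their own.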
+ ## Customize the lab template The lab template serves as the basis for the lab VMs. To make sure that lab users have the right configuration and software components, you can customize the lab template.
You've now customized the lab template for the course. Every VM in the lab will
## Publish lab
-Before Azure Lab Services can create lab VMs for your lab, you first need to publish the lab. When you publish the lab, you need to specify the maximum number of lab VMs that Azure Lab Services creates. All VMs in the lab share the same configuration as the lab template.
+All VMs in the lab share the same configuration as the lab template. Before Azure Lab Services can create lab VMs for your lab, you first need to publish the lab. When you publish the lab, you can specify the maximum number of lab VMs that Azure Lab Services creates. You can also modify the number of lab virtual machines at a later stage.
To publish the lab and create the lab VMs:
:::image type="content" source="./media/tutorial-setup-lab/virtual-machines-stopped.png" alt-text="Screenshot that shows the list of virtual machines for the lab. The lab VMs show as unassigned and stopped.":::
-## Add a lab schedule
-
-Instead of each lab user starting their lab VM manually, you can create a lab schedule to automatically start and stop the lab VM according to your training calendar. Azure Lab Services supports one-time events or recurring schedules.
-
-Follow these steps to add a recurring schedule to your lab:
-
-1. On the **Schedule** page for the lab, select **Add scheduled event** on the toolbar.
-
- :::image type="content" source="./media/tutorial-setup-lab/add-schedule-button.png" alt-text="Screenshot of the Add scheduled event button on the Schedule page, highlighting the Schedule menu and Add scheduled event button.":::
-
-1. On the **Add scheduled event** page, enter the following information:
-
- | Field | Value |
- | -- | -- |
- | **Event type** | *Standard* |
- | **Start date** | Enter a start date for the classroom training. |
- | **Start time** | Enter a start time for the classroom training. |
- | **Stop time** | Enter an end time for the classroom training. |
- | **Time zone** | Select your time zone. |
- | **Repeat** | Keep the default value, which is a weekly recurrence for four months. |
- | **Notes** | Optionally enter a description for the schedule. |
-
-1. Select **Save** to confirm the lab schedule.
-
- :::image type="content" source="./media/tutorial-setup-lab/add-schedule-page-weekly.png" alt-text="Screenshot of the Add scheduled event window.":::
-
-1. In the calendar view, confirm that the scheduled event is present.
-
- :::image type="content" source="./media/tutorial-setup-lab/schedule-calendar.png" alt-text="Screenshot of the Schedule page for Azure Lab Services. Repeating schedule, Monday through Friday shown in the calendar.":::
+> [!CAUTION]
+> When you republish a lab, Azure Lab Services recreates all existing lab virtual machines and removes all data from the virtual machines.
## Invite users
After you add users to the lab, they can register for the lab by using a registr
You've successfully created a customized lab for a classroom training, created a recurring lab schedule, and invited users to register for the lab. Next, lab users can now connect to their lab virtual machine by using remote desktop.
-In this tutorial, you have the Lab Creator Azure AD role to let you create labs for a lab plan. Depending on your organziation, the responsibilities for creating lab plans and labs might be assigned to different people or teams. Learn more about [mapping permissions across your organization](./classroom-labs-scenarios.md#mapping-organizational-roles-to-permissions).
+In this tutorial, you have the Lab Creator Azure AD role to let you create labs for a lab plan. Depending on your organization, the responsibilities for creating lab plans and labs might be assigned to different people or teams. Learn more about [mapping permissions across your organization](./classroom-labs-scenarios.md#mapping-organizational-roles-to-permissions).
> [!div class="nextstepaction"] > [Connect to a lab virtual machine](./tutorial-connect-lab-virtual-machine.md)
machine-learning Azure Machine Learning Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-glossary.md
Last updated 09/21/2022
+monikerRange: 'azureml-api-2'
# Azure Machine Learning glossary
machine-learning Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/classification.md
AutoML creates a number of pipelines in parallel that try different algorithms a
1. For **classification**, you can also enable deep learning.
-If deep learning is enabled, validation is limited to _train_validation split_. [Learn more about validation options](../how-to-configure-cross-validation-data-splits.md).
+If deep learning is enabled, validation is limited to _train_validation split_. [Learn more about validation options](../v1/how-to-configure-cross-validation-data-splits.md).
1. (Optional) View additional configuration settings: settings you can use to better control the training job. Otherwise, defaults are applied based on experiment selection and data.
If deep learning is enabled, validation is limited to _train_validation split_.
Explain best model | Select to enable or disable, in order to show explanations for the recommended best model. <br> This functionality is not currently available for [certain forecasting algorithms](../v1/how-to-machine-learning-interpretability-automl.md#interpretability-during-training-for-the-best-model).
Blocked algorithm| Select algorithms you want to exclude from the training job. <br><br> Allowing algorithms is only available for [SDK experiments](../how-to-configure-auto-train.md#supported-algorithms). <br> See the [supported algorithms for each task type](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels).
Exit criterion| When any of these criteria are met, the training job is stopped. <br> *Training job time (hours)*: How long to allow the training job to run. <br> *Metric score threshold*: Minimum metric score for all pipelines. This ensures that if you have a defined target metric you want to reach, you do not spend more time on the training job than necessary.
- Concurrency| *Max concurrent iterations*: Maximum number of pipelines (iterations) to test in the training job. The job will not run more than the specified number of iterations. Learn more about how automated ML performs [multiple child jobs on clusters](/how-to-configure-auto-train.md#multiple-child-runs-on-clusters).
+ Concurrency| *Max concurrent iterations*: Maximum number of pipelines (iterations) to test in the training job. The job will not run more than the specified number of iterations. Learn more about how automated ML performs [multiple child jobs on clusters](../how-to-configure-auto-train.md#multiple-child-runs-on-clusters).
1. The **[Optional] Validate and test** form allows you to do the following.
- 1. Specify the type of validation to be used for your training job. [Learn more about cross validation](../how-to-configure-cross-validation-data-splits.md#prerequisites).
+ 1. Specify the type of validation to be used for your training job. [Learn more about cross validation](../v1/how-to-configure-cross-validation-data-splits.md#prerequisites).
1. Provide a test dataset (preview) to evaluate the recommended model that automated ML generates for you at the end of your experiment. When you provide test data, a test job is automatically triggered at the end of your experiment. This test job runs only on the best model that was recommended by automated ML.
machine-learning Regression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/regression.md
AutoML creates a number of pipelines in parallel that try different algorithms a
1. The **[Optional] Validate and test** form allows you to do the following.
- 1. Specify the type of validation to be used for your training job. [Learn more about cross validation](../how-to-configure-cross-validation-data-splits.md#prerequisites).
+ 1. Specify the type of validation to be used for your training job. [Learn more about cross validation](../v1/how-to-configure-cross-validation-data-splits.md#prerequisites).
1. Provide a test dataset (preview) to evaluate the recommended model that automated ML generates for you at the end of your experiment. When you provide test data, a test job is automatically triggered at the end of your experiment. This test job runs only on the best model that was recommended by automated ML.
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automated-ml.md
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] > [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning Python SDK you are using:"]
-> * [v1](./v1/concept-automated-ml-v1.md)
+> * [v1](./v1/concept-automated-ml-v1.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](concept-automated-ml.md) Automated machine learning, also referred to as automated ML or AutoML, is the process of automating the time-consuming, iterative tasks of machine learning model development. It allows data scientists, analysts, and developers to build ML models with high scale, efficiency, and productivity all while sustaining model quality. Automated ML in Azure Machine Learning is based on a breakthrough from our [Microsoft Research division](https://www.microsoft.com/research/project/automl/).
The following diagram illustrates this process.
You can also inspect the logged job information, which [contains metrics](how-to-understand-automated-ml.md) gathered during the job. The training job produces a Python serialized object (`.pkl` file) that contains the model and data preprocessing.
-While model building is automated, you can also [learn how important or relevant features are](./v1/how-to-configure-auto-train-v1.md#explain) to the generated models.
+While model building is automated, you can also [learn how important or relevant features are](./v1/how-to-configure-auto-train-v1.md?view=azureml-api-1&preserve-view=true#explain) to the generated models.
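As an illustration of consuming that `.pkl` output, the following sketch loads a model file that you've already downloaded locally and scores new data. The file names and columns are placeholders, and unpickling generally requires an environment with the same azureml packages used during training.

```python
# Illustrative sketch: load a downloaded AutoML model (.pkl) and score new data.
# "model.pkl" and "new_data.csv" are placeholders for files you have downloaded/prepared.
# Unpickling may require the same azureml package versions that were used for training.
import joblib
import pandas as pd

model = joblib.load("model.pkl")        # fitted pipeline: featurization + trained model
new_data = pd.read_csv("new_data.csv")  # must contain the same feature columns used in training
predictions = model.predict(new_data)
print(predictions[:10])
```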
## When to use AutoML: classification, regression, forecasting, computer vision & NLP
Learn how to [configure AutoML experiments to use test data (preview) with the S
Feature engineering is the process of using domain knowledge of the data to create features that help ML algorithms learn better. In Azure Machine Learning, scaling and normalization techniques are applied to facilitate feature engineering. Collectively, these techniques and feature engineering are referred to as featurization.
-For automated machine learning experiments, featurization is applied automatically, but can also be customized based on your data. [Learn more about what featurization is included](how-to-configure-auto-features.md#featurization) and how AutoML helps [prevent over-fitting and imbalanced data](concept-manage-ml-pitfalls.md) in your models.
+For automated machine learning experiments, featurization is applied automatically, but can also be customized based on your data. [Learn more about what featurization is included (SDK v1)](./v1/how-to-configure-auto-features.md?view=azureml-api-1&preserve-view=true#featurization) and how AutoML helps [prevent over-fitting and imbalanced data](concept-manage-ml-pitfalls.md) in your models.
> [!NOTE] > Automated machine learning featurization steps (feature normalization, handling missing data,
How-to articles provide additional detail into what functionality automated ML o
+ Learn how to [train computer vision models with Python](how-to-auto-train-image-models.md).
-+ Learn how to [view the generated code from your automated ML models](how-to-generate-automl-training-code.md).
++ Learn how to [view the generated code from your automated ML models (SDK v1)](./v1/how-to-generate-automl-training-code.md?view=azureml-api-1&preserve-view=true). ### Jupyter notebook samples
machine-learning Concept Automl Forecasting Sweeping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-sweeping.md
AutoML follows the usual cross-validation procedure, training a separate model o
Cross-validation for forecasting jobs is configured by setting the number of cross-validation folds and, optionally, the number of time periods between two consecutive cross-validation folds. See the [custom cross-validation settings](./how-to-auto-train-forecast.md#custom-cross-validation-settings) guide for more information and an example of configuring cross-validation for forecasting.
-You can also bring your own validation data. Learn more in the [configure data splits and cross-validation in AutoML](how-to-configure-cross-validation-data-splits.md#provide-validation-data) article.
+You can also bring your own validation data. Learn more in the [configure data splits and cross-validation in AutoML (SDK v1)](./v1/how-to-configure-cross-validation-data-splits.md#provide-validation-data) article.
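As a rough illustration of those two settings in the Python SDK v2, the following sketch configures the number of cross-validation folds on the forecasting job and the step size between folds in the forecast settings. The data input, column names, compute, and metric are placeholders; see the custom cross-validation settings guide linked above for the authoritative syntax.

```python
# Rough sketch (SDK v2): configure cross-validation folds for a forecasting job.
# Inputs, column names, compute, and metric are placeholders.
from azure.ai.ml import automl, Input
from azure.ai.ml.constants import AssetTypes

training_data = Input(type=AssetTypes.MLTABLE, path="./data/training-mltable-folder")

forecasting_job = automl.forecasting(
    compute="cpu-cluster",
    experiment_name="forecast-cv-example",
    training_data=training_data,
    target_column_name="demand",
    primary_metric="normalized_root_mean_squared_error",
    n_cross_validations=5,              # number of cross-validation folds
)

forecasting_job.set_forecast_settings(
    time_column_name="timestamp",
    forecast_horizon=24,
    frequency="H",
    cv_step_size=24,                    # periods between two consecutive CV folds
)
```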
## Next steps * Learn more about [how to set up AutoML to train a time-series forecasting model](./how-to-auto-train-forecast.md).
machine-learning Concept Azure Machine Learning V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-azure-machine-learning-v2.md
Last updated 11/04/2022
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-This article applies to the second version of the [Azure Machine Learning CLI & Python SDK (v2)](concept-v2.md). For version one (v1), see [How Azure Machine Learning works: Architecture and concepts (v1)](v1/concept-azure-machine-learning-architecture.md)
+This article applies to the second version of the [Azure Machine Learning CLI & Python SDK (v2)](concept-v2.md). For version one (v1), see [How Azure Machine Learning works: Architecture and concepts (v1)](v1/concept-azure-machine-learning-architecture.md?view=azureml-api-1&preserve-view=true)
Azure Machine Learning includes several resources and assets to enable you to perform your machine learning tasks. These resources and assets are needed to run any job.
machine-learning Concept Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-instance.md
Last updated 10/19/2022
+monikerRange: 'azureml-api-2 || azureml-api-1'
#Customer intent: As a data scientist, I want to know what a compute instance is and how to use it for Azure Machine Learning.
Python packages are all installed in the **Python 3.8 - AzureML** environment. C
## Accessing files
-Notebooks and Python scripts are stored in the default storage account of your workspace in Azure file share. These files are located under your ΓÇ£User filesΓÇ¥ directory. This storage makes it easy to share notebooks between compute instances. The storage account also keeps your notebooks safely preserved when you stop or delete a compute instance.
+Notebooks and Python scripts are stored in the default storage account of your workspace in Azure file share. These files are located under your "User files" directory. This storage makes it easy to share notebooks between compute instances. The storage account also keeps your notebooks safely preserved when you stop or delete a compute instance.
The Azure file share account of your workspace is mounted as a drive on the compute instance. This drive is the default working directory for Jupyter, Jupyter Labs, RStudio, and Posit Workbench. This means that the notebooks and other files you create in Jupyter, JupyterLab, RStudio, or Posit are automatically stored on the file share and available to use in other compute instances as well.
You can also clone the latest Azure Machine Learning samples to your folder unde
Writing small files can be slower on network drives than writing to the compute instance local disk itself. If you're writing many small files, try using a directory directly on the compute instance, such as a `/tmp` directory. Note these files won't be accessible from other compute instances.
-Don't store training data on the notebooks file share. You can use the `/tmp` directory on the compute instance for your temporary data. However, don't write large files of data on the OS disk of the compute instance. OS disk on compute instance has 128-GB capacity. You can also store temporary training data on temporary disk mounted on /mnt. Temporary disk size is based on the VM size chosen and can store larger amounts of data if a higher size VM is chosen. You can also mount [datastores and datasets](v1/concept-azure-machine-learning-architecture.md#datasets-and-datastores). Any software packages you install are saved on the OS disk of compute instance. Note customer managed key encryption is currently not supported for OS disk. The OS disk for compute instance is encrypted with Microsoft-managed keys.
+Don't store training data on the notebooks file share. You can use the `/tmp` directory on the compute instance for your temporary data. However, don't write large files of data on the OS disk of the compute instance. OS disk on compute instance has 128-GB capacity. You can also store temporary training data on temporary disk mounted on /mnt. Temporary disk size is based on the VM size chosen and can store larger amounts of data if a higher size VM is chosen. Any software packages you install are saved on the OS disk of compute instance. Note customer managed key encryption is currently not supported for OS disk. The OS disk for compute instance is encrypted with Microsoft-managed keys.
+You can also mount [datastores and datasets](v1/concept-azure-machine-learning-architecture.md?view=azureml-api-1&preserve-view=true#datasets-and-datastores).
## Create Follow the steps in the [Quickstart: Create workspace resources you need to get started with Azure Machine Learning](quickstart-create-resources.md) to create a basic compute instance.
machine-learning Concept Compute Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-target.md
Last updated 10/19/2022
+monikerRange: 'azureml-api-2 || azureml-api-1'
#Customer intent: As a data scientist, I want to understand what a compute target is and why I need it.
When performing inference, Azure Machine Learning creates a Docker container tha
[!INCLUDE [aml-deploy-target](../../includes/aml-compute-target-deploy.md)] Learn [where and how to deploy your model to a compute target](how-to-deploy-online-endpoints.md).
+Learn [where and how to deploy your model to a compute target](./v1/how-to-deploy-and-where.md).
## Azure Machine Learning compute (managed)
While Azure Machine Learning supports these VM series, they might not be availab
> [!NOTE] > Azure Machine Learning doesn't support all VM sizes that Azure Compute supports. To list the available VM sizes, use one of the following methods: > * [REST API](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/machinelearningservices/resource-manager/Microsoft.MachineLearningServices/stable/2020-08-01/examples/ListVMSizesResult.json) > * The [Azure CLI extension 2.0 for machine learning](how-to-configure-cli.md) command, [az ml compute list-sizes](/cli/azure/ml/compute#az-ml-compute-list-sizes). If using the GPU-enabled compute targets, it is important to ensure that the correct CUDA drivers are installed in the training environment. Use the following table to determine the correct CUDA version to use:
Azure Machine Learning supports the following unmanaged compute types:
* Azure HDInsight * Azure Databricks * Azure Data Lake Analytics * [Azure Synapse Spark pool](v1/how-to-link-synapse-ml-workspaces.md) (preview) > [!TIP] > Currently this requires the Azure Machine Learning SDK v1. * [Kubernetes](how-to-attach-kubernetes-anywhere.md)
+* [Azure Kubernetes Service](./v1/how-to-create-attach-kubernetes.md)
For more information, see [Manage compute resources](how-to-create-attach-compute-studio.md). ## Next steps Learn how to: * [Deploy your model to a compute target](how-to-deploy-online-endpoints.md)
+* [Deploy your model](./v1/how-to-deploy-and-where.md)
machine-learning Concept Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data-encryption.md
Last updated 03/07/2023
+monikerRange: 'azureml-api-2 || azureml-api-1'
# Data encryption with Azure Machine Learning
For an example of creating a workspace using an existing Azure Container Registr
* [Create a workspace with Python SDK](how-to-manage-workspace.md?tabs=python#create-a-workspace). * [Use an Azure Resource Manager template to create a workspace for Azure Machine Learning](how-to-create-workspace-template.md) ### Azure Container Instance > [!IMPORTANT]
For more information on creating and using a deployment configuration, see the f
* [Where and how to deploy](./v1/how-to-deploy-and-where.md) For more information on using a customer-managed key with ACI, see [Encrypt deployment data](../container-instances/container-instances-encrypt-data.md). ### Azure Kubernetes Service
You may also want to encrypt [diagnostic information logged from your deployed e
Azure Machine Learning uses TLS to secure internal communication between various Azure Machine Learning microservices. All Azure Storage access also occurs over a secure channel. To secure external calls made to the scoring endpoint, Azure Machine Learning uses TLS. For more information, see [Use TLS to secure a web service through Azure Machine Learning](./v1/how-to-secure-web-service.md). ## Data collection and handling
Each workspace has an associated system-assigned managed identity that has the s
## Next steps
-* [Connect to Azure storage](how-to-access-data.md)
-* [Get data from a datastore](how-to-create-register-datasets.md)
+* [Use datastores](how-to-datastore.md)
+* [Create data assets](how-to-create-data-assets.md)
+* [Access data in a training job](how-to-read-write-data-v2.md)
+* [Connect to Azure storage](./v1/how-to-access-data.md)
+* [Get data from a datastore](./v1/how-to-create-register-datasets.md)
* [Connect to data](v1/how-to-connect-data-ui.md) * [Train with datasets](v1/how-to-train-with-datasets.md)
-* [Customer-managed keys](concept-customer-managed-keys.md).
+* [Customer-managed keys](concept-customer-managed-keys.md)
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data.md
# Data concepts in Azure Machine Learning > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you use:"]
-> * [v1](./v1/concept-data.md)
+> * [v1](./v1/concept-data.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](concept-data.md) With Azure Machine Learning, you can bring data from a local machine or an existing cloud-based storage. In this article, you'll learn the main Azure Machine Learning data concepts.
machine-learning Concept Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-designer.md
Last updated 08/03/2022
+monikerRange: 'azureml-api-1 || azureml-api-2'
# What is Azure Machine Learning designer?
The designer uses your Azure Machine Learning [workspace](concept-workspace.md)
+ [Pipelines](#pipeline) + [Data](#data) + [Compute resources](#compute)++ [Registered models](concept-model-management-and-deployment.md#register-and-track-machine-learning-models) + [Registered models](v1/concept-azure-machine-learning-architecture.md#models) + [Published pipelines](#publish) + [Real-time endpoints](#deploy)
Use a visual canvas to build an end-to-end machine learning workflow. Train, tes
## Pipeline
-A [pipeline](v1/concept-azure-machine-learning-architecture.md#ml-pipelines) consists of data assets and analytical components, which you connect. Pipelines have many uses: you can make a pipeline that trains a single model, or one that trains multiple models. You can create a pipeline that makes predictions in real time or in batch, or make a pipeline that only cleans data. Pipelines let you reuse your work and organize your projects.
+A [pipeline](concept-ml-pipelines.md) consists of data assets and analytical components, which you connect. Pipelines have many uses: you can make a pipeline that trains a single model, or one that trains multiple models. You can create a pipeline that makes predictions in real time or in batch, or make a pipeline that only cleans data. Pipelines let you reuse your work and organize your projects.
### Pipeline draft
When you're ready to run your pipeline draft, you submit a pipeline job.
Each time you run a pipeline, the configuration of the pipeline and its results are stored in your workspace as a **pipeline job**. You can go back to any pipeline job to inspect it for troubleshooting or auditing. **Clone** a pipeline job to create a new pipeline draft for you to edit.
-Pipeline jobs are grouped into [experiments](v1/concept-azure-machine-learning-architecture.md#experiments) to organize job history. You can set the experiment for every pipeline job.
+Pipeline jobs are grouped into experiments to organize job history. You can set the experiment for every pipeline job.
## Data
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md
The following table highlights the key differences between managed online endpoi
| **Recommended users** | Users who want a managed model deployment and enhanced MLOps experience | Users who prefer Kubernetes and can self-manage infrastructure requirements | | **Node provisioning** | Managed compute provisioning, update, removal | User responsibility | | **Node maintenance** | Managed host OS image updates, and security hardening | User responsibility |
-| **Cluster sizing (scaling)** | [Managed manual and autoscale](how-to-autoscale-endpoints.md), supporting additional nodes provisioning | [Manual and autoscale](v1/how-to-deploy-azure-kubernetes-service.md#autoscaling), supporting scaling the number of replicas within fixed cluster boundaries |
+| **Cluster sizing (scaling)** | [Managed manual and autoscale](how-to-autoscale-endpoints.md), supporting additional nodes provisioning | [Manual and autoscale](how-to-kubernetes-inference-routing-azureml-fe.md#autoscaling), supporting scaling the number of replicas within fixed cluster boundaries |
| **Compute type** | Managed by the service | Customer-managed Kubernetes cluster (Kubernetes) | | **Managed identity** | [Supported](how-to-access-resources-from-endpoints-managed-identities.md) | Supported | | **Virtual Network (VNET)** | [Supported via managed network isolation](how-to-secure-online-endpoint.md) | User responsibility |
machine-learning Concept Manage Ml Pitfalls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-manage-ml-pitfalls.md
Automated ML also implements explicit **model complexity limitations** to preven
**Cross-validation (CV)** is the process of taking many subsets of your full training data and training a model on each subset. The idea is that a model could get "lucky" and have great accuracy with one subset, but by using many subsets the model won't achieve this high accuracy every time. When doing CV, you provide a validation holdout dataset, specify your CV folds (number of subsets) and automated ML will train your model and tune hyperparameters to minimize error on your validation set. One CV fold could be overfitted, but by using many of them it reduces the probability that your final model is overfitted. The tradeoff is that CV does result in longer training times and thus greater cost, because instead of training a model once, you train it once for each *n* CV subsets. > [!NOTE]
-> Cross-validation is not enabled by default; it must be configured in automated ML settings. However, after cross-validation is configured and a validation data set has been provided, the process is automated for you. Learn more about [cross validation configuration in Auto ML](how-to-configure-cross-validation-data-splits.md)
+> Cross-validation is not enabled by default; it must be configured in automated ML settings. However, after cross-validation is configured and a validation data set has been provided, the process is automated for you. Learn more about [cross validation configuration in Auto ML (SDK v1)](./v1/how-to-configure-cross-validation-data-splits.md?view=azureml-api-1&preserve-view=true)
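As a point of reference, a minimal SDK v1 configuration that enables five-fold cross-validation might look like the following sketch; the workspace configuration, dataset name, and compute cluster name are placeholders.

```python
# Minimal sketch (SDK v1): enable cross-validation for an automated ML training run.
# Workspace config, dataset name, and compute cluster name are placeholders.
from azureml.core import Workspace, Dataset, Experiment
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()
training_data = Dataset.get_by_name(ws, name="my-training-data")
compute_target = ws.compute_targets["cpu-cluster"]   # existing compute cluster

automl_config = AutoMLConfig(
    task="classification",
    training_data=training_data,
    label_column_name="target",
    n_cross_validations=5,          # train and validate on 5 folds instead of a single split
    primary_metric="AUC_weighted",
    compute_target=compute_target,
)

run = Experiment(ws, "automl-cv-example").submit(automl_config)
```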
<a name="imbalance"></a>
machine-learning Concept Ml Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-ml-pipelines.md
Last updated 05/10/2022
+monikerRange: 'azureml-api-2 || azureml-api-1'
# What are Azure Machine Learning pipelines?
For example, a typical machine learning project includes the steps of data colle
### Training efficiency and cost reduction
-Besides being the tool to put MLOps into practice, the machine learning pipeline also improves large model trainingΓÇÖs efficiency and reduces cost. Taking modern natural language model training as an example. It requires pre-processing large amounts of data and GPU intensive transformer model training. It takes hours to days to train a model each time. When the model is being built, the data scientist wants to test different training code or hyperparameters and run the training many times to get the best model performance. For most of these trainings, there's usually small changes from one training to another one. It will be a significant waste if every time the full training from data processing to model training takes place. By using machine learning pipeline, it can automatically calculate which steps result is unchanged and reuse outputs from previous training. Additionally, the machine learning pipeline supports running each step on different computation resources. Such that, the memory heavy data processing work and run-on high memory CPU machines, and the computation intensive training can run on expensive GPU machines. By properly choosing which step to run on which type of machines, the training cost can be significantly reduced.
+Besides being the tool to put MLOps into practice, the machine learning pipeline also improves the efficiency of large model training and reduces cost. Take modern natural language model training as an example: it requires pre-processing large amounts of data and GPU-intensive transformer model training, and each training run takes hours to days. While the model is being built, the data scientist wants to test different training code or hyperparameters and run the training many times to get the best model performance. Between most of these runs there are usually only small changes, so rerunning the full workflow from data processing through model training every time would be a significant waste. A machine learning pipeline can automatically determine which steps' results are unchanged and reuse the outputs from previous runs. Additionally, a machine learning pipeline supports running each step on different compute resources, so memory-heavy data processing can run on high-memory CPU machines and compute-intensive training can run on expensive GPU machines. By properly choosing which step runs on which type of machine, the training cost can be significantly reduced.
## Getting started best practices Depending on what a machine learning project already has, the starting point of building a machine learning pipeline may vary. There are a few typical approaches to building a pipeline.
-The first approach usually applies to the team that hasnΓÇÖt used pipeline before and wants to take some advantage of pipeline like MLOps. In this situation, data scientists typically have developed some machine learning models on their local environment using their favorite tools. Machine learning engineers need to take data scientistsΓÇÖ output into production. The work involves cleaning up some unnecessary code from original notebook or Python code, changes the training input from local data to parameterized values, split the training code into multiple steps as needed, perform unit test of each step, and finally wraps all steps into a pipeline.
+The first approach usually applies to a team that hasn't used pipelines before and wants to take advantage of pipeline benefits such as MLOps. In this situation, data scientists typically have developed some machine learning models in their local environment using their favorite tools. Machine learning engineers need to take the data scientists' output into production. The work involves cleaning up unnecessary code from the original notebook or Python code, changing the training input from local data to parameterized values, splitting the training code into multiple steps as needed, unit testing each step, and finally wrapping all steps into a pipeline.
-Once the teams get familiar with pipelines and want to do more machine learning projects using pipelines, they'll find the first approach is hard to scale. The second approach is set up a few pipeline templates, each try to solve one specific machine learning problem. The template predefines the pipeline structure including how many steps, each stepΓÇÖs inputs and outputs, and their connectivity. To start a new machine learning project, the team first forks one template repo. The team leader then assigns members which step they need to work on. The data scientists and data engineers do their regular work. When they're happy with their result, they structure their code to fit in the pre-defined steps. Once the structured codes are checked-in, the pipeline can be executed or automated. If there's any change, each member only needs to work on their piece of code without touching the rest of the pipeline code.
+Once teams are familiar with pipelines and want to do more machine learning projects using pipelines, they'll find the first approach hard to scale. The second approach is to set up a few pipeline templates, each of which solves one specific machine learning problem. A template predefines the pipeline structure, including how many steps there are, each step's inputs and outputs, and their connectivity. To start a new machine learning project, the team first forks one template repo. The team leader then assigns each member a step to work on. The data scientists and data engineers do their regular work, and when they're happy with their result, they structure their code to fit the pre-defined steps. Once the structured code is checked in, the pipeline can be executed or automated. If there's any change, each member only needs to work on their piece of code without touching the rest of the pipeline code.
-Once a team has built a collection of machine learnings pipelines and reusable components, they could start to build the machine learning pipeline from cloning previous pipeline or tie existing reusable component together. At this stage, the teamΓÇÖs overall productivity will be improved significantly.
+Once a team has built a collection of machine learning pipelines and reusable components, they can build new pipelines by cloning a previous pipeline or by tying existing reusable components together. At this stage, the team's overall productivity improves significantly.
-Azure Machine Learning offers different methods to build a pipeline. For users who are familiar with DevOps practices, we recommend using [CLI](how-to-create-component-pipelines-cli.md). For data scientists who are familiar with python, we recommend writing pipeline using the [Azure Machine Learning SDK v1](v1/how-to-create-machine-learning-pipelines.md). For users who prefer to use UI, they could use the [designer to build pipeline by using registered components](how-to-create-component-pipelines-ui.md).
+Azure Machine Learning offers different methods to build a pipeline. For users who are familiar with DevOps practices, we recommend using the [CLI](how-to-create-component-pipelines-cli.md). For data scientists who are familiar with Python, we recommend writing pipelines using the [Azure Machine Learning SDK v1](v1/how-to-create-machine-learning-pipelines.md?view=azureml-api-1&preserve-view=true). Users who prefer a UI can use the [designer to build pipelines by using registered components](how-to-create-component-pipelines-ui.md).
<a name="compare"></a> ## Which Azure pipeline technology should I use?
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)] > [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning developer platform that you're using:"]
-> * [v1](v1/concept-mlflow-v1.md)
+> * [v1](v1/concept-mlflow-v1.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](concept-mlflow.md) [MLflow](https://www.mlflow.org) is an open-source framework that's designed to manage the complete machine learning lifecycle. Its ability to train and serve models on different platforms allows you to use a consistent set of tools regardless of where your experiments are running: locally on your computer, on a remote compute target, on a virtual machine, or on an Azure Machine Learning compute instance.
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-management-and-deployment.md
Last updated 01/04/2023
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you are using:"]
-> * [v1](./v1/concept-model-management-and-deployment.md)
+> * [v1](./v1/concept-model-management-and-deployment.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](concept-model-management-and-deployment.md) In this article, learn how to apply Machine Learning Operations (MLOps) practices in Azure Machine Learning for the purpose of managing the lifecycle of your models. Applying MLOps practices can improve the quality and consistency of your machine learning solutions.
machine-learning Concept Secure Network Traffic Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-secure-network-traffic-flow.md
Last updated 10/03/2022
+monikerRange: 'azureml-api-2 || azureml-api-1'
# Network traffic flow when using a secured workspace
The following features of Azure Machine Learning studio use _data profiling_:
* AutoML: View a data preview/profile and choose a target column. * Labeling
-Data profiling depends on the Azure Machine Learning managed service being able to access the default Azure Storage Account for your workspace. The managed service _doesn't exist in your VNet_, so canΓÇÖt directly access the storage account in the VNet. Instead, the workspace uses a service principal to access storage.
+Data profiling depends on the Azure Machine Learning managed service being able to access the default Azure Storage Account for your workspace. The managed service _doesn't exist in your VNet_, so can't directly access the storage account in the VNet. Instead, the workspace uses a service principal to access storage.
> [!TIP] > You can provide a service principal when creating the workspace. If you do not, one is created for you and will have the same name as your workspace.
If you use Visual Studio Code on a compute instance, you must allow other outbou
:::image type="content" source="./media/concept-secure-network-traffic-flow/compute-instance-and-cluster.png" alt-text="Diagram of traffic flow when using compute instance or cluster"::: ## Scenario: Use online endpoints
-Securing an online endpoint with a private endpoint is a preview feature.
-- __Inbound__ communication with the scoring URL of the online endpoint can be secured using the `public_network_access` flag on the endpoint. Setting the flag to `disabled` restricts the online endpoint to receiving traffic only from the virtual network. For secure inbound communications, the Azure Machine Learning workspace's private endpoint is used. __Outbound__ communication from a deployment can be secured on a per-deployment basis by using the `egress_public_network_access` flag. Outbound communication in this case is from the deployment to Azure Container Registry, storage blob, and workspace. Setting the flag to `true` will restrict communication with these resources to the virtual network.
Visibility of the endpoint is also governed by the `public_network_access` flag
| secure inbound with public outbound | `public_network_access` is disabled | `egress_public_network_access` is enabled | Yes | | public inbound with secure outbound | `public_network_access` is enabled | `egress_public_network_access` is disabled | Yes | | public inbound with public outbound | `public_network_access` is enabled | `egress_public_network_access` is enabled | Yes |- ## Scenario: Use Azure Kubernetes Service For information on the outbound configuration required for Azure Kubernetes Service, see the connectivity requirements section of [How to secure inference](how-to-secure-inferencing-vnet.md).
machine-learning Concept Soft Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-soft-delete.md
Last updated 11/07/2022
+monikerRange: 'azureml-api-2 || azureml-api-1'
#Customer intent: As an IT pro, understand how to enable data protection capabilities, to protect against accidental deletion.
Last updated 11/07/2022
The soft-delete feature for Azure Machine Learning workspace provides a data protection capability that enables you to attempt recovery of workspace data after accidental deletion. Soft delete introduces a two-step approach in deleting a workspace. When a workspace is deleted, it's first soft deleted. While in soft-deleted state, you can choose to recover or permanently delete a workspace and its data during a data retention period.
-> [!IMPORTANT]
-> Workspace soft delete is currently in public preview. This preview is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> [!IMPORTANT]
+> Workspace soft delete is currently in public preview. This preview is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). > To enroll your Azure Subscription, see [Register soft-delete on an Azure subscription](#register-soft-delete-on-an-azure-subscription).
A default retention period of 14 days holds for deleted workspaces. The retentio
During the retention period, soft-deleted workspaces can be recovered or permanently deleted. Any other operations on the workspace, like submitting a training job, will fail. You can't reuse the name of a workspace that has been soft-deleted until the retention period has passed. Once the retention period elapses, a soft deleted workspace automatically gets permanently deleted. > [!TIP]
-> During preview of workspace soft-delete, the retention period is fixed to 14 days and canΓÇÖt be modified.
+> During preview of workspace soft-delete, the retention period is fixed to 14 days and can't be modified.
## Deleting a workspace
machine-learning Concept Sourcing Human Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-sourcing-human-data.md
We suggest the following best practices for manually collecting human data direc
In order for AI systems to work well for everyone, the datasets used for training and evaluation should reflect the diversity of people who will use or be affected by those systems. In many cases, age, ancestry, and gender identity can help approximate the range of factors that might affect how well a product performs for a variety of people; however, collecting this information requires special consideration.
-If you do collect this data, always let data contributors self-identify (choose their own responses) instead of having data collectors make assumptions, which might be incorrect. Also include a ΓÇ£prefer not to answerΓÇ¥ option for each question. These practices will show respect for the data contributors and yield more balanced and higher-quality data.
+If you do collect this data, always let data contributors self-identify (choose their own responses) instead of having data collectors make assumptions, which might be incorrect. Also include a "prefer not to answer" option for each question. These practices will show respect for the data contributors and yield more balanced and higher-quality data.
These best practices have been developed based on three years of research with intended stakeholders and collaboration with many teams at Microsoft: [fairness and inclusiveness working groups](https://www.microsoft.com/ai/our-approach?activetab=pivot1:primaryr5), [Global Diversity & Inclusion](https://www.microsoft.com/diversity/default.aspx), [Global Readiness](https://www.microsoft.com/security/blog/2014/09/29/microsoft-global-readiness-diverse-cultures-multiple-languages-one-world/), [Office of Responsible AI](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1:primaryr6), and others.
For more information on how to work with your data:
- [Secure data access in Azure Machine Learning](concept-data.md) - [Data ingestion options for Azure Machine Learning workflows](concept-data-ingestion.md) - [Optimize data processing with Azure Machine Learning](concept-optimize-data-processing.md)-- [Use differential privacy with Azure Machine Learning SDK](v1/how-to-differential-privacy.md) Follow these how-to guides to work with your data after you've collected it:
machine-learning Concept Train Machine Learning Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-machine-learning-model.md
ms.devlang: azurecli
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning version you are using:"]
-> * [v1](v1/concept-train-machine-learning-model-v1.md)
+> * [v1](v1/concept-train-machine-learning-model-v1.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current)](concept-train-machine-learning-model.md) Azure Machine Learning provides several ways to train your models, from code-first solutions using the SDK to low-code solutions such as automated machine learning and the visual designer. Use the following list to determine which training method is right for you:
machine-learning Concept Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-workspace.md
Last updated 03/13/2023
+monikerRange: 'azureml-api-2 || azureml-api-1'
#Customer intent: As a data scientist, I want to understand the purpose of a workspace for Azure Machine Learning.
Machine learning tasks read and/or write artifacts to your workspace.
+ Create and run reusable workflows. + View machine learning artifacts such as jobs, pipelines, models, deployments. + Track and monitor models. + You can share assets between workspaces using [Azure Machine Learning registries (preview)](how-to-share-models-pipelines-across-workspaces-with-registries.md). ## Taxonomy
Machine learning tasks read and/or write artifacts to your workspace.
+ [Pipelines](concept-ml-pipelines.md) are reusable workflows for training and retraining your model. + [Data assets](concept-data.md) aid in management of the data you use for model training and pipeline creation. + Once you have a model you want to deploy, you create a registered model. + Use the registered model and a scoring script to create an [online endpoint](concept-endpoints.md).++ Use the registered model and a scoring script to [deploy the model](./v1/how-to-deploy-and-where.md) ## Tools for workspace interaction
You can interact with your workspace in the following ways:
+ On the web: + [Azure Machine Learning studio ](https://ml.azure.com) + [Azure Machine Learning designer](concept-designer.md)
-+ In any Python environment with the [Azure Machine Learning SDK for Python](https://aka.ms/sdk-v2-install).
-+ On the command line using the Azure Machine Learning [CLI extension](how-to-configure-cli.md)
++ In any Python environment with the [Azure Machine Learning SDK v2 for Python](https://aka.ms/sdk-v2-install).++ On the command line using the Azure Machine Learning [CLI extension v2](how-to-configure-cli.md)++ In any Python environment with the [Azure Machine Learning SDK v1 for Python](/python/api/overview/azure/ml/)++ On the command line using the Azure Machine Learning [CLI extension v1](./v1/reference-azure-machine-learning-cli.md) + [Azure Machine Learning VS Code Extension](how-to-manage-resources-vscode.md#workspaces) ## Workspace management
There are multiple ways to create a workspace:
* Use [Azure Machine Learning studio](quickstart-create-resources.md) to quickly create a workspace with default settings. * Use the [Azure portal](how-to-manage-workspace.md?tabs=azure-portal#create-a-workspace) for a point-and-click interface with more options. * Use the [Azure Machine Learning SDK for Python](how-to-manage-workspace.md?tabs=python#create-a-workspace) to create a workspace on the fly from Python scripts or Jupyter notebooks. * Use an [Azure Resource Manager template](how-to-create-workspace-template.md) or the [Azure Machine Learning CLI](how-to-configure-cli.md) when you need to automate or customize the creation with corporate security standards.
+* Use an [Azure Resource Manager template](how-to-create-workspace-template.md) or the [Azure Machine Learning CLI](./v1/reference-azure-machine-learning-cli.md) when you need to automate or customize the creation with corporate security standards.
* If you work in Visual Studio Code, use the [VS Code extension](how-to-manage-resources-vscode.md#create-a-workspace). > [!NOTE]
When you create a new workspace, it automatically creates several Azure resource
+ [Azure Container Registry](https://azure.microsoft.com/services/container-registry/): Registers docker containers that are used for the following components: * [Azure Machine Learning environments](concept-environments.md) when training and deploying models
+ :::moniker range="azureml-api-2"
* [AutoML](concept-automated-ml.md) when deploying
+ :::moniker-end
+ :::moniker range="azureml-api-1"
+ * [AutoML](./v1/concept-automated-ml-v1.md) when deploying
* [Data profiling](v1/how-to-connect-data-ui.md#data-preview-and-profile)
+ :::moniker-end
To minimize costs, ACR is **lazy-loaded** until images are needed. > [!NOTE] > If your subscription setting requires adding tags to resources under it, Azure Container Registry (ACR) created by Azure Machine Learning will fail, since we cannot set tags to ACR.
-+ [Azure Application Insights](https://azure.microsoft.com/services/application-insights/): Stores monitoring and diagnostics information. For more information, see [Monitor online endpoints](how-to-monitor-online-endpoints.md).
++ [Azure Application Insights](https://azure.microsoft.com/services/application-insights/): Stores monitoring and diagnostics information.
+ :::moniker range="azureml-api-2"
+ For more information, see [Monitor online endpoints](how-to-monitor-online-endpoints.md).
+ :::moniker-end
> [!NOTE] > You can delete the Application Insights instance after cluster creation if you want. Deleting it limits the information gathered from the workspace, and may make it more difficult to troubleshoot problems. __If you delete the Application Insights instance created by the workspace, you cannot re-create it without deleting and recreating the workspace__.
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
Last updated 01/10/2023 ms.devlang: azurecli
+monikerRange: 'azureml-api-2 || azureml-api-1'
# Configure inbound and outbound network traffic
For information on restricting access to models deployed to AKS, see [Restrict e
__Monitoring, metrics, and diagnostics__ If you haven't [secured Azure Monitor](how-to-secure-workspace-vnet.md#secure-azure-monitor-and-application-insights) for the workspace, you must allow outbound traffic to the following hosts:
+If you haven't [secured Azure Monitor](./v1/how-to-secure-workspace-vnet.md#secure-azure-monitor-and-application-insights) for the workspace, you must allow outbound traffic to the following hosts:
> [!NOTE] > The information logged to these hosts is also used by Microsoft Support to be able to diagnose any problems you run into with your workspace.
For a list of IP addresses for these hosts, see [IP addresses used by Azure Moni
This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series: * [Virtual network overview](how-to-network-security-overview.md) * [Secure the workspace resources](how-to-secure-workspace-vnet.md) * [Secure the training environment](how-to-secure-training-vnet.md) * [Secure the inference environment](how-to-secure-inferencing-vnet.md)
+* [Secure the workspace resources](./v1/how-to-secure-workspace-vnet.md)
+* [Secure the training environment](./v1/how-to-secure-training-vnet.md)
+* [Secure the inference environment](./v1/how-to-secure-inferencing-vnet.md)
* [Enable studio functionality](how-to-enable-studio-virtual-network.md) * [Use custom DNS](how-to-custom-dns.md)
machine-learning How To Administrate Data Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-administrate-data-authentication.md
# Data administration > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK or CLI extension you are using:"]
-> * [v1](./v1/concept-network-data-access.md)
+> * [v1](./v1/concept-network-data-access.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](how-to-administrate-data-authentication.md) Learn how to manage data access and how to authenticate in Azure Machine Learning
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-forecast.md
show_latex: true
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] > [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning SDK you are using:"]
-> * [v1](./v1/how-to-auto-train-forecast-v1.md)
+> * [v1](./v1/how-to-auto-train-forecast-v1.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](how-to-auto-train-forecast.md) In this article, you'll learn how to set up AutoML training for time-series forecasting models with Azure Machine Learning automated ML in the [Azure Machine Learning Python SDK](/python/api/overview/azure/ai-ml-readme).
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md
Last updated 07/13/2022
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning you are using:"]
-> * [v1](v1/how-to-auto-train-image-models-v1.md)
+> * [v1](v1/how-to-auto-train-image-models-v1.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](how-to-auto-train-image-models.md)
Review detailed code examples and use cases in the [GitHub notebook repository f
## Next steps * [Tutorial: Train an object detection model with AutoML and Python](tutorial-auto-train-image-models.md).
-* [Troubleshoot automated ML experiments](how-to-troubleshoot-auto-ml.md).
+* [Troubleshoot automated ML experiments (SDK v1)](./v1/how-to-troubleshoot-auto-ml.md?view=azureml-api-1&preserve-view=true).
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-nlp-models.md
Last updated 03/15/2022
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)] > [!div class="op_single_selector" title1="Select the version of the developer platform of Azure Machine Learning you are using:"]
-> * [v1](./v1/how-to-auto-train-nlp-models-v1.md)
+> * [v1](./v1/how-to-auto-train-nlp-models-v1.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](how-to-auto-train-nlp-models.md)
In this article, you learn how to train natural language processing (NLP) models
Automated ML supports NLP which allows ML professionals and data scientists to bring their own text data and build custom models for tasks such as, multi-class text classification, multi-label text classification, and named entity recognition (NER).
-You can seamlessly integrate with the [Azure Machine Learning data labeling](how-to-create-text-labeling-projects.md) capability to label your text data or bring your existing labeled data. Automated ML provides the option to use distributed training on multi-GPU compute clusters for faster model training. The resulting model can be operationalized at scale by leveraging Azure Machine Learning’s MLOps capabilities.
+You can seamlessly integrate with the [Azure Machine Learning data labeling](how-to-create-text-labeling-projects.md) capability to label your text data or bring your existing labeled data. Automated ML provides the option to use distributed training on multi-GPU compute clusters for faster model training. The resulting model can be operationalized at scale by leveraging Azure Machine Learning's MLOps capabilities.
## Prerequisites
Named entity recognition (NER)|`"eng"` <br> `"deu"` <br> `"mul"`| English&nbsp
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-You can specify your dataset language in the featurization section of your configuration YAML file. BERT is also used in the featurization process of automated ML experiment training, learn more about [BERT integration and featurization in automated ML](how-to-configure-auto-features.md#bert-integration-in-automated-ml).
+You can specify your dataset language in the featurization section of your configuration YAML file. BERT is also used in the featurization process of automated ML experiment training, learn more about [BERT integration and featurization in automated ML (SDK v1)](./v1/how-to-configure-auto-features.md#bert-integration-in-automated-ml).
```azurecli featurization:
featurization:
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-You can specify your dataset language with the `set_featurization()` method. BERT is also used in the featurization process of automated ML experiment training, learn more about [BERT integration and featurization in automated ML](how-to-configure-auto-features.md#bert-integration-in-automated-ml).
+You can specify your dataset language with the `set_featurization()` method. BERT is also used in the featurization process of automated ML experiment training, learn more about [BERT integration and featurization in automated ML (SDK v1)](./v1/how-to-configure-auto-features.md?view=azureml-api-1&preserve-view=true#bert-integration-in-automated-ml).
```python text_classification_job.set_featurization(dataset_language='eng')
The following table describes the hyperparameters that AutoML NLP supports.
| Parameter name | Description | Syntax | |-|||
-| gradient_accumulation_steps | The number of backward operations whose gradients are to be summed up before performing one step of gradient descent by calling the optimizer’s step function. <br><br> This is leveraged to use an effective batch size which is gradient_accumulation_steps times larger than the maximum size that fits the GPU. | Must be a positive integer.
+| gradient_accumulation_steps | The number of backward operations whose gradients are to be summed up before performing one step of gradient descent by calling the optimizer's step function. <br><br> This is leveraged to use an effective batch size which is gradient_accumulation_steps times larger than the maximum size that fits the GPU. | Must be a positive integer.
| learning_rate | Initial learning rate. | Must be a float in the range (0, 1). | | learning_rate_scheduler |Type of learning rate scheduler. | Must choose from `linear, cosine, cosine_with_restarts, polynomial, constant, constant_with_warmup`. | | model_name | Name of one of the supported models. | Must choose from `bert_base_cased, bert_base_uncased, bert_base_multilingual_cased, bert_base_german_cased, bert_large_cased, bert_large_uncased, distilbert_base_cased, distilbert_base_uncased, roberta_base, roberta_large, distilroberta_base, xlm_roberta_base, xlm_roberta_large, xlnet_base_cased, xlnet_large_cased`. |
While such cases are uncommon, they're possible and the best way to handle it is
## Next steps + [Deploy AutoML models to an online (real-time inference) endpoint](how-to-deploy-automl-endpoint.md)
-+ [Troubleshoot automated ML experiments](how-to-troubleshoot-auto-ml.md)
++ [Troubleshoot automated ML experiments (SDK v1)](./v1/how-to-troubleshoot-auto-ml.md?view=azureml-api-1&preserve-view=true)
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-train.md
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning Python you are using:"]
-> * [v1](./v1/how-to-configure-auto-train-v1.md)
+> * [v1](./v1/how-to-configure-auto-train-v1.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](how-to-configure-auto-train.md) In this guide, learn how to set up an automated machine learning, AutoML, training job with the [Azure Machine Learning Python SDK v2](/python/api/overview/azure/ml/intro). Automated ML picks an algorithm and hyperparameters for you and generates a model ready for deployment. This guide provides details of the various options that you can use to configure automated ML experiments.
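As a rough illustration (not the article's exact walkthrough), a classification job configured with the SDK v2 might look like the following; the compute target, data path, target column, and limits are placeholder values:

```python
from azure.ai.ml import MLClient, Input, automl
from azure.ai.ml.constants import AssetTypes
from azure.identity import DefaultAzureCredential

ml_client = MLClient(DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace-name>")

# AutoML explores algorithms and hyperparameters within the limits you set
classification_job = automl.classification(
    compute="cpu-cluster",
    experiment_name="credit-default",
    training_data=Input(type=AssetTypes.MLTABLE, path="./train-mltable-folder"),
    target_column_name="default",
    primary_metric="accuracy",
    n_cross_validations=5,
)
classification_job.set_limits(timeout_minutes=60, max_trials=20, max_concurrent_trials=4)

returned_job = ml_client.jobs.create_or_update(classification_job)
```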
machine-learning How To Configure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-cli.md
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](v1/reference-azure-machine-learning-cli.md)
+> * [v1](v1/reference-azure-machine-learning-cli.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](how-to-configure-cli.md) The `ml` extension to the [Azure CLI](/cli/azure/) is the enhanced interface for Azure Machine Learning. It enables you to train and deploy models from the command line, with features that accelerate scaling data science up and out while tracking the model lifecycle.
machine-learning How To Configure Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-environment.md
The following table shows each development environment covered in this article,
| [Local environment](#local-computer-or-remote-vm-environment) | Full control of your development environment and dependencies. Run with any build tool, environment, or IDE of your choice. | Takes longer to get started. Necessary SDK packages must be installed, and an environment must also be installed if you don't already have one. | | [The Data Science Virtual Machine (DSVM)](#data-science-virtual-machine) | Similar to the cloud-based compute instance (Python is pre-installed), but with additional popular data science and machine learning tools pre-installed. Easy to scale and combine with other custom tools and workflows. | A slower getting started experience compared to the cloud-based compute instance. | | [Azure Machine Learning compute instance](#azure-machine-learning-compute-instance) | Easiest way to get started. The SDK is already installed in your workspace VM, and notebook tutorials are pre-cloned and ready to run. | Lack of control over your development environment and dependencies. Additional cost incurred for Linux VM (VM can be stopped when not in use to avoid charges). See [pricing details](https://azure.microsoft.com/pricing/details/virtual-machines/linux/). |
-| [Azure Databricks](how-to-configure-databricks-automl-environment.md) | Ideal for running large-scale intensive machine learning workflows on the scalable Apache Spark platform. | Overkill for experimental machine learning, or smaller-scale experiments and workflows. Additional cost incurred for Azure Databricks. See [pricing details](https://azure.microsoft.com/pricing/details/databricks/). |
This article also provides additional usage tips for the following tools:
machine-learning How To Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-private-link.md
Last updated 08/29/2022
[!INCLUDE [CLI v2](../../includes/machine-learning-cli-v2.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning version you are using:"]
-> * [CLI or SDK v1](v1/how-to-configure-private-link.md)
+> * [CLI or SDK v1](v1/how-to-configure-private-link.md?view=azureml-api-1&preserve-view=true)
> * [CLI v2 (current)](how-to-configure-private-link.md) In this document, you learn how to configure a private endpoint for your Azure Machine Learning workspace. For information on creating a virtual network for Azure Machine Learning, see [Virtual network isolation and privacy overview](how-to-network-security-overview.md).
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-cluster.md
Last updated 10/19/2022
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning CLI or SDK version you are using:"]
-> * [v1](v1/how-to-create-attach-compute-cluster.md)
+> * [v1](v1/how-to-create-attach-compute-cluster.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](how-to-create-attach-compute-cluster.md) Learn how to create and manage a [compute cluster](concept-compute-target.md#azure-machine-learning-compute-managed) in your Azure Machine Learning workspace.
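For context, a minimal SDK v2 sketch of creating such a cluster; the name, VM size, and scale settings are placeholders:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import AmlCompute
from azure.identity import DefaultAzureCredential

ml_client = MLClient(DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace-name>")

cluster = AmlCompute(
    name="cpu-cluster",
    size="STANDARD_DS3_V2",
    min_instances=0,                 # scale to zero when idle
    max_instances=4,
    idle_time_before_scale_down=120,
)
ml_client.compute.begin_create_or_update(cluster).result()
```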
machine-learning How To Create Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-data-assets.md
Last updated 01/23/2023
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
-> * [v1](./v1/how-to-create-register-datasets.md)
+> * [v1](./v1/how-to-create-register-datasets.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](how-to-create-data-assets.md) In this article, you'll learn how to create a data asset in Azure Machine Learning. An Azure Machine Learning data asset is similar to web browser bookmarks (favorites). Instead of remembering long storage paths (URIs) that point to your most frequently used data, you can create a data asset, and then access that asset with a friendly name.
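For illustration, a minimal SDK v2 sketch of registering a file as a data asset; the name, version, and path are placeholders:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Data
from azure.ai.ml.constants import AssetTypes
from azure.identity import DefaultAzureCredential

ml_client = MLClient(DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace-name>")

my_data = Data(
    name="sample-csv",
    version="1",
    type=AssetTypes.URI_FILE,   # a single file; URI_FOLDER and MLTABLE are also supported
    path="https://<account>.blob.core.windows.net/<container>/sample.csv",
    description="Example data asset registered under a friendly name",
)
ml_client.data.create_or_update(my_data)
```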
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
Last updated 12/28/2022
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK or CLI version you are using:"]
-> * [v1](v1/how-to-create-manage-compute-instance.md)
+> * [v1](v1/how-to-create-manage-compute-instance.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](how-to-create-manage-compute-instance.md) Learn how to create and manage a [compute instance](concept-compute-instance.md) in your Azure Machine Learning workspace.
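As a minimal SDK v2 sketch (the instance name and VM size are placeholders):

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ComputeInstance
from azure.identity import DefaultAzureCredential

ml_client = MLClient(DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace-name>")

# Compute instance names must be unique within the Azure region
ci = ComputeInstance(name="ci-example-001", size="STANDARD_DS3_V2")
ml_client.compute.begin_create_or_update(ci).result()
```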
You can also [use a setup script](how-to-customize-compute-instance.md) to creat
Compute instances can run jobs securely in a [virtual network environment](how-to-secure-training-vnet.md), without requiring enterprises to open up SSH ports. The job executes in a containerized environment and packages your model dependencies in a Docker container. > [!NOTE]
-> This article shows CLI v2 in the sections below. If you are still using CLI v1, see [Create an Azure Machine Learning compute cluster CLI v1)](v1/how-to-create-manage-compute-instance.md).
+> This article shows CLI v2 in the sections below. If you are still using CLI v1, see [Create an Azure Machine Learning compute cluster CLI v1)](v1/how-to-create-manage-compute-instance.md?view=azureml-api-1&preserve-view=true).
## Prerequisites
Then use either cron or LogicApps expressions to define the schedule that starts
} ```
-* Action can have value of “Start” or “Stop”.
+* Action can have value of "Start" or "Stop".
* For trigger type of `Recurrence` use the same syntax as logic app, with this [recurrence schema](../logic-apps/logic-apps-workflow-actions-triggers.md#recurrence-trigger). * For trigger type of `cron`, use standard cron syntax:
You can assign a system- or user-assigned [managed identity](../active-directory
You can create compute instance with managed identity from Azure Machine Learning Studio:
-1. Fill out the form to [create a new compute instance](?tabs=azure-studio#create).
-1. Select **Next: Advanced Settings**.
-1. Enable **Assign a managed identity**.
+1. Fill out the form to [create a new compute instance](?tabs=azure-studio#create).
+1. Select **Next: Advanced Settings**.
+1. Enable **Assign a managed identity**.
1. Select **System-assigned** or **User-assigned** under **Identity type**. 1. If you selected **User-assigned**, select subscription and name of the identity.
az login --identity --username $DEFAULT_IDENTITY_CLIENT_ID
You can set up other applications, such as RStudio, or Posit Workbench (formerly RStudio Workbench), when creating a compute instance. Follow these steps in studio to set up a custom application on your compute instance
-1. Fill out the form to [create a new compute instance](?tabs=azure-studio#create)
-1. Select **Next: Advanced Settings**
-1. Select **Add application** under the **Custom application setup (RStudio Workbench, etc.)** section
+1. Fill out the form to [create a new compute instance](?tabs=azure-studio#create)
+1. Select **Next: Advanced Settings**
+1. Select **Add application** under the **Custom application setup (RStudio Workbench, etc.)** section
:::image type="content" source="media/how-to-create-manage-compute-instance/custom-service-setup.png" alt-text="Screenshot showing Custom Service Setup.":::
You can set up other applications, such as RStudio, or Posit Workbench (formerly
RStudio is one of the most popular IDEs among R developers for ML and data science projects. You can easily set up Posit Workbench, which provides access to RStudio along with other development tools, to run on your compute instance, using your own Posit license, and access the rich feature set that Posit Workbench offers
-1. Follow the steps listed above to **Add application** when creating your compute instance.
-1. Select **Posit Workbench (bring your own license)** in the **Application** dropdown and enter your Posit Workbench license key in the **License key** field. You can get your Posit Workbench license or trial license [from posit](https://posit.co).
+1. Follow the steps listed above to **Add application** when creating your compute instance.
+1. Select **Posit Workbench (bring your own license)** in the **Application** dropdown and enter your Posit Workbench license key in the **License key** field. You can get your Posit Workbench license or trial license [from posit](https://posit.co).
1. Select **Create** to add Posit Workbench application to your compute instance. :::image type="content" source="media/how-to-create-manage-compute-instance/rstudio-workbench.png" alt-text="Screenshot shows Posit Workbench settings." lightbox="media/how-to-create-manage-compute-instance/rstudio-workbench.png":::
RStudio is one of the most popular IDEs among R developers for ML and data scien
To use RStudio, set up a custom application as follows:
-1. Follow the steps listed above to **Add application** when creating your compute instance.
-1. Select **Custom Application** on the **Application** dropdown
-1. Configure the **Application name** you would like to use.
+1. Follow the steps listed above to **Add application** when creating your compute instance.
+1. Select **Custom Application** on the **Application** dropdown
+1. Configure the **Application name** you would like to use.
1. Set up the application to run on **Target port** `8787` - the docker image for RStudio open source listed below needs to run on this Target port. 1. Set up the application to be accessed on **Published port** `8787` - you can configure the application to be accessed on a different Published port if you wish.
For each compute instance in a workspace that you created (or that was created f
-[Azure RBAC](../role-based-access-control/overview.md) allows you to control which users in the workspace can create, delete, start, stop, restart a compute instance. All users in the workspace contributor and owner role can create, delete, start, stop, and restart compute instances across the workspace. However, only the creator of a specific compute instance, or the user assigned if it was created on their behalf, is allowed to access Jupyter, JupyterLab, and RStudio on that compute instance. A compute instance is dedicated to a single user who has root access. That user has access to Jupyter/JupyterLab/RStudio running on the instance. Compute instance will have single-user sign-in and all actions will use that user’s identity for Azure RBAC and attribution of experiment jobs. SSH access is controlled through public/private key mechanism.
+[Azure RBAC](../role-based-access-control/overview.md) allows you to control which users in the workspace can create, delete, start, stop, restart a compute instance. All users in the workspace contributor and owner role can create, delete, start, stop, and restart compute instances across the workspace. However, only the creator of a specific compute instance, or the user assigned if it was created on their behalf, is allowed to access Jupyter, JupyterLab, and RStudio on that compute instance. A compute instance is dedicated to a single user who has root access. That user has access to Jupyter/JupyterLab/RStudio running on the instance. Compute instance will have single-user sign-in and all actions will use that user's identity for Azure RBAC and attribution of experiment jobs. SSH access is controlled through public/private key mechanism.
These actions can be controlled by Azure RBAC: * *Microsoft.MachineLearningServices/workspaces/computes/read*
machine-learning How To Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-custom-dns.md
Last updated 09/06/2022
+monikerRange: 'azureml-api-2 || azureml-api-1'
# How to use your workspace with a custom DNS server
When using an Azure Machine Learning workspace with a private endpoint, there ar
> This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series: > > * [Virtual network overview](how-to-network-security-overview.md) > * [Secure the workspace resources](how-to-secure-workspace-vnet.md) > * [Secure the training environment](how-to-secure-training-vnet.md) > * [Secure the inference environment](how-to-secure-inferencing-vnet.md)
+> * [Secure the workspace resources](./v1/how-to-secure-workspace-vnet.md)
+> * [Secure the training environment](./v1/how-to-secure-training-vnet.md)
+> * [Secure the inference environment](./v1/how-to-secure-inferencing-vnet.md)
> * [Enable studio functionality](how-to-enable-studio-virtual-network.md) > * [Use a firewall](how-to-access-azureml-behind-firewall.md) ## Prerequisites - An Azure Virtual Network that uses [your own DNS server](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server). - An Azure Machine Learning workspace with a private endpoint. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+- An Azure Machine Learning workspace with a private endpoint. For more information, see [Create an Azure Machine Learning workspace](./v1/how-to-manage-workspace.md).
- Familiarity with using [Network isolation during training & inference](./how-to-network-security-overview.md).
If you cannot access the workspace from a virtual machine or jobs fail on comput
- [Azure China regions](https://portal.azure.cn/?feature.privateendpointmanagedns=false) - [Azure US Government regions](https://portal.azure.us/?feature.privateendpointmanagedns=false)
- Navigate to the Private Endpoint to the Azure Machine Learning workspace. The workspace FQDNs will be listed on the “Overview” tab.
+ Navigate to the Private Endpoint to the Azure Machine Learning workspace. The workspace FQDNs will be listed on the "Overview" tab.
1. **Access compute resource in Virtual Network topology**:
If after running through the above steps you are unable to access the workspace
- [Azure China regions](https://portal.azure.cn/?feature.privateendpointmanagedns=false) - [Azure US Government regions](https://portal.azure.us/?feature.privateendpointmanagedns=false)
- Navigate to the Private Endpoint to the Azure Machine Learning workspace. The workspace FQDNs will be listed on the “Overview” tab.
+ Navigate to the Private Endpoint to the Azure Machine Learning workspace. The workspace FQDNs will be listed on the "Overview" tab.
1. **Access compute resource in Virtual Network topology**:
If after running through the above steps you are unable to access the workspace
This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series: * [Virtual network overview](how-to-network-security-overview.md) * [Secure the workspace resources](how-to-secure-workspace-vnet.md) * [Secure the training environment](how-to-secure-training-vnet.md) * [Secure the inference environment](how-to-secure-inferencing-vnet.md)
+* [Secure the workspace resources](./v1/how-to-secure-workspace-vnet.md)
+* [Secure the training environment](./v1/how-to-secure-training-vnet.md)
+* [Secure the inference environment](./v1/how-to-secure-inferencing-vnet.md)
* [Enable studio functionality](how-to-enable-studio-virtual-network.md) * [Use a firewall](how-to-access-azureml-behind-firewall.md)
machine-learning How To Datastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-datastore.md
# Create datastores > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you are using:"]
-> * [v1](v1/how-to-access-data.md)
+> * [v1](v1/how-to-access-data.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](how-to-datastore.md) [!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
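A minimal SDK v2 sketch of registering a blob datastore, assuming account-key credentials; all names and the key are placeholders (identity-based access is also possible):

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import AzureBlobDatastore, AccountKeyConfiguration
from azure.identity import DefaultAzureCredential

ml_client = MLClient(DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace-name>")

blob_datastore = AzureBlobDatastore(
    name="example_blob_store",
    account_name="<storage-account-name>",
    container_name="<container-name>",
    credentials=AccountKeyConfiguration(account_key="<account-key>"),
)
ml_client.create_or_update(blob_datastore)
```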
machine-learning How To Debug Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-visual-studio-code.md
Last updated 10/21/2021
+monikerRange: 'azureml-api-1'
# Interactive debugging with Visual Studio Code
machine-learning How To Deploy Kubernetes Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-kubernetes-extension.md
In this article, you can learn:
- If your AKS cluster has an [Authorized IP range enabled to access the API server](../aks/api-server-authorized-ip-ranges.md), enable the Azure Machine Learning control plane IP ranges for the AKS cluster. The Azure Machine Learning control plane is deployed across paired regions. Without access to the API server, the machine learning pods can't be deployed. Use the [IP ranges](https://www.microsoft.com/download/confirmation.aspx?id=56519) for both the [paired regions](../availability-zones/cross-region-replication-azure.md) when enabling the IP ranges in an AKS cluster. - Azure Machine Learning does not support attaching an AKS cluster cross subscription. If you have an AKS cluster in a different subscription, you must first [connect it to Azure-Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md) and specify in the same subscription as your Azure Machine Learning workspace. - Azure Machine Learning does not guarantee support for all preview stage features in AKS. For example, [Azure AD pod identity](../aks/use-azure-ad-pod-identity.md) is not supported.-- If you've previously followed the steps from [Azure Machine Learning AKS v1 document](./v1/how-to-create-attach-kubernetes.md) to create or attach your AKS as inference cluster, use the following link to [clean up the legacy azureml-fe related resources](./v1/how-to-create-attach-kubernetes.md#delete-azureml-fe-related-resources) before you continue the next step.
+- If you've previously followed the steps from [Azure Machine Learning AKS v1 document](./v1/how-to-create-attach-kubernetes.md?view=azureml-api-1&preserve-view=true) to create or attach your AKS as inference cluster, use the following link to [clean up the legacy azureml-fe related resources](./v1/how-to-create-attach-kubernetes.md?view=azureml-api-1&preserve-view=true#delete-azureml-fe-related-resources) before you continue the next step.
## Review Azure Machine Learning extension configuration settings
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](./v1/how-to-deploy-mlflow-models.md)
+> * [v1](./v1/how-to-deploy-mlflow-models.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](how-to-deploy-mlflow-models-online-endpoints.md) In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model to an [online endpoint](concept-endpoints.md) for real-time inference. When you deploy your MLflow model to an online endpoint, you don't need to indicate a scoring script or an environment. This characteristic is referred as __no-code deployment__.
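To sketch the no-code flow with the SDK v2 (endpoint name, model folder, and VM size are placeholders; the MLflow model folder is assumed to contain the standard `MLmodel` packaging):

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment, Model
from azure.ai.ml.constants import AssetTypes
from azure.identity import DefaultAzureCredential

ml_client = MLClient(DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace-name>")

endpoint = ManagedOnlineEndpoint(name="<unique-endpoint-name>", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="<unique-endpoint-name>",
    model=Model(path="./mlflow-model", type=AssetTypes.MLFLOW_MODEL),  # MLflow format: no scoring script or environment
    instance_type="Standard_DS3_v2",
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```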
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models.md
ms.devlang: azurecli
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](./v1/how-to-deploy-mlflow-models.md)
+> * [v1](./v1/how-to-deploy-mlflow-models.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](how-to-deploy-mlflow-models.md) In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model to Azure Machine Learning for both real-time and batch inference. Learn also about the different tools you can use to perform management of the deployment.
The rest of this section mostly applies to online endpoints but you can learn mo
| JSON-serialized pandas DataFrames in the records orientation | Deprecated | | | CSV-serialized pandas DataFrames | **&check;** | Use batch<sup>1</sup> | | Tensor input format as JSON-serialized lists (tensors) and dictionary of lists (named tensors) | **&check;** | **&check;** |
-| Tensor input formatted as in TF Serving’s API | **&check;** | |
+| Tensor input formatted as in TF Serving's API | **&check;** | |
> [!NOTE] > - <sup>1</sup> We suggest you to explore batch inference for processing files. See [Deploy MLflow models to Batch Endpoints](how-to-mlflow-batch.md).
machine-learning How To Deploy Model Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-model-cognitive-search.md
Last updated 03/11/2021
+monikerRange: 'azureml-api-1'
machine-learning How To Enable Studio Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-enable-studio-virtual-network.md
Last updated 11/16/2022
+monikerRange: 'azureml-api-2 || azureml-api-1'
# Use Azure Machine Learning studio in an Azure virtual network
In this article, you learn how to:
> This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series: > > * [Virtual network overview](how-to-network-security-overview.md) > * [Secure the workspace resources](how-to-secure-workspace-vnet.md) > * [Secure the training environment](how-to-secure-training-vnet.md) > * [Secure the inference environment](how-to-secure-inferencing-vnet.md)
+> * [Secure the workspace resources](./v1/how-to-secure-workspace-vnet.md)
+> * [Secure the training environment](./v1/how-to-secure-training-vnet.md)
+> * [Secure the inference environment](./v1/how-to-secure-inferencing-vnet.md)
> * [Use custom DNS](how-to-custom-dns.md) > * [Use a firewall](how-to-access-azureml-behind-firewall.md) >
In this article, you learn how to:
+ A pre-existing virtual network and subnet to use. + An existing [Azure Machine Learning workspace with a private endpoint](how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint). + An existing [Azure storage account added your virtual network](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts).++ An existing [Azure Machine Learning workspace with a private endpoint](how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint).+++ An existing [Azure storage account added your virtual network](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts). ## Limitations
Some storage services, such as Azure Storage Account, have firewall settings tha
This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series: * [Virtual network overview](how-to-network-security-overview.md) * [Secure the workspace resources](how-to-secure-workspace-vnet.md) * [Secure the training environment](how-to-secure-training-vnet.md) * [Secure the inference environment](how-to-secure-inferencing-vnet.md)
+* [Secure the workspace resources](./v1/how-to-secure-workspace-vnet.md)
+* [Secure the training environment](./v1/how-to-secure-training-vnet.md)
+* [Secure the inference environment](./v1/how-to-secure-inferencing-vnet.md)
* [Use custom DNS](how-to-custom-dns.md) * [Use a firewall](how-to-access-azureml-behind-firewall.md)
machine-learning How To Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-export-delete-data.md
Last updated 02/09/2023 -
+monikerRange: 'azureml-api-2 || azureml-api-1'
Job history documents, which may contain personal user information, are stored i
Azure Machine Learning studio provides a unified view of your machine learning resources - for example, notebooks, data assets, models, and jobs. Azure Machine Learning studio emphasizes preservation of a record of your data and experiments. You can delete computational resources such as pipelines and compute resources with the browser. For these resources, navigate to the resource in question and choose **Delete**.
-You can unregister data assets and archive jobs, but these operations don't delete the data. To entirely remove the data, data assets and job data require deletion at the storage level. Storage level deletion happens in the portal, as described earlier. Azure Machine Learning Studio can handle individual deletion. Job deletion deletes the data of that job.
+You can unregister data assets and archive jobs, but these operations don't delete the data. To entirely remove the data, data assets and job data require deletion at the storage level. Storage level deletion happens in the portal, as described earlier. Azure Machine Learning studio can handle individual deletion. Job deletion deletes the data of that job.
-Azure Machine Learning Studio can handle training artifact downloads from experimental jobs. Choose the relevant **Job**. Choose **Output + logs**, and navigate to the specific artifacts you wish to download. Choose **...** and **Download**, or select **Download all**.
+Azure Machine Learning studio can handle training artifact downloads from experimental jobs. Choose the relevant **Job**. Choose **Output + logs**, and navigate to the specific artifacts you wish to download. Choose **...** and **Download**, or select **Download all**.
To download a registered model, navigate to the **Model** and choose **Download**. :::image type="contents" source="media/how-to-export-delete-data/model-download.png" alt-text="Screenshot of studio model page with download option highlighted.":::
+## Export and delete resources using the Python SDK
+
+You can download the outputs of a particular job using:
+
+```python
+# Retrieved from Azure Machine Learning web UI
+run_id = 'aaaaaaaa-bbbb-cccc-dddd-0123456789AB'
+
+# Connect to the workspace (assumes a config.json downloaded for the workspace)
+from azureml.core import Workspace
+ws = Workspace.from_config()
+experiment = ws.experiments['my-experiment']
+run = next(run for run in experiment.get_runs() if run.id == run_id)  # locate the run by ID
+metrics_output_port = run.get_pipeline_output('metrics_output')
+model_output_port = run.get_pipeline_output('model_output')
+
+metrics_output_port.download('.', show_progress=True)
+model_output_port.download('.', show_progress=True)
+```
+
+The following machine learning resources can be deleted using the Python SDK:
+
+| Type | Function Call | Notes |
+| | | |
+| `Workspace` | [`delete`](/python/api/azureml-core/azureml.core.workspace.workspace#delete-delete-dependent-resources-false--no-wait-false-) | Use `delete-dependent-resources` to cascade the delete |
+| `Model` | [`delete`](/python/api/azureml-core/azureml.core.model%28class%29#delete--) | |
+| `ComputeTarget` | [`delete`](/python/api/azureml-core/azureml.core.computetarget#delete--) | |
+| `WebService` | [`delete`](/python/api/azureml-core/azureml.core.webservice%28class%29) | |
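As a brief usage sketch for the table above (SDK v1, assuming a local workspace `config.json` and a registered model whose placeholder name is `my-model`):

```python
from azureml.core import Workspace, Model

ws = Workspace.from_config()          # assumes a config.json downloaded for the workspace
model = Model(ws, name="my-model")    # "my-model" is a placeholder registered model name
model.delete()                        # removes the model from the workspace registry
```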
++ ## Next steps
-Learn more about [Managing a workspace](how-to-manage-workspace.md).
+Learn more about [Managing a workspace](how-to-manage-workspace.md).
machine-learning How To Identity Based Service Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-identity-based-service-authentication.md
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK or CLI extension you are using:"]
-> * [v1](./v1/how-to-use-managed-identities.md)
+> * [v1](./v1/how-to-use-managed-identities.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](./how-to-identity-based-service-authentication.md) Azure Machine Learning is composed of multiple Azure services. There are multiple ways that authentication can happen between Azure Machine Learning and the services it relies on.
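For example, a common pattern is to authenticate the v2 `MLClient` with `DefaultAzureCredential`; the IDs below are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient

# DefaultAzureCredential tries environment variables, managed identity, Azure CLI login, etc., in order
credential = DefaultAzureCredential()
ml_client = MLClient(credential, "<subscription-id>", "<resource-group>", "<workspace-name>")

print(ml_client.workspaces.get("<workspace-name>").location)
```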
If your storage account has virtual network settings, that dictates what identit
* If your storage is ADLS Gen 2 or Blob and has virtual network settings, customers can use either user identity or workspace MSI depending on the datastore settings defined during creation.
-* If the virtual network setting is “Allow Azure services on the trusted services list to access this storage account”, then Workspace MSI is used.
+* If the virtual network setting is "Allow Azure services on the trusted services list to access this storage account", then Workspace MSI is used.
## Scenario: Azure Container Registry without admin user
machine-learning How To Inference Onnx Automl Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-inference-onnx-automl-image-models.md
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](v1/how-to-inference-onnx-automl-image-models-v1.md)
+> * [v1](v1/how-to-inference-onnx-automl-image-models-v1.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](how-to-inference-onnx-automl-image-models.md) In this article, you will learn how to use Open Neural Network Exchange (ONNX) to make predictions on computer vision models generated from automated machine learning (AutoML) in Azure Machine Learning.
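For orientation, a generic ONNX Runtime sketch (not the article's exact scoring code); the model path, preprocessing, and input shape depend on the trained model and are placeholders here:

```python
import numpy as np
import onnxruntime

session = onnxruntime.InferenceSession("model.onnx")   # path to the downloaded AutoML ONNX model
input_name = session.get_inputs()[0].name

# A single image-shaped tensor; real inputs come from the model-specific preprocessing
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: batch})
print([o.shape for o in outputs])
```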
display_detections(img, boxes.copy(), labels, scores, masks.copy(),
## Next steps * [Learn more about computer vision tasks in AutoML](how-to-auto-train-image-models.md)
-* [Troubleshoot AutoML experiments](how-to-troubleshoot-auto-ml.md)
+* [Troubleshoot AutoML experiments (SDK v1)](./v1/how-to-troubleshoot-auto-ml.md?view=azureml-api-1&preserve-view=true)
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-view-metrics.md
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning Python SDK you are using:"]
-> * [v1](./v1/how-to-log-view-metrics.md)
+> * [v1](./v1/how-to-log-view-metrics.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current)](how-to-log-view-metrics.md) Azure Machine Learning supports logging and tracking experiments using [MLflow Tracking](https://www.mlflow.org/docs/latest/tracking.html). You can log models, metrics, parameters, and artifacts with MLflow as it supports local mode to cloud portability.
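A minimal MLflow tracking sketch; on Azure Machine Learning compute the tracking URI is typically preconfigured, otherwise set it first with `mlflow.set_tracking_uri()`. The experiment name and values are illustrative:

```python
import mlflow

mlflow.set_experiment("example-experiment")

with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)
    for epoch, acc in enumerate([0.81, 0.86, 0.90]):
        mlflow.log_metric("accuracy", acc, step=epoch)  # metrics can be logged per step/epoch
```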
machine-learning How To Manage Environments V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-environments-v2.md
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK or CLI extension you are using:"]
-> * [v1](./v1/how-to-use-environments.md)
+> * [v1](./v1/how-to-use-environments.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](how-to-manage-environments-v2.md)
machine-learning How To Manage Resources Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-resources-vscode.md
Last updated 05/25/2021
+monikerRange: 'azureml-api-2 || azureml-api-1'
# Manage Azure Machine Learning resources with the VS Code Extension (preview)
Alternatively, you can create a resource by using the command palette:
## Version resources
-Some resources like environments, datasets, and models allow you to make changes to a resource and store the different versions.
+Some resources like environments and models allow you to make changes to a resource and store the different versions.
To version a resource:
The extension currently supports datastores of the following types:
- Azure Data Lake Gen 2 - Azure File For more information, see [datastore](concept-data.md#datastore).-
+For more information, see [datastore](./v1/concept-data.md#connect-to-storage-with-datastores).
### Create a datastore 1. Expand the subscription node that contains your workspace.
Alternatively, use the `> Azure ML: Create Datastore` command in the command pal
Alternatively, use the `> Azure ML: Unregister Datastore` and `> Azure ML: View Datastore` commands respectively in the command palette. ## Datasets The extension currently supports the following dataset types:
Alternatively, use the `> Azure ML: Create Dataset` command in the command palet
- **Unregister dataset**. Removes a dataset and all versions of it from your workspace. Alternatively, use the `> Azure ML: View Dataset Properties` and `> Azure ML: Unregister Dataset` commands respectively in the command palette. ## Environments
To view the dependencies and configurations for a specific environment in the ex
Alternatively, use the `> Azure ML: View Environment` command in the command palette. ## Experiments For more information, see [experiments](v1/concept-azure-machine-learning-architecture.md#experiments). ### Create job
Alternatively, use the `> Azure ML: View Compute Properties` and `> Azure ML: De
## Models
-For more information, see [models](v1/concept-azure-machine-learning-architecture.md#models)
+For more information, see [train machine learning models](concept-train-machine-learning-model.md).
+For more information, see [train machine learning models](./v1/concept-train-machine-learning-model-v1.md).
### Create model
Alternatively, use the `> Azure ML: Remove Model` command in the command palette
## Endpoints
+For more information, see [endpoints](concept-endpoints.md).
For more information, see [endpoints](v1/concept-azure-machine-learning-architecture.md#endpoints). ### Create endpoint
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-cli.md
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK or CLI extension you are using:"]
-> * [v1](v1/how-to-manage-workspace-cli.md)
+> * [v1](v1/how-to-manage-workspace-cli.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](how-to-manage-workspace-cli.md) In this article, you learn how to create and manage Azure Machine Learning workspaces using the Azure CLI. The Azure CLI provides commands for managing Azure resources and is designed to get you working quickly with Azure, with an emphasis on automation. The machine learning extension to the CLI provides commands for working with Azure Machine Learning resources.
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace.md
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
-> * [v1](v1/how-to-manage-workspace.md)
+> * [v1](v1/how-to-manage-workspace.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current)](how-to-manage-workspace.md) In this article, you create, view, and delete [**Azure Machine Learning workspaces**](concept-workspace.md) for [Azure Machine Learning](overview-what-is-azure-machine-learning.md), using the [Azure portal](https://portal.azure.com) or the [SDK for Python](https://aka.ms/sdk-v2-install).
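A minimal SDK v2 sketch of creating a workspace (assuming `MLClient.workspaces.begin_create` is available in your SDK version; the names and location are placeholders):

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Workspace
from azure.identity import DefaultAzureCredential

ml_client = MLClient(DefaultAzureCredential(), "<subscription-id>", "<resource-group>")

ws = Workspace(name="example-workspace", location="eastus", display_name="Example workspace")
ml_client.workspaces.begin_create(ws).result()   # long-running operation; .result() waits for completion
```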
If you have problems in accessing your subscription, see [Set up authentication
-### Networking
+### Networking
-> [!IMPORTANT]
+> [!IMPORTANT]
> For more information on using a private endpoint and virtual network with your workspace, see [Network isolation and privacy](how-to-network-security-overview.md).
This class requires an existing virtual network.
# [Portal](#tab/azure-portal)
-1. The default network configuration is to use a __Public endpoint__, which is accessible on the public internet. To limit access to your workspace to an Azure Virtual Network you've created, you can instead select __Private endpoint__ as the __Connectivity method__, and then use __+ Add__ to configure the endpoint.
+1. The default network configuration is to use a __Public endpoint__, which is accessible on the public internet. To limit access to your workspace to an Azure Virtual Network you've created, you can instead select __Private endpoint__ as the __Connectivity method__, and then use __+ Add__ to configure the endpoint.
- :::image type="content" source="media/how-to-manage-workspace/select-private-endpoint.png" alt-text="Private endpoint selection":::
+ :::image type="content" source="media/how-to-manage-workspace/select-private-endpoint.png" alt-text="Private endpoint selection":::
-1. On the __Create private endpoint__ form, set the location, name, and virtual network to use. If you'd like to use the endpoint with a Private DNS Zone, select __Integrate with private DNS zone__ and select the zone using the __Private DNS Zone__ field. Select __OK__ to create the endpoint.
+1. On the __Create private endpoint__ form, set the location, name, and virtual network to use. If you'd like to use the endpoint with a Private DNS Zone, select __Integrate with private DNS zone__ and select the zone using the __Private DNS Zone__ field. Select __OK__ to create the endpoint.
- :::image type="content" source="media/how-to-manage-workspace/create-private-endpoint.png" alt-text="Private endpoint creation":::
+ :::image type="content" source="media/how-to-manage-workspace/create-private-endpoint.png" alt-text="Private endpoint creation":::
1. When you're finished configuring networking, you can select __Review + Create__, or advance to the optional __Advanced__ configuration.
By default, metadata for the workspace is stored in an Azure Cosmos DB instance
To limit the data that Microsoft collects on your workspace, select __High business impact workspace__ in the portal, or set `hbi_workspace=true ` in Python. For more information on this setting, see [Encryption at rest](concept-data-encryption.md#encryption-at-rest).
-> [!IMPORTANT]
-> Selecting high business impact can only be done when creating a workspace. You cannot change this setting after workspace creation.
+> [!IMPORTANT]
+> Selecting high business impact can only be done when creating a workspace. You cannot change this setting after workspace creation.
#### Use your own data encryption key
You can provide your own key for data encryption. Doing so creates the Azure Cos
Use the following steps to provide your own key:
-> [!IMPORTANT]
-> Before following these steps, you must first perform the following actions:
+> [!IMPORTANT]
+> Before following these steps, you must first perform the following actions:
> > Follow the steps in [Configure customer-managed keys](how-to-setup-customer-managed-keys.md) to: > * Register the Azure Cosmos DB provider
machine-learning How To Migrate From V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-migrate-from-v1.md
Last updated 09/23/2022
+monikerRange: 'azureml-api-2 || azureml-api-1'
# Upgrade to v2
Azure Machine Learning's v2 REST APIs, Azure CLI extension, and Python SDK intro
## Prerequisites - General familiarity with Azure Machine Learning and the v1 Python SDK.-- Understand [what is v2?](concept-v2.md)
+- Understand [what is v2?](concept-v2.md?view=azureml-api-2&preserve-view=true)
## Should I use v2?
v1 and v2 can co-exist in a workspace. You can reuse your existing assets in you
We do not recommend using the v1 and v2 SDKs together in the same code. It is technically possible to use v1 and v2 in the same code because they use different Azure namespaces. However, there are many classes with the same name across these namespaces (like Workspace, Model) which can cause confusion and make code readability and debuggability challenging. > [!IMPORTANT]
-> If your workspace uses a private endpoint, it will automatically have the `v1_legacy_mode` flag enabled, preventing usage of v2 APIs. See [how to configure network isolation with v2](how-to-configure-network-isolation-with-v2.md) for details.
+> If your workspace uses a private endpoint, it will automatically have the `v1_legacy_mode` flag enabled, preventing usage of v2 APIs. See [how to configure network isolation with v2](how-to-configure-network-isolation-with-v2.md?view=azureml-api-2&preserve-view=true) for details.
## Resources and assets in v1 and v2
This section gives an overview of specific resources and assets in Azure Machine
Workspaces don't need to be upgraded with v2. You can use the same workspace, regardless of whether you're using v1 or v2.
-If you create workspaces using automation, do consider upgrading the code for creating a workspace to v2. Typically Azure resources are managed via Azure Resource Manager (and Bicep) or similar resource provisioning tools. Alternatively, you can use the [CLI (v2) and YAML files](how-to-manage-workspace-cli.md#create-a-workspace).
+If you create workspaces using automation, do consider upgrading the code for creating a workspace to v2. Typically Azure resources are managed via Azure Resource Manager (and Bicep) or similar resource provisioning tools. Alternatively, you can use the [CLI (v2) and YAML files](how-to-manage-workspace-cli.md?view=azureml-api-2&preserve-view=true#create-a-workspace).
For a comparison of SDK v1 and v2 code, see [Workspace management in SDK v1 and SDK v2](migrate-to-v2-resource-workspace.md).
For a comparison of SDK v1 and v2 code, see [Compute management in SDK v1 and SD
### Endpoint and deployment (endpoint and web service in v1)
-With SDK/CLI v1, you can deploy models on ACI or AKS as web services. Your existing v1 model deployments and web services will continue to function as they are, but Using SDK/CLI v1 to deploy models on ACI or AKS as web services is now consiered as **legacy**. For new model deployments, we recommend upgrading to v2. In v2, we offer [managed endpoints or Kubernetes endpoints](./concept-endpoints.md). The following table guides our recommendation:
+With SDK/CLI v1, you can deploy models on ACI or AKS as web services. Your existing v1 model deployments and web services will continue to function as they are, but using SDK/CLI v1 to deploy models on ACI or AKS as web services is now considered **legacy**. For new model deployments, we recommend upgrading to v2. In v2, we offer [managed endpoints or Kubernetes endpoints](./concept-endpoints.md?view=azureml-api-2&preserve-view=true). The following table guides our recommendation:
|Endpoint type in v2|Upgrade from|Notes| |-|-|-|
Datasets are renamed to data assets. *Backwards compatibility* is provided, whic
It should be noted that *forwards compatibility* is **not** provided, which means you **cannot** use V2 data assets in V1.
-This article talks more about handling data in v2 - [Read and write data in a job](how-to-read-write-data-v2.md)
+This article talks more about handling data in v2 - [Read and write data in a job](how-to-read-write-data-v2.md?view=azureml-api-2&preserve-view=true)
For a comparison of SDK v1 and v2 code, see [Data assets in SDK v1 and v2](migrate-to-v2-assets-data.md).
Environments created from v1 can be used in v2. In v2, environments have new fea
The management of Key Vault secrets differs significantly in V2 compared to V1. The V1 set_secret and get_secret SDK methods are not available in V2. Instead, direct access using Key Vault client libraries should be used.
-For details about Key Vault, see [Use authentication credential secrets in Azure Machine Learning training jobs](how-to-use-secrets-in-runs.md).
+For details about Key Vault, see [Use authentication credential secrets in Azure Machine Learning training jobs](how-to-use-secrets-in-runs.md?view=azureml-api-2&preserve-view=true).
## Scenarios across the machine learning lifecycle
We recommend v2 for prototyping models. You may consider using the CLI for an in
We recommend v2 for production model training. Jobs consolidate the terminology and provide a set of consistency that allows for easier transition between types (for example, `command` to `sweep`) and a GitOps-friendly process for serializing jobs into YAML files.
-With v2, you should separate your machine learning code from the control plane code. This separation allows for easier iteration and allows for easier transition between local and cloud. We also recommend using MLflow for tracking and model logging. See the [MLflow concept article](concept-mlflow.md) for details.
+With v2, you should separate your machine learning code from the control plane code. This separation allows for easier iteration and allows for easier transition between local and cloud. We also recommend using MLflow for tracking and model logging. See the [MLflow concept article](concept-mlflow.md?view=azureml-api-2&preserve-view=true) for details.
### Production model deployment
You can obtain a YAML representation of any entity with the CLI via `az ml <enti
## Next steps -- [Get started with the CLI (v2)](how-to-configure-cli.md)
+- [Get started with the CLI (v2)](how-to-configure-cli.md?view=azureml-api-2&preserve-view=true)
- [Get started with the Python SDK (v2)](https://aka.ms/sdk-v2-install)
machine-learning How To Mltable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-mltable.md
# Working with tables in Azure Machine Learning
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you use:"]
-> * [v2 (current version)](how-to-mltable.md)
- [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
machine-learning How To Network Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-network-security-overview.md
Last updated 08/19/2022
+monikerRange: 'azureml-api-2 || azureml-api-1'
<!-- # Virtual network isolation and privacy overview --> # Secure Azure Machine Learning workspace resources using virtual networks (VNets) [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]-
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK or CLI version you are using:"]
-> * [SDK/CLI v1](v1/how-to-network-security-overview.md)
-> * [SDK/CLI v2 (current version)](how-to-network-security-overview.md)
Secure Azure Machine Learning workspace resources and compute environments using virtual networks (VNets). This article uses an example scenario to show you how to configure a complete virtual network. > [!TIP] > This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series: > > * [Secure the workspace resources](how-to-secure-workspace-vnet.md) > * [Secure the training environment](how-to-secure-training-vnet.md) > * [Secure the inference environment](how-to-secure-inferencing-vnet.md)
+> * [Secure the workspace resources](./v1/how-to-secure-workspace-vnet.md)
+> * [Secure the training environment](./v1/how-to-secure-training-vnet.md)
+> * [Secure the inference environment](./v1/how-to-secure-inferencing-vnet.md)
> * [Enable studio functionality](how-to-enable-studio-virtual-network.md) > * [Use custom DNS](how-to-custom-dns.md) > * [Use a firewall](how-to-access-azureml-behind-firewall.md) > * [API platform network isolation](how-to-configure-network-isolation-with-v2.md) > > For a tutorial on creating a secure workspace, see [Tutorial: Create a secure workspace](tutorial-create-secure-workspace.md) or [Tutorial: Create a secure workspace using a template](tutorial-create-secure-workspace-template.md).
If you want to access the workspace over the public internet while keeping all t
1. Create an [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) that will contain the resources used by the workspace. 1. Use __one__ of the following options to create a publicly accessible workspace:
+ :::moniker range="azureml-api-2"
* Create an Azure Machine Learning workspace that __does not__ use the virtual network. For more information, see [Manage Azure Machine Learning workspaces](how-to-manage-workspace.md). * Create a [Private Link-enabled workspace](how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint) to enable communication between your VNet and workspace. Then [enable public access to the workspace](#optional-enable-public-access).
+ :::moniker-end
+ :::moniker range="azureml-api-1"
+ * Create an Azure Machine Learning workspace that __does not__ use the virtual network. For more information, see [Manage Azure Machine Learning workspaces](./v1/how-to-manage-workspace.md).
+ * Create a [Private Link-enabled workspace](./v1/how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint) to enable communication between your VNet and workspace. Then [enable public access to the workspace](#optional-enable-public-access).
+ :::moniker-end
1. Add the following services to the virtual network by using _either_ a __service endpoint__ or a __private endpoint__. Also allow trusted Microsoft services to access these
+ :::moniker range="azureml-api-2"
| Service | Endpoint information | Allow trusted information | | -- | -- | -- | | __Azure Key Vault__| [Service endpoint](../key-vault/general/overview-vnet-service-endpoints.md)</br>[Private endpoint](../key-vault/general/private-link-service.md) | [Allow trusted Microsoft services to bypass this firewall](how-to-secure-workspace-vnet.md#secure-azure-key-vault) | | __Azure Storage Account__ | [Service and private endpoint](how-to-secure-workspace-vnet.md?tabs=se#secure-azure-storage-accounts)</br>[Private endpoint](how-to-secure-workspace-vnet.md?tabs=pe#secure-azure-storage-accounts) | [Grant access to trusted Azure services](../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services) | | __Azure Container Registry__ | [Private endpoint](../container-registry/container-registry-private-link.md) | [Allow trusted services](../container-registry/allow-access-trusted-services.md) |
+ :::moniker-end
+ :::moniker range="azureml-api-1"
+ | Service | Endpoint information | Allow trusted information |
+ | -- | -- | -- |
+ | __Azure Key Vault__| [Service endpoint](../key-vault/general/overview-vnet-service-endpoints.md)</br>[Private endpoint](../key-vault/general/private-link-service.md) | [Allow trusted Microsoft services to bypass this firewall](./v1/how-to-secure-workspace-vnet.md#secure-azure-key-vault) |
+ | __Azure Storage Account__ | [Service and private endpoint](./v1/how-to-secure-workspace-vnet.md?tabs=se#secure-azure-storage-accounts)</br>[Private endpoint](./v1/how-to-secure-workspace-vnet.md?tabs=pe#secure-azure-storage-accounts) | [Grant access to trusted Azure services](../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services) |
+ | __Azure Container Registry__ | [Private endpoint](../container-registry/container-registry-private-link.md) | [Allow trusted services](../container-registry/allow-access-trusted-services.md) |
+ :::moniker-end
1. In properties for the Azure Storage Account(s) for your workspace, add your client IP address to the allowed list in firewall settings. For more information, see [Configure firewalls and virtual networks](../storage/common/storage-network-security.md#configuring-access-from-on-premises-networks).
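If you take the Private Link route and then enable public access, that last piece can be scripted with the Python SDK v2. This is a minimal sketch, assuming the v2 `Workspace` entity exposes a `public_network_access` property; the subscription, resource group, and workspace names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient

# Placeholder identifiers - replace with your own values.
ml_client = MLClient(
    DefaultAzureCredential(), "<subscription-id>", "<resource-group>"
)

# Fetch the Private Link-enabled workspace and allow access from the public internet.
ws = ml_client.workspaces.get("<workspace-name>")
ws.public_network_access = "Enabled"

updated = ml_client.workspaces.begin_update(ws).result()
print(updated.public_network_access)
```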
If you want to access the workspace over the public internet while keeping all t
Use the following steps to secure your workspace and associated resources. These steps allow your services to communicate in the virtual network. 1. Create an [Azure Virtual Networks](../virtual-network/virtual-networks-overview.md) that will contain the workspace and other resources. Then create a [Private Link-enabled workspace](how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint) to enable communication between your VNet and workspace. 1. Add the following services to the virtual network by using _either_ a __service endpoint__ or a __private endpoint__. Also allow trusted Microsoft services to access these
Use the following steps to secure your workspace and associated resources. These
| __Azure Key Vault__| [Service endpoint](../key-vault/general/overview-vnet-service-endpoints.md)</br>[Private endpoint](../key-vault/general/private-link-service.md) | [Allow trusted Microsoft services to bypass this firewall](how-to-secure-workspace-vnet.md#secure-azure-key-vault) | | __Azure Storage Account__ | [Service and private endpoint](how-to-secure-workspace-vnet.md?tabs=se#secure-azure-storage-accounts)</br>[Private endpoint](how-to-secure-workspace-vnet.md?tabs=pe#secure-azure-storage-accounts) | [Grant access from Azure resource instances](../storage/common/storage-network-security.md#grant-access-from-azure-resource-instances)</br>**or**</br>[Grant access to trusted Azure services](../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services) | | __Azure Container Registry__ | [Private endpoint](../container-registry/container-registry-private-link.md) | [Allow trusted services](../container-registry/allow-access-trusted-services.md) |
+1. Create an [Azure Virtual Networks](../virtual-network/virtual-networks-overview.md) that will contain the workspace and other resources. Then create a [Private Link-enabled workspace](./v1/how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint) to enable communication between your VNet and workspace.
+1. Add the following services to the virtual network by using _either_ a __service endpoint__ or a __private endpoint__. Also allow trusted Microsoft services to access these
+ | Service | Endpoint information | Allow trusted information |
+ | -- | -- | -- |
+ | __Azure Key Vault__| [Service endpoint](../key-vault/general/overview-vnet-service-endpoints.md)</br>[Private endpoint](../key-vault/general/private-link-service.md) | [Allow trusted Microsoft services to bypass this firewall](./v1/how-to-secure-workspace-vnet.md#secure-azure-key-vault) |
+ | __Azure Storage Account__ | [Service and private endpoint](./v1/how-to-secure-workspace-vnet.md?tabs=se#secure-azure-storage-accounts)</br>[Private endpoint](./v1/how-to-secure-workspace-vnet.md?tabs=pe#secure-azure-storage-accounts) | [Grant access from Azure resource instances](../storage/common/storage-network-security.md#grant-access-from-azure-resource-instances)</br>**or**</br>[Grant access to trusted Azure services](../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services) |
+ | __Azure Container Registry__ | [Private endpoint](../container-registry/container-registry-private-link.md) | [Allow trusted services](../container-registry/allow-access-trusted-services.md) |
:::image type="content" source="./media/how-to-network-security-overview/secure-workspace-resources.svg" alt-text="Diagram showing how the workspace and associated resources communicate inside a VNet.":::
-For detailed instructions on how to complete these steps, see [Secure an Azure Machine Learning workspace](how-to-secure-workspace-vnet.md).
+For detailed instructions on how to complete these steps, see [Secure an Azure Machine Learning workspace](how-to-secure-workspace-vnet.md).
+For detailed instructions on how to complete these steps, see [Secure an Azure Machine Learning workspace](./v1/how-to-secure-workspace-vnet.md).
### Limitations
In this section, you learn how to secure the training environment in Azure Machi
To secure the training environment, use the following steps: 1. Create an Azure Machine Learning [compute instance and compute cluster in the virtual network](how-to-secure-training-vnet.md) to run the training job. 1. If your compute cluster or compute instance uses a public IP address, you must [Allow inbound communication](how-to-secure-training-vnet.md) so that management services can submit jobs to your compute resources. > [!TIP] > Compute cluster and compute instance can be created with or without a public IP address. If created with a public IP address, you get a load balancer with a public IP to accept the inbound access from Azure batch service and Azure Machine Learning service. You need to configure User Defined Routing (UDR) if you use a firewall. If created without a public IP, you get a private link service to accept the inbound access from Azure batch service and Azure Machine Learning service without a public IP.
+1. Create an Azure Machine Learning [compute instance and compute cluster in the virtual network](./v1/how-to-secure-training-vnet.md) to run the training job.
+1. If your compute cluster or compute instance uses a public IP address, you must [Allow inbound communication](./v1/how-to-secure-training-vnet.md) so that management services can submit jobs to your compute resources.
+
+ > [!TIP]
+ > Compute cluster and compute instance can be created with or without a public IP address. If created with a public IP address, you get a load balancer with a public IP to accept the inbound access from Azure batch service and Azure Machine Learning service. You need to configure User Defined Routing (UDR) if you use a firewall. If created without a public IP, you get a private link service to accept the inbound access from Azure batch service and Azure Machine Learning service without a public IP.
:::image type="content" source="./media/how-to-network-security-overview/secure-training-environment.svg" alt-text="Diagram showing how to secure managed compute clusters and instances."::: For detailed instructions on how to complete these steps, see [Secure a training environment](how-to-secure-training-vnet.md).
+For detailed instructions on how to complete these steps, see [Secure a training environment](./v1/how-to-secure-training-vnet.md).
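As a rough illustration of the first step, the following sketch creates a compute cluster without a public IP in an existing subnet using the Python SDK v2. It assumes the `AmlCompute` entity's `enable_node_public_ip` flag and the `NetworkSettings` class; all names are placeholders, and the linked articles remain the authoritative walkthroughs.

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import AmlCompute, NetworkSettings

# Placeholder identifiers - replace with your own values.
ml_client = MLClient(
    DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace-name>"
)

# Compute cluster placed in an existing VNet subnet and created without a public IP.
cluster = AmlCompute(
    name="cpu-cluster",
    size="STANDARD_DS3_V2",
    min_instances=0,
    max_instances=4,
    enable_node_public_ip=False,  # inbound management traffic uses a private link service
    network_settings=NetworkSettings(vnet_name="<vnet-name>", subnet="<subnet-name>"),
)

ml_client.compute.begin_create_or_update(cluster).result()
```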
### Example training job submission
In this section, you learn how Azure Machine Learning securely communicates betw
## Secure the inferencing environment You can enable network isolation for managed online endpoints to secure the following network traffic: * Inbound scoring requests. * Outbound communication with the workspace, Azure Container Registry, and Azure Blob Storage. For more information, see [Enable network isolation for managed online endpoints](how-to-secure-online-endpoint.md).
+In this section, you learn the options available for securing an inferencing environment when using the Azure CLI extension for machine learning v1 or the Azure Machine Learning Python SDK v1. For a v1 deployment, we recommend that you use Azure Kubernetes Service (AKS) clusters for high-scale, production deployments.
+
+You have two options for AKS clusters in a virtual network:
+
+- Deploy or attach a default AKS cluster to your VNet.
+- Attach a private AKS cluster to your VNet.
+
+**Default AKS clusters** have a control plane with public IP addresses. You can add a default AKS cluster to your VNet during the deployment or attach a cluster after it's created.
+
+**Private AKS clusters** have a control plane that can be accessed only through private IPs. Private AKS clusters must be attached after the cluster is created.
+
+For detailed instructions on how to add default and private clusters, see [Secure an inferencing environment](./v1/how-to-secure-inferencing-vnet.md).
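For quick orientation, attaching an existing cluster with the SDK v1 generally follows the pattern sketched below; the resource group and cluster names are placeholders, and the linked article covers the extra configuration a private cluster needs.

```python
from azureml.core import Workspace
from azureml.core.compute import AksCompute, ComputeTarget

ws = Workspace.from_config()

# Attach an AKS cluster that already exists in your VNet (default or private).
attach_config = AksCompute.attach_configuration(
    resource_group="<aks-resource-group>",
    cluster_name="<aks-cluster-name>",
)
aks_target = ComputeTarget.attach(ws, "vnet-aks", attach_config)
aks_target.wait_for_completion(show_output=True)
```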
+
+Regardless of whether you use a default AKS cluster or a private AKS cluster, if your AKS cluster is behind a VNet, your workspace and its associated resources (storage, key vault, and ACR) must have private endpoints or service endpoints in the same VNet as the AKS cluster.
+
+The following network diagram shows a secured Azure Machine Learning workspace with a private AKS cluster attached to the virtual network.
+ ## Optional: Enable public access
You can secure the workspace behind a VNet using a private endpoint and still al
After securing the workspace with a private endpoint, use the following steps to enable clients to develop remotely using either the SDK or Azure Machine Learning studio: 1. [Enable public access](how-to-configure-private-link.md#enable-public-access) to the workspace.
+1. [Enable public access](./v1/how-to-configure-private-link.md#enable-public-access) to the workspace.
1. [Configure the Azure Storage firewall](../storage/common/storage-network-security.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#grant-access-from-an-internet-ip-range) to allow communication with the IP address of clients that connect over the public internet. ## Optional: enable studio functionality
For more information on this configuration, see [Create an Azure Machine Learnin
This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series: * [Secure the workspace resources](how-to-secure-workspace-vnet.md) * [Secure the training environment](how-to-secure-training-vnet.md) * [Secure the inference environment](how-to-secure-inferencing-vnet.md)
+* [Secure the workspace resources](./v1/how-to-secure-workspace-vnet.md)
+* [Secure the training environment](./v1/how-to-secure-training-vnet.md)
+* [Secure the inference environment](./v1/how-to-secure-inferencing-vnet.md)
* [Enable studio functionality](how-to-enable-studio-virtual-network.md) * [Use custom DNS](how-to-custom-dns.md) * [Use a firewall](how-to-access-azureml-behind-firewall.md)
machine-learning How To Prepare Datasets For Automl Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prepare-datasets-for-automl-images.md
Last updated 05/26/2022
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning you are using:"]
-> * [v1](v1/how-to-prepare-datasets-for-automl-images-v1.md)
+> * [v1](v1/how-to-prepare-datasets-for-automl-images-v1.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](how-to-prepare-datasets-for-automl-images.md) > [!IMPORTANT]
machine-learning How To Prevent Data Loss Exfiltration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prevent-data-loss-exfiltration.md
Last updated 01/20/2023
+monikerRange: 'azureml-api-2 || azureml-api-1'
# Azure Machine Learning data exfiltration prevention
Service endpoint policies allow you to filter egress virtual network traffic to
### Inbound > [!IMPORTANT] > The following information __modifies__ the guidance provided in the [How to secure training environment](how-to-secure-training-vnet.md) article.
+> [!IMPORTANT]
+> The following information __modifies__ the guidance provided in the [How to secure training environment](./v1/how-to-secure-training-vnet.md) article.
When using Azure Machine Learning __compute instance__ _with a public IP address_, allow inbound traffic from Azure Batch management (service tag `BatchNodeManagement.<region>`). A compute instance _with no public IP_ __doesn't__ require this inbound communication. ### Outbound > [!IMPORTANT] > The following information is __in addition__ to the guidance provided in the [Secure training environment with virtual networks](how-to-secure-training-vnet.md) and [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md) articles.
+> [!IMPORTANT]
+> The following information is __in addition__ to the guidance provided in the [Secure training environment with virtual networks](./v1/how-to-secure-training-vnet.md) and [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md) articles.
Select the configuration that you're using:
__Allow__ outbound traffic over __ANY port 443__ to the following FQDNs. Replace
For more information, see [How to secure training environments](how-to-secure-training-vnet.md) and [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md).
+For more information, see [How to secure training environments](./v1/how-to-secure-training-vnet.md) and [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md).
## 3. Enable storage endpoint for the subnet
machine-learning How To Read Write Data V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-read-write-data-v2.md
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you use:"]
-> * [v1](v1/how-to-train-with-datasets.md)
+> * [v1](v1/how-to-train-with-datasets.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](how-to-read-write-data-v2.md) Learn how to read and write data for your jobs with the Azure Machine Learning Python SDK v2 and the Azure Machine Learning CLI extension v2.
machine-learning How To Run Batch Predictions Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-run-batch-predictions-designer.md
If you make some modifications in your training pipeline, you may want to update
## Next steps * Follow the [designer tutorial to train and deploy a regression model](tutorial-designer-automobile-price-train-score.md).
-* For how to publish and run a published pipeline using the SDK v1, see the [How to deploy pipelines](v1/how-to-deploy-pipelines.md) article.
+* For how to publish and run a published pipeline using the SDK v1, see the [How to deploy pipelines](v1/how-to-deploy-pipelines.md?view=azureml-api-1&preserve-view=true) article.
machine-learning How To Secure Inferencing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-inferencing-vnet.md
Last updated 09/06/2022
# Secure an Azure Machine Learning inferencing environment with virtual networks > [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK or CLI version you are using:"]
-> * [SDK/CLI v1](v1/how-to-secure-inferencing-vnet.md)
+> * [SDK/CLI v1](v1/how-to-secure-inferencing-vnet.md?view=azureml-api-1&preserve-view=true)
> * [SDK/CLI v2 (current version)](how-to-secure-inferencing-vnet.md) In this article, you learn how to secure inferencing environments (online endpoints) with a virtual network in Azure Machine Learning. There are two inference options that can be secured using a VNet:
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
ms.devlang: azurecli
[!INCLUDE [SDK v2](../../includes/machine-learning-sdk-v2.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"]
-> * [SDK v1](./v1/how-to-secure-training-vnet.md)
+> * [SDK v1](./v1/how-to-secure-training-vnet.md?view=azureml-api-1&preserve-view=true)
> * [SDK v2 (current version)](how-to-secure-training-vnet.md) Azure Machine Learning compute instance and compute cluster can be used to securely train models in a virtual network. When planning your environment, you can configure the compute instance/cluster with or without a public IP address. The general differences between the two are:
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-workspace-vnet.md
[!INCLUDE [sdk/cli v2](../../includes/machine-learning-dev-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK/CLI extension you are using:"]
-> * [v1](v1/how-to-secure-workspace-vnet.md)
> * [v2 (current version)](how-to-secure-workspace-vnet.md)
+> * [v1](v1/how-to-secure-workspace-vnet.md?view=azureml-api-1&preserve-view=true)
In this article, you learn how to secure an Azure Machine Learning workspace and its associated resources in a virtual network.
When your Azure Machine Learning workspace is configured with a private endpoint
### Azure Container Registry
-When ACR is behind a virtual network, Azure Machine Learning canΓÇÖt use it to directly build Docker images. Instead, the compute cluster is used to build the images.
+When ACR is behind a virtual network, Azure Machine Learning can't use it to directly build Docker images. Instead, the compute cluster is used to build the images.
> [!IMPORTANT]
-> The compute cluster used to build Docker images needs to be able to access the package repositories that are used to train and deploy your models. You may need to add network security rules that allow access to public repos, [use private Python packages](how-to-use-private-python-packages.md), or use [custom Docker images](v1/how-to-train-with-custom-image.md) that already include the packages.
+> The compute cluster used to build Docker images needs to be able to access the package repositories that are used to train and deploy your models. You may need to add network security rules that allow access to public repos, [use private Python packages](how-to-use-private-python-packages.md), or use [custom Docker images (SDK v1)](v1/how-to-train-with-custom-image.md?view=azureml-api-1&preserve-view=true) that already include the packages.
> [!WARNING] > If your Azure Container Registry uses a private endpoint or service endpoint to communicate with the virtual network, you cannot use a managed identity with an Azure Machine Learning compute cluster.
machine-learning How To Setup Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-authentication.md
# Set up authentication for Azure Machine Learning resources and workflows [!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-
+
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
-> * [v1](./v1/how-to-setup-authentication.md)
+> * [v1](./v1/how-to-setup-authentication.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](how-to-setup-authentication.md) Learn how to set up authentication to your Azure Machine Learning workspace from the Azure CLI or Azure Machine Learning SDK v2. Authentication to your Azure Machine Learning workspace is based on __Azure Active Directory__ (Azure AD) for most things. In general, there are four authentication workflows that you can use when connecting to the workspace:
machine-learning How To Train Keras https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-keras.md
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"]
-> * [v1](v1/how-to-train-keras.md)
+> * [v1](v1/how-to-train-keras.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](how-to-train-keras.md) In this article, learn how to run your Keras training scripts using the Azure Machine Learning Python SDK v2.
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-pytorch.md
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"]
-> * [v1](v1/how-to-train-pytorch.md)
+> * [v1](v1/how-to-train-pytorch.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](how-to-train-pytorch.md) In this article, you'll learn to train, hyperparameter tune, and deploy a [PyTorch](https://pytorch.org/) model using the Azure Machine Learning Python SDK v2.
machine-learning How To Train Scikit Learn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-scikit-learn.md
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"]
-> * [v1](v1/how-to-train-scikit-learn.md)
+> * [v1](v1/how-to-train-scikit-learn.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](how-to-train-scikit-learn.md) In this article, learn how to run your scikit-learn training scripts with Azure Machine Learning Python SDK v2.
machine-learning How To Train Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-tensorflow.md
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"]
-> * [v1](v1/how-to-train-tensorflow.md)
+> * [v1](v1/how-to-train-tensorflow.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](how-to-train-tensorflow.md) In this article, learn how to run your [TensorFlow](https://www.tensorflow.org/overview) training scripts at scale using Azure Machine Learning Python SDK v2.
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
If you are creating or updating a Kubernetes online deployment, you can see [Com
### ERROR: ImageBuildFailure
-This error is returned when the environment (docker image) is being built. You can check the build log for more information on the failure(s). The build log is located in the default storage for your Azure Machine Learning workspace. The exact location may be returned as part of the error. For example, "The build log is available in the workspace blob store '[storage-account-name]' under the path '/azureml/ImageLogs/your-image-id/build.log'". In this case, "azureml" is the name of the blob container in the storage account.
+This error is returned when the environment (docker image) is being built. You can check the build log for more information on the failure(s). The build log is located in the default storage for your Azure Machine Learning workspace. The exact location may be returned as part of the error. For example, `"The build log is available in the workspace blob store '[storage-account-name]' under the path '/azureml/ImageLogs/your-image-id/build.log'"`. In this case, "azureml" is the name of the blob container in the storage account.
This is a list of common image build failure scenarios: * [Azure Container Registry (ACR) authorization failure](#container-registry-authorization-failure) * [Generic or unknown failure](#generic-image-build-failure)
+We also recommend reviewing the default [probe settings](reference-yaml-deployment-managed-online.md#probesettings) in case of ImageBuild timeouts.
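If you'd rather read the build log programmatically than browse the storage account, the sketch below uses the `azure-storage-blob` package; the storage account name and image ID are placeholders that come from the error message shown above.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# Values from the error message - replace with your own.
account_url = "https://<storage-account-name>.blob.core.windows.net"
blob_path = "ImageLogs/<your-image-id>/build.log"  # inside the 'azureml' container

blob_service = BlobServiceClient(account_url=account_url, credential=DefaultAzureCredential())
log_bytes = blob_service.get_blob_client(container="azureml", blob=blob_path).download_blob().readall()
print(log_bytes.decode("utf-8"))
```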
+ #### Container registry authorization failure If the error message mentions `"container registry authorization failure"`, it means you cannot access the container registry with the current credentials.
These are common error codes when consuming managed online endpoints with REST r
| 404 | Not found | The endpoint doesn't have any valid deployment with positive weight. | | 408 | Request timeout | The model execution took longer than the timeout supplied in `request_timeout_ms` under `request_settings` of your model deployment config. | | 424 | Model Error | If your model container returns a non-200 response, Azure returns a 424. Check the `Model Status Code` dimension under the `Requests Per Minute` metric on your endpoint's [Azure Monitor Metric Explorer](../azure-monitor/essentials/metrics-getting-started.md). Or check response headers `ms-azureml-model-error-statuscode` and `ms-azureml-model-error-reason` for more information. If 424 comes with liveness or readiness probe failing, consider adjusting [probe settings](reference-yaml-deployment-managed-online.md#probesettings) to allow longer time to probe liveness or readiness of the container. |
-| 429 | Too many pending requests | Your model is getting more requests than it can handle. Azure Machine Learning allows maximum 2 * `max_concurrent_requests_per_instance` * `instance_count` requests in parallel at any time and rejects extra requests. You can confirm these settings in your model deployment config under `request_settings` and `scale_settings`, respectively. If you're using auto-scaling, this error means that your model is getting requests faster than the system can scale up. With auto-scaling, you can try to resend requests with [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff). Doing so can give the system time to adjust. Apart from enabling auto-scaling, you could also increase the number of instances by using the [code to calculate instance count](#how-to-calculate-instance-count). |
+| 429 | Too many pending requests | Your model is currently getting more requests than it can handle. Azure Machine Learning allows a maximum of `2 * max_concurrent_requests_per_instance * instance_count` requests to be processed in parallel at any given moment; requests that exceed this maximum are rejected. You can review your model deployment configuration under the `request_settings` and `scale_settings` sections to verify and adjust these settings. Additionally, as outlined in the [YAML definition for RequestSettings](reference-yaml-deployment-managed-online.md#requestsettings), make sure the environment variable `WORKER_COUNT` is correctly passed. <br><br> If you're using auto-scaling and get this error, your model is getting requests faster than the system can scale up. In this situation, consider resending requests with an [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff) to give the system time to adjust. You can also increase the number of instances by using [code to calculate instance count](#how-to-calculate-instance-count). These steps, combined with auto-scaling, help ensure that your model is ready to handle the influx of requests. |
| 429 | Rate-limiting | The number of requests per second reached the [limit](./how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints) of managed online endpoints. | | 500 | Internal server error | Azure Machine Learning-provisioned infrastructure is failing. |
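To make the backoff guidance for the 429 case concrete, here's a minimal retry sketch; the scoring URI, key, and payload are placeholders, and the retry counts are arbitrary illustrative values.

```python
import time
import requests

# Placeholder endpoint details - replace with your scoring URI, key, and input payload.
scoring_uri = "https://<endpoint-name>.<region>.inference.ml.azure.com/score"
headers = {"Authorization": "Bearer <key>", "Content-Type": "application/json"}
payload = {"data": [[1.0, 2.0, 3.0]]}

def score_with_backoff(max_retries: int = 5) -> requests.Response:
    """Resend throttled (429) requests with exponential backoff."""
    response = requests.post(scoring_uri, json=payload, headers=headers)
    for attempt in range(max_retries):
        if response.status_code != 429:
            break
        time.sleep(2 ** attempt)  # wait 1s, 2s, 4s, ... so the deployment can catch up or scale
        response = requests.post(scoring_uri, json=payload, headers=headers)
    return response

print(score_with_backoff().status_code)
```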
machine-learning How To Tune Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-tune-hyperparameters.md
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](v1/how-to-tune-hyperparameters-v1.md)
+> * [v1](v1/how-to-tune-hyperparameters-v1.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](how-to-tune-hyperparameters.md) Automate efficient hyperparameter tuning using Azure Machine Learning SDK v2 and CLI v2 by way of the SweepJob type.
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
Otherwise, you'll see a list of your recent automated ML experiments, including
1. For **classification**, you can also enable deep learning.
- If deep learning is enabled, validation is limited to _train_validation split_. [Learn more about validation options](how-to-configure-cross-validation-data-splits.md).
+ If deep learning is enabled, validation is limited to _train_validation split_. [Learn more about validation options (SDK v1)](./v1/how-to-configure-cross-validation-data-splits.md).
1. For **forecasting** you can,
Otherwise, you'll see a list of your recent automated ML experiments, including
1. The **[Optional] Validate and test** form allows you to do the following.
- 1. Specify the type of validation to be used for your training job. [Learn more about cross validation](how-to-configure-cross-validation-data-splits.md#prerequisites).
+ 1. Specify the type of validation to be used for your training job. [Learn more about cross validation (SDK v1)](./v1/how-to-configure-cross-validation-data-splits.md#prerequisites).
1. Forecasting tasks only supports k-fold cross validation.
machine-learning How To Use Automl Small Object Detect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automl-small-object-detect.md
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](v1/how-to-use-automl-small-object-detect-v1.md)
+> * [v1](v1/how-to-use-automl-small-object-detect-v1.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](how-to-use-automl-small-object-detect.md)
training_parameters:
```python image_object_detection_job.set_training_parameters(
- tile_grid_size='3x2'
+ tile_grid_size='3x2'
) ```
search_space:
```python image_object_detection_job.extend_search_space(
- SearchSpace(
- model_name=Choice(['fasterrcnn_resnet50_fpn']),
- tile_grid_size=Choice(['2x1', '3x2', '5x3'])
- )
+ SearchSpace(
+ model_name=Choice(['fasterrcnn_resnet50_fpn']),
+ tile_grid_size=Choice(['2x1', '3x2', '5x3'])
+ )
) ```
Doing so, may improve performance for some datasets, and won't incur the extra c
The following are the parameters you can use to control the tiling feature.
-| Parameter Name | Description | Default |
+| Parameter Name | Description | Default |
| |-| -| | `tile_grid_size` | The grid size to use for tiling each image. Available for use during training, validation, and inference.<br><br>Should be passed as a string in `'3x2'` format.<br><br> *Note: Setting this parameter increases the computation time proportionally, since all tiles and images are processed by the model.*| no default value | | `tile_overlap_ratio` | Controls the overlap ratio between adjacent tiles in each dimension. When the objects that fall on the tile boundary are too large to fit completely in one of the tiles, increase the value of this parameter so that the objects fit in at least one of the tiles completely.<br> <br> Must be a float in [0, 1).| 0.25 |
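Building on the earlier `set_training_parameters` examples, both tiling parameters can be set together. This is a sketch with illustrative values rather than recommendations, assuming `tile_overlap_ratio` is accepted alongside `tile_grid_size`.

```python
image_object_detection_job.set_training_parameters(
    tile_grid_size='3x2',
    tile_overlap_ratio=0.25  # increase if objects on tile boundaries don't fit entirely in one tile
)
```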
See the [object detection sample notebook](https://github.com/Azure/azureml-exam
>[!NOTE] > All images in this article are made available in accordance with the permitted use section of the [MIT licensing agreement](https://choosealicense.com/licenses/mit/).
-> Copyright © 2020 Roboflow, Inc.
+> Copyright &copy; 2020 Roboflow, Inc.
## Next steps
machine-learning How To Use Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-event-grid.md
Last updated 09/09/2022
+monikerRange: 'azureml-api-2 || azureml-api-1'
# Trigger applications, processes, or CI/CD workflows based on Azure Machine Learning events (preview)
Use [Azure Logic Apps](../logic-apps/index.yml) to configure emails for all your
![Screenshot shows the Save As and Create buttons in the Logic Apps Designer.](./media/how-to-use-event-grid/confirm-logic-app-create.png) - ### Example: Data drift triggers retraining > [!IMPORTANT]
In this example, a simple Data Factory pipeline is used to copy files into a blo
Now the data factory pipeline is triggered when drift occurs. View details on your data drift run and machine learning pipeline in [Azure Machine Learning studio](https://ml.azure.com). :::image type="content" source="./media/how-to-use-event-grid/view-in-workspace.png" alt-text="Screenshot showing pipeline endpoints."::: ## Next steps
machine-learning How To Use Mlflow Cli Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-cli-runs.md
ms.devlang: azurecli
# Track ML experiments and models with MLflow > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you're using:"]
-> * [v1](./v1/how-to-use-mlflow.md)
+> * [v1](./v1/how-to-use-mlflow.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](how-to-use-mlflow-cli-runs.md) __Tracking__ refers to the process of saving all experiment-related information that you may find relevant for every experiment you run. Such metadata varies based on your project, but it may include:
machine-learning How To Use Pipeline Parameter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-pipeline-parameter.md
Published endpoints are especially useful for retraining and batch prediction sc
In this article, you learned how to create pipeline parameters in the designer. Next, see how you can use pipeline parameters to [retrain models](how-to-retrain-designer.md) or perform [batch predictions](how-to-run-batch-predictions-designer.md).
-You can also learn how to [use pipelines programmatically with the SDK v1](v1/how-to-deploy-pipelines.md).
+You can also learn how to [use pipelines programmatically with the SDK v1](v1/how-to-deploy-pipelines.md?view=azureml-api-1&preserve-view=true).
machine-learning How To Use Secrets In Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-secrets-in-runs.md
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] > [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning Python SDK you are using:"]
-> * [v1](v1/how-to-use-secrets-in-runs.md)
+> * [v1](v1/how-to-use-secrets-in-runs.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](how-to-use-secrets-in-runs.md) Authentication information such as your user name and password are secrets. For example, if you connect to an external database in order to query training data, you would need to pass your username and password to the remote job context. Coding such values into training scripts in clear text is insecure as it would potentially expose the secret.
machine-learning How To Workspace Diagnostic Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-workspace-diagnostic-api.md
Last updated 09/14/2022
+monikerRange: 'azureml-api-2 || azureml-api-1'
# How to use workspace diagnostics [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning Python SDK you are using:"]
-> * [v1](v1/how-to-workspace-diagnostic-api.md)
-> * [v2 (current version)](how-to-workspace-diagnostic-api.md)
Azure Machine Learning provides a diagnostic API that can be used to identify problems with your workspace. Errors returned in the diagnostics report include information on how to resolve the problem.
You can use the workspace diagnostics from the Azure Machine Learning studio or
## Prerequisites [!INCLUDE [sdk](../../includes/machine-learning-sdk-v2-prereqs.md)]
+* An Azure Machine Learning workspace. If you don't have one, see [Create a workspace](quickstart-create-resources.md).
+* The [Azure Machine Learning SDK v1 for Python](/python/api/overview/azure/ml).
## Diagnostics from studio
After diagnostics run, a list of any detected problems is returned. This list in
The following snippet demonstrates how to use workspace diagnostics from Python [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] ```python
ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group)
resp = ml_client.workspaces.begin_diagnose(workspace) print(resp) ```+
+```python
+from azureml.core import Workspace
+
+ws = Workspace.from_config()
+
+diag_param = {
+ "value": {
+ }
+ }
+
+resp = ws.diagnose_workspace(diag_param)
+print(resp)
+```
The response is a JSON document that contains information on any problems detected with the workspace. The following JSON is an example response:
The response is a JSON document that contains information on any problems detect
If no problems are detected, an empty JSON document is returned. For more information, see the [Workspace](/python/api/azure-ai-ml/azure.ai.ml.entities.workspace) reference.
+For more information, see the [Workspace.diagnose_workspace()](/python/api/azureml-core/azureml.core.workspace(class)#diagnose-workspace-diagnose-parameters-) reference.
## Next steps
machine-learning Migrate To V2 Assets Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-assets-data.md
Last updated 02/13/2023
+monikerRange: 'azureml-api-1 || azureml-api-2'
# Upgrade data management to SDK v2
machine-learning Migrate To V2 Assets Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-assets-model.md
Last updated 12/01/2022
+monikerRange: 'azureml-api-1 || azureml-api-2'
# Upgrade model management to SDK v2
machine-learning Migrate To V2 Command Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-command-job.md
Last updated 09/16/2022
+monikerRange: 'azureml-api-1 || azureml-api-2'
# Upgrade script run to SDK v2
machine-learning Migrate To V2 Deploy Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-deploy-endpoints.md
Last updated 09/16/2022
+monikerRange: 'azureml-api-1 || azureml-api-2'
# Upgrade deployment endpoints to SDK v2
machine-learning Migrate To V2 Execution Automl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-automl.md
Last updated 09/16/2022
+monikerRange: 'azureml-api-1 || azureml-api-2'
# Upgrade AutoML to SDK v2
machine-learning Migrate To V2 Execution Hyperdrive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-hyperdrive.md
Last updated 09/16/2022
+monikerRange: 'azureml-api-1 || azureml-api-2'
# Upgrade hyperparameter tuning to SDK v2
machine-learning Migrate To V2 Execution Parallel Run Step https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-parallel-run-step.md
Last updated 09/16/2022
+monikerRange: 'azureml-api-1 || azureml-api-2'
# Upgrade parallel run step to SDK v2
machine-learning Migrate To V2 Execution Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-pipeline.md
Last updated 09/16/2022
+monikerRange: 'azureml-api-1 || azureml-api-2'
# Upgrade pipelines to SDK v2
machine-learning Migrate To V2 Local Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-local-runs.md
Last updated 09/16/2022
+monikerRange: 'azureml-api-1 || azureml-api-2'
# Upgrade local runs to SDK v2
machine-learning Migrate To V2 Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-managed-online-endpoints.md
Last updated 09/28/2022
+monikerRange: 'azureml-api-1 || azureml-api-2'
# Upgrade steps for Azure Container Instances web services to managed online endpoints
machine-learning Migrate To V2 Resource Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-resource-compute.md
Last updated 02/14/2023
+monikerRange: 'azureml-api-1 || azureml-api-2'
# Upgrade compute management to v2
machine-learning Migrate To V2 Resource Datastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-resource-datastore.md
Last updated 09/16/2022
+monikerRange: 'azureml-api-1 || azureml-api-2'
# Upgrade datastore management to SDK v2
machine-learning Migrate To V2 Resource Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-resource-workspace.md
Last updated 09/16/2022
+monikerRange: 'azureml-api-1 || azureml-api-2'
# Upgrade workspace management to SDK v2
machine-learning Reference Automl Images Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-hyperparameters.md
Last updated 01/18/2022
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning you are using:"]
-> * [v1](v1/reference-automl-images-hyperparameters-v1.md)
+> * [v1](v1/reference-automl-images-hyperparameters-v1.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](reference-automl-images-hyperparameters.md) Learn which hyperparameters are available specifically for computer vision tasks in automated ML experiments.
machine-learning Reference Automl Images Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-schema.md
Last updated 09/09/2022
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning you are using:"]
-> * [v1](v1/reference-automl-images-schema-v1.md)
+> * [v1](v1/reference-automl-images-schema-v1.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](reference-automl-images-schema.md)
If `model_explainability`, `visualizations`, `attributions` are set to `True` in
> [!NOTE]
-> The images used in this article are from the Fridge Objects dataset, copyright © Microsoft Corporation and available at [computervision-recipes/01_training_introduction.ipynb](https://github.com/microsoft/computervision-recipes/blob/master/scenarios/detection/01_training_introduction.ipynb) under the [MIT License](https://github.com/microsoft/computervision-recipes/blob/master/LICENSE).
+> The images used in this article are from the Fridge Objects dataset, copyright &copy; Microsoft Corporation and available at [computervision-recipes/01_training_introduction.ipynb](https://github.com/microsoft/computervision-recipes/blob/master/scenarios/detection/01_training_introduction.ipynb) under the [MIT License](https://github.com/microsoft/computervision-recipes/blob/master/LICENSE).
## Next steps
machine-learning Reference Yaml Deployment Managed Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-managed-online.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| Key | Type | Description | Default value | | | - | -- | - | | `request_timeout_ms` | integer | The scoring timeout in milliseconds. | `5000` |
-| `max_concurrent_requests_per_instance` | integer | The maximum number of concurrent requests per instance allowed for the deployment. <br><br> Set to the number of requests that your model can process concurrently on a single node. Setting this value higher than your model's actual concurrency can lead to higher latencies. Setting this value too low may lead to under utilized nodes. Setting too low may also result in requests being rejected with a 429 HTTP status code, as the system will opt to fail fast. <br><br> For more information, see [Troubleshooting online endpoints: HTTP status codes](how-to-troubleshoot-online-endpoints.md#http-status-codes). | `1` |
+| `max_concurrent_requests_per_instance` | integer | The maximum number of concurrent requests per instance allowed for the deployment. <br><br> **Note:** If you're using [AzureML Inference Server](how-to-inference-server-http.md) or [AzureML Inference Images](concept-prebuilt-docker-images-inference.md), your model must be configured to handle concurrent requests. To do so, pass `WORKER_COUNT: <int>` as an environment variable. For more information about `WORKER_COUNT`, see [AzureML Inference Server Parameters](how-to-inference-server-http.md#server-parameters). <br><br> **Note:** Set to the number of requests that your model can process concurrently on a single node. Setting this value higher than your model's actual concurrency can lead to higher latencies. Setting this value too low may lead to underutilized nodes. Setting it too low may also result in requests being rejected with a 429 HTTP status code, as the system will opt to fail fast. For more information, see [Troubleshooting online endpoints: HTTP status codes](how-to-troubleshoot-online-endpoints.md#http-status-codes). | `1` |
| `max_queue_wait_ms` | integer | The maximum amount of time in milliseconds a request will stay in the queue. | `500` | ### ProbeSettings
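To relate the `RequestSettings` keys above to a deployment definition, here's a hedged Python SDK v2 sketch; the endpoint, model reference, and values are placeholders, and passing `WORKER_COUNT` as an environment variable assumes your scoring image honors it as described in the note above.

```python
from azure.ai.ml.entities import ManagedOnlineDeployment, OnlineRequestSettings

deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="<endpoint-name>",
    model="azureml:<model-name>:<version>",  # placeholder model reference
    instance_type="Standard_DS3_v2",
    instance_count=2,
    request_settings=OnlineRequestSettings(
        request_timeout_ms=5000,
        max_concurrent_requests_per_instance=4,
        max_queue_wait_ms=500,
    ),
    # Match the concurrency above so the inference server runs enough worker processes.
    environment_variables={"WORKER_COUNT": "4"},
)
```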
machine-learning Reference Yaml Job Parallel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-parallel.md
Last updated 09/27/2022
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](v1/reference-pipeline-yaml.md)
+> * [v1](v1/reference-pipeline-yaml.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](reference-yaml-job-pipeline.md) > [!IMPORTANT]
machine-learning Reference Yaml Job Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-pipeline.md
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
-> * [v1](v1/reference-pipeline-yaml.md)
+> * [v1](v1/reference-pipeline-yaml.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](reference-yaml-job-pipeline.md) The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json.
machine-learning Samples Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/samples-notebooks.md
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning version you are using:"]
-> * [v1](v1/samples-notebooks-v1.md)
+> * [v1](v1/samples-notebooks-v1.md?view=azureml-api-1&preserve-view=true)
> * [v2](samples-notebooks.md) The [AzureML-Examples](https://github.com/Azure/azureml-examples) repository includes the latest (v2) Azure Machine Learning Python CLI and SDK samples. For information on the various example types, see the [readme](https://github.com/Azure/azureml-examples#azure-machine-learning-examples).
machine-learning Tutorial Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-auto-train-image-models.md
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning you are using:"]
-> * [v1](v1/tutorial-auto-train-image-models-v1.md)
+> * [v1](v1/tutorial-auto-train-image-models-v1.md?view=azureml-api-1&preserve-view=true)
> * [v2 (current version)](tutorial-auto-train-image-models.md)
machine-learning Tutorial Automated Ml Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-automated-ml-forecast.md
After you load and configure your data, set up your remote compute target and se
Field | Description | Value for tutorial -||
- Compute name | A unique name that identifies your compute context. | bike-compute
+ Compute name | A unique name that identifies your compute context. | bike-compute
Min / Max nodes| To profile data, you must specify 1 or more nodes.|Min nodes: 1<br>Max nodes: 6 Idle seconds before scale down | Idle time before the cluster is automatically scaled down to the minimum node count.|120 (default) Advanced settings | Settings to configure and authorize a virtual network for your experiment.| None
See this article for steps on how to create a Power BI supported schema to facil
+ Learn more about [automated machine learning](concept-automated-ml.md). + For more information on classification metrics and charts, see the [Understand automated machine learning results](how-to-understand-automated-ml.md) article.
-+ Learn more about [featurization](how-to-configure-auto-features.md#featurization).
-+ Learn more about [data profiling](v1/how-to-connect-data-ui.md#profile).
>[!NOTE] > This bike share dataset has been modified for this tutorial. This dataset was made available as part of a [Kaggle competition](https://www.kaggle.com/c/bike-sharing-demand/data) and was originally available via [Capital Bikeshare](https://www.capitalbikeshare.com/system-data). It can also be found within the [UCI Machine Learning Database](http://archive.ics.uci.edu/ml/datasets/Bike+Sharing+Dataset).<br><br>
machine-learning Tutorial Create Secure Workspace Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace-template.md
Last updated 12/02/2021
+monikerRange: 'azureml-api-2 || azureml-api-1'
# How to create a secure workspace by using template
When using the Terraform template, the jump box name is passed using the `dsvm_n
> * [Create/manage VMs (Windows)](../virtual-machines/windows/tutorial-manage-vm.md). > * [Create/manage compute instance](how-to-create-manage-compute-instance.md). To continue learning how to use the secured workspace from the DSVM, see [Tutorial: Azure Machine Learning in a day](tutorial-azure-ml-in-a-day.md).
-To learn more about common secure workspace configurations and input/output requirements, see [Azure Machine Learning secure workspace traffic flow](concept-secure-network-traffic-flow.md).
+To learn more about common secure workspace configurations and input/output requirements, see [Azure Machine Learning secure workspace traffic flow](concept-secure-network-traffic-flow.md).
machine-learning Tutorial Create Secure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace.md
Last updated 09/06/2022
+monikerRange: 'azureml-api-2 || azureml-api-1'
# How to create a secure workspace
To delete all resources created in this tutorial, use the following steps:
1. Enter the resource group name, then select __Delete__. ## Next steps
-Now that you've created a secure workspace and can access studio, learn how to [deploy a model to an online endpoint with network isolation](how-to-secure-online-endpoint.md).
+Now that you've created a secure workspace and can access studio, learn how to [deploy a model to an online endpoint with network isolation](how-to-secure-online-endpoint.md).
+Now that you've created a secure workspace, learn how to [deploy a model](./v1/how-to-deploy-and-where.md).
machine-learning Tutorial First Experiment Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-first-experiment-automated-ml.md
After you load and configure your data, you can set up your experiment. This set
Field | Description | Value for tutorial -||
- Compute name | A unique name that identifies your compute context. | automl-compute
+ Compute name | A unique name that identifies your compute context. | automl-compute
Min / Max nodes| To profile data, you must specify 1 or more nodes.|Min nodes: 1<br>Max nodes: 6 Idle seconds before scale down | Idle time before the cluster is automatically scaled down to the minimum node count.|120 (default) Advanced settings | Settings to configure and authorize a virtual network for your experiment.| None
In this automated machine learning tutorial, you used Azure Machine Learning's a
+ Learn more about [automated machine learning](concept-automated-ml.md). + For more information on classification metrics and charts, see the [Understand automated machine learning results](how-to-understand-automated-ml.md) article.
-+ Learn more about [featurization](how-to-configure-auto-features.md#featurization).
-+ Learn more about [data profiling](v1/how-to-connect-data-ui.md#profile).
>[!NOTE]
machine-learning Concept Automated Ml V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-automated-ml-v1.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] > [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning Python SDK you are using:"] > * [v1](concept-automated-ml-v1.md)
-> * [v2 (current version)](../concept-automated-ml.md)
+> * [v2 (current version)](../concept-automated-ml.md?view=azureml-api-2&preserve-view=true)
Automated machine learning, also referred to as automated ML or AutoML, is the process of automating the time-consuming, iterative tasks of machine learning model development. It allows data scientists, analysts, and developers to build ML models with high scale, efficiency, and productivity all while sustaining model quality. Automated ML in Azure Machine Learning is based on a breakthrough from our [Microsoft Research division](https://www.microsoft.com/research/project/automl/).
ML professionals and developers across industries can use automated ML to:
### Classification
-Classification is a common machine learning task. Classification is a type of supervised learning in which models learn using training data, and apply those learnings to new data. Azure Machine Learning offers featurizations specifically for these tasks, such as deep neural network text featurizers for classification. Learn more about [featurization (v1) options](../how-to-configure-auto-features.md#featurization).
+Classification is a common machine learning task. Classification is a type of supervised learning in which models learn using training data, and apply those learnings to new data. Azure Machine Learning offers featurizations specifically for these tasks, such as deep neural network text featurizers for classification. Learn more about [featurization (v1) options](how-to-configure-auto-features.md#featurization).
The main goal of classification models is to predict which categories new data will fall into based on learnings from its training data. Common classification examples include fraud detection, handwriting recognition, and object detection. Learn more and see an example at [Create a classification model with automated ML (v1)](../tutorial-first-experiment-automated-ml.md).
Authoring AutoML models for vision tasks is supported via the Azure Machine Lear
Learn how to [set up AutoML training for computer vision models](../how-to-auto-train-image-models.md). Automated ML for images supports the following computer vision tasks:
To help confirm that such bias isn't applied to the final recommended model, aut
>[!IMPORTANT] > Testing your models with a test dataset to evaluate generated models is a preview feature. This capability is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time.
-Learn how to [configure AutoML experiments to use test data (preview) with the SDK (v1)](../how-to-configure-cross-validation-data-splits.md#provide-test-data-preview) or with the [Azure Machine Learning studio](../how-to-use-automated-ml-for-ml-models.md#create-and-run-experiment).
+Learn how to [configure AutoML experiments to use test data (preview) with the SDK (v1)](how-to-configure-cross-validation-data-splits.md#provide-test-data-preview) or with the [Azure Machine Learning studio](../how-to-use-automated-ml-for-ml-models.md#create-and-run-experiment).
You can also [test any existing automated ML model (preview) (v1)](../how-to-configure-auto-train.md), including models from child jobs, by providing your own test data or by setting aside a portion of your training data.
You can also [test any existing automated ML model (preview) (v1)](../how-to-con
Feature engineering is the process of using domain knowledge of the data to create features that help ML algorithms learn better. In Azure Machine Learning, scaling and normalization techniques are applied to facilitate feature engineering. Collectively, these techniques and feature engineering are referred to as featurization.
-For automated machine learning experiments, featurization is applied automatically, but can also be customized based on your data. [Learn more about what featurization is included (v1)](../how-to-configure-auto-features.md#featurization) and how AutoML helps [prevent over-fitting and imbalanced data](../concept-manage-ml-pitfalls.md) in your models.
+For automated machine learning experiments, featurization is applied automatically, but can also be customized based on your data. [Learn more about what featurization is included (v1)](how-to-configure-auto-features.md#featurization) and how AutoML helps [prevent over-fitting and imbalanced data](../concept-manage-ml-pitfalls.md) in your models.
> [!NOTE] > Automated machine learning featurization steps (feature normalization, handling missing data,
Enable this setting with:
+ Azure Machine Learning studio: Enable **Automatic featurization** in the **View additional configuration** section [with these (v1) steps](../how-to-use-automated-ml-for-ml-models.md#customize-featurization).
-+ Python SDK: Specify `"feauturization": 'auto' / 'off' / 'FeaturizationConfig'` in your [AutoMLConfig](/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig) object. Learn more about [enabling featurization (v1)](../how-to-configure-auto-features.md).
++ Python SDK: Specify `"featurization": 'auto' / 'off' / 'FeaturizationConfig'` in your [AutoMLConfig](/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig) object. Learn more about [enabling featurization (v1)](how-to-configure-auto-features.md). ## <a name="ensemble"></a> Ensemble models
How-to articles provide additional detail into what functionality automated ML o
+ Learn how to [train computer vision models with Python (v1)](../how-to-auto-train-image-models.md).
-+ Learn how to [view the generated code from your automated ML models](../how-to-generate-automl-training-code.md).
++ Learn how to [view the generated code from your automated ML models](how-to-generate-automl-training-code.md). ### Jupyter notebook samples
machine-learning Concept Azure Machine Learning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-azure-machine-learning-architecture.md
Last updated 10/21/2021
+monikerRange: 'azureml-api-1'
#Customer intent: As a data scientist, I want to understand the big picture about how Azure Machine Learning works.
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-data.md
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you are using:"] > * [v1](concept-data.md)
-> * [v2 (current version)](../concept-data.md)
+> * [v2 (current version)](../concept-data.md?view=azureml-api-2&preserve-view=true)
Azure Machine Learning makes it easy to connect to your data in the cloud. It provides an abstraction layer over the underlying storage service, so you can securely access and work with your data without having to write code specific to your storage type. Azure Machine Learning also provides the following data capabilities:
machine-learning Concept Mlflow V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-mlflow-v1.md
> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning developer platform that you're using:"] > * [v1](concept-mlflow-v1.md)
-> * [v2 (current version)](../concept-mlflow.md)
+> * [v2 (current version)](../concept-mlflow.md?view=azureml-api-2&preserve-view=true)
[MLflow](https://www.mlflow.org) is an open-source library for managing the life cycle of your machine learning experiments. MLflow's tracking URI and logging API are collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api). This component of MLflow logs and tracks your training run metrics and model artifacts, no matter where your experiment's environment is--on your computer, on a remote compute target, on a virtual machine, or in an Azure Databricks cluster.
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-model-management-and-deployment.md
Last updated 01/04/2023
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you are using:"] > * [v1](concept-model-management-and-deployment.md)
-> * [v2 (current version)](../concept-model-management-and-deployment.md)
+> * [v2 (current version)](../concept-model-management-and-deployment.md?view=azureml-api-2&preserve-view=true)
In this article, learn how to apply Machine Learning Operations (MLOps) practices in Azure Machine Learning for the purpose of managing the lifecycle of your models. Applying MLOps practices can improve the quality and consistency of your machine learning solutions.
machine-learning Concept Network Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-network-data-access.md
Last updated 11/16/2022
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you are using:"] > * [v1](concept-network-data-access.md)
-> * [v2 (current version)](../how-to-administrate-data-authentication.md)
+> * [v2 (current version)](../how-to-administrate-data-authentication.md?view=azureml-api-2&preserve-view=true)
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] [!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)]
machine-learning Concept Train Machine Learning Model V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-train-machine-learning-model-v1.md
ms.devlang: azurecli
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning version you are using:"] > * [v1](concept-train-machine-learning-model-v1.md)
-> * [v2 (current)](../concept-train-machine-learning-model.md)
+> * [v2 (current)](../concept-train-machine-learning-model.md?view=azureml-api-2&preserve-view=true)
Azure Machine Learning provides several ways to train your models, from code-first solutions using the SDK to low-code solutions such as automated machine learning and the visual designer. Use the following list to determine which training method is right for you:
machine-learning How To Access Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-access-data.md
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you are using:"] > * [v1](how-to-access-data.md)
-> * [v2 (current version)](../how-to-datastore.md)
+> * [v2 (current version)](../how-to-datastore.md?view=azureml-api-2&preserve-view=true)
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] [!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)]
machine-learning How To Attach Compute Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-attach-compute-targets.md
> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"] > * [v1](how-to-attach-compute-targets.md)
-> * [v2 (current version)](../how-to-train-model.md)
-
+> * [v2 (current version)](../how-to-train-model.md?view=azureml-api-2&preserve-view=true)
+
Learn how to attach Azure compute resources to your Azure Machine Learning workspace with SDK v1. Then you can use these resources as training and inference [compute targets](../concept-compute-target.md) in your machine learning tasks. In this article, learn how to set up your workspace to use these compute resources:
machine-learning How To Auto Train Forecast V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-forecast-v1.md
show_latex: true
> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning SDK you are using:"] > * [v1](how-to-auto-train-forecast-v1.md)
-> * [v2 (current version)](../how-to-auto-train-forecast.md)
+> * [v2 (current version)](../how-to-auto-train-forecast.md?view=azureml-api-2&preserve-view=true)
In this article, you learn how to set up AutoML training for time-series forecasting models with Azure Machine Learning automated ML in the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/).
automl_config = AutoMLConfig(task='forecasting',
```
-You can also bring your own validation data, learn more in [Configure data splits and cross-validation in AutoML](../how-to-configure-cross-validation-data-splits.md#provide-validation-data).
+You can also bring your own validation data, learn more in [Configure data splits and cross-validation in AutoML](how-to-configure-cross-validation-data-splits.md#provide-validation-data).
Learn more about how AutoML applies cross validation to [prevent over-fitting models](../concept-manage-ml-pitfalls.md#prevent-overfitting).
An `Error exception` is raised for any series in the dataset that does not meet
### Featurization steps
-In every automated machine learning experiment, automatic scaling and normalization techniques are applied to your data by default. These techniques are types of **featurization** that help *certain* algorithms that are sensitive to features on different scales. Learn more about default featurization steps in [Featurization in AutoML](../how-to-configure-auto-features.md#automatic-featurization)
+In every automated machine learning experiment, automatic scaling and normalization techniques are applied to your data by default. These techniques are types of **featurization** that help *certain* algorithms that are sensitive to features on different scales. Learn more about default featurization steps in [Featurization in AutoML](how-to-configure-auto-features.md#automatic-featurization)
However, the following steps are performed only for `forecasting` task types:
Supported customizations for `forecasting` tasks include:
|**Transformer parameter update** |Update the parameters for the specified transformer. Currently supports *Imputer* (fill_value and median).| |**Drop columns** |Specifies columns to drop from being featurized.|
-To customize featurizations with the SDK, specify `"featurization": FeaturizationConfig` in your `AutoMLConfig` object. Learn more about [custom featurizations](../how-to-configure-auto-features.md#customize-featurization).
+To customize featurizations with the SDK, specify `"featurization": FeaturizationConfig` in your `AutoMLConfig` object. Learn more about [custom featurizations](how-to-configure-auto-features.md#customize-featurization).
>[!NOTE] > The **drop columns** functionality is deprecated as of SDK version 1.19. Drop columns from your dataset as part of data cleansing, prior to consuming it in your automated ML experiment.
machine-learning How To Auto Train Image Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-image-models-v1.md
# Set up AutoML to train computer vision models with Python (v1) [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-
+
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"] > * [v1](how-to-auto-train-image-models-v1.md)
-> * [v2 (current version)](../how-to-auto-train-image-models.md)
-
+> * [v2 (current version)](../how-to-auto-train-image-models.md?view=azureml-api-2&preserve-view=true)
+
[!INCLUDE [cli-version-info](../../../includes/machine-learning-cli-version-1-only.md)]
In addition to controlling the model algorithm, you can also tune hyperparameter
### Data augmentation
-In general, deep learning model performance can often improve with more data. Data augmentation is a practical technique to amplify the data size and variability of a dataset which helps to prevent overfitting and improve the modelΓÇÖs generalization ability on unseen data. Automated ML applies different data augmentation techniques based on the computer vision task, before feeding input images to the model. Currently, there is no exposed hyperparameter to control data augmentations.
+In general, deep learning model performance can often improve with more data. Data augmentation is a practical technique to amplify the data size and variability of a dataset which helps to prevent overfitting and improve the model's generalization ability on unseen data. Automated ML applies different data augmentation techniques based on the computer vision task, before feeding input images to the model. Currently, there is no exposed hyperparameter to control data augmentations.
|Task | Impacted dataset | Data augmentation technique(s) applied | |-|-||
-|Image classification (multi-class and multi-label) | Training <br><br><br> Validation & Test| Random resize and crop, horizontal flip, color jitter (brightness, contrast, saturation, and hue), normalization using channel-wise ImageNetΓÇÖs mean and standard deviation <br><br><br>Resize, center crop, normalization |
+|Image classification (multi-class and multi-label) | Training <br><br><br> Validation & Test| Random resize and crop, horizontal flip, color jitter (brightness, contrast, saturation, and hue), normalization using channel-wise ImageNet's mean and standard deviation <br><br><br>Resize, center crop, normalization |
|Object detection, instance segmentation| Training <br><br> Validation & Test |Random crop around bounding boxes, expand, horizontal flip, normalization, resize <br><br><br>Normalization, resize |
|Object detection using yolov5| Training <br><br> Validation & Test |Mosaic, random affine (rotation, translation, scale, shear), horizontal flip <br><br><br> Letterbox resizing|
Review detailed code examples and use cases in the [GitHub notebook repository f
* [Tutorial: Train an object detection model with AutoML and Python](../tutorial-auto-train-image-models.md). * [Make predictions with ONNX on computer vision models from AutoML](../how-to-inference-onnx-automl-image-models.md)
-* [Troubleshoot automated ML experiments](../how-to-troubleshoot-auto-ml.md).
+* [Troubleshoot automated ML experiments](how-to-troubleshoot-auto-ml.md).
machine-learning How To Auto Train Nlp Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-nlp-models-v1.md
Last updated 03/15/2022
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] > [!div class="op_single_selector" title1="Select the version of the developer platform of Azure Machine Learning you are using:"] > * [v1](how-to-auto-train-nlp-models-v1.md)
-> * [v2 (current version)](../how-to-auto-train-nlp-models.md)
+> * [v2 (current version)](../how-to-auto-train-nlp-models.md?view=azureml-api-2&preserve-view=true)
[!INCLUDE [preview disclaimer](../../../includes/machine-learning-preview-generic-disclaimer.md)]
In this article, you learn how to train natural language processing (NLP) models
Automated ML supports NLP which allows ML professionals and data scientists to bring their own text data and build custom models for tasks such as, multi-class text classification, multi-label text classification, and named entity recognition (NER).
-You can seamlessly integrate with the [Azure Machine Learning data labeling](../how-to-create-text-labeling-projects.md) capability to label your text data or bring your existing labeled data. Automated ML provides the option to use distributed training on multi-GPU compute clusters for faster model training. The resulting model can be operationalized at scale by leveraging Azure Machine LearningΓÇÖs MLOps capabilities.
+You can seamlessly integrate with the [Azure Machine Learning data labeling](../how-to-create-text-labeling-projects.md) capability to label your text data or bring your existing labeled data. Automated ML provides the option to use distributed training on multi-GPU compute clusters for faster model training. The resulting model can be operationalized at scale by leveraging Azure Machine Learning's MLOps capabilities.
## Prerequisites
Multi-class text classification| `'eng'` <br> `'deu'` <br> `'mul'`| English
Named entity recognition (NER)| `'eng'` <br> `'deu'` <br> `'mul'`| English&nbsp;BERT&nbsp;[cased](https://huggingface.co/bert-base-cased) <br> [German BERT](https://huggingface.co/bert-base-german-cased)<br> [Multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) <br><br>For all other languages, automated ML applies multilingual BERT
-You can specify your dataset language in your `FeaturizationConfig`. BERT is also used in the featurization process of automated ML experiment training, learn more about [BERT integration and featurization in automated ML](../how-to-configure-auto-features.md#bert-integration-in-automated-ml).
+You can specify your dataset language in your `FeaturizationConfig`. BERT is also used in the featurization process of automated ML experiment training, learn more about [BERT integration and featurization in automated ML](how-to-configure-auto-features.md#bert-integration-in-automated-ml).
```python from azureml.automl.core.featurization import FeaturizationConfig
https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/auto
## Next steps + Learn more about [how and where to deploy a model](../how-to-deploy-online-endpoints.md).
-+ [Troubleshoot automated ML experiments](../how-to-troubleshoot-auto-ml.md).
++ [Troubleshoot automated ML experiments](how-to-troubleshoot-auto-ml.md).
machine-learning How To Change Storage Access Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-change-storage-access-key.md
--++ Last updated 10/20/2022
+monikerRange: 'azureml-api-1'
# Regenerate storage account access keys
machine-learning How To Configure Auto Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-auto-features.md
+
+ Title: Featurization with automated machine learning
+
+description: Learn the data featurization settings in Azure Machine Learning and how to customize those features for your automated ML experiments.
++++++++ Last updated : 01/24/2022
+monikerRange: 'azureml-api-1'
++
+# Data featurization in automated machine learning
++
+Learn about the data featurization settings in Azure Machine Learning, and how to customize those features for [automated machine learning experiments](../concept-automated-ml.md).
+
+## Feature engineering and featurization
+
+Training data consists of rows and columns. Each row is an observation or record, and the columns of each row are the features that describe each record. Typically, the features that best characterize the patterns in the data are selected to create predictive models.
+
+Although many of the raw data fields can be used directly to train a model, it's often necessary to create additional (engineered) features that provide information that better differentiates patterns in the data. This process is called **feature engineering**, where domain knowledge of the data is used to create features that, in turn, help machine learning algorithms to learn better.
+
+In Azure Machine Learning, data-scaling and normalization techniques are applied to make feature engineering easier. Collectively, these techniques and this feature engineering are called **featurization** in automated ML experiments.
+
+## Prerequisites
+
+This article assumes that you already know how to configure an automated ML experiment.
++
+For information about configuration, see the following articles:
+
+- For a code-first experience: [Configure automated ML experiments by using the Azure Machine Learning SDK for Python](how-to-configure-auto-train-v1.md).
+- For a low-code or no-code experience: [Create, review, and deploy automated machine learning models by using the Azure Machine Learning studio](../how-to-use-automated-ml-for-ml-models.md).
+
+## Configure featurization
+
+In every automated machine learning experiment, [automatic scaling and normalization techniques](#featurization) are applied to your data by default. These techniques are types of featurization that help *certain* algorithms that are sensitive to features on different scales. You can enable more featurization, such as *missing-values imputation*, *encoding*, and *transforms*.
+
+> [!NOTE]
+> Steps for automated machine learning featurization (such as feature normalization, handling missing data,
+> or converting text to numeric) become part of the underlying model. When you use the model for
+> predictions, the same featurization steps that are applied during training are applied to
+> your input data automatically.
+
+For experiments that you configure with the Python SDK, you can enable or disable the featurization setting and further specify the featurization steps to be used for your experiment. If you're using the Azure Machine Learning studio, see the [steps to enable featurization](../how-to-use-automated-ml-for-ml-models.md#customize-featurization).
+
+The following table shows the accepted settings for `featurization` in the [AutoMLConfig class](/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig):
+
+|Featurization configuration | Description|
+- | - |
+|`"featurization": 'auto'`| Specifies that, as part of preprocessing, [data guardrails](#data-guardrails) and [featurization steps](#featurization) are to be done automatically. This setting is the default.|
+|`"featurization": 'off'`| Specifies that featurization steps are not to be done automatically.|
+|`"featurization":`&nbsp;`'FeaturizationConfig'`| Specifies that customized featurization steps are to be used. [Learn how to customize featurization](#customize-featurization).|
+
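+For example, a minimal sketch that keeps the default automatic featurization for a classification experiment. The dataset, compute, and label column names here are placeholders for your own assets:
+
+```python
+from azureml.train.automl import AutoMLConfig
+
+# Keep the default automatic featurization; pass 'off' or a FeaturizationConfig object to change it.
+automl_config = AutoMLConfig(compute_target = aml_remote_compute,
+                             task = 'classification',
+                             primary_metric = 'AUC_weighted',
+                             training_data = training_dataset,
+                             label_column_name = 'Class',
+                             featurization = 'auto'
+                             )
+```
+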
+<a name="featurization"></a>
+
+## Automatic featurization
+
+The following table summarizes techniques that are automatically applied to your data. These techniques are applied for experiments that are configured by using the SDK or the studio UI. To disable this behavior, set `"featurization": 'off'` in your `AutoMLConfig` object.
+
+> [!NOTE]
+> If you plan to export your AutoML-created models to an [ONNX model](../concept-onnx.md), only the featurization options indicated with an asterisk ("*") are supported in the ONNX format. Learn more about [converting models to ONNX](../how-to-use-automl-onnx-model-dotnet.md).
+
+|Featurization&nbsp;steps| Description |
+| - | - |
+|**Drop high cardinality or no variance features*** |Drop these features from training and validation sets. Applies to features with all values missing, with the same value across all rows, or with high cardinality (for example, hashes, IDs, or GUIDs).|
+|**Impute missing values*** |For numeric features, impute with the average of values in the column.<br/><br/>For categorical features, impute with the most frequent value.|
+|**Generate more features*** |For DateTime features: Year, Month, Day, Day of week, Day of year, Quarter, Week of the year, Hour, Minute, Second.<br><br> *For forecasting tasks,* these additional DateTime features are created: ISO year, Half - half-year, Calendar month as string, Week, Day of week as string, Day of quarter, Day of year, AM/PM (0 if hour is before noon (12 pm), 1 otherwise), AM/PM as string, Hour of day (12-hr basis)<br/><br/>For Text features: Term frequency based on unigrams, bigrams, and trigrams. Learn more about [how this is done with BERT.](#bert-integration)|
+|**Transform and encode***|Transform numeric features that have few unique values into categorical features.<br/><br/>One-hot encoding is used for low-cardinality categorical features. One-hot-hash encoding is used for high-cardinality categorical features.|
+|**Word embeddings**|A text featurizer converts vectors of text tokens into sentence vectors by using a pre-trained model. Each word's embedding vector in a document is aggregated with the rest to produce a document feature vector.|
+|**Cluster Distance**|Trains a k-means clustering model on all numeric columns. Produces *k* new features (one new numeric feature per cluster) that contain the distance of each sample to the centroid of each cluster.|
+
+In every automated machine learning experiment, your data is automatically scaled or normalized to help algorithms perform well. During model training, one of the following scaling or normalization techniques is applied to each model.
+
+|Scaling&nbsp;&&nbsp;processing| Description |
+| - | - |
+| [StandardScaleWrapper](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) | Standardize features by removing the mean and scaling to unit variance |
+| [MinMaxScalar](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html) | Transforms features by scaling each feature by that column's minimum and maximum |
+| [MaxAbsScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MaxAbsScaler.html#sklearn.preprocessing.MaxAbsScaler) |Scale each feature by its maximum absolute value |
+| [RobustScalar](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.RobustScaler.html) | Scales features by their quantile range |
+| [PCA](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) |Linear dimensionality reduction using Singular Value Decomposition of the data to project it to a lower dimensional space |
+| [TruncatedSVDWrapper](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html) |This transformer performs linear dimensionality reduction by means of truncated singular value decomposition (SVD). Contrary to PCA, this estimator does not center the data before computing the singular value decomposition, which means it can work with scipy.sparse matrices efficiently |
+| [SparseNormalizer](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.Normalizer.html) | Each sample (that is, each row of the data matrix) with at least one non-zero component is rescaled independently of other samples so that its norm (l1 or l2) equals one |
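+
+To see concretely what one of these techniques does, here's a small scikit-learn sketch. It's an illustration only; AutoML applies the equivalent transform for you during training:
+
+```python
+from sklearn.preprocessing import StandardScaler
+import numpy as np
+
+# Illustration only: standardization like the StandardScaleWrapper applies inside AutoML.
+X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])
+print(StandardScaler().fit_transform(X))  # each column now has zero mean and unit variance
+```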
+
+## Data guardrails
+
+*Data guardrails* help you identify potential issues with your data (for example, missing values or [class imbalance](../concept-manage-ml-pitfalls.md#identify-models-with-imbalanced-data)). They also help you take corrective actions for improved results.
+
+Data guardrails are applied:
+
+- **For SDK experiments**: When the parameters `"featurization": 'auto'` or `validation=auto` are specified in your `AutoMLConfig` object.
+- **For studio experiments**: When automatic featurization is enabled.
+
+You can review the data guardrails for your experiment:
+
+- By setting `show_output=True` when you submit an experiment by using the SDK, as shown in the sketch after this list.
+
+- In the studio, on the **Data guardrails** tab of your automated ML run.
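+
+A minimal sketch of the SDK option, assuming you already have an `automl_config` object like the one shown earlier; the experiment name is hypothetical:
+
+```python
+from azureml.core import Experiment, Workspace
+
+ws = Workspace.from_config()
+experiment = Experiment(ws, "automl-guardrails-demo")   # hypothetical experiment name
+run = experiment.submit(automl_config, show_output=True)  # streams progress, including data guardrail results
+```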
+
+### Data guardrail states
+
+Data guardrails display one of three states:
+
+|State| Description |
+|-|- |
+|**Passed**| No data problems were detected and no action is required by you. |
+|**Done**| Changes were applied to your data. We encourage you to review the corrective actions that AutoML took, to ensure that the changes align with the expected results. |
+|**Alerted**| A data issue was detected but couldn't be remedied. We encourage you to revise and fix the issue.|
+
+### Supported data guardrails
+
+The following table describes the data guardrails that are currently supported and the associated statuses that you might see when you submit your experiment:
+
+Guardrail|Status|Condition&nbsp;for&nbsp;trigger
+---|---|---
+**Missing feature values imputation** |Passed <br><br><br> Done| No missing feature values were detected in your training data. Learn more about [missing-value imputation.](../how-to-use-automated-ml-for-ml-models.md#customize-featurization) <br><br> Missing feature values were detected in your training data and were imputed.
+**High cardinality feature detection** |Passed <br><br><br> Done| Your inputs were analyzed, and no high-cardinality features were detected. <br><br> High-cardinality features were detected in your inputs and were handled.
+**Validation split handling** |Done| The validation configuration was set to `'auto'` and the training data contained *fewer than 20,000 rows*. <br> Each iteration of the trained model was validated by using cross-validation. Learn more about [validation data](how-to-configure-auto-train-v1.md#training-validation-and-test-data). <br><br> The validation configuration was set to `'auto'`, and the training data contained *more than 20,000 rows*. <br> The input data has been split into a training dataset and a validation dataset for validation of the model.
+**Class balancing detection** |Passed <br><br><br><br>Alerted <br><br><br>Done | Your inputs were analyzed, and all classes are balanced in your training data. A dataset is considered to be balanced if each class has good representation in the dataset, as measured by number and ratio of samples. <br><br> Imbalanced classes were detected in your inputs. To fix model bias, fix the balancing problem. Learn more about [imbalanced data](../concept-manage-ml-pitfalls.md#identify-models-with-imbalanced-data).<br><br> Imbalanced classes were detected in your inputs and the sweeping logic has determined to apply balancing.
+**Memory issues detection** |Passed <br><br><br><br> Done |<br> The selected values (horizon, lag, rolling window) were analyzed, and no potential out-of-memory issues were detected. Learn more about time-series [forecasting configurations](how-to-auto-train-forecast-v1.md#configuration-settings). <br><br><br>The selected values (horizon, lag, rolling window) were analyzed and will potentially cause your experiment to run out of memory. The lag or rolling-window configurations have been turned off.
+**Frequency detection** |Passed <br><br><br><br> Done |<br> The time series was analyzed, and all data points are aligned with the detected frequency. <br> <br> The time series was analyzed, and data points that don't align with the detected frequency were detected. These data points were removed from the dataset.
+**Cross validation** |Done| In order to accurately evaluate the model(s) trained by AutoML, we leverage a dataset that the model is not trained on. Hence, if the user doesn't provide an explicit validation dataset, a part of the training dataset is used to achieve this. For smaller datasets (fewer than 20,000 samples), cross-validation is leveraged; otherwise, a single hold-out set is split from the training data to serve as the validation dataset. Hence, for your input data, we leverage cross-validation with 10 folds if the number of training samples is fewer than 1,000, and 3 folds in all other cases.
+**Train-Test data split** |Done| In order to accurately evaluate the model(s) trained by AutoML, we leverage a dataset that the model is not trained on. Hence, if the user doesn't provide an explicit validation dataset, a part of the training dataset is used to achieve this. For smaller datasets (fewer than 20,000 samples), cross-validation is leveraged, else a single hold-out set is split from the training data to serve as the validation dataset. Hence, your input data has been split into a training dataset and a holdout validation dataset.
+**Time Series ID detection** |Passed <br><br><br><br> Fixed | <br> The data set was analyzed, and no duplicate time indexes were detected. <br> <br> Multiple time series were found in the dataset, and time series identifiers were automatically created for your dataset.
+**Time series aggregation** |Passed <br><br><br><br> Fixed | <br> The dataset frequency is aligned with the user specified frequency. No aggregation was performed. <br> <br> The data was aggregated to comply with user provided frequency.
+**Short series handling** |Passed <br><br><br><br> Fixed | <br> Automated ML detected enough data points for each series in the input data to continue with training. <br> <br> Automated ML detected that some series did not contain enough data points to train a model. To continue with training, these short series have been dropped or padded.
+
+## Customize featurization
+
+You can customize your featurization settings to ensure that the data and features that are used to train your ML model result in relevant predictions.
+
+To customize featurizations, specify `"featurization": FeaturizationConfig` in your `AutoMLConfig` object. If you're using the Azure Machine Learning studio for your experiment, see the [how-to article](../how-to-use-automated-ml-for-ml-models.md#customize-featurization). To customize featurization for forecasting task types, refer to the [forecasting how-to](how-to-auto-train-forecast-v1.md#customize-featurization).
+
+Supported customizations include:
+
+|Customization|Definition|
+|--|--|
+|**Column purpose update**|Override the autodetected feature type for the specified column.|
+|**Transformer parameter update** |Update the parameters for the specified transformer. Currently supports *Imputer* (mean, most frequent, and median) and *HashOneHotEncoder*.|
+|**Drop columns** |Specifies columns to drop from being featurized.|
+|**Block transformers**| Specifies block transformers to be used in the featurization process.|
+
+>[!NOTE]
+> The **drop columns** functionality is deprecated as of SDK version 1.19. Drop columns from your dataset as part of data cleansing, prior to consuming it in your automated ML experiment.
+
+Create the `FeaturizationConfig` object by using API calls:
+
+```python
+featurization_config = FeaturizationConfig()
+featurization_config.blocked_transformers = ['LabelEncoder']
+featurization_config.drop_columns = ['aspiration', 'stroke']
+featurization_config.add_column_purpose('engine-size', 'Numeric')
+featurization_config.add_column_purpose('body-style', 'CategoricalHash')
+#default strategy mean, add transformer param for 3 columns
+featurization_config.add_transformer_params('Imputer', ['engine-size'], {"strategy": "median"})
+featurization_config.add_transformer_params('Imputer', ['city-mpg'], {"strategy": "median"})
+featurization_config.add_transformer_params('Imputer', ['bore'], {"strategy": "most_frequent"})
+featurization_config.add_transformer_params('HashOneHotEncoder', [], {"number_of_bits": 3})
+```
+
+## Featurization transparency
+
+Every AutoML model has featurization automatically applied. Featurization includes automated feature engineering (when `"featurization": 'auto'`) and scaling and normalization, which then impacts the selected algorithm and its hyperparameter values. AutoML supports different methods to ensure you have visibility into what was applied to your model.
+
+Consider this forecasting example:
+
++ There are four input features: A (Numeric), B (Numeric), C (Numeric), D (DateTime).
++ Numeric feature C is dropped because it is an ID column with all unique values.
++ Numeric features A and B have missing values and hence are imputed by the mean.
++ DateTime feature D is featurized into 11 different engineered features.
+
+To get this information, use the `fitted_model` output from your automated ML experiment run.
+
+```python
+automl_config = AutoMLConfig(…)
+automl_run = experiment.submit(automl_config …)
+best_run, fitted_model = automl_run.get_output()
+```
+### Automated feature engineering
+The `get_engineered_feature_names()` returns a list of engineered feature names.
+
+ >[!Note]
+ >Use 'timeseriestransformer' for task='forecasting', else use 'datatransformer' for 'regression' or 'classification' task.
+
+ ```python
+ fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names()
+ ```
+
+This list includes all engineered feature names.
+
+ ```
+ ['A', 'B', 'A_WASNULL', 'B_WASNULL', 'year', 'half', 'quarter', 'month', 'day', 'hour', 'am_pm', 'hour12', 'wday', 'qday', 'week']
+ ```
+
+The `get_featurization_summary()` gets a featurization summary of all the input features.
+
+ ```python
+ fitted_model.named_steps['timeseriestransformer'].get_featurization_summary()
+ ```
+
+Output
+
+ ```
+ [{'RawFeatureName': 'A',
+ 'TypeDetected': 'Numeric',
+ 'Dropped': 'No',
+ 'EngineeredFeatureCount': 2,
+ 'Tranformations': ['MeanImputer', 'ImputationMarker']},
+ {'RawFeatureName': 'B',
+ 'TypeDetected': 'Numeric',
+ 'Dropped': 'No',
+ 'EngineeredFeatureCount': 2,
+ 'Tranformations': ['MeanImputer', 'ImputationMarker']},
+ {'RawFeatureName': 'C',
+ 'TypeDetected': 'Numeric',
+ 'Dropped': 'Yes',
+ 'EngineeredFeatureCount': 0,
+ 'Tranformations': []},
+ {'RawFeatureName': 'D',
+ 'TypeDetected': 'DateTime',
+ 'Dropped': 'No',
+ 'EngineeredFeatureCount': 11,
+ 'Tranformations': ['DateTime','DateTime','DateTime','DateTime','DateTime','DateTime','DateTime','DateTime','DateTime','DateTime','DateTime']}]
+ ```
+
+ |Output|Definition|
+ |-|--|
+ |RawFeatureName|Input feature/column name from the dataset provided.|
+ |TypeDetected|Detected datatype of the input feature.|
+ |Dropped|Indicates if the input feature was dropped or used.|
+ |EngineeredFeatureCount|Number of features generated through automated feature engineering transforms.|
+ |Transformations|List of transformations applied to input features to generate engineered features.|
+
+### Scaling and normalization
+
+To understand the scaling/normalization and the selected algorithm with its hyperparameter values, use `fitted_model.steps`.
+
+The following sample output is from running `fitted_model.steps` for a chosen run:
+
+```
+[('RobustScaler',
+ RobustScaler(copy=True,
+ quantile_range=[10, 90],
+ with_centering=True,
+ with_scaling=True)),
+
+ ('LogisticRegression',
+ LogisticRegression(C=0.18420699693267145, class_weight='balanced',
+ dual=False,
+ fit_intercept=True,
+ intercept_scaling=1,
+ max_iter=100,
+ multi_class='multinomial',
+ n_jobs=1, penalty='l2',
+ random_state=None,
+ solver='newton-cg',
+ tol=0.0001,
+ verbose=0,
+ warm_start=False))
+```
+
+To get more details, use this helper function:
+
+```python
+from pprint import pprint
+
+def print_model(model, prefix=""):
+ for step in model.steps:
+ print(prefix + step[0])
+ if hasattr(step[1], 'estimators') and hasattr(step[1], 'weights'):
+ pprint({'estimators': list(e[0] for e in step[1].estimators), 'weights': step[1].weights})
+ print()
+ for estimator in step[1].estimators:
+ print_model(estimator[1], estimator[0]+ ' - ')
+ elif hasattr(step[1], '_base_learners') and hasattr(step[1], '_meta_learner'):
+ print("\nMeta Learner")
+ pprint(step[1]._meta_learner)
+ print()
+ for estimator in step[1]._base_learners:
+ print_model(estimator[1], estimator[0]+ ' - ')
+ else:
+ pprint(step[1].get_params())
+ print()
+```
+
+This helper function returns the following output for a particular run using `LogisticRegression with RobustScaler` as the specific algorithm.
+
+```
+RobustScaler
+{'copy': True,
+'quantile_range': [10, 90],
+'with_centering': True,
+'with_scaling': True}
+
+LogisticRegression
+{'C': 0.18420699693267145,
+'class_weight': 'balanced',
+'dual': False,
+'fit_intercept': True,
+'intercept_scaling': 1,
+'max_iter': 100,
+'multi_class': 'multinomial',
+'n_jobs': 1,
+'penalty': 'l2',
+'random_state': None,
+'solver': 'newton-cg',
+'tol': 0.0001,
+'verbose': 0,
+'warm_start': False}
+```
+
+### Predict class probability
+
+Models produced using automated ML all have wrapper objects that mirror functionality from their open-source origin class. Most classification model wrapper objects returned by automated ML implement the `predict_proba()` function, which accepts an array-like or sparse matrix data sample of your features (X values), and returns an n-dimensional array of each sample and its respective class probability.
+
+Assuming you have retrieved the best run and fitted model using the same calls from above, you can call `predict_proba()` directly from the fitted model, supplying an `X_test` sample in the appropriate format depending on the model type.
+
+```python
+best_run, fitted_model = automl_run.get_output()
+class_prob = fitted_model.predict_proba(X_test)
+```
+
+If the underlying model does not support the `predict_proba()` function or the format is incorrect, a model class-specific exception will be thrown. See the [RandomForestClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html#sklearn.ensemble.RandomForestClassifier.predict_proba) and [XGBoost](https://xgboost.readthedocs.io/en/latest/python/python_api.html) reference docs for examples of how this function is implemented for different model types.
+
+<a name="bert-integration"></a>
+
+## BERT integration in automated ML
+
+[BERT](https://techcommunity.microsoft.com/t5/azure-ai/how-bert-is-integrated-into-azure-automated-machine-learning/ba-p/1194657) is used in the featurization layer of automated ML. In this layer, if a column contains free text or other types of data like timestamps or simple numbers, then featurization is applied accordingly.
+
+For BERT, the model is fine-tuned and trained by using the user-provided labels. From here, document embeddings are output as features alongside other features, such as timestamp-based features like day of the week.
+
+Learn how to [set up natural language processing (NLP) experiments that also use BERT with automated ML](how-to-auto-train-nlp-models-v1.md).
+
+### Steps to invoke BERT
+
+In order to invoke BERT, set `enable_dnn: True` in your automl_settings and use a GPU compute (`vm_size = "STANDARD_NC6"` or a higher GPU). If a CPU compute is used, then instead of BERT, AutoML enables the BiLSTM DNN featurizer.
+
+Automated ML takes the following steps for BERT.
+
+1. **Preprocessing and tokenization of all text columns**. For example, the "StringCast" transformer can be found in the final model's featurization summary. An example of how to produce the model's featurization summary can be found in [this notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/classification-text-dnn/auto-ml-classification-text-dnn.ipynb).
+
+2. **Concatenate all text columns into a single text column**, hence the `StringConcatTransformer` in the final model.
+
+    Our implementation of BERT limits the total text length of a training sample to 128 tokens. That means that all text columns, when concatenated, should ideally be at most 128 tokens in length. If multiple columns are present, each column should be pruned so that this condition is satisfied. Otherwise, for concatenated columns longer than 128 tokens, BERT's tokenizer layer truncates the input to 128 tokens.
+
+3. **As part of feature sweeping, AutoML compares BERT against the baseline (bag of words features) on a sample of the data.** This comparison determines if BERT would give accuracy improvements. If BERT performs better than the baseline, AutoML then uses BERT for text featurization for the whole data. In that case, you will see the `PretrainedTextDNNTransformer` in the final model.
+
+BERT generally runs longer than other featurizers. For better performance, we recommend using "STANDARD_NC24r" or "STANDARD_NC24rs_V3" for their RDMA capabilities.
+
+AutoML distributes BERT training across multiple nodes if they are available (up to a maximum of eight nodes). You can enable this in your `AutoMLConfig` object by setting the `max_concurrent_iterations` parameter to a value higher than 1.
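+
+A hedged sketch of these settings, assuming an existing workspace configuration and a hypothetical GPU cluster name:
+
+```python
+from azureml.core import Workspace
+from azureml.core.compute import AmlCompute, ComputeTarget
+
+ws = Workspace.from_config()
+
+# Hypothetical cluster name; STANDARD_NC6 (or larger) provides the GPU that the BERT featurizer needs.
+gpu_config = AmlCompute.provisioning_configuration(vm_size="STANDARD_NC6", max_nodes=4)
+gpu_target = ComputeTarget.create(ws, "gpu-cluster", gpu_config)
+gpu_target.wait_for_completion(show_output=True)
+
+automl_settings = {
+    "enable_dnn": True,               # enables the BERT DNN featurizer on GPU compute
+    "max_concurrent_iterations": 4,   # a value above 1 lets AutoML distribute BERT training across nodes
+}
+```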
+
+## Supported languages for BERT in AutoML
+
+AutoML currently supports around 100 languages, and depending on the dataset's language, AutoML chooses the appropriate BERT model. For German data, we use the German BERT model. For English, we use the English BERT model. For all other languages, we use the multilingual BERT model.
+
+In the following code, the German BERT model is triggered, since the dataset language is specified to `deu`, the three letter language code for German according to [ISO classification](https://iso639-3.sil.org/code/deu):
+
+```python
+from azureml.automl.core.featurization import FeaturizationConfig
+
+featurization_config = FeaturizationConfig(dataset_language='deu')
+
+automl_settings = {
+ "experiment_timeout_minutes": 120,
+ "primary_metric": 'accuracy',
+# All other settings you want to use
+ "featurization": featurization_config,
+
+ "enable_dnn": True, # This enables BERT DNN featurizer
+ "enable_voting_ensemble": False,
+ "enable_stack_ensemble": False
+}
+```
+
+## Next steps
+
+* Learn how to set up your automated ML experiments:
+
+ * For a code-first experience: [Configure automated ML experiments by using the Azure Machine Learning SDK](how-to-configure-auto-train-v1.md).
+ * For a low-code or no-code experience: [Create your automated ML experiments in the Azure Machine Learning studio](../how-to-use-automated-ml-for-ml-models.md).
+
+* Learn more about [how and where to deploy a model](how-to-deploy-and-where.md).
+
+* Learn more about [how to train a regression model by using automated machine learning](how-to-auto-train-models-v1.md) or [how to train by using automated machine learning on a remote resource](concept-automated-ml-v1.md#local-remote).
machine-learning How To Configure Auto Train V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-auto-train-v1.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning Python you are using:"] > * [v1](how-to-configure-auto-train-v1.md)
-> * [v2 (current version)](../how-to-configure-auto-train.md)
+> * [v2 (current version)](../how-to-configure-auto-train.md?view=azureml-api-2&preserve-view=true)
In this guide, learn how to set up an automated machine learning, AutoML, training run with the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro) using Azure Machine Learning automated ML. Automated ML picks an algorithm and hyperparameters for you and generates a model ready for deployment. This guide provides details of the various options that you can use to configure automated ML experiments.
dataset = Dataset.Tabular.from_delimited_files(data)
## Training, validation, and test data
-You can specify separate **training data and validation data sets** directly in the `AutoMLConfig` constructor. Learn more about [how to configure training, validation, cross validation, and test data](../how-to-configure-cross-validation-data-splits.md) for your AutoML experiments.
+You can specify separate **training data and validation data sets** directly in the `AutoMLConfig` constructor. Learn more about [how to configure training, validation, cross validation, and test data](how-to-configure-cross-validation-data-splits.md) for your AutoML experiments.
If you do not explicitly specify a `validation_data` or `n_cross_validation` parameter, automated ML applies default techniques to determine how validation is performed. This determination depends on the number of rows in the dataset assigned to your `training_data` parameter.
If you do not explicitly specify a `validation_data` or `n_cross_validation` par
> [!TIP] > You can upload **test data (preview)** to evaluate models that automated ML generated for you. These features are [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview capabilities, and may change at any time. > Learn how to:
-> * [Pass in test data to your AutoMLConfig object](../how-to-configure-cross-validation-data-splits.md#provide-test-data-preview).
+> * [Pass in test data to your AutoMLConfig object](how-to-configure-cross-validation-data-splits.md#provide-test-data-preview).
> * [Test the models automated ML generated for your experiment](#test-models-preview). > > If you prefer a no-code experience, see [step 12 in Set up AutoML with the studio UI](../how-to-use-automated-ml-for-ml-models.md#create-and-run-experiment)
Next determine where the model will be trained. An automated ML training experim
* **Choose a remote ML compute cluster**: If you are training with larger datasets like in production training creating models which need longer trains, remote compute will provide much better end-to-end time performance because `AutoML` will parallelize trains across the cluster's nodes. On a remote compute, the start-up time for the internal infrastructure will add around 1.5 minutes per child run, plus additional minutes for the cluster infrastructure if the VMs are not yet up and running.[Azure Machine Learning Managed Compute](../concept-compute-target.md#azure-machine-learning-compute-managed) is a managed service that enables the ability to train machine learning models on clusters of Azure virtual machines. Compute instance is also supported as a compute target.
- * An **Azure Databricks cluster** in your Azure subscription. You can find more details in [Set up an Azure Databricks cluster for automated ML](../how-to-configure-databricks-automl-environment.md). See this [GitHub site](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-databricks) for examples of notebooks with Azure Databricks.
+ * An **Azure Databricks cluster** in your Azure subscription. You can find more details in [Set up an Azure Databricks cluster for automated ML](how-to-configure-databricks-automl-environment.md). See this [GitHub site](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-databricks) for examples of notebooks with Azure Databricks.
Consider these factors when choosing your compute target:
The recommendations are similar to those noted for regression scenarios.
### Data featurization In every automated ML experiment, your data is automatically scaled and normalized to help *certain* algorithms that are sensitive to features that are on different scales. This scaling and normalization is referred to as featurization.
-See [Featurization in AutoML](../how-to-configure-auto-features.md) for more detail and code examples.
+See [Featurization in AutoML](how-to-configure-auto-features.md) for more detail and code examples.
> [!NOTE] > Automated machine learning featurization steps (feature normalization, handling missing data, converting text to numeric, etc.) become part of the underlying model. When using the model for predictions, the same featurization steps applied during training are applied to your input data automatically.
When configuring your experiments in your `AutoMLConfig` object, you can enable/
|Featurization Configuration | Description | | - | - |
-|`"featurization": 'auto'`| Indicates that as part of preprocessing, [data guardrails and featurization steps](../how-to-configure-auto-features.md#featurization) are performed automatically. **Default setting**.|
+|`"featurization": 'auto'`| Indicates that as part of preprocessing, [data guardrails and featurization steps](how-to-configure-auto-features.md#featurization) are performed automatically. **Default setting**.|
|`"featurization": 'off'`| Indicates featurization step shouldn't be done automatically.|
-|`"featurization":`&nbsp;`'FeaturizationConfig'`| Indicates customized featurization step should be used. [Learn how to customize featurization](../how-to-configure-auto-features.md#customize-featurization).|
+|`"featurization":`&nbsp;`'FeaturizationConfig'`| Indicates customized featurization step should be used. [Learn how to customize featurization](how-to-configure-auto-features.md#customize-featurization).|
Automated ML offers options for you to monitor and evaluate your training result
* For definitions and examples of the performance charts and metrics provided for each run, see [Evaluate automated machine learning experiment results](../how-to-understand-automated-ml.md).
-* To get a featurization summary and understand what features were added to a particular model, see [Featurization transparency](../how-to-configure-auto-features.md#featurization-transparency).
+* To get a featurization summary and understand what features were added to a particular model, see [Featurization transparency](how-to-configure-auto-features.md#featurization-transparency).
-You can view the hyperparameters, the scaling and normalization techniques, and algorithm applied to a specific automated ML run with the [custom code solution, `print_model()`](../how-to-configure-auto-features.md#scaling-and-normalization).
+You can view the hyperparameters, the scaling and normalization techniques, and algorithm applied to a specific automated ML run with the [custom code solution, `print_model()`](how-to-configure-auto-features.md#scaling-and-normalization).
> [!TIP]
-> Automated ML also let's you [view the generated model training code for Auto ML trained models](../how-to-generate-automl-training-code.md). This functionality is in public preview and can change at any time.
+> Automated ML also lets you [view the generated model training code for Auto ML trained models](how-to-generate-automl-training-code.md). This functionality is in public preview and can change at any time.
## <a name="monitor"></a> Monitor automated machine learning runs
RunDetails(run).show()
> * [Forecasting tasks where deep learning neural networks (DNN) are enabled](../how-to-auto-train-forecast.md#enable-deep-learning) > * [Automated ML runs from local computes or Azure Databricks clusters](../how-to-configure-auto-train.md#compute-to-run-experiment)
-Passing the `test_data` or `test_size` parameters into the `AutoMLConfig`, automatically triggers a remote test run that uses the provided test data to evaluate the best model that automated ML recommends upon completion of the experiment. This remote test run is done at the end of the experiment, once the best model is determined. See how to [pass test data into your `AutoMLConfig`](../how-to-configure-cross-validation-data-splits.md#provide-test-data-preview).
+Passing the `test_data` or `test_size` parameters into the `AutoMLConfig`, automatically triggers a remote test run that uses the provided test data to evaluate the best model that automated ML recommends upon completion of the experiment. This remote test run is done at the end of the experiment, once the best model is determined. See how to [pass test data into your `AutoMLConfig`](how-to-configure-cross-validation-data-splits.md#provide-test-data-preview).
### Get test job results
For general information on how model explanations and feature importance can be
+ Learn more about [how to train a regression model with Automated machine learning](how-to-auto-train-models-v1.md).
-+ [Troubleshoot automated ML experiments](../how-to-troubleshoot-auto-ml.md).
++ [Troubleshoot automated ML experiments](how-to-troubleshoot-auto-ml.md).
machine-learning How To Configure Cross Validation Data Splits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-cross-validation-data-splits.md
+
+ Title: Data splits and cross-validation in automated machine learning
+
+description: Learn how to configure training, validation, cross-validation and test data for automated machine learning experiments.
++++++++ Last updated : 11/15/2021
+monikerRange: 'azureml-api-1'
++
+# Configure training, validation, cross-validation and test data in automated machine learning
++
+In this article, you learn the different options for configuring training data and validation data splits along with cross-validation settings for your automated machine learning, automated ML, experiments.
+
+In Azure Machine Learning, when you use automated ML to build multiple ML models, each child run needs to validate the related model by calculating the quality metrics for that model, such as accuracy or AUC weighted. These metrics are calculated by comparing the predictions made with each model with real labels from past observations in the validation data. [Learn more about how metrics are calculated based on validation type](#metric-calculation-for-cross-validation-in-machine-learning).
+
+Automated ML experiments perform model validation automatically. The following sections describe how you can further customize validation settings with the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/).
+
+For a low-code or no-code experience, see [Create your automated machine learning experiments in Azure Machine Learning studio](../how-to-use-automated-ml-for-ml-models.md#create-and-run-experiment).
+
+## Prerequisites
+
+For this article, you need:
+
+* An Azure Machine Learning workspace. To create the workspace, see [Create workspace resources](../quickstart-create-resources.md).
+
+* Familiarity with setting up an automated machine learning experiment with the Azure Machine Learning SDK. Follow the [tutorial](../tutorial-auto-train-image-models.md) or [how-to](how-to-configure-auto-train-v1.md) to see the fundamental automated machine learning experiment design patterns.
+
+* An understanding of train/validation data splits and cross-validation as machine learning concepts. For a high-level explanation, see:
+
+ * [About training, validation and test data in machine learning](https://towardsdatascience.com/train-validation-and-test-sets-72cb40cba9e7)
+
+ * [Understand Cross Validation in machine learning](https://towardsdatascience.com/understanding-cross-validation-419dbd47e9bd)
++
+## Default data splits and cross-validation in machine learning
+
+Use the [AutoMLConfig](/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig) object to define your experiment and training settings. In the following code snippet, notice that only the required parameters are defined; that is, the `n_cross_validations` and `validation_data` parameters are **not** included.
+
+> [!NOTE]
+> The default data splits and cross-validation are not supported in forecasting scenarios.
+
+```python
+data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv"
+
+dataset = Dataset.Tabular.from_delimited_files(data)
+
+automl_config = AutoMLConfig(compute_target = aml_remote_compute,
+ task = 'classification',
+ primary_metric = 'AUC_weighted',
+ training_data = dataset,
+ label_column_name = 'Class'
+ )
+```
+
+If you do not explicitly specify either a `validation_data` or `n_cross_validations` parameter, automated ML applies default techniques depending on the number of rows provided in the single dataset `training_data`.
+
+|Training&nbsp;data&nbsp;size| Validation technique |
+||--|
+|**Larger&nbsp;than&nbsp;20,000&nbsp;rows**| Train/validation data split is applied. The default is to take 10% of the initial training data set as the validation set. In turn, that validation set is used for metrics calculation.
+|**Smaller&nbsp;than&nbsp;20,000&nbsp;rows**| Cross-validation approach is applied. The default number of folds depends on the number of rows. <br> **If the dataset is less than 1,000 rows**, 10 folds are used. <br> **If the rows are between 1,000 and 20,000**, then three folds are used.
++
+## Provide validation data
+
+In this case, you can either start with a single data file and split it into training data and validation data sets or you can provide a separate data file for the validation set. Either way, the `validation_data` parameter in your `AutoMLConfig` object assigns which data to use as your validation set. This parameter only accepts data sets in the form of an [Azure Machine Learning dataset](how-to-create-register-datasets.md) or pandas dataframe.
+
+> [!NOTE]
+> The `validation_data` parameter requires the `training_data` and `label_column_name` parameters to be set as well. You can only set one validation parameter; that is, you can specify either `validation_data` or `n_cross_validations`, not both.
+
+The following code example explicitly defines which portion of the provided data in `dataset` to use for training and validation.
+
+```python
+data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv"
+
+dataset = Dataset.Tabular.from_delimited_files(data)
+
+training_data, validation_data = dataset.random_split(percentage=0.8, seed=1)
+
+automl_config = AutoMLConfig(compute_target = aml_remote_compute,
+ task = 'classification',
+ primary_metric = 'AUC_weighted',
+ training_data = training_data,
+ validation_data = validation_data,
+ label_column_name = 'Class'
+ )
+```
+
+## Provide validation set size
+
+In this case, only a single dataset is provided for the experiment. That is, the `validation_data` parameter is **not** specified, and the provided dataset is assigned to the `training_data` parameter.
+
+In your `AutoMLConfig` object, you can set the `validation_size` parameter to hold out a portion of the training data for validation. This means that automated ML splits the validation set from the initial `training_data` you provide. This value should be between 0.0 and 1.0 non-inclusive (for example, 0.2 means 20% of the data is held out as validation data).
+
+> [!NOTE]
+> The `validation_size` parameter is not supported in forecasting scenarios.
+
+See the following code example:
+
+```python
+data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv"
+
+dataset = Dataset.Tabular.from_delimited_files(data)
+
+automl_config = AutoMLConfig(compute_target = aml_remote_compute,
+ task = 'classification',
+ primary_metric = 'AUC_weighted',
+ training_data = dataset,
+ validation_size = 0.2,
+ label_column_name = 'Class'
+ )
+```
+
+## K-fold cross-validation
+
+To perform k-fold cross-validation, include the `n_cross_validations` parameter and set it to a value. This parameter sets how many folds to use; the data is split into that many folds, and each fold serves as the validation set once.
+
+> [!NOTE]
+> The `n_cross_validations` parameter is not supported in classification scenarios that use deep neural networks.
+> For forecasting scenarios, see how cross validation is applied in [Set up AutoML to train a time-series forecasting model](how-to-auto-train-forecast-v1.md#training-and-validation-data).
+
+In the following code, five folds for cross-validation are defined. Hence, five different trainings are run; each training uses 4/5 of the data, and each validation uses the remaining 1/5 with a different holdout fold each time.
+
+As a result, metrics are calculated with the average of the five validation metrics.
+
+```python
+data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv"
+
+dataset = Dataset.Tabular.from_delimited_files(data)
+
+automl_config = AutoMLConfig(compute_target = aml_remote_compute,
+ task = 'classification',
+ primary_metric = 'AUC_weighted',
+ training_data = dataset,
+                             n_cross_validations = 5,
+ label_column_name = 'Class'
+ )
+```
+## Monte Carlo cross-validation
+
+To perform Monte Carlo cross validation, include both the `validation_size` and `n_cross_validations` parameters in your `AutoMLConfig` object.
+
+For Monte Carlo cross validation, automated ML sets aside the portion of the training data specified by the `validation_size` parameter for validation, and then assigns the rest of the data for training. This process is then repeated based on the value specified in the `n_cross_validations` parameter, which generates new training and validation splits, at random, each time.
+
+> [!NOTE]
+> The Monte Carlo cross-validation is not supported in forecasting scenarios.
+
+The following code defines seven folds for cross-validation and specifies that 20% of the training data should be used for validation. Hence, seven different trainings are run; each training uses 80% of the data, and each validation uses 20% of the data with a different holdout fold each time.
+
+```python
+data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv"
+
+dataset = Dataset.Tabular.from_delimited_files(data)
+
+automl_config = AutoMLConfig(compute_target = aml_remote_compute,
+ task = 'classification',
+ primary_metric = 'AUC_weighted',
+ training_data = dataset,
+                             n_cross_validations = 7,
+ validation_size = 0.2,
+ label_column_name = 'Class'
+ )
+```
+
+## Specify custom cross-validation data folds
+
+You can also provide your own cross-validation (CV) data folds. This is considered a more advanced scenario because you specify which columns to split and use for validation. Include custom CV split columns in your training data, and specify which columns by populating the column names in the `cv_split_column_names` parameter. Each column represents one cross-validation split and is filled with integer values 1 or 0, where 1 indicates the row should be used for training and 0 indicates the row should be used for validation.
+
+> [!NOTE]
+> The `cv_split_column_names` parameter is not supported in forecasting scenarios.
++
+The following code snippet contains bank marketing data with two CV split columns 'cv1' and 'cv2'.
+
+```python
+data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_with_cv.csv"
+
+dataset = Dataset.Tabular.from_delimited_files(data)
+
+automl_config = AutoMLConfig(compute_target = aml_remote_compute,
+ task = 'classification',
+ primary_metric = 'AUC_weighted',
+ training_data = dataset,
+ label_column_name = 'y',
+ cv_split_column_names = ['cv1', 'cv2']
+ )
+```
+
+> [!NOTE]
+> To use `cv_split_column_names` with `training_data` and `label_column_name`, upgrade your Azure Machine Learning Python SDK to version 1.6.0 or later. For previous SDK versions, refer to `cv_splits_indices`, but note that it's used with `X` and `y` dataset input only.
++
+## Metric calculation for cross validation in machine learning
+
+When either k-fold or Monte Carlo cross validation is used, metrics are computed on each validation fold and then aggregated. The aggregation operation is an average for scalar metrics and a sum for charts. Metrics computed during cross validation are based on all folds and therefore all samples from the training set. [Learn more about metrics in automated machine learning](../how-to-understand-automated-ml.md).
+
+When either a custom validation set or an automatically selected validation set is used, model evaluation metrics are computed from only that validation set, not the training data.
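+
+As a minimal sketch (not part of the original article), assuming the v1 SDK and a completed automated ML parent run, you can inspect the aggregated validation metrics of each child run; the experiment name `my-automl-experiment` and the metric name are placeholders:
+
+```python
+from azureml.core import Workspace, Experiment
+
+# Placeholder workspace and experiment; replace with your own.
+ws = Workspace.from_config()
+experiment = Experiment(ws, 'my-automl-experiment')
+
+parent_run = next(experiment.get_runs())       # most recent automated ML parent run
+for child in parent_run.get_children():        # one child run per trained model
+    metrics = child.get_metrics()              # averaged over folds when cross-validation is used
+    print(child.id, metrics.get('AUC_weighted'))
+```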
+
+## Provide test data (preview)
++
+You can also provide test data to evaluate the recommended model that automated ML generates for you upon completion of the experiment. When you provide test data, it's kept separate from the training and validation data, so it doesn't bias the results of the test run of the recommended model. [Learn more about training, validation and test data in automated ML.](concept-automated-ml-v1.md#training-validation-and-test-data)
+
+> [!WARNING]
+> This feature is not available for the following automated ML scenarios:
+> * [Computer vision tasks](../how-to-auto-train-image-models.md)
+> * [Many models and hierarchical time series forecasting training (preview)](how-to-auto-train-forecast-v1.md)
+> * [Forecasting tasks where deep learning neural networks (DNN) are enabled](how-to-auto-train-forecast-v1.md#enable-deep-learning)
+> * [Automated ML runs from local computes or Azure Databricks clusters](how-to-configure-auto-train-v1.md#compute-to-run-experiment)
+
+Test datasets must be in the form of an [Azure Machine Learning TabularDataset](how-to-create-register-datasets.md#tabulardataset). You can specify a test dataset with the `test_data` and `test_size` parameters in your `AutoMLConfig` object. These parameters are mutually exclusive and cannot be specified at the same time or with `cv_split_column_names` or `cv_splits_indices`.
+
+With the `test_data` parameter, specify an existing dataset to pass into your `AutoMLConfig` object.
+
+```python
+automl_config = AutoMLConfig(task='forecasting',
+ ...
+ # Provide an existing test dataset
+ test_data=test_dataset,
+ ...
+ forecasting_parameters=forecasting_parameters)
+```
+
+To use a train/test split instead of providing test data directly, use the `test_size` parameter when creating the `AutoMLConfig`. This parameter must be a floating point value between 0.0 and 1.0 exclusive, and specifies the percentage of the training dataset that should be used for the test dataset.
+
+```python
+automl_config = AutoMLConfig(task = 'regression',
+ ...
+ # Specify train/test split
+ training_data=training_data,
+ test_size=0.2)
+```
+
+> [!Note]
+> For regression tasks, random sampling is used.<br>
+> For classification tasks, stratified sampling is used, but random sampling is used as a fallback when stratified sampling is not feasible. <br>
+> Forecasting does not currently support specifying a test dataset using a train/test split with the `test_size` parameter.
++
+Passing the `test_data` or `test_size` parameter into the `AutoMLConfig` automatically triggers a remote test run upon completion of your experiment. This test run uses the provided test data to evaluate the best model that automated ML recommends. Learn more about [how to get the predictions from the test run](how-to-configure-auto-train-v1.md#test-models-preview).
+
+## Next steps
+
+* [Prevent imbalanced data and overfitting](../concept-manage-ml-pitfalls.md).
+
+* How to [Auto-train a time-series forecast model](how-to-auto-train-forecast-v1.md).
machine-learning How To Configure Databricks Automl Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-databricks-automl-environment.md
+
+ Title: Develop with AutoML & Azure Databricks
+
+description: Learn to set up a development environment in Azure Machine Learning and Azure Databricks. Use the Azure Machine Learning SDKs for Databricks and Databricks with AutoML.
++++++ Last updated : 10/21/2021++
+monikerRange: 'azureml-api-1'
++
+# Set up a development environment with Azure Databricks and AutoML in Azure Machine Learning
+
+Learn how to configure a development environment in Azure Machine Learning that uses Azure Databricks and automated ML.
+
+Azure Databricks is ideal for running large-scale intensive machine learning workflows on the scalable Apache Spark platform in the Azure cloud. It provides a collaborative Notebook-based environment with a CPU or GPU-based compute cluster.
+
+For information on other machine learning development environments, see [Set up Python development environment](how-to-configure-environment-v1.md).
++
+## Prerequisite
+
+Azure Machine Learning workspace. To create one, use the steps in the [Create workspace resources](../quickstart-create-resources.md) article.
++
+## Azure Databricks with Azure Machine Learning and AutoML
+
+Azure Databricks integrates with Azure Machine Learning and its AutoML capabilities.
+
+You can use Azure Databricks:
+
++ To train a model using Spark MLlib and deploy the model to ACI/AKS.
++ With [automated machine learning](concept-automated-ml-v1.md) capabilities using an Azure Machine Learning SDK.
++ As a compute target from an [Azure Machine Learning pipeline](../concept-ml-pipelines.md).
+
+## Set up a Databricks cluster
+
+Create a [Databricks cluster](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal). Some settings apply only if you install the SDK for automated machine learning on Databricks.
+
+**It takes a few minutes to create the cluster.**
+
+Use these settings:
+
+| Setting |Applies to| Value |
+|-|||
+| Cluster Name |always| yourclustername |
+| Databricks Runtime Version |always| 9.1 LTS|
+| Python version |always| 3 |
+| Worker Type <br>(determines max # of concurrent iterations) |Automated ML<br>only| Memory optimized VM preferred |
+| Workers |always| 2 or higher |
+| Enable Autoscaling |Automated ML<br>only| Uncheck |
+
+Wait until the cluster is running before proceeding further.
+
+## Add the Azure Machine Learning SDK to Databricks
+
+Once the cluster is running, [create a library](https://docs.databricks.com/user-guide/libraries.html#create-a-library) to attach the appropriate Azure Machine Learning SDK package to your cluster.
+
+To use automated ML, skip to [Add the Azure Machine Learning SDK with AutoML](#add-the-azure-machine-learning-sdk-with-automl-to-databricks).
++
+1. Right-click the current Workspace folder where you want to store the library. Select **Create** > **Library**.
+
+ > [!TIP]
+ > If you have an old SDK version, deselect it from cluster's installed libraries and move to trash. Install the new SDK version and restart the cluster. If there is an issue after the restart, detach and reattach your cluster.
+
+1. Choose the following option (no other SDK installations are supported).
+
+ |SDK&nbsp;package&nbsp;extras|Source|PyPi&nbsp;Name&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;|
+ |-|||
+ |For Databricks| Upload Python Egg or PyPI | azureml-sdk[databricks]|
+
+ > [!WARNING]
+    > No other SDK extras can be installed. Choose only the [`databricks`] option.
+
+ * Do not select **Attach automatically to all clusters**.
+ * Select **Attach** next to your cluster name.
+
+1. Monitor for errors until status changes to **Attached**, which may take several minutes. If this step fails:
+
+ Try restarting your cluster by:
+ 1. In the left pane, select **Clusters**.
+ 1. In the table, select your cluster name.
+ 1. On the **Libraries** tab, select **Restart**.
+
+ A successful install looks like the following:
+
+ ![Azure Machine Learning SDK for Databricks](../media/how-to-configure-environment/amlsdk-withoutautoml.jpg)
+
+## Add the Azure Machine Learning SDK with AutoML to Databricks
+If the cluster was created with Databricks Runtime 7.3 LTS (*not* ML), run the following command in the first cell of your notebook to install the Azure Machine Learning SDK.
+
+```
+%pip install --upgrade --force-reinstall -r https://aka.ms/automl_linux_requirements.txt
+```
+
+### AutoML config settings
+
+In your AutoML config, when using Azure Databricks, add the following parameters (a short example follows the list):
+
+- ```max_concurrent_iterations``` is based on the number of worker nodes in your cluster.
+- ```spark_context=sc``` is based on the default spark context.
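+
+For illustration, here's a minimal sketch of an `AutoMLConfig` that adds both parameters; the sample dataset, label column, and worker count are placeholders you'd replace with your own:
+
+```python
+from azureml.core import Dataset
+from azureml.train.automl import AutoMLConfig
+
+# Placeholder sample data used elsewhere in these docs.
+data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv"
+dataset = Dataset.Tabular.from_delimited_files(data)
+
+automl_config = AutoMLConfig(task = 'classification',
+                             primary_metric = 'AUC_weighted',
+                             training_data = dataset,
+                             label_column_name = 'Class',
+                             spark_context = sc,               # `sc` is the cluster's default Spark context on Databricks
+                             max_concurrent_iterations = 2)    # match the number of worker nodes
+```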
+
+## ML notebooks that work with Azure Databricks
+
+Try it out:
++ While many sample notebooks are available, **only [these sample notebooks](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-databricks) work with Azure Databricks.**
+
++ Import these samples directly from your workspace. See below:
+![Select Import](../media/how-to-configure-environment/azure-db-screenshot.png)
+![Import Panel](../media/how-to-configure-environment/azure-db-import.png)
+
++ Learn how to [create a pipeline with Databricks as the training compute](how-to-create-machine-learning-pipelines.md).
+
+## Troubleshooting
+
+* **Databricks cancel an automated machine learning run**: When you use automated machine learning capabilities on Azure Databricks, restart your Azure Databricks cluster to cancel a run and start a new experiment run.
+
+* **Databricks >10 iterations for automated machine learning**: In automated machine learning settings, if you have more than 10 iterations, set `show_output` to `False` when you submit the run.
+
+* **Databricks widget for the Azure Machine Learning SDK and automated machine learning**: The Azure Machine Learning SDK widget isn't supported in a Databricks notebook because the notebooks can't parse HTML widgets. You can view the widget in the portal by using this Python code in your Azure Databricks notebook cell:
+
+ ```
+ displayHTML("<a href={} target='_blank'>Azure Portal: {}</a>".format(local_run.get_portal_url(), local_run.id))
+ ```
+
+* **Failure when installing packages**
+
+ Azure Machine Learning SDK installation fails on Azure Databricks when more packages are installed. Some packages, such as `psutil`, can cause conflicts. To avoid installation errors, install packages by freezing the library version. This issue is related to Databricks and not to the Azure Machine Learning SDK. You might experience this issue with other libraries, too. Example:
+
+ ```python
+ psutil cryptography==1.5 pyopenssl==16.0.0 ipython==2.2.0
+ ```
+
+ Alternatively, you can use init scripts if you keep facing install issues with Python libraries. This approach isn't officially supported. For more information, see [Cluster-scoped init scripts](/azure/databricks/clusters/init-scripts#cluster-scoped-init-scripts).
+
+* **Import error: cannot import name `Timedelta` from `pandas._libs.tslibs`**: If you see this error when you use automated machine learning, run the two following lines in your notebook:
+ ```
+ %sh rm -rf /databricks/python/lib/python3.7/site-packages/pandas-0.23.4.dist-info /databricks/python/lib/python3.7/site-packages/pandas
+ %sh /databricks/python/bin/pip install pandas==0.23.4
+ ```
+
+* **Import error: No module named 'pandas.core.indexes'**: If you see this error when you use automated machine learning:
+
+ 1. Run this command to install two packages in your Azure Databricks cluster:
+
+ ```bash
+ scikit-learn==0.19.1
+ pandas==0.22.0
+ ```
+
+ 1. Detach and then reattach the cluster to your notebook.
+
+ If these steps don't solve the issue, try restarting the cluster.
+
+* **FailToSendFeather**: If you see a `FailToSendFeather` error when reading data on Azure Databricks cluster, refer to the following solutions:
+
+ * Upgrade `azureml-sdk[automl]` package to the latest version.
+ * Add `azureml-dataprep` version 1.1.8 or above.
+ * Add `pyarrow` version 0.11 or above.
+
+
+## Next steps
+
+- [Train and deploy a model](../tutorial-train-deploy-notebook.md) on Azure Machine Learning with the MNIST dataset.
+- See the [Azure Machine Learning SDK for Python reference](/python/api/overview/azure/ml/intro).
machine-learning How To Configure Environment V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-environment-v1.md
The following table shows each development environment covered in this article,
| [Local environment](#local-computer-or-remote-vm-environment) | Full control of your development environment and dependencies. Run with any build tool, environment, or IDE of your choice. | Takes longer to get started. Necessary SDK packages must be installed, and an environment must also be installed if you don't already have one. | | [The Data Science Virtual Machine (DSVM)](#data-science-virtual-machine) | Similar to the cloud-based compute instance (Python and the SDK are pre-installed), but with additional popular data science and machine learning tools pre-installed. Easy to scale and combine with other custom tools and workflows. | A slower getting started experience compared to the cloud-based compute instance. | | [Azure Machine Learning compute instance](#azure-machine-learning-compute-instance) | Easiest way to get started. The entire SDK is already installed in your workspace VM, and notebook tutorials are pre-cloned and ready to run. | Lack of control over your development environment and dependencies. Additional cost incurred for Linux VM (VM can be stopped when not in use to avoid charges). See [pricing details](https://azure.microsoft.com/pricing/details/virtual-machines/linux/). |
-| [Azure Databricks](../how-to-configure-databricks-automl-environment.md) | Ideal for running large-scale intensive machine learning workflows on the scalable Apache Spark platform. | Overkill for experimental machine learning, or smaller-scale experiments and workflows. Additional cost incurred for Azure Databricks. See [pricing details](https://azure.microsoft.com/pricing/details/databricks/). |
+| [Azure Databricks](how-to-configure-databricks-automl-environment.md) | Ideal for running large-scale intensive machine learning workflows on the scalable Apache Spark platform. | Overkill for experimental machine learning, or smaller-scale experiments and workflows. Additional cost incurred for Azure Databricks. See [pricing details](https://azure.microsoft.com/pricing/details/databricks/). |
This article also provides additional usage tips for the following tools:
machine-learning How To Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-private-link.md
Last updated 08/29/2022
> [!div class="op_single_selector" title1="Select the Azure Machine Learning version you are using:"] > * [CLI or SDK v1](how-to-configure-private-link.md)
-> * [CLI v2 (current version)](../how-to-configure-private-link.md)
+> * [CLI v2 (current version)](../how-to-configure-private-link.md?view=azureml-api-2&preserve-view=true)
-In this document, you learn how to configure a private endpoint for your Azure Machine Learning workspace. For information on creating a virtual network for Azure Machine Learning, see [Virtual network isolation and privacy overview](how-to-network-security-overview.md).
+In this document, you learn how to configure a private endpoint for your Azure Machine Learning workspace. For information on creating a virtual network for Azure Machine Learning, see [Virtual network isolation and privacy overview](../how-to-network-security-overview.md).
Azure Private Link enables you to connect to your workspace using a private endpoint. The private endpoint is a set of private IP addresses within your virtual network. You can then limit access to your workspace to only occur over the private IP addresses. A private endpoint helps reduce the risk of data exfiltration. To learn more about private endpoints, see the [Azure Private Link](../../private-link/private-link-overview.md) article.
Azure Private Link enables you to connect to your workspace using a private endp
> > For more information on securing resources used by Azure Machine Learning, see the following articles: >
-> * [Virtual network isolation and privacy overview](how-to-network-security-overview.md).
+> * [Virtual network isolation and privacy overview](../how-to-network-security-overview.md).
> * [Secure workspace resources](../how-to-secure-workspace-vnet.md). > * [Secure training environments (v1)](how-to-secure-training-vnet.md). > * [Secure inference environment (v1)](how-to-secure-inferencing-vnet.md)
If you want to isolate the development clients, so they don't have direct access
The following diagram illustrates this configuration. The __Workload__ VNet contains computes created by the workspace for training & deployment. The __Client__ VNet contains clients or client ExpressRoute/VPN connections. Both VNets contain private endpoints for the workspace, Azure Storage Account, Azure Key Vault, and Azure Container Registry. ### Scenario: Isolated Azure Kubernetes Service
If you want to create an isolated Azure Kubernetes Service used by the workspace
1. Add a new private endpoint to your workspace. This private endpoint should exist in the client VNet and have private DNS zone integration enabled. 1. Attach the AKS cluster to the Azure Machine Learning workspace. For more information, see [Create and attach an Azure Kubernetes Service cluster](how-to-create-attach-kubernetes.md#attach-an-existing-aks-cluster). ## Next steps
-* For more information on securing your Azure Machine Learning workspace, see the [Virtual network isolation and privacy overview](how-to-network-security-overview.md) article.
+* For more information on securing your Azure Machine Learning workspace, see the [Virtual network isolation and privacy overview](../how-to-network-security-overview.md) article.
* If you plan on using a custom DNS solution in your virtual network, see [how to use a workspace with a custom DNS server](../how-to-custom-dns.md).
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-attach-compute-cluster.md
Last updated 05/02/2022
> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK or CLI version you are using:"] > * [v1](how-to-create-attach-compute-cluster.md)
-> * [v2 (current version)](../how-to-create-attach-compute-cluster.md)
+> * [v2 (current version)](../how-to-create-attach-compute-cluster.md?view=azureml-api-2&preserve-view=true)
Learn how to create and manage a [compute cluster](../concept-compute-target.md#azure-machine-learning-compute-managed) in your Azure Machine Learning workspace.
machine-learning How To Create Attach Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-attach-kubernetes.md
Last updated 04/21/2022
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK or CLI extension you are using:"] > * [v1](how-to-create-attach-kubernetes.md)
-> * [v2 (current version)](../how-to-attach-kubernetes-anywhere.md)
+> * [v2 (current version)](../how-to-attach-kubernetes-anywhere.md?view=azureml-api-2&preserve-view=true)
> [!IMPORTANT] > This article shows how to use the CLI and SDK v1 to create or attach an Azure Kubernetes Service cluster, which is considered as **legacy** feature now. To attach Azure Kubernetes Service cluster using the recommended approach for v2, see [Introduction to Kubernetes compute target in v2](../how-to-attach-kubernetes-anywhere.md).
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-manage-compute-instance.md
Last updated 05/02/2022
> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK or CLI version you are using:"] > * [v1](how-to-create-manage-compute-instance.md)
-> * [v2 (current version)](../how-to-create-manage-compute-instance.md)
+> * [v2 (current version)](../how-to-create-manage-compute-instance.md?view=azureml-api-2&preserve-view=true)
Learn how to create and manage a [compute instance](../concept-compute-instance.md) in your Azure Machine Learning workspace with CLI v1.
In the examples below, the name of the compute instance is **instance**
-[Azure RBAC](../../role-based-access-control/overview.md) allows you to control which users in the workspace can create, delete, start, stop, restart a compute instance. All users in the workspace contributor and owner role can create, delete, start, stop, and restart compute instances across the workspace. However, only the creator of a specific compute instance, or the user assigned if it was created on their behalf, is allowed to access Jupyter, JupyterLab, RStudio, and Posit Workbench (formerly RStudio Workbench) on that compute instance. A compute instance is dedicated to a single user who has root access. That user has access to Jupyter/JupyterLab/RStudio/Posit Workbench running on the instance. Compute instance will have single-user sign in and all actions will use that userΓÇÖs identity for Azure RBAC and attribution of experiment runs. SSH access is controlled through public/private key mechanism.
+[Azure RBAC](../../role-based-access-control/overview.md) allows you to control which users in the workspace can create, delete, start, stop, restart a compute instance. All users in the workspace contributor and owner role can create, delete, start, stop, and restart compute instances across the workspace. However, only the creator of a specific compute instance, or the user assigned if it was created on their behalf, is allowed to access Jupyter, JupyterLab, RStudio, and Posit Workbench (formerly RStudio Workbench) on that compute instance. A compute instance is dedicated to a single user who has root access. That user has access to Jupyter/JupyterLab/RStudio/Posit Workbench running on the instance. Compute instance will have single-user sign in and all actions will use that user's identity for Azure RBAC and attribution of experiment runs. SSH access is controlled through public/private key mechanism.
These actions can be controlled by Azure RBAC: * *Microsoft.MachineLearningServices/workspaces/computes/read*
machine-learning How To Create Register Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-register-datasets.md
Last updated 09/28/2022
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"] > * [v1](how-to-create-register-datasets.md)
-> * [v2 (current version)](../how-to-create-data-assets.md)
+> * [v2 (current version)](../how-to-create-data-assets.md?view=azureml-api-2&preserve-view=true)
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-mlflow-models.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] > [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning developer platform you are using:"] > * [v1](how-to-deploy-mlflow-models.md)
-> * [v2 (current version)](../how-to-deploy-mlflow-models-online-endpoints.md)
+> * [v2 (current version)](../how-to-deploy-mlflow-models-online-endpoints.md?view=azureml-api-2&preserve-view=true)
In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model as an Azure web service, so you can leverage and apply Azure Machine Learning's model management and data drift detection capabilities to your production models. See [MLflow and Azure Machine Learning](concept-mlflow-v1.md) for additional MLflow and Azure Machine Learning functionality integrations.
machine-learning How To Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-export-delete-data.md
- Title: Export or delete workspace data (v1)-
-description: Learn how to export or delete your workspace with the Azure Machine Learning studio, CLI, SDK, and authenticated REST APIs.
----- Previously updated : 10/21/2021-----
-# Export or delete your Machine Learning service workspace data (v1)
-
-In Azure Machine Learning, you can export or delete your workspace data using either the portal's graphical interface or the Python SDK. This article describes both options.
---
-## Control your workspace data
-
-In-product data stored by Azure Machine Learning is available for export and deletion. You can export and delete using Azure Machine Learning studio, CLI, and SDK. Telemetry data can be accessed through the Azure Privacy portal.
-
-In Azure Machine Learning, personal data consists of user information in job history documents.
-
-## Delete high-level resources using the portal
-
-When you create a workspace, Azure creates several resources within the resource group:
--- The workspace itself-- A storage account-- A container registry-- An Applications Insights instance-- A key vault-
-These resources can be deleted by selecting them from the list and choosing **Delete**
--
-Job history documents, which may contain personal user information, are stored in the storage account in blob storage, in subfolders of `/azureml`. You can download and delete the data from the portal.
--
-## Export and delete machine learning resources using Azure Machine Learning studio
-
-Azure Machine Learning studio provides a unified view of your machine learning resources, such as notebooks, datasets, models, and experiments. Azure Machine Learning studio emphasizes preserving a record of your data and experiments. Computational resources such as pipelines and compute resources can be deleted using the browser. For these resources, navigate to the resource in question and choose **Delete**.
-
-Datasets can be unregistered and Experiments can be archived, but these operations don't delete the data. To entirely remove the data, datasets and experiment data must be deleted at the storage level. Deleting at the storage level is done using the portal, as described previously. An individual Job can be deleted directly in studio. Deleting a Job deletes the Job's data.
-
-> [!NOTE]
-> Prior to unregistering a Dataset, use its **Data source** link to find the specific Data URL to delete.
-
-You can download training artifacts from experimental jobs using the Studio. Choose the **Experiment** and **Job** in which you're interested. Choose **Output + logs** and navigate to the specific artifacts you wish to download. Choose **...** and **Download**.
-
-You can download a registered model by navigating to the **Model** and choosing **Download**.
--
-## Export and delete resources using the Python SDK
-
-You can download the outputs of a particular job using:
-
-```python
-# Retrieved from Azure Machine Learning web UI
-run_id = 'aaaaaaaa-bbbb-cccc-dddd-0123456789AB'
-experiment = ws.experiments['my-experiment']
-run = next(run for run in ex.get_runs() if run.id == run_id)
-metrics_output_port = run.get_pipeline_output('metrics_output')
-model_output_port = run.get_pipeline_output('model_output')
-
-metrics_output_port.download('.', show_progress=True)
-model_output_port.download('.', show_progress=True)
-```
-
-The following machine learning resources can be deleted using the Python SDK:
-
-| Type | Function Call | Notes |
-| | | |
-| `Workspace` | [`delete`](/python/api/azureml-core/azureml.core.workspace.workspace#delete-delete-dependent-resources-false--no-wait-false-) | Use `delete-dependent-resources` to cascade the delete |
-| `Model` | [`delete`](/python/api/azureml-core/azureml.core.model%28class%29#delete--) | |
-| `ComputeTarget` | [`delete`](/python/api/azureml-core/azureml.core.computetarget#delete--) | |
-| `WebService` | [`delete`](/python/api/azureml-core/azureml.core.webservice%28class%29) | |
machine-learning How To Generate Automl Training Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-generate-automl-training-code.md
+
+ Title: How to view AutoML model training code
+
+description: How to view model training code for an automated ML trained model and explanation of each stage.
++++++++ Last updated : 02/16/2022
+monikerRange: 'azureml-api-1'
++
+# View training code for an Automated ML model
+
+In this article, you learn how to view the generated training code from any automated machine learning trained model.
+
+Code generation for automated ML trained models allows you to see the following details that automated ML uses to train and build the model for a specific run.
+
+* Data preprocessing
+* Algorithm selection
+* Featurization
+* Hyperparameters
+
+You can select any automated ML trained model, recommended or child run, and view the generated Python training code that created that specific model.
+
+With the generated model's training code, you can:
+
+* **Learn** what featurization process and hyperparameters the model algorithm uses.
+* **Track/version/audit** trained models. Store versioned code to track what specific training code is used with the model that's to be deployed to production.
+* **Customize** the training code by changing hyperparameters or applying your ML and algorithms skills/experience, and retrain a new model with your customized code.
+
+The following diagram illustrates that you can generate the code for automated ML experiments with all task types. First select a model. The model you select is highlighted, then Azure Machine Learning copies the code files used to create the model and displays them in your notebooks shared folder. From here, you can view and customize the code as needed.
++
+## Prerequisites
+
+* An Azure Machine Learning workspace. To create the workspace, see [Create workspace resources](../quickstart-create-resources.md).
+
+* This article assumes some familiarity with setting up an automated machine learning experiment. Follow the [tutorial](../tutorial-auto-train-image-models.md) or [how-to](how-to-configure-auto-train-v1.md) to see the main automated machine learning experiment design patterns.
+
+* Automated ML code generation is only available for experiments run on remote Azure Machine Learning compute targets. Code generation isn't supported for local runs.
+
+* All automated ML runs triggered through Azure Machine Learning Studio, SDKv2 or CLIv2 will have code generation enabled.
+
+## Get generated code and model artifacts
+By default, each automated ML trained model generates its training code after training completes. Automated ML saves this code in the experiment's `outputs/generated_code` for that specific model. You can view them in the Azure Machine Learning studio UI on the **Outputs + logs** tab of the selected model.
+
+* **script.py** This is the model's training code that you likely want to analyze with the featurization steps, specific algorithm used, and hyperparameters.
+
+* **script_run_notebook.ipynb** Notebook with boiler-plate code to run the model's training code (script.py) in Azure Machine Learning compute through Azure Machine Learning SDKv2.
+
+After the automated ML training run completes, you can access the `script.py` and the `script_run_notebook.ipynb` files via the Azure Machine Learning studio UI.
+
+To do so, navigate to the **Models** tab of the automated ML experiment parent run page. After you select one of the trained models, you can select the **View generated code** button. This button redirects you to the **Notebooks** portal extension, where you can view, edit and run the generated code for that particular selected model.
+
+![parent run models tab view generate code button](./media/how-to-generate-automl-training-code/parent-run-view-generated-code.png)
+
+You can also access the model's generated code from the top of the child run's page once you navigate into the child run's page of a particular model.
+
+![child run page view generated code button](./media/how-to-generate-automl-training-code/child-run-view-generated-code.png)
+
+If you're using the Python SDKv2, you can also download the `script.py` and the `script_run_notebook.ipynb` files by retrieving the best run via MLflow and downloading the resulting artifacts, as sketched below.
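+
+The following is a minimal sketch, assuming MLflow tracking is already configured against your workspace; `best_run_id` and the artifact paths are placeholders based on the `outputs/generated_code` location described earlier:
+
+```python
+from mlflow.tracking import MlflowClient
+
+# Placeholder: the ID of the best automated ML child run.
+best_run_id = '<BEST_RUN_ID>'
+
+client = MlflowClient()
+# Download the generated training script and notebook to the current folder.
+client.download_artifacts(best_run_id, 'outputs/generated_code/script.py', '.')
+client.download_artifacts(best_run_id, 'outputs/generated_code/script_run_notebook.ipynb', '.')
+```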
+
+## script.py
+
+The `script.py` file contains the core logic needed to train a model with the previously used hyperparameters. While intended to be executed in the context of an Azure Machine Learning script run, with some modifications, the model's training code can also be run standalone in your own on-premises environment.
+
+The script can roughly be broken down into the following parts: data loading, data preparation, data featurization, preprocessor/algorithm specification, and training.
+
+### Data loading
+
+The function `get_training_dataset()` loads the previously used dataset. It assumes that the script is run in an Azure Machine Learning script run under the same workspace as the original experiment.
+
+```python
+def get_training_dataset(dataset_id):
+ from azureml.core.dataset import Dataset
+ from azureml.core.run import Run
+
+ logger.info("Running get_training_dataset")
+ ws = Run.get_context().experiment.workspace
+ dataset = Dataset.get_by_id(workspace=ws, id=dataset_id)
+ return dataset.to_pandas_dataframe()
+```
+
+When running as part of a script run, `Run.get_context().experiment.workspace` retrieves the correct workspace. However, if this script is run inside of a different workspace or run locally, you need to modify the script to [explicitly specify the appropriate workspace](/python/api/azureml-core/azureml.core.workspace.workspace).
+
+Once the workspace has been retrieved, the original dataset is retrieved by its ID. Another dataset with exactly the same structure could also be specified by ID or name with the [`get_by_id()`](/python/api/azureml-core/azureml.core.dataset.dataset#get-by-id-workspace--id-) or [`get_by_name()`](/python/api/azureml-core/azureml.core.dataset.dataset#get-by-name-workspace--name--version--latest--) method, respectively. You can find the ID later on in the script, in a similar section as the following code.
+
+```python
+if __name__ == '__main__':
+ parser = argparse.ArgumentParser()
+ parser.add_argument('--training_dataset_id', type=str, default='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx', help='Default training dataset id is populated from the parent run')
+ args = parser.parse_args()
+
+ main(args.training_dataset_id)
+```
+
+You can also opt to replace this entire function with your own data loading mechanism; the only constraints are that the return value must be a Pandas dataframe and that the data must have the same shape as in the original experiment.
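+
+For example, here's a minimal sketch of a drop-in replacement that reads a local CSV file; the file name is a placeholder, and the returned dataframe must match the original dataset's shape:
+
+```python
+import pandas as pd
+
+# Placeholder path; the CSV must contain the same columns and types as the original training dataset.
+def get_training_dataset(dataset_id=None):
+    return pd.read_csv('my_training_data.csv')
+```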
+
+### Data preparation code
+
+The function `prepare_data()` cleans the data, splits out the feature and sample weight columns and prepares the data for use in training.
+This function can vary depending on the type of dataset and the experiment task type: classification, regression, time-series forecasting, images or NLP tasks.
+
+The following example shows that in general, the dataframe from the data loading step is passed in. The label column and sample weights, if originally specified, are extracted and rows containing `NaN` are dropped from the input data.
+
+```python
+def prepare_data(dataframe):
+ from azureml.training.tabular.preprocessing import data_cleaning
+
+ logger.info("Running prepare_data")
+ label_column_name = 'y'
+
+ # extract the features, target and sample weight arrays
+ y = dataframe[label_column_name].values
+ X = dataframe.drop([label_column_name], axis=1)
+ sample_weights = None
+ X, y, sample_weights = data_cleaning._remove_nan_rows_in_X_y(X, y, sample_weights,
+ is_timeseries=False, target_column=label_column_name)
+
+ return X, y, sample_weights
+```
+
+If you want to do any additional data preparation, it can be done in this step by adding your custom data preparation code.
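+
+As a hedged example, the following sketch shows the kind of helper you might call from `prepare_data()`; the `id` column is hypothetical:
+
+```python
+def custom_prepare_data(dataframe):
+    # Drop a hypothetical identifier column and fill numeric gaps with column medians
+    # before the standard cleaning in prepare_data() runs.
+    dataframe = dataframe.drop(columns=['id'], errors='ignore')
+    return dataframe.fillna(dataframe.median(numeric_only=True))
+```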
+
+### Data featurization code
+
+The function `generate_data_transformation_config()` specifies the featurization step in the final scikit-learn pipeline. The featurizers from the original experiment are reproduced here, along with their parameters.
+
+For example, possible data transformations in this function can be based on imputers like `SimpleImputer()` and `CatImputer()`, or transformers such as `StringCastTransformer()` and `LabelEncoderTransformer()`.
+
+The following is a transformer of type `StringCastTransformer()` that can be used to transform a set of columns. In this case, the set indicated by `column_names`.
+
+```python
+def get_mapper_0(column_names):
+ # ... Multiple imports to package dependencies, removed for simplicity ...
+
+ definition = gen_features(
+ columns=column_names,
+ classes=[
+ {
+ 'class': StringCastTransformer,
+ },
+ {
+ 'class': CountVectorizer,
+ 'analyzer': 'word',
+ 'binary': True,
+ 'decode_error': 'strict',
+ 'dtype': numpy.uint8,
+ 'encoding': 'utf-8',
+ 'input': 'content',
+ 'lowercase': True,
+ 'max_df': 1.0,
+ 'max_features': None,
+ 'min_df': 1,
+ 'ngram_range': (1, 1),
+ 'preprocessor': None,
+ 'stop_words': None,
+ 'strip_accents': None,
+ 'token_pattern': '(?u)\\b\\w\\w+\\b',
+ 'tokenizer': wrap_in_lst,
+ 'vocabulary': None,
+ },
+ ]
+ )
+ mapper = DataFrameMapper(features=definition, input_df=True, sparse=True)
+
+ return mapper
+```
+
+Be aware that if you have many columns that need to have the same featurization/transformation applied (for example, 50 columns in several column groups), these columns are handled by grouping based on type.
+
+In the following example, notice that each group has a unique mapper applied. This mapper is then applied to each of the columns of that group.
+
+```python
+def generate_data_transformation_config():
+ from sklearn.pipeline import FeatureUnion
+
+ column_group_1 = [['id'], ['ps_reg_01'], ['ps_reg_02'], ['ps_reg_03'], ['ps_car_11_cat'], ['ps_car_12'], ['ps_car_13'], ['ps_car_14'], ['ps_car_15'], ['ps_calc_01'], ['ps_calc_02'], ['ps_calc_03']]
+
+ column_group_2 = ['ps_ind_06_bin', 'ps_ind_07_bin', 'ps_ind_08_bin', 'ps_ind_09_bin', 'ps_ind_10_bin', 'ps_ind_11_bin', 'ps_ind_12_bin', 'ps_ind_13_bin', 'ps_ind_16_bin', 'ps_ind_17_bin', 'ps_ind_18_bin', 'ps_car_08_cat', 'ps_calc_15_bin', 'ps_calc_16_bin', 'ps_calc_17_bin', 'ps_calc_18_bin', 'ps_calc_19_bin', 'ps_calc_20_bin']
+
+ column_group_3 = ['ps_ind_01', 'ps_ind_02_cat', 'ps_ind_03', 'ps_ind_04_cat', 'ps_ind_05_cat', 'ps_ind_14', 'ps_ind_15', 'ps_car_01_cat', 'ps_car_02_cat', 'ps_car_03_cat', 'ps_car_04_cat', 'ps_car_05_cat', 'ps_car_06_cat', 'ps_car_07_cat', 'ps_car_09_cat', 'ps_car_10_cat', 'ps_car_11', 'ps_calc_04', 'ps_calc_05', 'ps_calc_06', 'ps_calc_07', 'ps_calc_08', 'ps_calc_09', 'ps_calc_10', 'ps_calc_11', 'ps_calc_12', 'ps_calc_13', 'ps_calc_14']
+
+ feature_union = FeatureUnion([
+ ('mapper_0', get_mapper_0(column_group_1)),
+ ('mapper_1', get_mapper_1(column_group_3)),
+ ('mapper_2', get_mapper_2(column_group_2)),
+ ])
+ return feature_union
+```
+
+This approach gives you more streamlined code, because a separate transformer code block isn't repeated for each column, which can be especially cumbersome when you have tens or hundreds of columns in your dataset.
+
+With classification and regression tasks, `FeatureUnion` is used for featurizers.
+For time-series forecasting models, multiple time series-aware featurizers are collected into a scikit-learn pipeline, then wrapped in the `TimeSeriesTransformer`.
+Any user-provided featurization for time series forecasting models happens before the ones provided by automated ML.
+
+### Preprocessor specification code
+
+The function `generate_preprocessor_config()`, if present, specifies a preprocessing step to be done after featurization in the final scikit-learn pipeline.
+
+Normally, this preprocessing step only consists of data standardization/normalization that's accomplished with [`sklearn.preprocessing`](https://scikit-learn.org/stable/modules/preprocessing.html).
+
+Automated ML only specifies a preprocessing step for non-ensemble classification and regression models.
+
+Here's an example of a generated preprocessor code:
+
+```python
+def generate_preprocessor_config():
+ from sklearn.preprocessing import MaxAbsScaler
+
+ preproc = MaxAbsScaler(
+ copy=True
+ )
+
+ return preproc
+```
+
+### Algorithm and hyperparameters specification code
+
+The algorithm and hyperparameters specification code is likely what many ML professionals are most interested in.
+
+The `generate_algorithm_config()` function specifies the actual algorithm and hyperparameters for training the model as the last stage of the final scikit-learn pipeline.
+
+The following example uses an XGBoostClassifier algorithm with specific hyperparameters.
+
+```python
+def generate_algorithm_config():
+ from xgboost.sklearn import XGBClassifier
+
+ algorithm = XGBClassifier(
+ base_score=0.5,
+ booster='gbtree',
+ colsample_bylevel=1,
+ colsample_bynode=1,
+ colsample_bytree=1,
+ gamma=0,
+ learning_rate=0.1,
+ max_delta_step=0,
+ max_depth=3,
+ min_child_weight=1,
+ missing=numpy.nan,
+ n_estimators=100,
+ n_jobs=-1,
+ nthread=None,
+ objective='binary:logistic',
+ random_state=0,
+ reg_alpha=0,
+ reg_lambda=1,
+ scale_pos_weight=1,
+ seed=None,
+ silent=None,
+ subsample=1,
+ verbosity=0,
+ tree_method='auto',
+ verbose=-10
+ )
+
+ return algorithm
+```
+
+The generated code in most cases uses open source software (OSS) packages and classes. There are instances where intermediate wrapper classes are used to simplify more complex code. For example, XGBoost classifier and other commonly used libraries like LightGBM or Scikit-Learn algorithms can be applied.
+
+As an ML professional, you can customize the algorithm's configuration code by tweaking its hyperparameters as needed, based on your experience with that algorithm and your particular ML problem.
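+
+As a sketch only, the following shows the same generated function with a few hyperparameters changed; the values are illustrative, not tuning recommendations:
+
+```python
+def generate_algorithm_config():
+    from xgboost.sklearn import XGBClassifier
+
+    # Illustrative tweaks to the generated configuration: deeper trees, more estimators,
+    # and a smaller learning rate than the original generated values.
+    algorithm = XGBClassifier(
+        max_depth=5,          # was 3 in the generated code
+        n_estimators=200,     # was 100 in the generated code
+        learning_rate=0.05,   # was 0.1 in the generated code
+        objective='binary:logistic',
+        random_state=0,
+    )
+
+    return algorithm
+```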
+
+For ensemble models, `generate_preprocessor_config_N()` (if needed) and `generate_algorithm_config_N()` are defined for each learner in the ensemble model, where `N` represents the placement of each learner in the ensemble model's list. For stack ensemble models, the meta learner `generate_algorithm_config_meta()` is defined.
+
+### End to end training code
+
+Code generation emits `build_model_pipeline()` and `train_model()` for defining the scikit-learn pipeline and for calling `fit()` on it, respectively.
+
+```python
+def build_model_pipeline():
+ from sklearn.pipeline import Pipeline
+
+ logger.info("Running build_model_pipeline")
+ pipeline = Pipeline(
+ steps=[
+ ('featurization', generate_data_transformation_config()),
+ ('preproc', generate_preprocessor_config()),
+ ('model', generate_algorithm_config()),
+ ]
+ )
+
+ return pipeline
+```
+
+The scikit-learn pipeline includes the featurization step, a preprocessor (if used), and the algorithm or model.
+
+For time-series forecasting models, the scikit-learn pipeline is wrapped in a `ForecastingPipelineWrapper`, which has some additional logic needed to properly handle time-series data depending on the applied algorithm.
+For all task types, we use `PipelineWithYTransformer` in cases where the label column needs to be encoded.
+
+Once you have the scikit-learn pipeline, all that's left is to call the `fit()` method to train the model:
+
+```python
+def train_model(X, y, sample_weights):
+
+ logger.info("Running train_model")
+ model_pipeline = build_model_pipeline()
+
+ model = model_pipeline.fit(X, y)
+ return model
+```
+
+The return value from `train_model()` is the model fitted/trained on the input data.
+
+The main code that runs all the previous functions is the following:
+
+```python
+def main(training_dataset_id=None):
+ from azureml.core.run import Run
+
+ # The following code is for when running this code as part of an Azure Machine Learning script run.
+ run = Run.get_context()
+ setup_instrumentation(run)
+
+ df = get_training_dataset(training_dataset_id)
+ X, y, sample_weights = prepare_data(df)
+ split_ratio = 0.1
+ try:
+ (X_train, y_train, sample_weights_train), (X_valid, y_valid, sample_weights_valid) = split_dataset(X, y, sample_weights, split_ratio, should_stratify=True)
+ except Exception:
+ (X_train, y_train, sample_weights_train), (X_valid, y_valid, sample_weights_valid) = split_dataset(X, y, sample_weights, split_ratio, should_stratify=False)
+
+ model = train_model(X_train, y_train, sample_weights_train)
+
+ metrics = calculate_metrics(model, X, y, sample_weights, X_test=X_valid, y_test=y_valid)
+
+ print(metrics)
+ for metric in metrics:
+ run.log(metric, metrics[metric])
+```
+
+Once you have the trained model, you can use it for making predictions with the `predict()` method. If your experiment is for a time series model, use the `forecast()` method for predictions.
+
+```python
+y_pred = model.predict(X)
+```
+
+Finally, the model is serialized and saved as a `.pkl` file named "model.pkl":
+
+```python
+ with open('model.pkl', 'wb') as f:
+ pickle.dump(model, f)
+ run.upload_file('outputs/model.pkl', 'model.pkl')
+```
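+
+As a sketch, assuming you downloaded `model.pkl` locally, you can reload it with `pickle` and score new data; `new_data.csv` is a placeholder:
+
+```python
+import pickle
+import pandas as pd
+
+# Reload the serialized model produced by the training script.
+with open('model.pkl', 'rb') as f:
+    model = pickle.load(f)
+
+# The new data must have the same columns that were used for training.
+X_new = pd.read_csv('new_data.csv')
+predictions = model.predict(X_new)
+print(predictions[:10])
+```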
+
+## script_run_notebook.ipynb
+
+The `script_run_notebook.ipynb` notebook serves as an easy way to execute `script.py` on an Azure Machine Learning compute.
+This notebook is similar to the existing automated ML sample notebooks; however, there are a couple of key differences, as explained in the following sections.
+
+### Environment
+
+Typically, the training environment for an automated ML run is automatically set by the SDK. However, when running a custom script run like the generated code, automated ML is no longer driving the process, so the environment must be specified for the command job to succeed.
+
+Code generation reuses the environment that was used in the original automated ML experiment, if possible. Doing so guarantees that the training script run doesn't fail due to missing dependencies, and has a side benefit of not needing a Docker image rebuild, which saves time and compute resources.
+
+If you make changes to `script.py` that require additional dependencies, or you would like to use your own environment, you need to update the environment in the `script_run_notebook.ipynb` accordingly.
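+
+For example, here's a minimal SDK v2 sketch (assuming a workspace `config.json` is present) that registers a custom environment you could then reference from the command job; the base image tag and `conda.yaml` path are placeholders:
+
+```python
+from azure.ai.ml import MLClient
+from azure.ai.ml.entities import Environment
+from azure.identity import DefaultAzureCredential
+
+ml_client = MLClient.from_config(credential=DefaultAzureCredential())
+
+# Placeholder image and conda file describing the extra dependencies script.py needs.
+custom_env = Environment(
+    name='automl-generated-code-env',
+    image='mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest',
+    conda_file='conda.yaml',
+)
+custom_env = ml_client.environments.create_or_update(custom_env)
+```
+
+You would then pass the registered environment (for example, `'automl-generated-code-env:1'`) as the `environment` argument of the command job shown in the next section.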
++
+### Submit the experiment
+
+Since the generated code isn't driven by automated ML anymore, instead of creating and submitting an AutoML job, you need to create a command job and provide the generated code (`script.py`) to it.
+
+The following example contains the parameters and regular dependencies needed to run a Command Job, such as compute, environment, etc.
+```python
+from azure.ai.ml import command, Input
+
+# To test with new training / validation datasets, replace the default dataset id(s) taken from parent run below
+training_dataset_id = '<DATASET_ID>'
+
+dataset_arguments = {'training_dataset_id': training_dataset_id}
+command_str = 'python script.py --training_dataset_id ${{inputs.training_dataset_id}}'
+
+command_job = command(
+ code=project_folder,
+ command=command_str,
+ environment='AutoML-Non-Prod-DNN:25',
+ inputs=dataset_arguments,
+ compute='automl-e2e-cl2',
+ experiment_name='build_70775722_9249eda8'
+)
+
+returned_job = ml_client.create_or_update(command_job)
+print(returned_job.studio_url) # link to navigate to the submitted run in Azure Machine Learning studio
+```
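+After submission, you can optionally block and stream the job logs until the run completes; a minimal sketch using the same `ml_client`:
+
+```python
+# Stream the logs of the submitted job; this call blocks until the job finishes.
+ml_client.jobs.stream(returned_job.name)
+```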
+
+## Next steps
+
+* Learn more about [how and where to deploy a model](how-to-deploy-and-where.md).
+* See how to [enable interpretability features](how-to-machine-learning-interpretability-automl.md) specifically within automated ML experiments.
machine-learning How To High Availability Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-high-availability-machine-learning.md
Last updated 11/04/2022
+monikerRange: 'azureml-api-1'
# Failover for business continuity and disaster recovery
machine-learning How To Inference Onnx Automl Image Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-inference-onnx-automl-image-models-v1.md
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"] > * [v1](how-to-inference-onnx-automl-image-models-v1.md)
-> * [v2 (current version)](../how-to-inference-onnx-automl-image-models.md)
+> * [v2 (current version)](../how-to-inference-onnx-automl-image-models.md?view=azureml-api-2&preserve-view=true)
display_detections(img, boxes.copy(), labels, scores, masks.copy(),
## Next steps * [Learn more about computer vision tasks in AutoML](how-to-auto-train-image-models-v1.md)
-* [Troubleshoot AutoML experiments](../how-to-troubleshoot-auto-ml.md)
+* [Troubleshoot AutoML experiments](how-to-troubleshoot-auto-ml.md)
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-log-view-metrics.md
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning Python SDK you are using:"] > * [v1](how-to-log-view-metrics.md)
-> * [v2](../how-to-log-view-metrics.md)
+> * [v2](../how-to-log-view-metrics.md?view=azureml-api-2&preserve-view=true)
Log real-time information using both the default Python logging package and Azure Machine Learning Python SDK-specific functionality. You can log locally and send logs to your workspace in the portal.
params = finished_mlflow_run.data.params
>[!NOTE] > The metrics dictionary under `mlflow.entities.Run.data.metrics` only returns the most recently logged value for a given metric name. For example, if you log, in order, 1, then 2, then 3, then 4 to a metric called `sample_metric`, only 4 is present in the metrics dictionary for `sample_metric`. >
-> To get all metrics logged for a particular metric name, you can use [`MlFlowClient.get_metric_history()`](https://www.mlflow.org/docs/latest/python_api/mlflow.tracking.html#mlflow.tracking.MlflowClient.get_metric_history).
+> To get all metrics logged for a particular metric name, you can use [`MlFlowClient.get_metric_history()`](https://www.mlflow.org/docs/latest/python_api/mlflow.client.html#mlflow.client.MlflowClient.get_metric_history).
<a name="view-the-experiment-in-the-web-portal"></a>
Log files are an essential resource for debugging the Azure Machine Learning wor
2. Select **Download all** to download all your logs into a zip folder. 3. You can also download individual log files by choosing the log file and selecting **Download** #### user_logs folder
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-manage-workspace-cli.md
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"] > * [v1](how-to-manage-workspace-cli.md)
-> * [v2 (current version)](../how-to-manage-workspace-cli.md)
+> * [v2 (current version)](../how-to-manage-workspace-cli.md?view=azureml-api-2&preserve-view=true)
[!INCLUDE [cli-version-info](../../../includes/machine-learning-cli-version-1-only.md)]
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-manage-workspace.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"] > * [v1](how-to-manage-workspace.md)
-> * [v2 (current version)](../how-to-manage-workspace.md)
+> * [v2 (current version)](../how-to-manage-workspace.md?view=azureml-api-2&preserve-view=true)
In this article, you create, view, and delete [**Azure Machine Learning workspaces**](../concept-workspace.md) for [Azure Machine Learning](../overview-what-is-azure-machine-learning.md), using the [SDK for Python](/python/api/overview/azure/ml/).
For more information, see [Workspace SDK reference](/python/api/azureml-core/azu
If you have problems in accessing your subscription, see [Set up authentication for Azure Machine Learning resources and workflows](how-to-setup-authentication.md), as well as the [Authentication in Azure Machine Learning](https://aka.ms/aml-notebook-auth) notebook.
-### Networking
+### Networking
-> [!IMPORTANT]
-> For more information on using a private endpoint and virtual network with your workspace, see [Network isolation and privacy](how-to-network-security-overview.md).
+> [!IMPORTANT]
+> For more information on using a private endpoint and virtual network with your workspace, see [Network isolation and privacy](../how-to-network-security-overview.md).
The Azure Machine Learning Python SDK provides the [PrivateEndpointConfig](/python/api/azureml-core/azureml.core.privateendpointconfig) class, which can be used with [Workspace.create()](/python/api/azureml-core/azureml.core.workspace.workspace#create-name--auth-none--subscription-id-none--resource-group-none--location-none--create-resource-group-true--sku--basictags-none--friendly-name-none--storage-account-none--key-vault-none--app-insights-none--container-registry-none--adb-workspace-none--cmk-keyvault-none--resource-cmk-uri-none--hbi-workspace-false--default-cpu-compute-target-none--default-gpu-compute-target-none--private-endpoint-config-none--private-endpoint-auto-approval-true--exist-ok-false--show-output-true-) to create a workspace with a private endpoint. This class requires an existing virtual network.
By default, metadata for the workspace is stored in an Azure Cosmos DB instance
To limit the data that Microsoft collects on your workspace, select __High business impact workspace__ in the portal, or set `hbi_workspace=true ` in Python. For more information on this setting, see [Encryption at rest](../concept-data-encryption.md#encryption-at-rest).
-> [!IMPORTANT]
-> Selecting high business impact can only be done when creating a workspace. You cannot change this setting after workspace creation.
+> [!IMPORTANT]
+> Selecting high business impact can only be done when creating a workspace. You cannot change this setting after workspace creation.
#### Use your own data encryption key
You can provide your own key for data encryption. Doing so creates the Azure Cos
Use the following steps to provide your own key:
-> [!IMPORTANT]
-> Before following these steps, you must first perform the following actions:
+> [!IMPORTANT]
+> Before following these steps, you must first perform the following actions:
> > Follow the steps in [Configure customer-managed keys](../how-to-setup-customer-managed-keys.md) to: > * Register the Azure Cosmos DB provider
machine-learning How To Network Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-network-security-overview.md
- Title: Secure workspace resources using virtual networks (v1)-
-description: Secure Azure Machine Learning workspace resources and compute environments using an isolated Azure Virtual Network. SDK/CLI v1.
------ Previously updated : 11/16/2022----
-<!-- # Virtual network isolation and privacy overview -->
-# Secure Azure Machine Learning workspace resources using virtual networks (SDK/CLI v1)
--
-> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK or CLI version you are using:"]
-> * [SDK v1](how-to-network-security-overview.md)
-> * [SDK v2 (current version)](../how-to-network-security-overview.md)
-
-Secure Azure Machine Learning workspace resources and compute environments using virtual networks (VNets). This article uses an example scenario to show you how to configure a complete virtual network.
-
-> [!TIP]
-> This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series:
->
-> * [Secure the workspace resources](../how-to-secure-workspace-vnet.md)
-> * [Secure the training environment (v1)](how-to-secure-training-vnet.md)
-> * [Secure inference environment (v1)](how-to-secure-inferencing-vnet.md)
-> * [Enable studio functionality](../how-to-enable-studio-virtual-network.md)
-> * [Use custom DNS](../how-to-custom-dns.md)
-> * [Use a firewall](../how-to-access-azureml-behind-firewall.md)
-> * [API platform network isolation](../how-to-configure-network-isolation-with-v2.md)
->
-> For a tutorial on creating a secure workspace, see [Tutorial: Create a secure workspace](../tutorial-create-secure-workspace.md) or [Tutorial: Create a secure workspace using a template](../tutorial-create-secure-workspace-template.md).
-
-## Prerequisites
-
-This article assumes that you have familiarity with the following topics:
-+ [Azure Virtual Networks](../../virtual-network/virtual-networks-overview.md)
-+ [IP networking](../../virtual-network/ip-services/public-ip-addresses.md)
-+ [Azure Machine Learning workspace with private endpoint](how-to-configure-private-link.md)
-+ [Network Security Groups (NSG)](../../virtual-network/network-security-groups-overview.md)
-+ [Network firewalls](../../firewall/overview.md)
-## Example scenario
-
-In this section, you learn how a common network scenario is set up to secure Azure Machine Learning communication with private IP addresses.
-
-The following table compares how services access different parts of an Azure Machine Learning network with and without a VNet:
-
-| Scenario | Workspace | Associated resources | Training compute environment | Inferencing compute environment |
-|-|-|-|-|-|-|
-|**No virtual network**| Public IP | Public IP | Public IP | Public IP |
-|**Public workspace, all other resources in a virtual network** | Public IP | Public IP (service endpoint) <br> **- or -** <br> Private IP (private endpoint) | Public IP | Private IP |
-|**Secure resources in a virtual network**| Private IP (private endpoint) | Public IP (service endpoint) <br> **- or -** <br> Private IP (private endpoint) | Private IP | Private IP |
-
-* **Workspace** - Create a private endpoint for your workspace. The private endpoint connects the workspace to the vnet through several private IP addresses.
- * **Public access** - You can optionally enable public access for a secured workspace.
-* **Associated resource** - Use service endpoints or private endpoints to connect to workspace resources like Azure storage, Azure Key Vault. For Azure Container Services, use a private endpoint.
- * **Service endpoints** provide the identity of your virtual network to the Azure service. Once you enable service endpoints in your virtual network, you can add a virtual network rule to secure the Azure service resources to your virtual network. Service endpoints use public IP addresses.
- * **Private endpoints** are network interfaces that securely connect you to a service powered by Azure Private Link. Private endpoint uses a private IP address from your VNet, effectively bringing the service into your VNet.
-* **Training compute access** - Access training compute targets like Azure Machine Learning Compute Instance and Azure Machine Learning Compute Clusters with public or private IP addresses.
-* **Inference compute access** - Access Azure Kubernetes Services (AKS) compute clusters with private IP addresses.
--
-The next sections show you how to secure the network scenario described above. To secure your network, you must:
-
-1. Secure the [**workspace and associated resources**](#secure-the-workspace-and-associated-resources).
-1. Secure the [**training environment** (v1)](#secure-the-training-environment).
-1. Secure the [**inferencing environment** (v1)](#secure-the-inferencing-environment-v1).
-1. Optionally: [**enable studio functionality**](#optional-enable-studio-functionality).
-1. Configure [**firewall settings**](#configure-firewall-settings).
-1. Configure [**DNS name resolution**](#custom-dns).
-
-## Public workspace and secured resources
-
-If you want to access the workspace over the public internet while keeping all the associated resources secured in a virtual network, use the following steps:
-
-1. Create an [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md) that will contain the resources used by the workspace.
-1. Use __one__ of the following options to create a publicly accessible workspace:
-
- * Create an Azure Machine Learning workspace that __does not__ use the virtual network. For more information, see [Manage Azure Machine Learning workspaces](../how-to-manage-workspace.md).
- * Create a [Private Link-enabled workspace](../how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint) to enable communication between your VNet and workspace. Then [enable public access to the workspace](#optional-enable-public-access).
-
-1. Add the following services to the virtual network by using _either_ a __service endpoint__ or a __private endpoint__. Also allow trusted Microsoft services to access these
-
- | Service | Endpoint information | Allow trusted information |
- | -- | -- | -- |
- | __Azure Key Vault__| [Service endpoint](../../key-vault/general/overview-vnet-service-endpoints.md)</br>[Private endpoint](../../key-vault/general/private-link-service.md) | [Allow trusted Microsoft services to bypass this firewall](../how-to-secure-workspace-vnet.md#secure-azure-key-vault) |
- | __Azure Storage Account__ | [Service and private endpoint](../how-to-secure-workspace-vnet.md?tabs=se#secure-azure-storage-accounts)</br>[Private endpoint](../how-to-secure-workspace-vnet.md?tabs=pe#secure-azure-storage-accounts) | [Grant access to trusted Azure services](../../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services) |
- | __Azure Container Registry__ | [Private endpoint](../../container-registry/container-registry-private-link.md) | [Allow trusted services](../../container-registry/allow-access-trusted-services.md) |
-
-1. In properties for the Azure Storage Account(s) for your workspace, add your client IP address to the allowed list in firewall settings. For more information, see [Configure firewalls and virtual networks](../../storage/common/storage-network-security.md#configuring-access-from-on-premises-networks).
-
-## Secure the workspace and associated resources
-
-Use the following steps to secure your workspace and associated resources. These steps allow your services to communicate in the virtual network.
-
-1. Create an [Azure Virtual Networks](../../virtual-network/virtual-networks-overview.md) that will contain the workspace and other resources. Then create a [Private Link-enabled workspace](../how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint) to enable communication between your VNet and workspace.
-1. Add the following services to the virtual network by using _either_ a __service endpoint__ or a __private endpoint__. Also allow trusted Microsoft services to access these
-
- | Service | Endpoint information | Allow trusted information |
- | -- | -- | -- |
- | __Azure Key Vault__| [Service endpoint](../../key-vault/general/overview-vnet-service-endpoints.md)</br>[Private endpoint](../../key-vault/general/private-link-service.md) | [Allow trusted Microsoft services to bypass this firewall](../how-to-secure-workspace-vnet.md#secure-azure-key-vault) |
- | __Azure Storage Account__ | [Service and private endpoint](../how-to-secure-workspace-vnet.md?tabs=se#secure-azure-storage-accounts)</br>[Private endpoint](../how-to-secure-workspace-vnet.md?tabs=pe#secure-azure-storage-accounts) | [Grant access from Azure resource instances](../../storage/common/storage-network-security.md#grant-access-from-azure-resource-instances)</br>**or**</br>[Grant access to trusted Azure services](../../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services) |
- | __Azure Container Registry__ | [Private endpoint](../../container-registry/container-registry-private-link.md) | [Allow trusted services](../../container-registry/allow-access-trusted-services.md) |
---
-For detailed instructions on how to complete these steps, see [Secure an Azure Machine Learning workspace](../how-to-secure-workspace-vnet.md).
-
-### Limitations
-
-Securing your workspace and associated resources within a virtual network has the following limitations:
-- All resources must be behind the same VNet. However, subnets within the same VNet are allowed.-
-## Secure the training environment
-
-In this section, you learn how to secure the training environment in Azure Machine Learning. You also learn how Azure Machine Learning completes a training job to understand how the network configurations work together.
-
-To secure the training environment, use the following steps:
-
-1. Create an Azure Machine Learning [compute instance and compute cluster in the virtual network](how-to-secure-training-vnet.md) to run the training job.
-1. If your compute cluster or compute instance uses a public IP address, you must [Allow inbound communication](how-to-secure-training-vnet.md#required-public-internet-access-to-train-models) so that management services can submit jobs to your compute resources.
-
- > [!TIP]
- > Compute cluster and compute instance can be created with or without a public IP address. If created with a public IP address, you get a load balancer with a public IP to accept the inbound access from Azure batch service and Azure Machine Learning service. You need to configure User Defined Routing (UDR) if you use a firewall. If created without a public IP, you get a private link service to accept the inbound access from Azure batch service and Azure Machine Learning service without a public IP.
--
-For detailed instructions on how to complete these steps, see [Secure a training environment](how-to-secure-training-vnet.md).
-
-### Example training job submission
-
-In this section, you learn how Azure Machine Learning securely communicates between services to submit a training job. This shows you how all your configurations work together to secure communication.
-
-1. The client uploads training scripts and training data to storage accounts that are secured with a service or private endpoint.
-
-1. The client submits a training job to the Azure Machine Learning workspace through the private endpoint.
-
-1. Azure Batch service receives the job from the workspace. It then submits the training job to the compute environment through the public load balancer for the compute resource.
-
-1. The compute resource receives the job and begins training. The compute resource uses information stored in key vault to access storage accounts to download training files and upload output.
-
-### Limitations
--- Azure Compute Instance and Azure Compute Clusters must be in the same VNet, region, and subscription as the workspace and its associated resources. -
-## Secure the inferencing environment (v1)
--
-In this section, you learn the options available for securing an inferencing environment when using the Azure CLI extension for ML v1 or the Azure Machine Learning Python SDK v1. When doing a v1 deployment, we recommend that you use Azure Kubernetes Services (AKS) clusters for high-scale, production deployments.
-
-You have two options for AKS clusters in a virtual network:
--- Deploy or attach a default AKS cluster to your VNet.-- Attach a private AKS cluster to your VNet.-
-**Default AKS clusters** have a control plane with public IP addresses. You can add a default AKS cluster to your VNet during the deployment or attach a cluster after it's created.
-
-**Private AKS clusters** have a control plane, which can only be accessed through private IPs. Private AKS clusters must be attached after the cluster is created.
-
-For detailed instructions on how to add default and private clusters, see [Secure an inferencing environment](how-to-secure-inferencing-vnet.md).
-
-Regardless of whether you use a default AKS cluster or a private AKS cluster, if your AKS cluster is behind a VNet, your workspace and its associated resources (storage, key vault, and ACR) must have private endpoints or service endpoints in the same VNet as the AKS cluster.
-
-The following network diagram shows a secured Azure Machine Learning workspace with a private AKS cluster attached to the virtual network.
--
-## Optional: Enable public access
-
-You can secure the workspace behind a VNet using a private endpoint and still allow access over the public internet. The initial configuration is the same as [securing the workspace and associated resources](#secure-the-workspace-and-associated-resources).
-
-After securing the workspace with a private endpoint, use the following steps to enable clients to develop remotely using either the SDK or Azure Machine Learning studio:
-
-1. [Enable public access](how-to-configure-private-link.md#enable-public-access) to the workspace.
-1. [Configure the Azure Storage firewall](../../storage/common/storage-network-security.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#grant-access-from-an-internet-ip-range) to allow communication with the IP address of clients that connect over the public internet.
-
-## Optional: Enable studio functionality
-
-If your storage is in a VNet, you must use extra configuration steps to enable full functionality in studio. By default, the following features are disabled:
-
-* Preview data in the studio.
-* Visualize data in the designer.
-* Deploy a model in the designer.
-* Submit an AutoML experiment.
-* Start a labeling project.
-
-To enable full studio functionality, see [Use Azure Machine Learning studio in a virtual network](../how-to-enable-studio-virtual-network.md).
-
-### Limitations
-
-[ML-assisted data labeling](../how-to-create-image-labeling-projects.md#use-ml-assisted-data-labeling) doesn't support a default storage account behind a virtual network. Instead, use a storage account other than the default for ML assisted data labeling.
-
-> [!TIP]
-> As long as it is not the default storage account, the account used by data labeling can be secured behind the virtual network.
-
-## Configure firewall settings
-
-Configure your firewall to control traffic between your Azure Machine Learning workspace resources and the public internet. While we recommend Azure Firewall, you can use other firewall products.
-
-For more information on firewall settings, see [Use workspace behind a Firewall](../how-to-access-azureml-behind-firewall.md).
-
-## Custom DNS
-
-If you need to use a custom DNS solution for your virtual network, you must add host records for your workspace.
-
-For more information on the required domain names and IP addresses, see [how to use a workspace with a custom DNS server](../how-to-custom-dns.md).
-
-## Microsoft Sentinel
-
-Microsoft Sentinel is a security solution that can integrate with Azure Machine Learning. For example, using Jupyter notebooks provided through Azure Machine Learning. For more information, see [Use Jupyter notebooks to hunt for security threats](../../sentinel/notebooks.md).
-
-### Public access
-
-Microsoft Sentinel can automatically create a workspace for you if you are OK with a public endpoint. In this configuration, the security operations center (SOC) analysts and system administrators connect to notebooks in your workspace through Sentinel.
-
-For information on this process, see [Create an Azure Machine Learning workspace from Microsoft Sentinel](../../sentinel/notebooks-hunt.md?tabs=public-endpoint#create-an-azure-ml-workspace-from-microsoft-sentinel)
--
-### Private endpoint
-
-If you want to secure your workspace and associated resources in a VNet, you must create the Azure Machine Learning workspace first. You must also create a virtual machine 'jump box' in the same VNet as your workspace, and enable Azure Bastion connectivity to it. Similar to the public configuration, SOC analysts and administrators can connect using Microsoft Sentinel, but some operations must be performed using Azure Bastion to connect to the VM.
-
-For more information on this configuration, see [Create an Azure Machine Learning workspace from Microsoft Sentinel](../../sentinel/notebooks-hunt.md?tabs=private-endpoint#create-an-azure-ml-workspace-from-microsoft-sentinel)
--
-## Next steps
-
-This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series:
-
-* [Secure the workspace resources](../how-to-secure-workspace-vnet.md)
-* [Secure the training environment (v1)](how-to-secure-training-vnet.md)
-* [Secure inference environment (v1)](how-to-secure-inferencing-vnet.md)
-* [Enable studio functionality](../how-to-enable-studio-virtual-network.md)
-* [Use custom DNS](../how-to-custom-dns.md)
-* [Use a firewall](../how-to-access-azureml-behind-firewall.md)
-* [API platform network isolation](../how-to-configure-network-isolation-with-v2.md)
machine-learning How To Prepare Datasets For Automl Images V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-prepare-datasets-for-automl-images-v1.md
Last updated 10/13/2021
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning you are using:"] > * [v1](how-to-prepare-datasets-for-automl-images-v1.md)
-> * [v2 (current version)](../how-to-prepare-datasets-for-automl-images.md)
+> * [v2 (current version)](../how-to-prepare-datasets-for-automl-images.md?view=azureml-api-2&preserve-view=true)
[!INCLUDE [cli-version-info](../../../includes/machine-learning-cli-version-1-only.md)]
machine-learning How To Secure Inferencing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-secure-inferencing-vnet.md
> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK or CLI version you are using:"] > * [SDK/CLI v1](how-to-secure-inferencing-vnet.md)
-> * [SDK/CLI v2 (current version)](../how-to-secure-inferencing-vnet.md)
+> * [SDK/CLI v2 (current version)](../how-to-secure-inferencing-vnet.md?view=azureml-api-2&preserve-view=true)
In this article, you learn how to secure inferencing environments with a virtual network in Azure Machine Learning. This article is specific to the SDK/CLI v1 deployment workflow of deploying a model as a web service.
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-secure-training-vnet.md
> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"] > * [SDK v1](how-to-secure-training-vnet.md)
-> * [SDK v2 (current version)](../how-to-secure-training-vnet.md)
+> * [SDK v2 (current version)](../how-to-secure-training-vnet.md?view=azureml-api-2&preserve-view=true)
In this article, you learn how to secure training environments with a virtual network in Azure Machine Learning using the Python SDK v1.
In this article you learn how to secure the following training compute resources
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ## Prerequisites
-+ Read the [Network security overview](how-to-network-security-overview.md) article to understand common virtual network scenarios and overall virtual network architecture.
++ Read the [Network security overview](../how-to-network-security-overview.md) article to understand common virtual network scenarios and overall virtual network architecture. + An existing virtual network and subnet to use with your compute resources. This VNet must be in the same subscription as your Azure Machine Learning workspace.
For information on using a firewall solution, see [Use a firewall with Azure Mac
This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series:
-* [Virtual network overview (v1)](how-to-network-security-overview.md)
+* [Virtual network overview (v1)](../how-to-network-security-overview.md)
* [Secure the workspace resources](../how-to-secure-workspace-vnet.md) * [Secure inference environment (v1)](how-to-secure-inferencing-vnet.md) * [Enable studio functionality](../how-to-enable-studio-virtual-network.md)
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-secure-workspace-vnet.md
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK/CLI extension you are using:"] > * [v1](how-to-secure-workspace-vnet.md)
-> * [v2 (current version)](../how-to-secure-workspace-vnet.md)
+> * [v2 (current version)](../how-to-secure-workspace-vnet.md?view=azureml-api-2&preserve-view=true)
In this article, you learn how to secure an Azure Machine Learning workspace and its associated resources in a virtual network. > [!TIP] > This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series: >
-> * [Virtual network overview](how-to-network-security-overview.md)
+> * [Virtual network overview](../how-to-network-security-overview.md)
> * [Secure the training environment](how-to-secure-training-vnet.md) > * [Secure the inference environment](how-to-secure-inferencing-vnet.md) > * [Enable studio functionality](../how-to-enable-studio-virtual-network.md)
In this article you learn how to enable the following workspaces resources in a
## Prerequisites
-+ Read the [Network security overview](how-to-network-security-overview.md) article to understand common virtual network scenarios and overall virtual network architecture.
++ Read the [Network security overview](../how-to-network-security-overview.md) article to understand common virtual network scenarios and overall virtual network architecture. + Read the [Azure Machine Learning best practices for enterprise security](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-enterprise-security) article to learn about best practices.
When your Azure Machine Learning workspace is configured with a private endpoint
### Azure Container Registry
-When ACR is behind a virtual network, Azure Machine Learning canΓÇÖt use it to directly build Docker images. Instead, the compute cluster is used to build the images.
+When ACR is behind a virtual network, Azure Machine Learning can't use it to directly build Docker images. Instead, the compute cluster is used to build the images.
> [!IMPORTANT] > The compute cluster used to build Docker images needs to be able to access the package repositories that are used to train and deploy your models. You may need to add network security rules that allow access to public repos, [use private Python packages](how-to-use-private-python-packages.md), or use [custom Docker images](how-to-train-with-custom-image.md) that already include the packages.
json_ds = Dataset.Tabular.from_json_lines_files(path=datastore_paths,
validate=False) ```
+## Secure Azure Monitor and Application Insights
+
+To enable network isolation for Azure Monitor and the Application Insights instance for the workspace, use the following steps:
+
+1. Upgrade the Application Insights instance for your workspace. For steps on how to upgrade, see [Migrate to workspace-based Application Insights resources](/azure/azure-monitor/app/convert-classic-resource).
+
+ > [!TIP]
+ > New workspaces create a workspace-based Application Insights resource by default.
+
+1. Create an Azure Monitor Private Link Scope and add the Application Insights instance from step 1 to the scope. For steps on how to do this, see [Configure your Azure Monitor private link](/azure/azure-monitor/logs/private-link-configure).
+ ## Securely connect to your workspace [!INCLUDE [machine-learning-connect-secure-workspace](../../../includes/machine-learning-connect-secure-workspace.md)]
validate=False)
This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series:
-* [Virtual network overview](how-to-network-security-overview.md)
+* [Virtual network overview](../how-to-network-security-overview.md)
* [Secure the training environment](how-to-secure-training-vnet.md) * [Secure the inference environment](how-to-secure-inferencing-vnet.md) * [Enable studio functionality](../how-to-enable-studio-virtual-network.md)
machine-learning How To Set Up Training Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-set-up-training-targets.md
See these notebooks for examples of configuring jobs for various training scenar
* **Job or experiment deletion**: Experiments can be archived by using the [Experiment.archive](/python/api/azureml-core/azureml.core.experiment%28class%29#archive--) method, or from the Experiment tab view in Azure Machine Learning studio client via the "Archive experiment" button. This action hides the experiment from list queries and views, but does not delete it.
- Permanent deletion of individual experiments or jobs is not currently supported. For more information on deleting Workspace assets, see [Export or delete your Machine Learning service workspace data](how-to-export-delete-data.md).
+ Permanent deletion of individual experiments or jobs is not currently supported. For more information on deleting Workspace assets, see [Export or delete your Machine Learning service workspace data](../how-to-export-delete-data.md).
* **Metric Document is too large**: Azure Machine Learning has internal limits on the size of metric objects that can be logged at once from a training job. If you encounter a "Metric Document is too large" error when logging a list-valued metric, try splitting the list into smaller chunks, for example:
machine-learning How To Setup Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-setup-authentication.md
# Set up authentication for Azure Machine Learning resources and workflows using SDK v1 [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-
+
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"] > * [v1](how-to-setup-authentication.md)
-> * [v2 (current version)](../how-to-setup-authentication.md)
+> * [v2 (current version)](../how-to-setup-authentication.md?view=azureml-api-2&preserve-view=true)
Learn how to set up authentication to your Azure Machine Learning workspace. Authentication to your Azure Machine Learning workspace is based on __Azure Active Directory__ (Azure AD) for most things. In general, there are four authentication workflows that you can use when connecting to the workspace:
can require two-factor authentication, or allow sign in only from managed device
## Next steps
-* [How to use secrets in training](../how-to-use-secrets-in-runs.md).
+* [How to use secrets in training](how-to-use-secrets-in-runs.md).
* [How to configure authentication for models deployed as a web service](how-to-authenticate-web-service.md). * [Consume an Azure Machine Learning model deployed as a web service](how-to-consume-web-service.md).
machine-learning How To Track Monitor Analyze Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-track-monitor-analyze-runs.md
root_run(current_child_run).log("MyMetric", f"Data from child run {current_child
1. Select **Diagnostic settings** and then select **+ Add diagnostic setting**.
- ![Screenshot of diagnostic settings for email notification.](./media/how-to-track-monitor-analyze-runs/diagnostic-setting.png)
+ ![Screenshot of diagnostic settings for email notification.](../media/how-to-track-monitor-analyze-runs/diagnostic-setting.png)
1. In the Diagnostic Setting, 1. under the **Category details**, select the **AmlRunStatusChangedEvent**.
root_run(current_child_run).log("MyMetric", f"Data from child run {current_child
> [!NOTE] > The **Azure Log Analytics Workspace** is a different type of Azure Resource than the **Azure Machine Learning service Workspace**. If there are no options in that list, you can [create a Log Analytics Workspace](../../azure-monitor/logs/quick-create-workspace.md).
- ![Screenshot of configuring the email notification.](./media/how-to-track-monitor-analyze-runs/log-location.png)
+ ![Screenshot of configuring the email notification.](../media/how-to-track-monitor-analyze-runs/log-location.png)
1. In the **Logs** tab, add a **New alert rule**.
- ![Screeenshot of the new alert rule.](./media/how-to-track-monitor-analyze-runs/new-alert-rule.png)
+ ![Screeenshot of the new alert rule.](../media/how-to-track-monitor-analyze-runs/new-alert-rule.png)
1. See [how to create and manage log alerts using Azure Monitor](../../azure-monitor/alerts/alerts-log.md).
machine-learning How To Train Keras https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-keras.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"] > * [v1](how-to-train-keras.md)
-> * [v2 (current version)](../how-to-train-keras.md)
+> * [v2 (current version)](../how-to-train-keras.md?view=azureml-api-2&preserve-view=true)
In this article, learn how to run your Keras training scripts with Azure Machine Learning.
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-pytorch.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"] > * [v1](how-to-train-pytorch.md)
-> * [v2 (current version)](../how-to-train-pytorch.md)
+> * [v2 (current version)](../how-to-train-pytorch.md?view=azureml-api-2&preserve-view=true)
In this article, learn how to run your [PyTorch](https://pytorch.org/) training scripts at enterprise scale using Azure Machine Learning.
machine-learning How To Train Scikit Learn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-scikit-learn.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"] > * [v1](how-to-train-scikit-learn.md)
-> * [v2 (current version)](../how-to-train-scikit-learn.md)
+> * [v2 (current version)](../how-to-train-scikit-learn.md?view=azureml-api-2&preserve-view=true)
In this article, learn how to run your scikit-learn training scripts with Azure Machine Learning.
machine-learning How To Train Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-tensorflow.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"] > * [v1](how-to-train-tensorflow.md)
-> * [v2 (current version)](../how-to-train-tensorflow.md)
+> * [v2 (current version)](../how-to-train-tensorflow.md?view=azureml-api-2&preserve-view=true)
In this article, learn how to run your [TensorFlow](https://www.tensorflow.org/overview) training scripts at scale using Azure Machine Learning.
machine-learning How To Train With Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-with-datasets.md
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"] > * [v1](how-to-train-with-datasets.md)
-> * [v2 (current version)](../how-to-read-write-data-v2.md)
+> * [v2 (current version)](../how-to-read-write-data-v2.md?view=azureml-api-2&preserve-view=true)
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
machine-learning How To Troubleshoot Auto Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-troubleshoot-auto-ml.md
+
+ Title: Troubleshoot automated ML experiments
+
+description: Learn how to troubleshoot and resolve issues in your automated machine learning experiments.
++++++ Last updated : 10/21/2021++
+monikerRange: 'azureml-api-1'
++
+# Troubleshoot automated ML experiments in Python
++
+In this guide, learn how to identify and resolve known issues in your automated machine learning experiments with the [Azure Machine Learning SDK](/python/api/overview/azure/ml/intro).
+
+## Version dependencies
+
+**Upgrading `AutoML` dependencies to newer package versions breaks compatibility**. After SDK version 1.13.0, models aren't loaded in older SDKs because of incompatibility between the older versions pinned in previous `AutoML` packages and the newer versions pinned today.
+
+Expect errors such as:
+
+* Module not found errors such as,
+
+ `No module named 'sklearn.decomposition._truncated_svd'`
+
+* Import errors such as,
+
+ `ImportError: cannot import name 'RollingOriginValidator'`,
+* Attribute errors such as,
+
+ `AttributeError: 'SimpleImputer' object has no attribute 'add_indicator'`
+
+Resolutions depend on your `AutoML` SDK training version (you can verify the versions active in your environment with the sketch after this list):
+
+* If your `AutoML` SDK training version is greater than 1.13.0, you need `pandas == 0.25.1` and `scikit-learn==0.22.1`.
+
+ * If there is a version mismatch, upgrade scikit-learn and/or pandas to the correct versions with the following commands:
+
+ ```bash
+ pip install --upgrade pandas==0.25.1
+ pip install --upgrade scikit-learn==0.22.1
+ ```
+
+* If your `AutoML` SDK training version is less than or equal to 1.12.0, you need `pandas == 0.23.4` and `scikit-learn==0.20.3`.
+ * If there is a version mismatch, downgrade scikit-learn and/or pandas to the correct versions with the following commands:
+
+ ```bash
+ pip install --upgrade pandas==0.23.4
+ pip install --upgrade scikit-learn==0.20.3
+ ```
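+The following sketch shows one way to verify the versions that are active in your environment before loading or retraining a model:
+
+```python
+import azureml.core
+import pandas as pd
+import sklearn
+
+# Print the versions that matter for AutoML model-loading compatibility.
+print('azureml-sdk:', azureml.core.VERSION)
+print('pandas:', pd.__version__)
+print('scikit-learn:', sklearn.__version__)
+```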
+
+## Setup
+
+`AutoML` package changes since version 1.0.76 require the previous version to be uninstalled before updating to the new version.
+
+* **`ImportError: cannot import name AutoMLConfig`**
+
+ If you encounter this error after upgrading from an SDK version before v1.0.76 to v1.0.76 or later, resolve the error by running: `pip uninstall azureml-train automl` and then `pip install azureml-train-automl`. The automl_setup.cmd script does this automatically.
+
+* **automl_setup fails**
+
+ * On Windows, run automl_setup from an Anaconda Prompt. [Install Miniconda](https://docs.conda.io/en/latest/miniconda.html).
+
+ * Ensure that conda 64-bit version 4.4.10 or later is installed. You can check the bitness with the `conda info` command; the `platform` should be `win-64` for Windows or `osx-64` for Mac. To check the version, use the command `conda -V`. If you have a previous version installed, you can update it by using the command `conda update conda`.
+
+ * Ensure that conda is installed.
+
+ * Linux - `gcc: error trying to exec 'cc1plus'`
+
+ 1. If the `gcc: error trying to exec 'cc1plus': execvp: No such file or directory` error is encountered, install the GCC build tools for your Linux distribution. For example, on Ubuntu, use the command `sudo apt-get install build-essential`.
+
+ 1. Pass a new name as the first parameter to automl_setup to create a new conda environment. View existing conda environments using `conda env list` and remove them with `conda env remove -n <environmentname>`.
+
+* **automl_setup_linux.sh fails**: If automl_setup_linux.sh fails on Ubuntu Linux with the error: `unable to execute 'gcc': No such file or directory`
+
+ 1. Make sure that outbound ports 53 and 80 are enabled. On an Azure virtual machine, you can do this from the Azure portal by selecting the VM and clicking on **Networking**.
+ 1. Run the command: `sudo apt-get update`
+ 1. Run the command: `sudo apt-get install build-essential --fix-missing`
+ 1. Run `automl_setup_linux.sh` again
+
+* **configuration.ipynb fails**:
+
+ * For local conda, first ensure that `automl_setup` has successfully run.
+ * Ensure that the subscription_id is correct. Find the subscription_id in the Azure portal by selecting All Services and then Subscriptions. The characters "<" and ">" should not be included in the subscription_id value. For example, `subscription_id = "12345678-90ab-1234-5678-1234567890abcd"` has the valid format.
+ * Ensure Contributor or Owner access to the subscription.
+ * Check that the region is one of the supported regions: `eastus2`, `eastus`, `westcentralus`, `southeastasia`, `westeurope`, `australiaeast`, `westus2`, `southcentralus`.
+ * Ensure access to the region using the Azure portal.
+
+* **workspace.from_config fails**:
+
+ If the call `ws = Workspace.from_config()` fails:
+
+ 1. Ensure that the configuration.ipynb notebook has run successfully.
+ 1. If the notebook is being run from a folder that is not under the folder where `configuration.ipynb` was run, copy the folder aml_config and the file config.json that it contains to the new folder. `Workspace.from_config` reads the config.json from the notebook folder or its parent folder. Alternatively, you can pass the path to the configuration file explicitly, as shown in the sketch after this list.
+ 1. If a new subscription, resource group, workspace, or region, is being used, make sure that you run the `configuration.ipynb` notebook again. Changing config.json directly will only work if the workspace already exists in the specified resource group under the specified subscription.
+ 1. If you want to change the region, change the workspace, resource group, or subscription. `Workspace.create` will not create or update a workspace if it already exists, even if the region specified is different.
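+If the configuration file lives somewhere else, you can also point `Workspace.from_config` at it explicitly; a minimal sketch with a hypothetical path:
+
+```python
+from azureml.core import Workspace
+
+# Explicitly point at the folder or file that contains config.json (hypothetical path).
+ws = Workspace.from_config(path='./aml_config/config.json')
+```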
+
+## TensorFlow
+
+As of version 1.5.0 of the SDK, automated machine learning does not install TensorFlow models by default. To install TensorFlow and use it with your automated ML experiments, install `tensorflow==1.12.0` via `CondaDependencies`.
+
+```python
+ from azureml.core.runconfig import RunConfiguration
+ from azureml.core.conda_dependencies import CondaDependencies
+ run_config = RunConfiguration()
+ run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['tensorflow==1.12.0'])
+```
+
+## Numpy failures
+
+* **`import numpy` fails in Windows**: Some Windows environments see an error loading numpy with the latest Python version 3.6.8. If you see this issue, try with Python version 3.6.7.
+
+* **`import numpy` fails**: Check the TensorFlow version in the automated ml conda environment. Supported versions are < 1.13. Uninstall TensorFlow from the environment if version is >= 1.13.
+
+You can check the version of TensorFlow and uninstall as follows:
+
+ 1. Start a command shell, activate conda environment where automated ml packages are installed.
+ 1. Enter `pip freeze` and look for `tensorflow`. If found, the listed version should be < 1.13.
+ 1. If the listed version is not a supported version, `pip uninstall tensorflow` in the command shell and enter y for confirmation.
+
+## `jwt.exceptions.DecodeError`
+
+Exact error message: `jwt.exceptions.DecodeError: It is required that you pass in a value for the "algorithms" argument when calling decode()`.
+
+For SDK versions <= 1.17.0, installation might result in an unsupported version of PyJWT. Check that the PyJWT version in the automated ML conda environment is a supported version, that is, PyJWT version < 2.0.0.
+
+You may check the version of PyJWT as follows:
+
+1. Start a command shell and activate conda environment where automated ML packages are installed.
+
+1. Enter `pip freeze` and look for `PyJWT`. If found, the listed version should be < 2.0.0.
+
+If the listed version is not a supported version:
+
+1. Consider upgrading to the latest version of AutoML SDK: `pip install -U azureml-sdk[automl]`
+
+1. If that is not viable, uninstall PyJWT from the environment and install the right version as follows:
+
+ 1. `pip uninstall PyJWT` in the command shell and enter `y` for confirmation.
+ 1. Install using `pip install 'PyJWT<2.0.0'`.
+
+
+## Data access
+
+For automated ML jobs, you need to ensure the file datastore that connects to your AzureFile storage has the appropriate authentication credentials. Otherwise, the following message results. Learn how to [update your data access authentication credentials](how-to-train-with-datasets.md#azurefile-storage).
+
+Error message:
+`Could not create a connection to the AzureFileService due to missing credentials. Either an Account Key or SAS token needs to be linked the default workspace blob store.`
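+One way to update the credentials is to re-register the file datastore with an account key or SAS token. The datastore, file share, and storage account names below are hypothetical placeholders; a minimal sketch with the v1 SDK:
+
+```python
+from azureml.core import Workspace, Datastore
+
+ws = Workspace.from_config()
+
+# Re-register the file datastore with valid credentials (hypothetical names).
+Datastore.register_azure_file_share(
+    workspace=ws,
+    datastore_name='my_file_datastore',
+    file_share_name='my-file-share',
+    account_name='mystorageaccount',
+    account_key='<storage-account-key>',
+)
+```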
+
+## Data schema
+
+When you try to create a new automated ML experiment via the **Edit and submit** button in the Azure Machine Learning studio, the data schema for the new experiment must match the schema of the data that was used in the original experiment. Otherwise, an error message similar to the following results. Learn more about how to [edit and submit experiments from the studio UI](../how-to-use-automated-ml-for-ml-models.md#edit-and-submit-jobs-preview).
+
+Error message for non-vision experiments: `Schema mismatch error: (an) additional column(s): "Column1: String, Column2: String, Column3: String", (a) missing column(s)`
+
+Error message for vision datasets: `Schema mismatch error: (an) additional column(s): "dataType: String, dataSubtype: String, dateTime: Date, category: String, subcategory: String, status: String, address: String, latitude: Decimal, longitude: Decimal, source: String, extendedProperties: String", (a) missing column(s): "image_url: Stream, image_details: DataRow, label: List" Vision dataset error(s): Vision dataset should have a target column with name 'label'. Vision dataset should have labelingProjectType tag with value as 'Object Identification (Bounding Box)'.`
+
+## Databricks
+See [How to configure an automated ML experiment with Databricks (Azure Machine Learning SDK v1)](how-to-configure-databricks-automl-environment.md#troubleshooting).
++
+## Forecasting R2 score is always zero
+
+ This issue arises if the training data provided has a time series that contains the same value for the last `n_cv_splits` + `forecasting_horizon` data points.
+
+If this pattern is expected in your time series, you can switch your primary metric to **normalized root mean squared error**.
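+If you do switch the metric, you can set it when you configure the experiment. The following is a minimal sketch with the v1 `AutoMLConfig`; the data, label column, and compute references are placeholders, and other required forecasting settings are omitted:
+
+```python
+from azureml.train.automl import AutoMLConfig
+
+# Hypothetical forecasting configuration; the metric choice is the relevant part here.
+automl_config = AutoMLConfig(
+    task='forecasting',
+    primary_metric='normalized_root_mean_squared_error',
+    training_data=train_data,        # assumed to be defined elsewhere
+    label_column_name='target',      # hypothetical label column
+    compute_target=compute_target,   # assumed to be defined elsewhere
+)
+```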
+
+## Failed deployment
+
+ For versions <= 1.18.0 of the SDK, the base image created for deployment may fail with the following error: `ImportError: cannot import name cached_property from werkzeug`.
+
+ The following steps can work around the issue:
+
+ 1. Download the model package
+ 1. Unzip the package
+ 1. Deploy using the unzipped assets
+
+## Azure Functions application
+
+ Automated ML does not currently support Azure Functions applications.
+
+## Sample notebook failures
+
+ If a sample notebook fails with an error that a property, method, or library does not exist:
+
+* Ensure that the correct kernel has been selected in the Jupyter Notebook. The kernel is displayed in the top right of the notebook page. The default is *azure_automl*. The kernel is saved as part of the notebook. If you switch to a new conda environment, you need to select the new kernel in the notebook.
+
+ * For Azure Notebooks, it should be Python 3.6.
+ * For local conda environments, it should be the conda environment name that you specified in automl_setup.
+
+* To ensure the notebook is for the SDK version that you are using,
+ * Check the SDK version by executing `azureml.core.VERSION` in a Jupyter Notebook cell.
+ * You can download previous versions of the sample notebooks from GitHub with these steps:
+ 1. Select the `Branch` button
+ 1. Navigate to the `Tags` tab
+ 1. Select the version
+
+## Experiment throttling
+
+If you have over 100 automated ML experiments, this may cause new automated ML experiments to have long run times.
+
+## VNet Firewall Setting Download Failure
+
+If your workspace is behind a virtual network (VNet), you may run into model download failures when using AutoML NLP. This is because network traffic is blocked from downloading the models and tokenizers from Azure CDN. To unblock this, allowlist the following URLs in the "Application rules" setting of the VNet firewall policy:
+
+* aka.ms
+* https://automlresources-prod.azureedge.net
+
+Follow [these instructions](../how-to-access-azureml-behind-firewall.md) to configure the firewall settings.
+
+Instructions for configuring a workspace behind a virtual network are available [here](../tutorial-create-secure-workspace.md).
+
+## Next steps
+++ Learn more about [how to train a regression model with Automated machine learning](how-to-auto-train-models-v1.md) or [how to train using Automated machine learning on a remote resource](concept-automated-ml-v1.md#local-remote).+++ Learn more about [how and where to deploy a model](how-to-deploy-and-where.md).
machine-learning How To Tune Hyperparameters V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-tune-hyperparameters-v1.md
[!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"] > * [v1](how-to-tune-hyperparameters-v1.md)
-> * [v2 (current version)](../how-to-tune-hyperparameters.md)
-
+> * [v2 (current version)](../how-to-tune-hyperparameters.md?view=azureml-api-2&preserve-view=true)
+
[!INCLUDE [cli-version-info](../../../includes/machine-learning-cli-version-1-only.md)] Automate efficient hyperparameter tuning by using Azure Machine Learning (v1) [HyperDrive package](/python/api/azureml-train-core/azureml.train.hyperdrive). Learn how to complete the steps required to tune hyperparameters with the [Azure Machine Learning SDK](/python/api/overview/azure/ml/):
machine-learning How To Use Automl Small Object Detect V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-automl-small-object-detect-v1.md
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"] > * [v1](how-to-use-automl-small-object-detect-v1.md)
-> * [v2 (current version)](../how-to-use-automl-small-object-detect.md)
+> * [v2 (current version)](../how-to-use-automl-small-object-detect.md?view=azureml-api-2&preserve-view=true)
[!INCLUDE [cli-version-info](../../../includes/machine-learning-cli-version-1-only.md)]
You can specify the value for `tile_grid_size` in your hyperparameter space as a
```python parameter_space = {
- 'model_name': choice('fasterrcnn_resnet50_fpn'),
- 'tile_grid_size': choice('(3, 2)'),
- ...
+ 'model_name': choice('fasterrcnn_resnet50_fpn'),
+ 'tile_grid_size': choice('(3, 2)'),
+ ...
} ```
To choose the optimal value for this parameter for your dataset, you can use hyp
```python parameter_space = {
- 'model_name': choice('fasterrcnn_resnet50_fpn'),
- 'tile_grid_size': choice('(2, 1)', '(3, 2)', '(5, 3)'),
- ...
+ 'model_name': choice('fasterrcnn_resnet50_fpn'),
+ 'tile_grid_size': choice('(2, 1)', '(3, 2)', '(5, 3)'),
+ ...
} ``` ## Tiling during inference
Doing so, may improve performance for some datasets, and won't incur the extra c
The following are the parameters you can use to control the tiling feature.
-| Parameter Name | Description | Default |
+| Parameter Name | Description | Default |
| |-| -| | `tile_grid_size` | The grid size to use for tiling each image. Available for use during training, validation, and inference.<br><br>Tuple of two integers passed as a string, e.g `'(3, 2)'`<br><br> *Note: Setting this parameter increases the computation time proportionally, since all tiles and images are processed by the model.*| no default value | | `tile_overlap_ratio` | Controls the overlap ratio between adjacent tiles in each dimension. When the objects that fall on the tile boundary are too large to fit completely in one of the tiles, increase the value of this parameter so that the objects fit in at least one of the tiles completely.<br> <br> Must be a float in [0, 1).| 0.25 |
See the [object detection sample notebook](https://github.com/Azure/azureml-exam
>[!NOTE] > All images in this article are made available in accordance with the permitted use section of the [MIT licensing agreement](https://choosealicense.com/licenses/mit/).
-> Copyright © 2020 Roboflow, Inc.
+> Copyright &copy; 2020 Roboflow, Inc.
## Next steps
machine-learning How To Use Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-environments.md
ms.devlang: azurecli
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"] > * [v1](how-to-use-environments.md)
-> * [v2 (current version)](../how-to-manage-environments-v2.md)
+> * [v2 (current version)](../how-to-manage-environments-v2.md?view=azureml-api-2&preserve-view=true)
In this article, learn how to create and manage Azure Machine Learning [environments](/python/api/azureml-core/azureml.core.environment.environment) using CLI v1. Use the environments to track and reproduce your projects' software dependencies as they evolve. The [Azure Machine Learning CLI](reference-azure-machine-learning-cli.md) v1 mirrors most of the functionality of the Python SDK v1. You can use it to create and manage environments.
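As a rough point of comparison with the CLI v1 commands, the following sketch shows the equivalent SDK v1 calls for creating and registering an environment; the environment name and conda file path are assumptions for illustration.

```python
from azureml.core import Workspace, Environment

ws = Workspace.from_config()

# Create an environment from a conda specification file (file name is an assumption).
env = Environment.from_conda_specification(name="myenv", file_path="environment.yml")

# Register it so it can be versioned and reused across runs.
env.register(workspace=ws)

# Retrieve and list registered environments later.
restored = Environment.get(workspace=ws, name="myenv")
for name, registered_env in Environment.list(workspace=ws).items():
    print(name, registered_env.version)
```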
az ml environment download -n myenv -d downloaddir
## Next steps
-* After you have a trained model, learn [how and where to deploy models](../how-to-deploy-online-endpoints.md).
+* After you have a trained model, learn [how and where to deploy models](how-to-deploy-and-where.md).
* View the [`Environment` class SDK reference](/python/api/azureml-core/azureml.core.environment%28class%29).
machine-learning How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-managed-identities.md
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK or CLI extension you are using:"] > * [v1](how-to-use-managed-identities.md)
-> * [v2 (current version)](../how-to-identity-based-service-authentication.md)
+> * [v2 (current version)](../how-to-identity-based-service-authentication.md?view=azureml-api-2&preserve-view=true)
[Managed identities](../../active-directory/managed-identities-azure-resources/overview.md) allow you to configure your workspace with the *minimum required permissions to access resources*.
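As an illustrative sketch only, the SDK v1 equivalent of provisioning a compute cluster with a system-assigned managed identity might look like the following; the cluster name, VM size, and node counts are assumptions, and the `identity_type` argument requires an SDK version that supports it.

```python
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()

# Provision a compute cluster with a system-assigned managed identity.
# The cluster name and VM size are placeholder assumptions.
config = AmlCompute.provisioning_configuration(
    vm_size="STANDARD_DS3_V2",
    min_nodes=0,
    max_nodes=2,
    identity_type="SystemAssigned",
)

cluster = ComputeTarget.create(ws, name="cpu-cluster", provisioning_configuration=config)
cluster.wait_for_completion(show_output=True)
```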
az ml computetarget create amlcompute --name <cluster name> -w <workspace> -g <r
# [Studio](#tab/azure-studio)
-For information on configuring managed identity when creating a compute cluster in studio, see [Set up managed identity](../how-to-create-attach-compute-cluster.md#set-up-managed-identity).
+For information on configuring managed identity when creating a compute cluster in studio, see [Set up managed identity](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
az role assignment create --assignee <principal ID> \
--scope "/subscriptions/<subscription ID>/resourceGroups/<private ACR resource group>/providers/Microsoft.ContainerRegistry/registries/<private ACR name>" ```
-Finally, when submitting a training run, specify the base image location in the [environment definition](../how-to-use-environments.md#use-existing-environments).
+Finally, when submitting a training run, specify the base image location in the [environment definition](how-to-use-environments.md).
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
For a workspace with [customer-managed keys for encryption](../concept-data-encr
## Next steps * Learn more about [enterprise security in Azure Machine Learning](../concept-enterprise-security.md)
-* Learn about [identity-based data access](../how-to-identity-based-data-access.md)
+* Learn about [identity-based data access](how-to-identity-based-data-access.md)
* Learn about [managed identities on compute cluster](how-to-create-attach-compute-cluster.md).
machine-learning How To Use Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-mlflow.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] > [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning Python SDK you are using:"] > * [v1](how-to-use-mlflow.md)
-> * [v2 (current version)](../how-to-use-mlflow-cli-runs.md)
+> * [v2 (current version)](../how-to-use-mlflow-cli-runs.md?view=azureml-api-2&preserve-view=true)
In this article, learn how to enable [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api) to connect Azure Machine Learning as the backend of your MLflow experiments.
with mlflow.start_run() as mlflow_run:
mlflow.log_artifact("helloworld.txt") ```
-For details about how to log metrics, parameters and artifacts in a run using MLflow view [How to log and view metrics](../how-to-log-view-metrics.md).
+For details about how to log metrics, parameters, and artifacts in a run using MLflow, see [How to log and view metrics](how-to-log-view-metrics.md).
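For orientation, a minimal sketch of that setup is shown below, assuming the `azureml-mlflow` package is installed and a workspace configuration file is available; the experiment name, parameter, and metric values are placeholders.

```python
import mlflow
from azureml.core import Workspace

ws = Workspace.from_config()

# Point the MLflow client at the workspace's tracking server.
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
mlflow.set_experiment("mlflow-tracking-sketch")

with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("accuracy", 0.91)
    mlflow.log_artifact("helloworld.txt")
```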
## Track runs running on Azure Machine Learning
mlflow.autolog()
Register and track your models with the [Azure Machine Learning model registry](concept-model-management-and-deployment.md#register-package-and-deploy-models-from-anywhere), which supports the MLflow model registry. Azure Machine Learning models are aligned with the MLflow model schema making it easy to export and import these models across different workflows. The MLflow-related metadata, such as run ID, is also tracked with the registered model for traceability. Users can submit training runs, register, and deploy models produced from MLflow runs.
-If you want to deploy and register your production ready model in one step, see [Deploy and register MLflow models](../how-to-deploy-mlflow-models.md).
+If you want to deploy and register your production ready model in one step, see [Deploy and register MLflow models](how-to-deploy-mlflow-models.md).
To register and view a model from a run, use the following steps:
machine-learning How To Use Secrets In Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-secrets-in-runs.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] > [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning Python SDK you are using:"] > * [v1](how-to-use-secrets-in-runs.md)
-> * [v2 (current version)](../how-to-use-secrets-in-runs.md)
+> * [v2 (current version)](../how-to-use-secrets-in-runs.md?view=azureml-api-2&preserve-view=true)
In this article, you learn how to use secrets in training jobs securely. Authentication information such as your user name and password are secrets. For example, if you connect to an external database in order to query training data, you would need to pass your username and password to the remote job context. Coding such values into training scripts in cleartext is insecure as it would expose the secret.
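As a hedged sketch of that pattern using SDK v1, you might store the secret in the workspace's default Key Vault and read it back inside the run; the secret name and value below are hypothetical.

```python
from azureml.core import Workspace, Run

# On your workstation: store the secret in the workspace's default Key Vault.
ws = Workspace.from_config()
keyvault = ws.get_default_keyvault()
keyvault.set_secret(name="db-password", value="<your-password>")  # hypothetical secret name

# Inside the submitted training script: fetch the secret from the run context,
# so the value never appears in the script or its logs in cleartext.
run = Run.get_context()
db_password = run.get_secret(name="db-password")
```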
machine-learning How To Workspace Diagnostic Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-workspace-diagnostic-api.md
- Title: Workspace diagnostics (v1)-
-description: Learn how to use Azure Machine Learning workspace diagnostics with the Python SDK v1.
------ Previously updated : 09/14/2022----
-# How to use workspace diagnostics (SDK v1)
-
-> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning Python SDK you are using:"]
-> * [v1](how-to-workspace-diagnostic-api.md)
-> * [v2 (current version)](../how-to-workspace-diagnostic-api.md)
-
-Azure Machine Learning provides a diagnostic API that can be used to identify problems with your workspace. Errors returned in the diagnostics report include information on how to resolve the problem.
-
-In this article, learn how to use the workspace diagnostics from the Azure Machine Learning Python SDK v1.
-
-## Prerequisites
-
-* An Azure Machine Learning workspace. If you don't have one, see [Create a workspace](../quickstart-create-resources.md).
-* The [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml).
-
-## Diagnostics from Python
-
-The following snippet demonstrates how to use workspace diagnostics from Python
--
-```python
-from azureml.core import Workspace
-
-ws = Workspace.from_config()
-
-diag_param = {
- "value": {
- }
- }
-
-resp = ws.diagnose_workspace(diag_param)
-print(resp)
-```
-
-The response is a JSON document that contains information on any problems detected with the workspace. The following JSON is an example response:
-
-```json
-{
- "value": {
- "user_defined_route_results": [],
- "network_security_rule_results": [],
- "resource_lock_results": [],
- "dns_resolution_results": [{
- "code": "CustomDnsInUse",
- "level": "Warning",
- "message": "It is detected VNet '/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network-name>' of private endpoint '/subscriptions/<subscription-id>/resourceGroups/larrygroup0916/providers/Microsoft.Network/privateEndpoints/<workspace-private-endpoint>' is not using Azure default DNS. You need to configure your DNS server and check https://learn.microsoft.com/azure/machine-learning/how-to-custom-dns to make sure the custom DNS is set up correctly."
- }],
- "storage_account_results": [],
- "key_vault_results": [],
- "container_registry_results": [],
- "application_insights_results": [],
- "other_results": []
- }
-}
-```
-
-If no problems are detected, an empty JSON document is returned.
-
-For more information, see the [Workspace.diagnose_workspace()](/python/api/azureml-core/azureml.core.workspace(class)#diagnose-workspace-diagnose-parameters-) reference.
-
-## Next steps
-
-* [Workspace.diagnose_workspace()](/python/api/azureml-core/azureml.core.workspace(class)#diagnose-workspace-diagnose-parameters-)
-* [How to manage workspaces in portal or SDK](../how-to-manage-workspace.md)
machine-learning Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/introduction.md
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension or Python SDK you are using:"] > * [v1](introduction.md)
-> * [v2 (current version)](../index.yml)
+> * [v2 (current version)](../index.yml?view=azureml-api-2&preserve-view=true)
All articles in this section document the use of the first version of Azure Machine Learning Python SDK (v1) or Azure CLI ml extension (v1).
To find which extensions you have installed, use `az extension list`.
For more information on installing and using the different extensions, see the following articles: * `azure-cli-ml` - [Install, set up, and use the CLI (v1)](reference-azure-machine-learning-cli.md)
-* `ml` - [Install and set up the CLI (v2)](../how-to-configure-cli.md)
+* `ml` - [Install and set up the CLI (v2)](../how-to-configure-cli.md?view=azureml-api-2&preserve-view=true)
For more information on installing and using the different SDK versions:
machine-learning Reference Automl Images Hyperparameters V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/reference-automl-images-hyperparameters-v1.md
Last updated 01/18/2022
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning you are using:"] > * [v1](reference-automl-images-hyperparameters-v1.md)
-> * [v2 (current version)](../reference-automl-images-hyperparameters.md)
+> * [v2 (current version)](../reference-automl-images-hyperparameters.md?view=azureml-api-2&preserve-view=true)
Learn which hyperparameters are available specifically for computer vision tasks in automated ML experiments.
machine-learning Reference Automl Images Schema V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/reference-automl-images-schema-v1.md
Last updated 10/13/2021
# Data schemas to train computer vision models with automated machine learning (v1) [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-
+
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"] > * [v1](reference-automl-images-schema-v1.md)
-> * [v2 (current version)](../reference-automl-images-schema.md)
-
+> * [v2 (current version)](../reference-automl-images-schema.md?view=azureml-api-2&preserve-view=true)
+
[!INCLUDE [cli-version-info](../../../includes/machine-learning-cli-version-1-only.md)] > [!IMPORTANT]
In instance segmentation, output consists of multiple boxes with their scaled to
``` > [!NOTE]
-> The images used in this article are from the Fridge Objects dataset, copyright © Microsoft Corporation and available at [computervision-recipes/01_training_introduction.ipynb](https://github.com/microsoft/computervision-recipes/blob/master/scenarios/detection/01_training_introduction.ipynb) under the [MIT License](https://github.com/microsoft/computervision-recipes/blob/master/LICENSE).
+> The images used in this article are from the Fridge Objects dataset, copyright &copy; Microsoft Corporation and available at [computervision-recipes/01_training_introduction.ipynb](https://github.com/microsoft/computervision-recipes/blob/master/scenarios/detection/01_training_introduction.ipynb) under the [MIT License](https://github.com/microsoft/computervision-recipes/blob/master/LICENSE).
## Next steps
machine-learning Reference Azure Machine Learning Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/reference-azure-machine-learning-cli.md
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"] > * [v1](reference-azure-machine-learning-cli.md)
-> * [v2 (current version)](../how-to-configure-cli.md)
+> * [v2 (current version)](../how-to-configure-cli.md?view=azureml-api-2&preserve-view=true)
[!INCLUDE [cli-version-info](../../../includes/machine-learning-cli-version-1-only.md)]
machine-learning Reference Pipeline Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/reference-pipeline-yaml.md
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"] > * [v1](reference-pipeline-yaml.md)
-> * [v2 (current version)](../reference-yaml-job-pipeline.md)
+> * [v2 (current version)](../reference-yaml-job-pipeline.md?view=azureml-api-2&preserve-view=true)
> [!NOTE] > The YAML syntax detailed in this document is based on the JSON schema for the v1 version of the ML CLI extension. This syntax is guaranteed only to work with the ML CLI v1 extension.
machine-learning Samples Notebooks V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/samples-notebooks-v1.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] > [!div class="op_single_selector" title1="Select the Azure Machine Learning version you are using:"] > * [v1](<samples-notebooks-v1.md>)
-> * [v2](../samples-notebooks.md)
+> * [v2](../samples-notebooks.md?view=azureml-api-2&preserve-view=true)
The [Azure Machine Learning Notebooks repository](https://github.com/azure/machinelearningnotebooks) includes Azure Machine Learning Python SDK (v1) samples. These Jupyter notebooks are designed to help you explore the SDK and serve as models for your own machine learning projects. In this repository, you'll find tutorial notebooks in the **tutorials** folder and feature-specific notebooks in the **how-to-use-azureml** folder.
machine-learning Tutorial 1St Experiment Bring Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-1st-experiment-bring-data.md
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"] > * [v1](tutorial-1st-experiment-bring-data.md)
-> * [v2](../tutorial-1st-experiment-bring-data.md)
+> * [v2](../tutorial-1st-experiment-bring-data.md?view=azureml-api-2&preserve-view=true)
This tutorial shows you how to upload and use your own data to train machine learning models in Azure Machine Learning. This tutorial is *part 3 of a three-part tutorial series*.
machine-learning Tutorial 1St Experiment Hello World https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-1st-experiment-hello-world.md
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"] > * [v1](tutorial-1st-experiment-hello-world.md)
-> * [v2](../tutorial-1st-experiment-hello-world.md)
+> * [v2](../tutorial-1st-experiment-hello-world.md?view=azureml-api-2&preserve-view=true)
In this tutorial, you run your first Python script in the cloud with Azure Machine Learning. This tutorial is *part 1 of a three-part tutorial series*.
machine-learning Tutorial 1St Experiment Sdk Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-1st-experiment-sdk-train.md
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"] > * [v1](tutorial-1st-experiment-sdk-train.md)
-> * [v2](../tutorial-1st-experiment-sdk-train.md)
+> * [v2](../tutorial-1st-experiment-sdk-train.md?view=azureml-api-2&preserve-view=true)
This tutorial shows you how to train a machine learning model in Azure Machine Learning. This tutorial is *part 2 of a three-part tutorial series*.
machine-learning Tutorial Auto Train Image Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-auto-train-image-models-v1.md
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"] > * [v1](tutorial-auto-train-image-models-v1.md)
-> * [v2 (current version)](../tutorial-auto-train-image-models.md)
-
+> * [v2 (current version)](../tutorial-auto-train-image-models.md?view=azureml-api-2&preserve-view=true)
+
>[!IMPORTANT] > The features presented in this article are in preview. They should be considered [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview features that might change at any time.
You'll write code using the Python SDK in this tutorial and learn the following
## Prerequisites
-* If you donΓÇÖt have an Azure subscription, create a free account before you begin. Try the [free or paid version](https://azure.microsoft.com/free/) of Azure Machine Learning today.
+* If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version](https://azure.microsoft.com/free/) of Azure Machine Learning today.
* Python 3.7 or 3.8 are supported for this feature
network-watcher Network Watcher Nsg Flow Logging Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
Previously updated : 03/15/2023 Last updated : 04/03/2023
Here's an example format of a version 1 NSG flow log:
] } },
- "records":
- [
-
- {
- "time": "2017-02-16T22:00:32.8950000Z",
- "systemId": "2c002c16-72f3-4dc5-b391-3444c3527434",
- "category": "NetworkSecurityGroupFlowEvent",
- "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG",
- "operationName": "NetworkSecurityGroupFlowEvents",
- "properties": {"Version":1,"flows":[{"rule":"DefaultRule_DenyAllInBound","flows":[{"mac":"000D3AF8801A","flowTuples":["1487282421,42.119.146.95,10.1.0.4,51529,5358,T,I,D"]}]},{"rule":"UserRule_default-allow-rdp","flows":[{"mac":"000D3AF8801A","flowTuples":["1487282370,163.28.66.17,10.1.0.4,61771,3389,T,I,A","1487282393,5.39.218.34,10.1.0.4,58596,3389,T,I,A","1487282393,91.224.160.154,10.1.0.4,61540,3389,T,I,A","1487282423,13.76.89.229,10.1.0.4,53163,3389,T,I,A"]}]}]}
- }
- ,
- {
- "time": "2017-02-16T22:01:32.8960000Z",
- "systemId": "2c002c16-72f3-4dc5-b391-3444c3527434",
- "category": "NetworkSecurityGroupFlowEvent",
- "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG",
- "operationName": "NetworkSecurityGroupFlowEvents",
- "properties": {"Version":1,"flows":[{"rule":"DefaultRule_DenyAllInBound","flows":[{"mac":"000D3AF8801A","flowTuples":["1487282481,195.78.210.194,10.1.0.4,53,1732,U,I,D"]}]},{"rule":"UserRule_default-allow-rdp","flows":[{"mac":"000D3AF8801A","flowTuples":["1487282435,61.129.251.68,10.1.0.4,57776,3389,T,I,A","1487282454,84.25.174.170,10.1.0.4,59085,3389,T,I,A","1487282477,77.68.9.50,10.1.0.4,65078,3389,T,I,A"]}]}]}
- }
- ,
{
- "time": "2017-02-16T22:02:32.9040000Z",
- "systemId": "2c002c16-72f3-4dc5-b391-3444c3527434",
- "category": "NetworkSecurityGroupFlowEvent",
- "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG",
- "operationName": "NetworkSecurityGroupFlowEvents",
- "properties": {"Version":1,"flows":[{"rule":"DefaultRule_DenyAllInBound","flows":[{"mac":"000D3AF8801A","flowTuples":["1487282492,175.182.69.29,10.1.0.4,28918,5358,T,I,D","1487282505,71.6.216.55,10.1.0.4,8080,8080,T,I,D"]}]},{"rule":"UserRule_default-allow-rdp","flows":[{"mac":"000D3AF8801A","flowTuples":["1487282512,91.224.160.154,10.1.0.4,59046,3389,T,I,A"]}]}]}
+ "records": [
+ {
+ "time": "2017-02-16T22:00:32.8950000Z",
+ "systemId": "2c002c16-72f3-4dc5-b391-3444c3527434",
+ "category": "NetworkSecurityGroupFlowEvent",
+ "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG",
+ "operationName": "NetworkSecurityGroupFlowEvents",
+ "properties": {
+ "Version": 1,
+ "flows": [
+ {
+ "rule": "DefaultRule_DenyAllInBound",
+ "flows": [
+ {
+ "mac": "000D3AF8801A",
+ "flowTuples": [
+ "1487282421,42.119.146.95,10.1.0.4,51529,5358,T,I,D"
+ ]
+ }
+ ]
+ },
+ {
+ "rule": "UserRule_default-allow-rdp",
+ "flows": [
+ {
+ "mac": "000D3AF8801A",
+ "flowTuples": [
+ "1487282370,163.28.66.17,10.1.0.4,61771,3389,T,I,A",
+ "1487282393,5.39.218.34,10.1.0.4,58596,3389,T,I,A",
+ "1487282393,91.224.160.154,10.1.0.4,61540,3389,T,I,A",
+ "1487282423,13.76.89.229,10.1.0.4,53163,3389,T,I,A"
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ },
+ {
+ "time": "2017-02-16T22:01:32.8960000Z",
+ "systemId": "2c002c16-72f3-4dc5-b391-3444c3527434",
+ "category": "NetworkSecurityGroupFlowEvent",
+ "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG",
+ "operationName": "NetworkSecurityGroupFlowEvents",
+ "properties": {
+ "Version": 1,
+ "flows": [
+ {
+ "rule": "DefaultRule_DenyAllInBound",
+ "flows": [
+ {
+ "mac": "000D3AF8801A",
+ "flowTuples": [
+ "1487282481,195.78.210.194,10.1.0.4,53,1732,U,I,D"
+ ]
+ }
+ ]
+ },
+ {
+ "rule": "UserRule_default-allow-rdp",
+ "flows": [
+ {
+ "mac": "000D3AF8801A",
+ "flowTuples": [
+ "1487282435,61.129.251.68,10.1.0.4,57776,3389,T,I,A",
+ "1487282454,84.25.174.170,10.1.0.4,59085,3389,T,I,A",
+ "1487282477,77.68.9.50,10.1.0.4,65078,3389,T,I,A"
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ },
+ {
+ "time": "2017-02-16T22:02:32.9040000Z",
+ "systemId": "2c002c16-72f3-4dc5-b391-3444c3527434",
+ "category": "NetworkSecurityGroupFlowEvent",
+ "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG",
+ "operationName": "NetworkSecurityGroupFlowEvents",
+ "properties": {
+ "Version": 1,
+ "flows": [
+ {
+ "rule": "DefaultRule_DenyAllInBound",
+ "flows": [
+ {
+ "mac": "000D3AF8801A",
+ "flowTuples": [
+ "1487282492,175.182.69.29,10.1.0.4,28918,5358,T,I,D",
+ "1487282505,71.6.216.55,10.1.0.4,8080,8080,T,I,D"
+ ]
+ }
+ ]
+ },
+ {
+ "rule": "UserRule_default-allow-rdp",
+ "flows": [
+ {
+ "mac": "000D3AF8801A",
+ "flowTuples": [
+ "1487282512,91.224.160.154,10.1.0.4,59046,3389,T,I,A"
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ }
+ ]
}
+ ]
+}
```
Here's an example format of a version 2 NSG flow log:
] } }
+ ]
+}
```
NSG flow logs for network security groups associated to Azure Application Gatewa
Currently, these Azure services don't support NSG flow logs: -- [Azure Container Instances](https://azure.microsoft.com/services/container-instances/)-- [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/) -- [Azure Functions](https://azure.microsoft.com/services/functions/)
+- [Azure Container Instances](../container-instances/container-instances-overview.md)
+- [Azure Logic Apps](../logic-apps/logic-apps-overview.md)
+- [Azure Functions](../azure-functions/functions-overview.md)
+- [Azure DNS Private Resolver](../dns/dns-private-resolver-overview.md)
> [!NOTE] > App services deployed under an Azure App Service plan don't support NSG flow logs. To learn more, see [How virtual network integration works](../app-service/overview-vnet-integration.md#how-regional-virtual-network-integration-works).
partner-solutions New Relic Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-create.md
Title: Create an instance of Azure Native New Relic Service Preview
description: Learn how to create a resource by using Azure Native New Relic Service. Previously updated : 02/21/2023 Last updated : 04/04/2023 # Quickstart: Get started with Azure Native New Relic Service Preview
When you use the integrated New Relic experience in the Azure portal by using Az
## Prerequisites
-Before you link the subscription to New Relic, complete the pre-deployment configuration. For more information, see [Configure pre-deployment for Azure Native New Relic Service](new-relic-how-to-configure-prereqs.md).
+Before you link the subscription to New Relic, complete the predeployment configuration. For more information, see [Configure predeployment for Azure Native New Relic Service](new-relic-how-to-configure-prereqs.md).
## Find an offer
Use the Azure portal to find the Azure Native New Relic Service application:
| Property | Description | |--|--| | **Subscription** | Select the Azure subscription that you want to use for creating the New Relic resource. You must have owner access.|
- | **Resource group** |Specify whether you want to create a new resource group or use an existing one. A [resource group](../../azure-resource-manager/management/overview.md) is a container that holds related resources for an Azure solution.|
- | **Resource name** |Specify a name for the New Relic resource. This name will be the friendly name of the New Relic account.|
- | **Region** |Select the region where the New Relic resource on Azure and the New Relic account will be created.|
+ | **Resource group** | Specify whether you want to create a new resource group or use an existing one. A [resource group](/azure/azure-resource-manager/management/overview#resource-groups) is a container that holds related resources for an Azure solution.|
+ | **Resource name** | Specify a name for the New Relic resource. This name is the friendly name of the New Relic account.|
+ | **Region** | Select the region where the New Relic resource on Azure and the New Relic account will be created.|
-1. When you're choosing the organization under which to create the New Relic account, you have two options: create a new organization, or select an existing organization to link the newly created account.
+1. When you're choosing the organization under which to create the New Relic account, you have two options: select **Create new** to create a new organization, or select **Associate with existing** to link the newly created account to an existing organization.
- > [!IMPORTANT]
- > You can't use **Associate with existing** functionality, presently. The ability to create a new New Relic resource and associate it with an existing organization is currently disabled.
-
- If you opt to create a new organization, you can choose a plan from the list of available plans by selecting **Change Plan** on the working pane.
+ If you select **Create new** organization, you can choose a plan from the list of available plans by selecting **Change Plan** in the working pane.
:::image type="content" source="media/new-relic-create/new-relic-change-plan.png" alt-text="Screenshot of the panel for changing a plan.":::
-
+1. If you select **Associate with existing** to associate the New Relic resource with an existing organization, the corresponding billing information is the same as when you created the organization.
+
+1. If the organization you selected is currently billed by New Relic, it remains billed by New Relic.
+
+ :::image type="content" source="media/new-relic-create/new-relic-existing.png" alt-text="Screenshot showing Associate with existing was selected in the organization section of the working pane.":::
+ ## Configure metrics and logs Your next step is to configure metrics and logs on the **Metrics and Logs** tab. When you're creating the New Relic resource, you can set up metrics monitoring and automatic log forwarding:
-1. To set up monitoring of platform metrics for Azure resources by New Relic, select **Enable metrics collection**. If you leave this option cleared, metrics are not be pulled by New Relic.
+1. To set up monitoring of platform metrics for Azure resources by New Relic, select **Enable metrics collection**. If you leave this option cleared, metrics aren't pulled by New Relic.
1. To send subscription-level logs to New Relic, select **Subscription activity logs**. If you leave this option cleared, no subscription-level logs are sent to New Relic.
- These logs provide insight into the operations on your resources at the [control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md). These logs also include updates on service-health events.
+ These logs provide insight into the operations on your resources at the [control plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). These logs also include updates on service-health events.
Use the activity log to determine what, who, and when for any write operations (`PUT`, `POST`, `DELETE`). There's a single activity log for each Azure subscription.
-1. To send Azure resource logs to New Relic, select **Azure resource logs** for all supported resource types. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](../../azure-monitor/essentials/resource-logs-categories.md).
+1. To send Azure resource logs to New Relic, select **Azure resource logs** for all supported resource types. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](/azure/azure-monitor/essentials/resource-logs-categories).
- These logs provide insight into operations that were taken on an Azure resource at the [data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md). For example, getting a secret from a key vault is a data plane operation. Making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type.
+ These logs provide insight into operations that were taken on an Azure resource at the [data plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). For example, getting a secret from a key vault is a data plane operation. Making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type.
:::image type="content" source="media/new-relic-create/new-relic-metrics.png" alt-text="Screenshot of the tab for logs in a New Relic resource, with resource logs selected.":::
partner-solutions New Relic How To Configure Prereqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-how-to-configure-prereqs.md
This article describes the prerequisites that you must complete in your Azure su
## Access control
-To set up New Relic on Azure, you must have owner access on the Azure subscription. [Confirm that you have the appropriate access](../../role-based-access-control/check-access.md) before you start the setup.
+To set up Azure Native New Relic Service Preview, you must have owner access on the Azure subscription. [Confirm that you have the appropriate access](../../role-based-access-control/check-access.md) before you start the setup.
## Resource provider registration
partner-solutions New Relic How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-how-to-manage.md
Title: Manage Azure Native New Relic Service Preview
description: Learn how to manage your Azure Native New Relic Service settings. Previously updated : 01/16/2023 Last updated : 04/04/2023
The columns in the table denote valuable information for your resource:
| **Logs to New Relic** | Count of logs for the resource type | | **Metrics to New Relic** | Count of resources that are sending metrics to New Relic through the integration |
+If you're currently billed by New Relic and want to change to Azure Marketplace billing to consume your Azure commitment, work with New Relic to align on a timeline per your current contract tenure. Then, switch your billing by using the **Bill via Marketplace** option in the working pane of the Overview page of your New Relic resource.
++ ## Reconfigure rules for logs or metrics To change the configuration rules for logs or metrics, select **Metrics and Logs** in the Resource menu.
For more information, see [Configure metrics and logs](new-relic-how-to-configur
To see the list of resources that are sending metrics and logs to New Relic, select **Monitored resources** on the left pane. You can filter the list of resources by resource type, resource group name, region, and whether the resource is sending metrics and logs.
partner-solutions New Relic Link To Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-link-to-existing.md
When you use Azure Native New Relic Service Preview in the Azure portal for link
|Property | Description | |||
- | **Subscription** | Select the Azure subscription that you want to use for creating the New Relic resource. This subscription will be linked to the New Relic account for monitoring purposes.|
+ | **Subscription** | Select the Azure subscription that you want to use for creating the New Relic resource. This subscription is linked to the New Relic account for monitoring purposes.|
| **Resource group** | Specify whether you want to create a new resource group or use an existing one. A [resource group](../../azure-resource-manager/management/overview.md) is a container that holds related resources for an Azure solution.| | **Resource name** | Specify a name for the New Relic resource.| | **Region** | Select the Azure region where the New Relic resource should be created.| | **New Relic account** | The Azure portal displays a list of existing accounts that can be linked. Select the desired account from the available options.|
-1. After you select a New Relic account, the New Relic billing details appear for your reference. The user who is performing the linking action should have global administrator permissions on the New Relic account that's being linked.
+1. After you select an account from New Relic, the New Relic billing details appear for your reference. The user who is performing the linking action should have global administrator permissions on the New Relic account that's being linked.
:::image type="content" source="media/new-relic-link-to-existing/new-relic-form.png" alt-text="Screenshot that shows the Basics tab and New Relic account details in a red box.":::
+1. If the New Relic account you selected has a parent New Relic organization that was created from the New Relic portal, your billing is managed by New Relic and continues to be managed by New Relic.
+
+ > [!NOTE]
+ > Linking requires:
+ > - The account and the New Relic resource reside in the same Azure region
+    > - The user who is linking the account and resource must have Global administrator permissions on the New Relic account being linked
+ >
+ > If the account that you want to link to does not appear in the dropdown list, verify that these conditions are satisfied.
++ 1. Select **Next**. ## Configure metrics and logs
partner-solutions New Relic Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-overview.md
Last updated 01/16/2023
New Relic is a full-stack observability platform that enables a single source of truth for application performance, infrastructure monitoring, log management, error tracking, real-user monitoring, and more. Combined with the Azure platform, use Azure Native New Relic Service Preview to help monitor, troubleshoot, and optimize Azure services and applications.
-Azure Native New Relic Service in Azure Marketplace enables you to create and manage New Relic accounts by using the Azure portal with a fully integrated experience. Integration with Azure enables you to use New Relic as a monitoring solution for your Azure workloads through a streamlined workflow, starting from procurement and moving all the way to configuration and management.
+Azure Native New Relic Service Preview in Marketplace enables you to create and manage New Relic accounts by using the Azure portal with a fully integrated experience. Integration with Azure enables you to use New Relic as a monitoring solution for your Azure workloads through a streamlined workflow, starting from procurement and moving all the way to configuration and management.
You can create and manage the New Relic resources by using the Azure portal through a resource provider named `NewRelic.Observability`. New Relic owns and runs the software as a service (SaaS) application, including the New Relic organizations and accounts that are created through this experience.
partner-solutions New Relic Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-troubleshoot.md
Resource monitoring in New Relic is enabled through the *ingest API key*, which
If your Azure subscription is suspended or deleted because of payment-related issues, resource monitoring in New Relic automatically stops. Use a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [Add, update, or delete a payment method](../../cost-management-billing/manage/change-credit-card.md). New Relic manages the APIs for creating and managing resources, and for the storage and processing of customer telemetry data. The New Relic APIs might be on or outside Azure. If your Azure subscription and resource are working correctly but the New Relic portal shows problems with monitoring data, contact New Relic support.
-<!-- need some clarification here -->
## Next steps
postgresql Concepts Query Performance Insight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-performance-insight.md
+
+ Title: Query Performance Insight - Azure Database for PostgreSQL - Flexible server
+description: This article describes the Query Performance Insight feature in Azure Database for PostgreSQL - Flexible server.
+++++ Last updated : 4/1/2023++
+# Query Performance Insight (Preview)
++
+Query Performance Insight provides intelligent query analysis for Azure Database for PostgreSQL - Flexible Server databases. It helps identify the top resource-consuming and long-running queries in your workload, so you can find the queries to optimize, improve overall workload performance, and use the resources that you're paying for efficiently. Query Performance Insight helps you spend less time troubleshooting database performance by providing:
+
+>[!div class="checklist"]
+> * Identify your long-running queries and how they change over time.
+> * Determine the wait types affecting those queries.
+> * Details on top database queries by calls (execution count), by data usage, by IOPS, and by temporary file usage (potential tuning candidates for performance improvements).
+> * The ability to drill down into the details of a query, to view the query ID and the history of resource utilization.
+> * Deeper insight into overall database resource consumption.
+
+## Prerequisites
+
+1. **[Query Store](concepts-query-store.md)** is enabled on your database. If Query Store isn't running, the Azure portal prompts you to enable it. To enable Query Store, see [Enable Query Store](concepts-query-store.md#enable-query-store).
+
+> [!NOTE]
+> **Query Store** is currently **disabled**. Query Performance Insight depends on Query Store data. You need to enable it by setting the dynamic server parameter `pg_qs.query_capture_mode` to either **ALL** or **TOP**.
+
+2. **[Query Store Wait Sampling](concepts-query-store.md)** is enabled on your database. If Query Store Wait Sampling isn't running, the Azure portal prompts you to enable it. To enable Query Store Wait Sampling, see [Enable Query Store Wait Sampling](concepts-query-store.md#enable-query-store-wait-sampling).
+
+> [!NOTE]
+> **Query Store Wait Sampling** is currently **disabled**. Query Performance Insight depends on Query Store wait sampling data. You need to enable it by setting the dynamic server parameter `pgms_wait_sampling.query_capture_mode` to **ALL**.
+
+3. **[Log Analytics workspace](howto-configure-and-access-logs.md)** is configured to store three log categories: PostgreSQL Sessions logs, PostgreSQL Query Store Runtime, and PostgreSQL Query Store Wait Statistics. To configure Log Analytics, see [Configure diagnostic settings](howto-configure-and-access-logs.md#configure-diagnostic-settings).
+
+> [!NOTE]
+> The **Query Store data is not being transmitted to the Log Analytics workspace**. The PostgreSQL logs (Sessions data / Query Store Runtime / Query Store Wait Statistics) aren't being sent to the Log Analytics workspace, which is necessary to use Query Performance Insight. Configure the diagnostic settings for these log categories to send the data to a Log Analytics workspace.
+
+## Using Query Performance Insight
+
+The [Query Performance Insight](concepts-query-performance-insight.md) view in the Azure portal surfaces visualizations of key information from Query Store. Query Performance Insight is easy to use:
+
+1. Open the Azure portal and find a postgres instance that you want to examine.
+2. From the left-side menu, open **Intelligent Performance** > **Query Performance Insight**
+3. Select a **time range** for investigating queries.
+4. On the first tab, review the list of **Long Running Queries**.
+5. Use sliders or zoom to change the observed interval.
+
+6. Optionally, you can select **Custom** to specify a time range.
+
+> [!NOTE]
+> For Azure Database for PostgreSQL - Flexible Server to render the information in Query Performance Insight, **Query Store needs to capture a couple of hours of data**. If the database has no activity or if Query Store wasn't active during a certain period, the charts will be empty when Query Performance Insight displays that time range. You can enable Query Store at any time if it's not running. For more information, see [Best practices with Query Store](concepts-query-store-best-practices.md)
+
+7. To **view details** of a specific query, select the `QueryId Snapshot` dropdown.
+
+8. To get the **Query Text** of a specific query, connect to the `azure_sys` database on the server and query `query_store.query_texts_view` with the `QueryId` (see the sketch after these steps).
+
+9. On the subsequent tabs, you can find other query insights, including:
+ >[!div class="checklist"]
+ > * Wait Statistics
+ > * Top Queries by Calls
+ > * Top Queries by Data-Usage
+ > * Top Queries by IOPS
+ > * Top Queries by Temporary Files
+
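As a rough illustration of step 8, the following hypothetical Python snippet uses `psycopg2` to connect to the `azure_sys` database and look up the text for a query ID; the server name, credentials, and the view's column names are assumptions that you should verify against your own Query Store views.

```python
import psycopg2

# Placeholder connection details for an Azure Database for PostgreSQL flexible server.
conn = psycopg2.connect(
    host="myservername.postgres.database.azure.com",
    dbname="azure_sys",
    user="myadminuser",
    password="<password>",
    sslmode="require",
)

query_id = 12345  # the QueryId shown in Query Performance Insight

with conn, conn.cursor() as cur:
    # Column names (query_text_id, query_sql_text) are assumptions; check the view
    # definition on your server if this query returns an error.
    cur.execute(
        "SELECT query_sql_text FROM query_store.query_texts_view WHERE query_text_id = %s;",
        (query_id,),
    )
    row = cur.fetchone()
    print(row[0] if row else "No query text found for this id")
```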
+## Considerations
+
+* Query Performance Insight is not available for [read replicas](concepts-read-replicas.md)
+* For Query Performance Insight to function, data must exist in the Query Store. Query Store is an opt-in feature, so it isn't enabled by default on a server. Query store is enabled or disabled globally for all databases on a given server and cannot be turned on or off per database.
+* Enabling Query Store on the Burstable pricing tier may negatively impact performance; therefore, it is not recommended.
++
+## Next steps
+
+- Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for PostgreSQL.
postgresql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-read-replicas.md
Last updated 10/21/2022
# Read replicas in Azure Database for PostgreSQL - Flexible Server The read replica feature allows you to replicate data from an Azure Database for PostgreSQL server to a read-only replica. Replicas are updated **asynchronously** with the PostgreSQL engine native physical replication technology. Streaming replication by using replication slots is the default operation mode. When necessary, file-based log shipping is used to catch up. You can replicate from the primary server to up to five replicas.
postgresql Howto Configure And Access Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/howto-configure-and-access-logs.md
For Azure Monitor Logs, logs are sent to the workspace you selected. The Postgre
The following are queries you can try to get started. You can configure alerts based on queries.
-Search for all Postgres logs for a particular server in the last day
+Search for all Postgres logs for a particular server in the last day.
```kusto AzureDiagnostics
AzureDiagnostics
| where Category == "PostgreSQLLogs" | where TimeGenerated > ago(1d) ```-
-Search for all non-localhost connection attempts
+Search for all non-localhost connection attempts. The following query shows results over the last 6 hours for any Postgres server logging to this workspace.
```kusto AzureDiagnostics
AzureDiagnostics
| where Category == "PostgreSQLLogs" and TimeGenerated > ago(6h) ```
-The query above will show results over the last 6 hours for any Postgres server logging in this workspace.
+Search for PostgreSQL Sessions collected from `pg_stat_activity` system view for a particular server in the last day.
+
+```kusto
+AzureDiagnostics
+| where Resource == "myservername"
+| where Category =='PostgreSQLFlexSessions'
+| where TimeGenerated > ago(1d)
+```
+
+Search for PostgreSQL Query Store Runtime statistics collected from `query_store.qs_view` for a particular server in the last day. It requires Query Store to be enabled.
+
+```kusto
+AzureDiagnostics
+| where Resource == "myservername"
+| where Category =='PostgreSQLFlexQueryStoreRuntime'
+| where TimeGenerated > ago(1d)
+```
+
+Search for PostgreSQL Query Store Wait Statistics collected from `query_store.pgms_wait_sampling_view` for a particular server in the last day. It requires Query Store Wait Sampling to be enabled.
+
+```kusto
+AzureDiagnostics
+| where Resource == "myservername"
+| where Category =='PostgreSQLFlexQueryStoreWaitStats'
+| where TimeGenerated > ago(1d)
+```
+
+Search for PostgreSQL Autovacuum and Schema statistics for each database in a particular server within the last day.
+
+```kusto
+AzureDiagnostics
+| where Resource == "myservername"
+| where Category =='PostgreSQLFlexTableStats'
+| where TimeGenerated > ago(1d)
+```
+
+Search for the remaining transactions and multixacts before emergency autovacuum or wraparound protection is triggered, for each database in a particular server within the last day.
+
+```kusto
+AzureDiagnostics
+| where Resource == "myservername"
+| where Category =='PostgreSQLFlexDatabaseXacts'
+| where TimeGenerated > ago(1d)
+```
## Next steps
private-5g-core Commission Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/commission-cluster.md
You should see the new **Custom Location** visible as a resource in the Azure po
## Rollback
-If you have made an error in the Azure Stack Edge configuration, you can use the portal to remove the AKS cluster. You can then modify the settings via the local UI, or perform a full reset using the **Device Reset** blade in the local UI and then restart this procedure.
+If you have made an error in the Azure Stack Edge configuration, you can use the portal to remove the AKS cluster (see [Deploy Azure Kubernetes service on Azure Stack Edge](/azure/databox-online/azure-stack-edge-deploy-aks-on-azure-stack-edge)). You can then modify the settings via the local UI.
+
+Alternatively, you can perform a full reset using the **Device Reset** blade in the local UI (see [Azure Stack Edge device reset and reactivation](/azure/databox-online/azure-stack-edge-reset-reactivate-device)) and then restart this procedure. In this case, you should also [delete any associated resources](/azure/databox-online/azure-stack-edge-return-device?tabs=azure-portal) left in the Azure portal after completing the Azure Stack Edge reset. These include some or all of the following, depending on how far through the process you are:
+
+- **Azure Stack Edge** resource
+- Autogenerated **KeyVault** associated with the **Azure Stack Edge** resource
+- Autogenerated **StorageAccount** associated with the **Azure Stack Edge** resource
+- **Azure Kubernetes Cluster** (if successfully created)
+- **Custom location** (if successfully created)
## Next steps
purview Scanning Shir Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/scanning-shir-troubleshooting.md
Title: Troubleshoot SHIR self-hosted integration runtime-
+ Title: Troubleshoot the self-hosted integration runtime in Microsoft Purview
description: Learn how to troubleshoot self-hosted integration runtime issues in Microsoft Purview. Previously updated : 04/01/2023 Last updated : 04/03/2023
-# Troubleshoot Microsoft Purview SHIR self-hosted integration runtime
+# Troubleshoot Microsoft Purview self-hosted integration runtime (SHIR)
-APPLIES TO: :::image type="icon" source="media/yes.png" border="false":::Microsoft Purview :::image type="icon" source="media/yes.png" border="false":::Azure Data Factory :::image type="icon" source="media/yes.png" border="false":::Azure Synapse Analytics
+This article explores common troubleshooting methods for self-hosted integration runtime (SHIR) in Microsoft Purview.
-This article explores common troubleshooting methods for self-hosted integration runtime (SHIR) in Microsoft Purview, Azure Data Factory and Synapse workspaces.
+The self-hosted integration runtime (SHIR) is also used by Azure Data Factory and Azure Synapse Analytics. Though many of the troubleshooting steps overlap, follow this guide to troubleshoot your SHIR for those products:
+
+- [Troubleshoot SHIR for Azure Data Factory and Azure Synapse Analytics](../data-factory/self-hosted-integration-runtime-troubleshoot-guide.md)
## Gather Microsoft Purview specific SHIR self-hosted integration runtime logs
-For failed activities that are running on a self-hosted IR or a shared IR, the service supports viewing and uploading error logs from the [Windows Event Viewer](https://learn.microsoft.com/shows/inside/event-viewer).
-To get support and troubleshooting guidance for SHIR issues, you may need to generate an error report and send it across to Microsoft. To generate the error report ID, follow the instructions here, and then enter the report ID to search for related known issues.
-
-1. Before starting the scan on the Microsoft Purview governance portal:
-- Navigate to the SHIR VM, or machine and open the Windows Event Viewer.-- Clear the windows event viewer logs in the "Integration Runtime" section. Right-click on the logs and select clear logs option.-- Navigate back to the Microsoft Purview governance portal and start the scan.-- Once the scan shows status "Failed", navigate back to the SHIR VM, or machine and refresh the event viewer in the "Integration Runtime" section.-- The activity logs are displayed for the failed scan run.
-
- :::image type="content" source="media/scanning-shir-troubleshooting/shir-event-viewer-logs-ir.png" lightbox="media/scanning-shir-troubleshooting/shir-event-viewer-logs-ir.png" alt-text="Screenshot of the logs for the failed scan SHIR activity.":::
-
+For failed Microsoft Purview activities that are running on a self-hosted IR or shared IR, the service supports viewing and uploading error logs from the [Windows Event Viewer](/shows/inside/event-viewer).
+
+You can look up any errors you see in the error guide below.
+To get support and troubleshooting guidance for SHIR issues, you may need to generate an error report ID and [reach out to Microsoft support](https://azure.microsoft.com/support/create-ticket/).
+
+To generate the error report ID for Microsoft Support, follow these instructions:
+
+1. Before starting a scan in the Microsoft Purview governance portal:
+
+ 1. Navigate to the machine where the self-hosted integration runtime is installed and open the Windows Event Viewer.
+ 1. Clear the windows event viewer logs in the **Integration Runtime** section. Right-click on the logs and select the clear logs option.
+ 1. Navigate back to the Microsoft Purview governance portal and start the scan.
+ 1. Once the scan shows status **Failed**, navigate back to the SHIR VM, or machine and refresh the event viewer in the **Integration Runtime** section.
+ 1. The activity logs are displayed for the failed scan run.
+
+ :::image type="content" source="media/scanning-shir-troubleshooting/shir-event-viewer-logs-ir.png" lightbox="media/scanning-shir-troubleshooting/shir-event-viewer-logs-ir.png" alt-text="Screenshot of the logs for the failed scan SHIR activity.":::
+ 1. For further assistance from Microsoft, select **Send Logs**.
-
+ The **Share the self-hosted integration runtime (SHIR) logs with Microsoft** window opens. :::image type="content" source="media/scanning-shir-troubleshooting/shir-send-logs-ir.png" lightbox="media/scanning-shir-troubleshooting/shir-send-logs-ir.png" alt-text="Screenshot of the send logs button on the self-hosted integration runtime (SHIR) to upload logs to Microsoft.":::
-1. Select which logs you want to send.
- * For a *self-hosted IR*, you can upload logs that are related to the failed activity or all logs on the self-hosted IR node.
+1. Select which logs you want to send.
+
+ * For a *self-hosted IR*, you can upload logs that are related to the failed activity or all logs on the self-hosted IR node.
* For a *shared IR*, you can upload only logs that are related to the failed activity. 1. When the logs are uploaded, keep a record of the Report ID for later use if you need further assistance to solve the issue.
To get support and troubleshooting guidance for SHIR issues, you may need to gen
:::image type="content" source="media/scanning-shir-troubleshooting/shir-send-logs-complete.png" lightbox="media/scanning-shir-troubleshooting/shir-send-logs-complete.png" alt-text="Screenshot of the displayed report ID in the upload progress window for the Purview SHIR logs."::: > [!NOTE]
-> Log viewing and uploading requests are executed on all online self-hosted IR instances. If any logs are missing, make sure that all the self-hosted IR instances are online.
-
+> Log viewing and uploading requests are executed on all online self-hosted IR instances. If any logs are missing, make sure that all the self-hosted IR instances are online.
## Self-hosted integration runtime SHIR general failure or error There are lots of common errors, warnings, issues between Purview SHIR and Azure Data Factory or Azure Synapse SHIR. If your SHIR issues aren't resolved at this stage, refer to the [Azure Data Factory ADF or Azure Synapse SHIR troubleshooting guide](../data-factory/self-hosted-integration-runtime-troubleshoot-guide.md)
-## Manage your Purview SHIR - next steps
+## Next steps
For more help with troubleshooting, try the following resources:
-* [Getting started with Microsoft Purview](https://azure.microsoft.com/products/purview/)
-* [Create and Manage SHIR Self-hosted integration runtimes in Purview](manage-integration-runtimes.md)
-* [Stack overflow forum for Microsoft Purview](https://stackoverflow.com/questions/tagged/azure-purview)
-* [Twitter information about Microsoft Purview](https://twitter.com/hashtag/Purview)
-* [Microsoft Purview Troubleshooting](frequently-asked-questions.yml)
--
+- [Getting started with Microsoft Purview](https://azure.microsoft.com/products/purview/)
+- [Create and Manage SHIR Self-hosted integration runtimes in Purview](manage-integration-runtimes.md)
+- [Microsoft Purview frequently asked questions](frequently-asked-questions.yml)
sentinel Connect Cef Syslog Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-cef-syslog-options.md
+
+ Title: Options for streaming logs in the CEF and Syslog format to Microsoft Sentinel
+description: Find the relevant option for streaming and filtering logs in the CEF and Syslog format to your Microsoft Sentinel workspace.
++ Last updated : 02/09/2023+
+#Customer intent: As a security operator, I want to understand what my options are for streaming CEF and Syslog-based logs from my organization to my Microsoft Sentinel workspace.
++
+# Options for streaming logs in the CEF and Syslog format to Microsoft Sentinel
+
+In this article, you can find the relevant option for streaming and filtering logs in the CEF and Syslog format to your Microsoft Sentinel workspace.
+
+## Stream logs in the CEF and Syslog format to Microsoft Sentinel
+
+Depending on where your logs are located, select the article that's most relevant to your scenario:
+
+- **[CEF](connect-cef-ama.md)**: Stream CEF logs with the CEF AMA connector.
+- **Syslog**: To ingest logs over Syslog with the AMA, [create a DCR](../azure-monitor/essentials/data-collection-rule-structure.md), or for the full procedure, see [forward syslog data to Log Analytics using the AMA](forward-syslog-monitor-agent.md).
+- **[CEF and Syslog](connect-cef-syslog.md)**: Stream logs in both the CEF and Syslog format.
+
+## Next steps
+
+In this article, we reviewed the available options for streaming logs in the CEF and Syslog format to your Microsoft Sentinel workspace.
+- [Stream CEF logs with the AMA connector](connect-cef-ama.md)
+- [Collect data from Linux-based sources using Syslog](connect-syslog.md)
+- [Stream logs in both the CEF and Syslog format](connect-cef-syslog.md)
sentinel Connect Cef Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-cef-syslog.md
+
+ Title: Stream logs in both the CEF and Syslog format to Microsoft Sentinel
+description: Stream and filter logs in both the CEF and Syslog format to your Microsoft Sentinel workspace.
++ Last updated : 02/09/2023+
+#Customer intent: As a security operator, I want to stream and filter CEF and Syslog-based logs from my organization to my Microsoft Sentinel workspace, so I can avoid duplication between CEF and Syslog data.
++
+# Stream logs in both the CEF and Syslog format
+
+This article describes how to stream and filter logs in both the CEF and Syslog format to your Microsoft Sentinel workspace from multiple appliances. This article is useful in the following scenario:
+
+- You're using a Linux log collector to forward both Syslog and CEF events to your Microsoft Sentinel workspaces using the Azure Monitor Agent (AMA).
+- You want to ingest Syslog events in the Syslog table and CEF events in the CommonSecurityLog table.
+
+During this process, you use the Azure Monitor Agent (AMA) and Data Collection Rules (DCRs). With DCRs, you can filter the logs before they're ingested, for quicker upload and more efficient analysis and querying.
+
+> [!IMPORTANT]
+>
+> On **February 28th 2023**, we will introduce [changes to the CommonSecurityLog table schema](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/upcoming-changes-to-the-commonsecuritylog-table/ba-p/3643232). This means that custom queries will need to be reviewed and updated. Out-of-the-box content (detections, hunting queries, workbooks, parsers, etc.) will be updated by Microsoft Sentinel.
+
+Read more about [CEF](connect-cef-ama.md#what-is-cef-collection) and [Syslog](connect-syslog.md#architecture) collection in Microsoft Sentinel.
+
+## Prerequisites
+
+Before you begin, verify that you have:
+
+- The Microsoft Sentinel solution enabled.
+- A defined Microsoft Sentinel workspace.
+- A Linux machine to collect logs.
+  - The Linux machine must have Python 2.7 or 3 installed. Use the ``python --version`` or ``python3 --version`` command to check.
+- Either the `syslog-ng` or `rsyslog` daemon enabled.
+- To collect events from any system that isn't an Azure virtual machine, ensure that [Azure Arc](../azure-monitor/agents/azure-monitor-agent-manage.md) is installed.
+- To ingest Syslog and CEF logs into Microsoft Sentinel, you can designate and configure a Linux machine that collects the logs from your devices and forwards them to your Microsoft Sentinel workspace. [Configure a log forwarder](connect-cef-ama.md#configure-a-log-forwarder).
+
+## Avoid data ingestion duplication
+
+Using the same facility for both Syslog and CEF messages may result in data ingestion duplication between the CommonSecurityLog and Syslog tables.
+
+To avoid this scenario, use one of these methods:
+
+- **If the source device enables configuration of the target facility**: On each source machine that sends logs to the log forwarder in CEF format, edit the Syslog configuration file to remove the facilities used to send CEF messages. This way, the facilities sent in CEF won't also be sent in Syslog. Make sure that each DCR you configure in the next steps uses the relevant facility for CEF or Syslog respectively.
+- **If changing the facility for the source appliance isn't applicable**: Use an ingest time transformation to filter out CEF messages from the Syslog stream to avoid duplication:
+
+ ```kusto
+ source |
+    where ProcessName !contains "\"CEF\""
+ ```
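+
+  If you implement this filter as an ingestion-time transformation, it belongs in the `transformKql` property of the Syslog data flow in your DCR. The following fragment is only a sketch of where that property sits, not a complete DCR: the destination name is a placeholder, the filter expression is simplified (use the exact expression above with proper JSON escaping), and `transformKql` requires a DCR API version that supports ingestion-time transformations.
+
+  ```json
+  "dataFlows": [
+    {
+      "streams": [ "Microsoft-Syslog" ],
+      "destinations": [ "<log-analytics-destination-name>" ],
+      "transformKql": "source | where ProcessName !contains \"CEF\""
+    }
+  ]
+  ```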
+## Create a DCR for your CEF logs
+
+- Create the DCR via the UI:
+ 1. [Open the connector page and create the DCR](connect-cef-ama.md#open-the-connector-page-and-create-the-dcr).
+ 1. [Define resources (VMs)](connect-cef-ama.md#define-resources-vms).
+ 1. [Select the data source type and create the DCR](connect-cef-ama.md#select-the-data-source-type-and-create-the-dcr).
+
+ > [!IMPORTANT]
+ > Make sure to **[avoid data ingestion duplication](#avoid-data-ingestion-duplication)** (review the options in this section).
+
+ 1. [Run the installation script](connect-cef-ama.md).
+
+- Create the DCR via the API:
+ 1. [Create the request URL and header](connect-cef-ama.md#request-url-and-header).
+ 1. [Create the request body](connect-cef-ama.md#request-body).
+
+ See [examples of facilities and log levels sections](connect-cef-ama.md#examples-of-facilities-and-log-levels-sections).
+
+## Create a DCR for your Syslog logs
+
+Create the DCR for your Syslog-based logs using the Azure Monitor [guidelines](../azure-monitor/essentials/data-collection-rule-overview.md) and [structure](../azure-monitor/essentials/data-collection-rule-structure.md). Review the [best practices](../azure-monitor/essentials/data-collection-rule-best-practices.md) if needed.
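+
+For orientation, the Syslog-specific part of such a DCR is a `syslog` data source that targets the `Microsoft-Syslog` stream. The fragment below is a minimal sketch only; the facility names, log levels, and data source name are illustrative, and the combined example later in this article shows the same structure inside a complete DCR:
+
+```json
+"dataSources": {
+  "syslog": [
+    {
+      "streams": ["Microsoft-Syslog"],
+      "facilityNames": ["auth"],
+      "logLevels": ["Warning", "Error", "Critical", "Alert", "Emergency"],
+      "name": "sysLogsDataSource--1469397783"
+    }
+  ]
+}
+```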
+
+## Create a DCR for both Syslog and CEF logs
+
+1. Run this command to launch the installation script:
+
+    ```bash
+ sudo wget -O Forwarder_AMA_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/Syslog/Forwarder_AMA_installer.py&&sudo python Forwarder_AMA_installer.py
+ ```
+ The installation script configures the `rsyslog` or `syslog-ng` daemon to use the required protocol and restarts the daemon.
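+
+    To confirm that the daemon came back up cleanly after the script ran, you can check its status (a generic check; substitute `syslog-ng` if that's the daemon you use):
+
+    ```bash
+    sudo systemctl status rsyslog
+    ```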
+
+1. Create the request URL and header:
+
+ ```rest
+ GET
+ https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Insights/dataCollectionRules/{dataCollectionRuleName}?api-version=2019-11-01-preview
+ ```
+
+1. Create the request body:
+ - Verify that the `streams` field is set to `Microsoft-CommonSecurityLog` and `Microsoft-Syslog` for the CEF/Syslog facility respectively.
+ - Add the filter and facility log levels in the `facilityNames` and `logLevels` parameters.
+
+ ```rest
+ {
+ "properties": {
+ "immutableId": "dcr-c7847b758fb0484b88b51c5d907796a6",
+ "dataSources": {
+ "syslog": [
+ {
+ "streams": ["Microsoft-Syslog"],
+ "facilityNames": ["auth"],
+ "logLevels": [
+ "Info",
+ "Notice",
+ "Warning",
+ "Error",
+ "Critical",
+ "Alert",
+ "Emergency"
+ ],
+ "name": "sysLogsDataSource--1469397783"
+ },
+ {
+ "streams": ["Microsoft-CommonSecurityLog"],
+ "facilityNames": [
+ "local4"
+ ],
+ "logLevels": [
+ "Warning"
+ ],
+ "name": "sysLogsDataSource-1688419672"
+ }
+ ]
+ },
+ "destinations": {
+ "logAnalytics": [
+ {
+ "workspaceResourceId": "/subscriptions/<sub-id>/resourceGroups/<resourceGroup>/providers/Microsoft.OperationalInsights/workspaces/<WS>",
+ "workspaceId": "<WS-ID>",
+ "name": "la--591870646"
+ }
+ ]
+ },
+ "dataFlows": [
+ { "streams": ["Microsoft-Syslog", "Microsoft-CommonSecurityLog"], "destinations": ["la--591870646"] }
+ ],
+ "provisioningState": "Succeeded"
+ },
+ "location": "eastus",
+ "tags": {},
+ "kind": "Linux",
+ "id": "/subscriptions/<sub-id>/resourceGroups/<resourceGroup>/providers/Microsoft.Insights/dataCollectionRules/<DCR-Name>",
+ "name": "<DCR-Name>",
+ "type": "Microsoft.Insights/dataCollectionRules",
+ "etag": "\"6d00bdde-0000-0100-0000-62c177f70000\"",
+ "systemData": {
+        "createdBy": "someuser@microsoft.com",
+ "createdByType": "User",
+ "createdAt": "2022-07-03T11:05:27.2454015Z",
+        "lastModifiedBy": "someuser@microsoft.com",
+ "lastModifiedByType": "User",
+ "lastModifiedAt": "2022-07-03T11:05:27.2454015Z"
+ }
+ }
+ ```
+1. After you finish editing the template, use `POST` or `PUT` to deploy it:
+
+ ```rest
+ PUT
+ https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Insights/dataCollectionRules/{dataCollectionRuleName}?api-version=2019-11-01-preview
+ ```
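+
+    If you prefer the Azure CLI to a raw REST client, an equivalent call is sketched below. It assumes you saved the request body above to a local file named `dcr.json`; the URL placeholders are the same as in the request above:
+
+    ```azurecli
+    az rest --method put \
+        --url "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Insights/dataCollectionRules/{dataCollectionRuleName}?api-version=2019-11-01-preview" \
+        --body @dcr.json
+    ```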
+
+See [examples of facilities and log levels sections](connect-cef-ama.md#examples-of-facilities-and-log-levels-sections).
+
+## Next steps
+
+In this article, you learned how to stream and filter logs in both the CEF and Syslog format to your Microsoft Sentinel workspace. To learn more about Microsoft Sentinel, see the following articles:
+- Learn how to [get visibility into your data, and potential threats](get-visibility.md).
+- Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md).
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-data-sources.md
Title: Microsoft Sentinel data connectors
description: Learn about supported data connectors, like Microsoft 365 Defender (formerly Microsoft Threat Protection), Microsoft 365 and Office 365, Azure AD, ATP, and Defender for Cloud Apps to Microsoft Sentinel. Previously updated : 07/14/2022 Last updated : 02/15/2023
Learn which firewalls, proxies, and endpoints connect to Microsoft Sentinel thro
### Syslog
-You can stream events from Linux-based, Syslog-supporting devices into Microsoft Sentinel using the Log Analytics agent for Linux, formerly named the OMS agent. Depending on the device type, the agent is installed either directly on the device, or on a dedicated Linux-based log forwarder. The Log Analytics agent receives events from the Syslog daemon over UDP. If a Linux machine is expected to collect a high volume of Syslog events, it sends events over TCP from the Syslog daemon to the agent, and from there to Log Analytics. Learn how to [connect Syslog-based appliances to Microsoft Sentinel](connect-syslog.md).
+You can stream events from Linux-based, Syslog-supporting devices into Microsoft Sentinel using the [Azure Monitor Agent (AMA)](forward-syslog-monitor-agent.md). Depending on the device type, the agent is installed either directly on the device, or on a dedicated Linux-based log forwarder. The AMA receives events from the Syslog daemon over UDP. The Syslog daemon forwards events to the agent internally, communicating over UDS (Unix Domain Sockets). The AMA then transmits these events to the Microsoft Sentinel workspace.
Here is a simple flow that shows how Microsoft Sentinel streams Syslog data.
sentinel Forward Syslog Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/forward-syslog-monitor-agent.md
If you're forwarding syslogs to an Azure VM, use the following steps to allow re
### Configure Linux syslog daemon
+> [!NOTE]
+> To avoid [Full Disk scenarios](../azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md) where the agent can't function, we recommend that you set the `syslog-ng` or `rsyslog` configuration not to store unneeded logs. A Full Disk scenario disrupts the function of the installed AMA.
+> Read more about [RSyslog](https://www.rsyslog.com/doc/master/configuration/actions.html) or [Syslog-ng](https://www.syslog-ng.com/technical-documents/doc/syslog-ng-open-source-edition/3.26/administration-guide/34#TOPIC-1431029).
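+>
+> As a purely illustrative sketch (the facility, file, and path below are hypothetical and depend on your distribution and on which logs you actually need to keep), you can exclude a high-volume facility from the catch-all file rule in `rsyslog` so those messages aren't written to the local disk:
+>
+> ```bash
+> # Ubuntu keeps the catch-all rule in /etc/rsyslog.d/50-default.conf; other distributions use /etc/rsyslog.conf.
+> # Add ";local4.none" to the existing selector so the (hypothetical) local4 facility is excluded, for example:
+> #   *.*;auth,authpriv.none;local4.none    -/var/log/syslog
+> sudo nano /etc/rsyslog.d/50-default.conf
+> sudo systemctl restart rsyslog
+> ```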
+ Connect to your Linux VM and run the following command to configure the Linux syslog daemon: ```bash
service-health Impacted Resources Outage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/impacted-resources-outage.md
Last updated 12/16/2022
# Resource impact from Azure outages
-[Azure Service Health](https://azure.microsoft.com/get-started/azure-portal/service-health/) helps customers view any health events that affect their subscriptions and tenants. In the Azure portal, the **Service Issues** pane in **Service Health** shows any ongoing problems in Azure services that are affecting your resources. You can understand when each problem began, and what services and regions are affected.
+[Azure Service Health](https://azure.microsoft.com/get-started/azure-portal/service-health/) helps customers view any health events that affect their subscriptions and tenants. In the Azure portal, the **Service Issues** pane in **Service Health** shows any ongoing problems in Azure services that are affecting your resources. You can understand when each problem began, and what services and regions are impacted.
Previously, the **Potential Impact** tab on the **Service Issues** pane appeared in the incident details section. It showed any resources under a subscription that an outage might affect, along with a signal from [Azure Resource Health](../service-health/resource-health-overview.md) to help you evaluate impact.
-In support of the experience of viewing affected resources, Service Health has enabled a new feature to:
+In support of the experience of viewing impacted resources, Service Health has enabled a new feature to:
- Replace the **Potential Impact** tab with an **Impacted Resources** tab on the **Service Issues** pane.
-- Display resources that are confirmed to be affected by an outage.
-- Display resources that are not confirmed to be affected by an outage but could be affected because they fall under a service or region that's confirmed to be affected.
-- Provide the status of both confirmed and potential affected resources to show their availability.
+- Display resources that are confirmed to be impacted by an outage.
+- Display resources that are not confirmed to be impacted by an outage but could be impacted because they fall under a service or region that's confirmed to be impacted.
+- Provide the status of both confirmed and potential impacted resources to show their availability.
-This article details what Service Health communicates and where you can view information about your affected resources.
+This article details what Service Health communicates and where you can view information about your impacted resources.
>[!Note]
->This feature will be rolled out in phases. Initially, only selected subscription-level customers will get the experience. The rollout will gradually expand to 100 percent of subscription customers. It will go live for tenant-level customers in the future.
+>This feature will be rolled out in phases. Initially, only selected subscription-level customers will get the experience. The rollout will gradually expand to 100 percent of subscription and tenant-level users.
-## View affected resources
+## View impacted resources
-In the Azure portal, the **Impacted Resources** tab under **Service Health** > **Service Issues** displays resources that are or might be affected by an outage. The following example of the **Impacted Resources** tab shows an incident with confirmed and potentially affected resources.
--
-Service Health provides the following information:
+In the Azure portal, the **Impacted Resources** tab under **Service Health** > **Service Issues** displays resources that are or might be impacted by an outage. Service Health provides the following information to users whose resources are impacted by an outage:
|Column |Description |
|||
|**Resource Name**|Name of the resource. This name is a clickable link that goes to the Resource Health page for the resource. If no Resource Health signal is available for the resource, this name is text only.|
-|**Resource Health**|Health status of the resource: <br><br>**Available**: Your resource is healthy, but a service event might have affected it at a previous point in time. <br><br>**Degraded** or **Unavailable**: A customer-initiated action or a platform-initiated action might have caused this status. It means your resource was affected but might now be healthy, pending a status update. <br><br>:::image type="content" source="./media/impacted-resource-outage/rh-cropped.PNG" alt-text="Screenshot of health statuses for a resource.":::|
-|**Impact Type**|Indication of whether the resource is or might be affected: <br><br>**Confirmed**: The resource is confirmed to be affected by an outage. Check the **Summary** section for any action items that you can take to remediate the problem. <br><br>**Potential**: The resource is not confirmed to be affected, but it could potentially be affected because it's under a service or region that an outage is affecting. Check the **Resource Health** column to make sure that everything is working as planned.|
-|**Resource Type**|Type of affected resource (for example, virtual machine).|
-|**Resource Group**|Resource group that contains the affected resource.|
-|**Location**|Location that contains the affected resource.|
-|**Subscription ID**|Unique ID for the subscription that contains the affected resource.|
-|**Subscription Name**|Subscription name for the subscription that contains the affected resource.|
-|**Tenant ID**|Unique ID for the tenant that contains the affected resource.|
+|**Resource Health**|Health status of the resource: <br><br>**Available**: Your resource is healthy, but a service event might have impacted it at a previous point in time. <br><br>**Degraded** or **Unavailable**: A customer-initiated action or a platform-initiated action might have caused this status. It means your resource was impacted but might now be healthy, pending a status update. <br><br>:::image type="content" source="./media/impacted-resource-outage/rh-cropped.PNG" alt-text="Screenshot of health statuses for a resource.":::|
+|**Impact Type**|Indication of whether the resource is or might be impacted: <br><br>**Confirmed**: The resource is confirmed to be impacted by an outage. Check the **Summary** section for any action items that you can take to remediate the problem. <br><br>**Potential**: The resource is not confirmed to be impacted, but it could potentially be impacted because it's under a service or region that an outage is affecting. Check the **Resource Health** column to make sure that everything is working as planned.|
+|**Resource Type**|Type of impacted resource (for example, virtual machine).|
+|**Resource Group**|Resource group that contains the impacted resource.|
+|**Location**|Location that contains the impacted resource.|
+|**Subscription ID**|Unique ID for the subscription that contains the impacted resource.|
+|**Subscription Name**|Subscription name for the subscription that contains the impacted resource.|
+
+The following is an example of outage **Impacted Resources** from the subscription and tenant scope.
++
+>[!Note]
+>Not all resources in subscription scope will show a Resource Health status. The status appears only on resources for which a Resource Health signal is available. The status of resources for which a Resource Health signal is not available appears as **N/A**, and the corresponding **Resource Name** value is text instead of a clickable link.
+ >[!Note]
->Not all resources show a Resource Health status. The status appears only on resources for which a Resource Health signal is available. The status of resources for which a Resource Health signal is not available appears as **N/A**, and the corresponding **Resource Name** value is text instead of a clickable link.
+>Resource Health status, tenant name, and tenant ID are not included at the tenant level scope. The corresponding **Resource Name** value is text only instead of a clickable link.
## Filter results
You can adjust the results by using these filters:
## Export to CSV
-To export the list of affected resources to an Excel file, select the **Export to CSV** option.
+To export the list of impacted resources to an Excel file, select the **Export to CSV** option.
-## Access affected resources programmatically via an API
+## Access impacted resources programmatically via an API
-You can get information about outage-affected resources programmatically by using the Events API. For details on how to access this data, see the [API documentation](/rest/api/resourcehealth/2022-05-01/impacted-resources/list-by-subscription-id-and-event-id?tabs=HTTP).
+You can get information about outage-impacted resources programmatically by using the Events API. For details on how to access this data, see the [API documentation](/rest/api/resourcehealth/2022-05-01/impacted-resources/list-by-subscription-id-and-event-id?tabs=HTTP).
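+
+As a sketch of the request shape only (confirm the exact path, parameters, and API version in the linked API documentation before relying on it), listing the resources impacted by a specific service health event looks like the following:
+
+```rest
+GET
+https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.ResourceHealth/events/{eventTrackingId}/impactedResources?api-version=2022-05-01
+```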
## Next steps

- [Introduction to the Azure Service Health dashboard](service-health-overview.md)
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Title: Support matrix for Azure VM disaster recovery with Azure Site Recovery description: Summarizes support for Azure VMs disaster recovery to a secondary region with Azure Site Recovery. Previously updated : 03/31/2023 Last updated : 04/04/2023
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
16.04 LTS | [9.50](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new 16.04 LTS kernels supported in this release. |
16.04 LTS | [9.49](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | No new 16.04 LTS kernels supported in this release. |
|||
-18.04 LTS |[9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)| 5.4.0-137-generic <br> 5.4.0-1101-azure <br> 4.15.0-1161-azure <br> 4.15.0-204-generic <br> 5.4.0-1103-azure <br> 5.4.0-139-generic <br> 4.15.0-206-generic <br> 5.4.0-1104-azure <br> 5.4.0-144-generic |
+18.04 LTS |[9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)| 5.4.0-137-generic <br> 5.4.0-1101-azure <br> 4.15.0-1161-azure <br> 4.15.0-204-generic <br> 5.4.0-1103-azure <br> 5.4.0-139-generic <br> 4.15.0-206-generic <br> 5.4.0-1104-azure <br> 5.4.0-144-generic <br> 4.15.0-1162-azure |
18.04 LTS |[9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0)| 4.15.0-196-generic <br> 4.15.0-1157-azure <br> 5.4.0-1098-azure <br> 4.15.0-1158-azure <br> 4.15.0-1159-azure <br> 4.15.0-201-generic <br> 4.15.0-202-generic <br> 5.4.0-1100-azure <br> 5.4.0-136-generic | 18.04 LTS | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) |4.15.0-1151-azure </br> 4.15.0-193-generic </br> 5.4.0-1091-azure </br> 5.4.0-126-generic</br>4.15.0-1153-azure </br>4.15.0-194-generic </br>5.4.0-1094-azure </br>5.4.0-128-generic </br>5.4.0-131-generic | 18.04 LTS |[9.50](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 4.15.0-1149-azure </br> 4.15.0-1150-azure </br> 4.15.0-191-generic </br> 4.15.0-192-generic </br>5.4.0-1089-azure </br>5.4.0-1090-azure </br>5.4.0-124-generic|
Debian 11 | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azur
**Release** | **Mobility service version** | **Kernel version** |
| | |
-SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> No new SLES 12 Azure kernels supported in this release. |
+SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 4.12.14-16.124-azure:5 <br> 4.12.14-16.127-azure:5 |
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 4.12.14-16.115-azure:5 <br> 4.12.14-16.120-azure:5 |
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 4.12.14-16.106-azure:5 </br> 4.12.14-16.112-azure |
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.50](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> No new SLES 12 Azure kernels supported in this release. |
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.49](https://suppo
**Release** | **Mobility service version** | **Kernel version** |
| | |
-SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4) | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.14.21-150400.14.31-azure:4 <br> 5.14.21-150400.14.34-azure:4 |
+SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4) | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.14.21-150400.14.31-azure:4 <br> 5.14.21-150400.14.34-azure:4 <br> 5.14.21-150400.14.37-azure:4 |
SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4) | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.14.21-150400.12-azure:4 <br> 5.14.21-150400.14.10-azure:4 <br> 5.14.21-150400.14.13-azure:4 <br> 5.14.21-150400.14.16-azure:4 <br> 5.14.21-150400.14.7-azure:4 <br> 5.3.18-150300.38.83-azure:3 <br> 5.14.21-150400.14.21-azure:4 <br> 5.14.21-150400.14.28-azure:4 <br> 5.3.18-150300.38.88-azure:3 | SUSE Linux Enterprise Server 15 (SP1, SP2, SP3) | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.3.18-150300.38.80-azure | SUSE Linux Enterprise Server 15 (SP1, SP2, SP3) | [9.50](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.3.18-150300.38.75-azure:3 |
site-recovery Hyper V Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-support-matrix.md
Multi-path (MPIO). Tested with:<br></br> Microsoft DSM, EMC PowerPath 5.7 SP4, E
VMDK | NA | NA
VHD/VHDX | Yes | Yes
Generation 2 VM | Yes | Yes
-EFI/UEFI<br></br>The migrated VM in Azure will be automatically converted to a BIOS boot VM. The VM should be running Windows Server 2012 and later only. The OS disk should have up to five partitions or fewer and the size of OS disk should be less than 300 GB.| Yes | Yes
+EFI/UEFI<br></br>The migrated VM in Azure will be automatically converted to a BIOS boot VM. The VM should be running Windows Server 2012 and later only. The OS disk should have up to five partitions or fewer and the size of OS disk should be less than 2 TB.| Yes | Yes
Shared cluster disk | No | No
Encrypted disk | No | No
NFS | NA | NA
SMB 3.0 | No | No
RDM | NA | NA
-Disk >1 TB | Yes, up to 4,095 GB | Yes, up to 4,095 GB
+Disk >1 TB | Yes, up to 32 TB <br></br> You will need to upgrade the replication provider on the Hyper-V host to any version after 2.0.9214.0 to replicate large disks up to 32 TB. For large disks, replication will happen to managed disks only.| Yes, up to 32 TB <br></br> You will need to upgrade the replication provider on the Hyper-V host to any version after 2.0.9214.0 to replicate large disks up to 32 TB. For large disks, replication will happen to managed disks only.
Disk: 4K logical and physical sector | Not supported: Gen 1/Gen 2 | Not supported: Gen 1/Gen 2
Disk: 4K logical and 512-bytes physical sector | Yes | Yes
Logical volume management (LVM). LVM is supported on data disks only. Azure provides only a single OS disk. | Yes | Yes
On-premises VMs that you replicate to Azure must meet the Azure VM requirements
| |
Guest operating system | Site Recovery supports all operating systems that are [supported by Azure](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc794868(v=ws.10)). | Prerequisites check fails if unsupported.
Guest operating system architecture | 32-bit (Windows Server 2008)/64-bit | Prerequisites check fails if unsupported.
-Operating system disk size | Up to 2,048 GB for generation 1 VMs.<br/><br/> Up to 300 GB for generation 2 VMs. | Prerequisites check fails if unsupported.
+Operating system disk size | Up to 2 TB for generation 1 VMs.<br/><br/> Up to 4 TB for generation 2 VMs. <br/><br/> You will need to upgrade the replication provider on the Hyper-V host to any version after 2.0.9214.0 to replicate large OS disks. For large disks, replication will happen to managed disks only. | Prerequisites check fails if unsupported.
Operating system disk count | 1 | Prerequisites check fails if unsupported.
Data disk count | 16 or less | Prerequisites check fails if unsupported.
-Data disk VHD size | Up to 4,095 GB | Prerequisites check fails if unsupported.
+Data disk VHD size | Up to 32 TB <br/><br/> You will need to upgrade the replication provider on the Hyper-V host to any version after 2.0.9214.0 to replicate large disks. For large disks, replication will happen to managed disks only. | Prerequisites check fails if unsupported.
Network adapters | Multiple adapters are supported |
Shared VHD | Not supported | Prerequisites check fails if unsupported.
FC disk | Not supported | Prerequisites check fails if unsupported.
Hard disk format | VHD <br/><br/> VHDX | Site Recovery automatically converts VHDX to VHD when you fail over to Azure. When you fail back to on-premises, the virtual machines continue to use the VHDX format.
BitLocker | Not supported | BitLocker must be disabled before you enable replication for a VM.
VM name | Between 1 and 63 characters. Restricted to letters, numbers, and hyphens. The VM name must start and end with a letter or number. | Update the value in the VM properties in Site Recovery.
-VM type | Generation 1<br/><br/> Generation 2--Windows | Generation 2 VMs with an OS disk type of basic (which includes one or two data volumes formatted as VHDX) and less than 300 GB of disk space are supported.<br></br>Linux Generation 2 VMs aren't supported. [Learn more](https://azure.microsoft.com/blog/2015/04/28/disaster-recovery-to-azure-enhanced-and-were-listening/).|
+VM type | Generation 1<br/><br/> Generation 2--Windows | Generation 2 VMs with an OS disk type of basic (which includes one or two data volumes formatted as VHDX) and less than 2 TB of disk space are supported.<br></br>Linux Generation 2 VMs aren't supported. [Learn more](https://azure.microsoft.com/blog/2015/04/28/disaster-recovery-to-azure-enhanced-and-were-listening/).|
## Recovery Services vault actions
site-recovery Vmware Azure Architecture Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-architecture-modernized.md
After replication is set up and you run a disaster recovery drill (test failover
## Next steps
-Follow [this tutorial](vmware-azure-tutorial.md) to enable VMware to Azure replication.
+Follow [this tutorial](vmware-azure-set-up-replication-tutorial-modernized.md) to enable VMware to Azure replication.
spring-apps How To Configure Enterprise Spring Cloud Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-configure-enterprise-spring-cloud-gateway.md
**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to configure VMware Spring Cloud Gateway with Azure Spring Apps Enterprise tier.
+This article shows you how to configure Spring Cloud Gateway for VMware Tanzu with Azure Spring Apps Enterprise tier.
-[VMware Spring Cloud Gateway](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/https://docsupdatetracker.net/index.html) is a commercial VMware Tanzu component based on the open-source Spring Cloud Gateway project. Spring Cloud Gateway for Tanzu handles cross-cutting concerns for API development teams, such as single sign-on (SSO), access control, rate-limiting, resiliency, security, and more. You can accelerate API delivery using modern cloud native patterns, and any programming language you choose for API development.
+[VMware Spring Cloud Gateway](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/https://docsupdatetracker.net/index.html) is a commercial VMware Tanzu component based on the open-source Spring Cloud Gateway project. Spring Cloud Gateway for Tanzu handles the cross-cutting concerns for API development teams, such as single sign-on (SSO), access control, rate-limiting, resiliency, security, and more. You can accelerate API delivery using modern cloud native patterns using your choice of programming language for API development.
A Spring Cloud Gateway instance routes traffic according to rules. Both *scale in/out* and *up/down* are supported to meet a dynamic traffic load. VMware Spring Cloud Gateway includes the following features:

-- Dynamic routing configuration, independent of individual applications that can be applied and changed without recompilation.
+- Dynamic routing configuration, independent of individual applications, that you can apply and change without recompiling.
- Commercial API route filters for transporting authorized JSON Web Token (JWT) claims to application services.
- Client certificate authorization.
- Rate-limiting approaches.
- Circuit breaker configuration.
- Support for accessing application services via HTTP Basic Authentication credentials.
-To integrate with [API portal for VMware Tanzu®](./how-to-use-enterprise-api-portal.md), VMware Spring Cloud Gateway automatically generates OpenAPI version 3 documentation after any route configuration additions or changes.
+To integrate with API portal for VMware Tanzu, VMware Spring Cloud Gateway automatically generates OpenAPI version 3 documentation after any route configuration additions or changes. For more information, see [Use API portal for VMware Tanzu®](./how-to-use-enterprise-api-portal.md).
## Prerequisites

- An already provisioned Azure Spring Apps Enterprise tier service instance with VMware Spring Cloud Gateway enabled. For more information, see [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise tier](quickstart-deploy-apps-enterprise.md).

  > [!NOTE]
- > To use VMware Spring Cloud Gateway, you must enable it when you provision your Azure Spring Apps service instance. You cannot enable it after provisioning at this time.
+ > You must enable VMware Spring Cloud Gateway when you provision your Azure Spring Apps service instance. You can't enable VMware Spring Cloud Gateway after provisioning.
-- [Azure CLI version 2.0.67 or later](/cli/azure/install-azure-cli).
+- Azure CLI version 2.0.67 or later. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
## Configure Spring Cloud Gateway
-This section describes how to assign an endpoint to Spring Cloud Gateway and configure its properties.
+This section describes how to assign a public endpoint to Spring Cloud Gateway and configure its properties.
-To view the running state and resources given to Spring Cloud Gateway and its operator, open your Azure Spring Apps instance in the Azure portal, select the **Spring Cloud Gateway** section, and then select **Overview**.
+#### [Azure portal](#tab/Azure-portal)
-To assign a public endpoint, select **Yes** next to **Assign endpoint**. You'll get a URL in a few minutes. Save the URL to use later.
+To assign an endpoint in the Azure portal, use the following steps:
+1. Open your Azure Spring Apps instance.
+1. Select **Spring Cloud Gateway** in the navigation pane, and then select **Overview**.
+1. Set **Assign endpoint** to **Yes**.
-You can also use Azure CLI to assign the endpoint, as shown in the following command:
+After a few minutes, **URL** shows the configured endpoint URL. Save the URL to use later.
++
+#### [Azure CLI](#tab/Azure-CLI)
+
+Use the following command to assign the endpoint.
```azurecli
az spring gateway update --assign-endpoint
```

++

## Configure VMware Spring Cloud Gateway metadata
-VMware Spring Cloud Gateway metadata is used to automatically generate OpenAPI version 3 documentation so that the [API portal](./how-to-use-enterprise-api-portal.md) can gather information to show the route groups. The available metadata options are described in the following table.
+You can configure VMware Spring Cloud Gateway metadata, which automatically generates OpenAPI version 3 documentation, to display route groups in API portal for VMware Tanzu. For more information, see [Use API portal for VMware Tanzu](./how-to-use-enterprise-api-portal.md).
+
+The following table describes the available metadata options:
| Property | Description |
||--|
-| title | A title describing the context of the APIs available on the Gateway instance. The default value is *Spring Cloud Gateway for K8S*. |
-| description | A detailed description of the APIs available on the Gateway instance. The default value is *Generated OpenAPI 3 document that describes the API routes configured for '\[Gateway instance name\]' Spring Cloud Gateway instance deployed under '\[namespace\]' namespace.*. |
-| documentation | The location of more documentation for the APIs available on the Gateway instance. |
-| version | The version of APIs available on this Gateway instance. The default value is *unspecified*. |
-| serverUrl | The base URL that API consumers will use to access APIs on the Gateway instance. |
+| title | A title that describes the context of the APIs available on the Gateway instance. The default value is `Spring Cloud Gateway for K8S`. |
+| description | A detailed description of the APIs available on the Gateway instance. The default value is `Generated OpenAPI 3 document that describes the API routes configured for '[Gateway instance name]' Spring Cloud Gateway instance deployed under '[namespace]' namespace.` |
+| documentation | The location of API documentation that's available on the Gateway instance. |
+| version | The version of APIs available on this Gateway instance. The default value is `unspecified`. |
+| serverUrl | The base URL to access APIs on the Gateway instance. |
> [!NOTE]
-> `serverUrl` is mandatory if you want to integrate with [API portal](./how-to-use-enterprise-api-portal.md).
+> The `serverUrl` property is mandatory if you want to integrate with [API portal](./how-to-use-enterprise-api-portal.md).
+
+You can use the Azure portal and the Azure CLI to edit metadata properties.
+
+#### [Azure portal](#tab/Azure-portal)
+
+To edit metadata in the Azure portal, do these steps:
-Use the following command to configure VMware Spring Cloud Gateway metadata properties:
+1. Open your Azure Spring Apps instance.
+1. Select **Spring Cloud Gateway** in the navigation pane, and then select **Configuration**.
+1. Specify values for the properties listed for **API**.
+1. Select **Save**.
++
+#### [Azure CLI](#tab/Azure-CLI)
+
+Use the following command to configure VMware Spring Cloud Gateway metadata properties. You need the endpoint URL obtained from the [Configure Spring Cloud Gateway](#configure-spring-cloud-gateway) section.
```azurecli
az spring gateway update \
    --api-description "<api-description>" \
    --api-title "<api-title>" \
    --api-version "v0.1" \
- --server-url "<endpoint-in-the-previous-step>" \
+ --server-url "<gateway-endpoint-URL>" \
    --allowed-origins "*"
```
-You can also view or edit these properties in the Azure portal, as shown in the following screenshot.
-+ ## Configure single sign-on (SSO)
-VMware Spring Cloud Gateway supports authentication and authorization using single sign-on (SSO) with an OpenID identity provider (IdP) which supports OpenID Connect Discovery protocol.
+VMware Spring Cloud Gateway supports authentication and authorization using single sign-on (SSO) with an OpenID identity provider, which supports the OpenID Connect Discovery protocol.
| Property | Required? | Description |
|-|--|-|
-| `issuerUri` | Yes | The URI that is asserted as its Issuer Identifier. For example, if the `issuer-uri` provided is `https://example.com`, then an OpenID Provider Configuration Request will be made to `https://example.com/.well-known/openid-configuration`. The result is expected to be an OpenID Provider Configuration Response. |
-| `clientId` | Yes | The OpenID Connect client ID provided by your IdP. |
-| `clientSecret` | Yes | The OpenID Connect client secret provided by your IdP. |
+| `issuerUri` | Yes | The URI that is asserted as its Issuer Identifier. For example, if the `issuer-uri` is `https://example.com`, then an OpenID Provider Configuration Request is made to `https://example.com/.well-known/openid-configuration`. The result is expected to be an OpenID Provider Configuration Response. |
+| `clientId` | Yes | The OpenID Connect client ID provided by your identity provider. |
+| `clientSecret` | Yes | The OpenID Connect client secret provided by your identity provider. |
| `scope` | Yes | A list of scopes to include in JWT identity tokens. This list should be based on the scopes allowed by your identity provider. |
-To set up SSO with Azure AD, see [How to set up single sign-on with Azure Active Directory for Spring Cloud Gateway and API Portal](./how-to-set-up-sso-with-azure-ad.md).
+To set up SSO with Azure AD, see [How to set up single sign-on with Azure Active Directory for Spring Cloud Gateway and API portal](./how-to-set-up-sso-with-azure-ad.md).
+
+You can use the Azure portal and the Azure CLI to edit SSO properties.
+
+#### [Azure portal](#tab/Azure-portal)
+
+To edit SSO properties in the Azure portal, use the following steps:
+
+1. Open your Azure Spring Apps instance.
+1. Select **Spring Cloud Gateway** in the navigation pane, and then select **Configuration**.
+1. Specify values for the properties listed for **SSO**.
+1. Select **Save**.
-Use the following command to configure SSO properties for VMware Spring Cloud Gateway:
+
+#### [Azure CLI](#tab/Azure-CLI)
+
+Use the following command to configure SSO properties for VMware Spring Cloud Gateway.
```azurecli az spring gateway update \
az spring gateway update \
--scope <scope> ```
-You can also view or edit those properties in the Azure portal, as shown in the following screenshot:
-+ > [!NOTE]
-> Only authorization servers supporting OpenID Connect Discovery protocol are supported. Also, be sure to configure the external authorization server to allow redirects back to the gateway. Refer to your authorization server's documentation and add `https://<gateway-external-url>/login/oauth2/code/sso` to the list of allowed redirect URIs.
+> VMware Spring Cloud Gateway supports only the authorization servers that support OpenID Connect Discovery protocol. Also, be sure to configure the external authorization server to allow redirects back to the gateway. Refer to your authorization server's documentation and add `https://<gateway-external-url>/login/oauth2/code/sso` to the list of allowed redirect URIs.
>
> If you configure the wrong SSO property, such as the wrong password, you should remove the entire SSO property and re-add the correct configuration.
>
You can also view or edit those properties in the Azure portal, as shown in the
## Configure single sign-on (SSO) logout
-VMware Spring Cloud Gateway service instances provide a default API endpoint to log out of the current SSO session. The path to this endpoint is `/scg-logout`. You can accomplish one of the following two outcomes depending on how you call the logout endpoint:
+VMware Spring Cloud Gateway service instances provide a default API endpoint to log out of the current SSO session. The path to this endpoint is `/scg-logout`. The logout results in one of the following outcomes, depending on how you call the logout endpoint:
-- Logout of session and redirect to IdP logout.
-- Just logout the service instance session.
+- Logout of session and redirect to the identity provider (IdP) logout.
+- Logout the service instance session.
### Logout of IdP and SSO session
-If you send a GET request to the `/scg-logout` endpoint, then the endpoint will send a 302 redirect response to the IdP logout URL. To get the endpoint to return the user back to a path on the gateway service instance, add a redirect parameter to the GET `/scg-logout` request. For example, `${serverUrl}/scg-logout?redirect=/home`.
+If you send a `GET` request to the `/scg-logout` endpoint, then the endpoint sends a `302` redirect response to the IdP logout URL. To get the endpoint to return the user back to a path on the gateway service instance, add a redirect parameter to the `GET` request with the `/scg-logout` endpoint. For example, `${server-url}/scg-logout?redirect=/home`.
-The following steps describe an example of how to implement the function in your microservices.
+The following steps describe an example of how to implement the function in your microservices.
-1. You need [a route config](https://github.com/Azure-Samples/animal-rescue/blob/0e343a27f44cc4a4bfbf699280476b0517854d7b/frontend/azure/api-route-config.json#L32) to route the logout request to your application.
+1. Get a route config to route the logout request to your application. For example, see the Animal Rescue UI pages route config in the [animal-rescue](https://github.com/Azure-Samples/animal-rescue/blob/0e343a27f44cc4a4bfbf699280476b0517854d7b/frontend/azure/api-route-config.json#L32) repository on GitHub.
-1. In that application, you can add whatever logout logic you need. At the end, you need to [send a get request](https://github.com/Azure-Samples/animal-rescue/blob/0e343a27f44cc4a4bfbf699280476b0517854d7b/frontend/src/App.js#L84) to the gateway's `/scg-logout` endpoint.
+1. Add whatever logout logic you need to the application. At the end, you need to send a `GET` request to the gateway's `/scg-logout` endpoint, as shown in the `return` value for the `getActionButton` method in the [animal-rescue](https://github.com/Azure-Samples/animal-rescue/blob/0e343a27f44cc4a4bfbf699280476b0517854d7b/frontend/src/App.js#L84) repository.
> [!NOTE]
-> The value of the redirect parameter is a valid path on the gateway service instance. You can't redirect to an external URL.
+> The value of the redirect parameter must be a valid path on the gateway service instance. You can't redirect to an external URL.
### Log out just the SSO session
-If you send the GET request to the `/scg-logout` endpoint using a `XMLHttpRequest` (XHR), then the 302 redirect could be swallowed and not handled in the response handler. In this case, the user would only be logged out of the SSO session on the gateway service instance and would still have a valid IdP session. The behavior typically seen in this case is that if the user attempts to log in again, they are automatically sent back to the gateway as authenticated from IdP.
+If you send the `GET` request to the `/scg-logout` endpoint using a `XMLHttpRequest` (XHR), then the `302` redirect could be swallowed and not handled in the response handler. In this case, the user would only be logged out of the SSO session on the gateway service instance and would still have a valid IdP session. The behavior typically seen in this case is that if the user attempts to log in again, they're automatically sent back to the gateway as authenticated from IdP.
-You need to have a route configuration to route the logout request to your application, as shown in the following example. This code will make a gateway-only logout SSO session.
+You need to have a route configuration to route the logout request to your application, as shown in the following example. This code makes a gateway-only logout SSO session.
```java const req = new XMLHttpRequest();
Cross-origin resource sharing (CORS) allows restricted resources on a web page t
| allowedOrigins | Allowed origins to make cross-site requests. |
| allowedMethods | Allowed HTTP methods on cross-site requests. |
| allowedHeaders | Allowed headers in cross-site request. |
-| maxAge | How long, in seconds, the response from a pre-flight request can be cached by clients. |
+| maxAge | How long, in seconds, the response from a preflight request is cached by clients. |
| allowCredentials | Whether user credentials are supported on cross-site requests. |
| exposedHeaders | HTTP response headers to expose for cross-site requests. |

> [!NOTE]
-> Be sure you have the correct CORS configuration if you want to integrate with the [API portal](./how-to-use-enterprise-api-portal.md). For an example, see the [Configure Spring Cloud Gateway](#configure-spring-cloud-gateway) section.
+> Be sure you have the correct CORS configuration if you want to integrate with API portal. For more information, see the [Configure Spring Cloud Gateway](#configure-spring-cloud-gateway) section.
## Use service scaling
-Customization of resource allocation for Spring Cloud Gateway instances is supported, including vCpu, memory, and instance count.
+You can customize resource allocation for Spring Cloud Gateway instances, including vCpu, memory, and instance count.
> [!NOTE]
> For high availability, a single replica is not recommended.
-The following table describes the default resource usage:
+The following table describes the default resource usage.
| Component name | Instance count | vCPU per instance | Memory per instance |
|-|-|-||
The following table describes the default resource usage:
## Configure application performance monitoring
-There are several types of application performance monitoring (APM) Java agents provided by Spring Cloud Gateway to monitor a gateway managed by Azure Spring Apps.
+To monitor Spring Cloud Gateway, you can configure application performance monitoring (APM). The following table lists the five types of APM Java agents provided by Spring Cloud Gateway and their required environment variables.
+
+| Java Agent | Required environment variables |
+| | |
+| Application Insights | `APPLICATIONINSIGHTS_CONNECTION_STRING` |
+| Dynatrace | `DT_TENANT`<br>`DT_TENANTTOKEN`<br>`DT_CONNECTION_POINT` |
+| New Relic | `NEW_RELIC_LICENSE_KEY`<br>`NEW_RELIC_APP_NAME` |
+| AppDynamics | `APPDYNAMICS_AGENT_APPLICATION_NAME`<br>`APPDYNAMICS_AGENT_TIER_NAME`<br>`APPDYNAMICS_AGENT_NODE_NAME`<br> `APPDYNAMICS_AGENT_ACCOUNT_NAME`<br>`APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY`<br>`APPDYNAMICS_CONTROLLER_HOST_NAME`<br>`APPDYNAMICS_CONTROLLER_SSL_ENABLED`<br>`APPDYNAMICS_CONTROLLER_PORT` |
+| ElasticAPM | `ELASTIC_APM_SERVICE_NAME`<br>`ELASTIC_APM_APPLICATION_PACKAGES`<br>`ELASTIC_APM_SERVER_URL` |
+
+For other supported environment variables, see the following sources:
+
+- [Application Insights public document](../azure-monitor/app/app-insights-overview.md?tabs=net)
+- [Dynatrace Environment Variables](https://www.dynatrace.com/support/help/setup-and-configuration/setup-on-cloud-platforms/microsoft-azure-services/azure-integrations/azure-spring#envvar)
+- [New Relic Environment Variables](https://docs.newrelic.com/docs/apm/agents/java-agent/configuration/java-agent-configuration-config-file/#Environment_Variables)
+- [AppDynamics Environment Variables](https://docs.appdynamics.com/21.11/en/application-monitoring/install-app-server-agents/java-agent/monitor-azure-spring-cloud-with-java-agent#MonitorAzureSpringCloudwithJavaAgent-ConfigureUsingtheEnvironmentVariablesorSystemProperties)
+- [Elastic Environment Variables](https://www.elastic.co/guide/en/apm/agent/java/master/configuration.html).
+
+### Manage APM in Spring Cloud Gateway
+
+You can use the Azure portal or the Azure CLI to set up application performance monitoring (APM) in Spring Cloud Gateway. You can also specify the types of APM Java agents to use and the corresponding APM environment variables they support.
-### [Azure portal](#tab/Azure-portal)
+#### [Azure portal](#tab/Azure-portal)
Use the following steps to set up APM using the Azure portal:
-1. Open the **Spring Cloud Gateway** page and select the **Configuration** tab.
+1. In your Azure Spring Apps instance, select **Spring Cloud Gateway** in the navigation page and then select **Configuration**.
1. Choose the APM type in the **APM** list to monitor a gateway.
Use the following steps to set up APM using the Azure portal:
Updating the configuration can take a few minutes. You should get a notification when the configuration is complete.
-### [Azure CLI](#tab/Azure-CLI)
+#### [Azure CLI](#tab/Azure-CLI)
Use the following command to set up APM using Azure CLI:
az spring gateway update \
--secrets <key=value> ``` -
+The allowed values for `--apm-types` are `ApplicationInsights`, `AppDynamics`, `Dynatrace`, `NewRelic`, and `ElasticAPM`. The following command shows how to configure Application Insights as an example.
+
+```azurecli
+az spring gateway update \
+ --apm-types ApplicationInsights \
+ --properties APPLICATIONINSIGHTS_CONNECTION_STRING=<THE CONNECTION STRING OF YOUR APPINSIGHTS> APPLICATIONINSIGHTS_SAMPLE_RATE=10
+```
+
+You can also put environment variables in the `--secrets` parameter instead of `--properties`, which keeps their values more secure in network transmission and in back-end data storage.
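+
+For example, the following sketch passes the same Application Insights connection string as a secret instead of a property (the placeholder value is yours to supply):
+
+```azurecli
+az spring gateway update \
+    --apm-types ApplicationInsights \
+    --secrets APPLICATIONINSIGHTS_CONNECTION_STRING=<connection-string>
+```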
-The supported APM types are `ApplicationInsights`, `AppDynamics`, `Dynatrace`, `NewRelic`, and `ElasticAPM`. For more information about the functions provided and which environment variables are exposed, see the public documentation for the APM Java agent you're using. Azure Spring Apps will upgrade the APM agent with the same cadence as deployed apps to keep compatibility of agents between Spring Cloud Gateway and apps.
+ > [!NOTE]
-> By default, Azure Spring Apps prints the logs of the APM Java agent to `STDOUT`. These logs are mixed with the Spring Cloud Gateway logs. You can check the version of the APM agent used in the logs. You can query these logs in Log Analytics to troubleshoot.
+> Azure Spring Apps upgrades the APM agent and deployed apps with the same cadence to keep compatibility of agents between Spring Cloud Gateway and Spring apps.
+>
+> By default, Azure Spring Apps prints the logs of the APM Java agent to `STDOUT`. These logs are included with the Spring Cloud Gateway logs. You can check the version of the APM agent used in the logs. You can query these logs in Log Analytics to troubleshoot.
> To make the APM agents work correctly, increase the CPU and memory of Spring Cloud Gateway. ## Next steps
spring-apps How To Enterprise Marketplace Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-marketplace-offer.md
To provide the best customer experience to manage the Tanzu component license pu
Under this implicit Azure Marketplace third-party offer purchase from VMware, your personal data and application vCPU usage data is shared with VMware. You agree to this data sharing when you agree to the marketplace terms upon creating the service instance.
-To purchase the Tanzu component license successfully, the billing account of your subscription must be included in one of the locations listed in the [Supported geographic locations of billing account](#supported-geographic-locations-of-billing-account) section. Because of tax management restrictions from VMware in some countries, not all countries are supported.
+To purchase the Tanzu component license successfully, the billing account of your subscription must be included in one of the locations listed in the [Supported geographic locations of billing account](#supported-geographic-locations-of-billing-account) section. Because of tax management restrictions from VMware in some countries/regions, not all countries/regions are supported.
The extra license fees apply only to the Enterprise tier. In the Azure Spring Apps Standard tier, there are no extra license fees because the managed Spring components use the OSS config server and Eureka server. No other third-party license fees are required.
spring-apps Quickstart Configure Single Sign On Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-configure-single-sign-on-enterprise.md
Title: "Quickstart - Configure single sign-on for applications using Azure Spring Apps Enterprise tier"
+ Title: Quickstart - Configure single sign-on for applications using Azure Spring Apps Enterprise tier
description: Describes single sign-on configuration for Azure Spring Apps Enterprise tier.
This quickstart shows you how to configure single sign-on for applications runni
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- Understand and fulfill the [Requirements](how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise Tier in Azure Marketplace](how-to-enterprise-marketplace-offer.md).-- [The Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli).
+- A license for Azure Spring Apps Enterprise tier. For more information, see [Enterprise tier in Azure Marketplace](how-to-enterprise-marketplace-offer.md).
+- [The Azure CLI version 2.37.0 or higher](/cli/azure/install-azure-cli).
- [Git](https://git-scm.com/). - [jq](https://stedolan.github.io/jq/download/) - [!INCLUDE [install-enterprise-extension](includes/install-enterprise-extension.md)]
This quickstart shows you how to configure single sign-on for applications runni
## Prepare single sign-on credentials
-To configure single sign-on for the application, you'll need to prepare credentials. The following sections describe steps for an existing provider or provisioning an application registration with Azure Active Directory.
+To configure single sign-on for the application, you need to prepare credentials. The following sections describe steps for using an existing provider or provisioning an application registration with Azure Active Directory.
### Use an existing provider Follow these steps to configure single sign-on using an existing Identity Provider. If you're provisioning an Azure Active Directory App Registration, skip ahead to the following section, [Create and configure an application registration with Azure Active Directory](#create-and-configure-an-application-registration-with-azure-active-directory).
-1. Configure your existing identity provider to allow redirects back to Spring Cloud Gateway and API Portal. Spring Cloud Gateway has a single URI to allow re-entry to the gateway. API Portal has two URIs for supporting the user interface and underlying API. Retrieve these URIs by using the following commands, then add them to your single sign-on provider's configuration.
+1. Configure your existing identity provider to allow redirects back to Spring Cloud Gateway for VMware Tanzu and API portal for VMware Tanzu. Spring Cloud Gateway has a single URI to allow re-entry to the gateway. API portal has two URIs for supporting the user interface and underlying API. The following commands retrieve these URIs that you add to your single sign-on provider's configuration.
```azurecli GATEWAY_URL=$(az spring gateway show \
Follow these steps to configure single sign-on using an existing Identity Provid
1. Obtain the `Client ID` and `Client Secret` for your identity provider.
-1. Obtain the `Issuer URI` for your identity provider. You must configure the provider with an issuer URI, which is the URI that it asserts as its Issuer Identifier. For example, if the `issuer-uri` provided is "https://example.com", then an OpenID Provider Configuration Request will be made to "https://example.com/.well-known/openid-configuration". The result is expected to be an OpenID Provider Configuration Response.
+1. Obtain the `Issuer URI` for your identity provider. You must configure the provider with an issuer URI, which is the URI that it asserts as its Issuer Identifier. For example, if the `issuer-uri` provided is `https://example.com`, then an OpenID Provider Configuration Request is made to `https://example.com/.well-known/openid-configuration`. The result is expected to be an OpenID Provider Configuration Response.
> [!NOTE]
- > You can only use authorization servers supporting OpenID Connect Discovery protocol.
+ > You can only use authorization servers that support the OpenID Connect Discovery protocol.
-1. Obtain the `JWK URI` for your identity provider for use later. The `JWK URI` typically takes the form `${ISSUER_URI}/keys` or `${ISSUER_URI}/<version>/keys`. The Identity Service application will use the public JSON Web Keys (JWK) to verify JSON Web Tokens (JWT) issued by your single sign-on identity provider's authorization server.
+1. Obtain the `JWK URI` for your identity provider for use later. The `JWK URI` typically takes the form `${ISSUER_URI}/keys` or `${ISSUER_URI}/<version>/keys`. The Identity Service application uses the public JSON Web Keys (JWK) to verify JSON Web Tokens (JWT) issued by your single sign-on identity provider's authorization server.
### Create and configure an application registration with Azure Active Directory
To register the application with Azure Active Directory, follow these steps. If
az ad sp create --id ${APPLICATION_ID} ```
-1. Use the following commands to retrieve the URLs for Spring Cloud Gateway and API Portal and add the necessary Reply URLs to the Active Directory App Registration:
+1. Use the following commands to retrieve the URLs for Spring Cloud Gateway and API portal, and add the necessary Reply URLs to the Active Directory App Registration.
```azurecli APPLICATION_ID=$(cat ad.json | jq -r '.appId')
To register the application with Azure Active Directory, follow these steps. If
az ad app update \ --id ${APPLICATION_ID} \
- --reply-urls "https://${GATEWAY_URL}/login/oauth2/code/sso" "https://${PORTAL_URL}/oauth2-redirect.html" "https://${PORTAL_URL}/login/oauth2/code/sso"
+ --web-redirect-uris "https://${GATEWAY_URL}/login/oauth2/code/sso" "https://${PORTAL_URL}/oauth2-redirect.html" "https://${PORTAL_URL}/login/oauth2/code/sso"
``` 1. Use the following command to retrieve the application's `Client ID`. Save the output to use later in this quickstart.
To register the application with Azure Active Directory, follow these steps. If
echo "https://login.microsoftonline.com/${TENANT_ID}/v2.0" ```
-1. Retrieve the `JWK URI` from the output of the following command. The Identity Service application will use the public JSON Web Keys (JWK) to verify JSON Web Tokens (JWT) issued by Active Directory.
+1. Retrieve the `JWK URI` from the output of the following command. The Identity Service application uses the public JSON Web Keys (JWK) to verify JSON Web Tokens (JWT) issued by Active Directory.
```bash TENANT_ID=$(cat sso.json | jq -r '.tenant')
To register the application with Azure Active Directory, follow these steps. If
## Deploy the Identity Service application
-To complete the single sign-on experience, use the following steps to deploy the Identity Service application. The Identity Service application provides a single route to aid in identifying the user. For these steps, be sure to navigate to the project folder before running any commands.
+To complete the single sign-on experience, use the following steps to deploy the Identity Service application. The Identity Service application provides a single route to aid in identifying the user.
+
+1. Navigate to the project folder.
1. Use the following command to create the `identity-service` application:
To complete the single sign-on experience, use the following steps to deploy the
## Configure single sign-on for Spring Cloud Gateway
-You can configure Spring Cloud Gateway to authenticate requests via single sign-on. To configure Spring Cloud Gateway to use single sign-on, follow these steps:
+You can configure Spring Cloud Gateway to authenticate requests using single sign-on. To configure Spring Cloud Gateway to use single sign-on, follow these steps:
1. Use the following commands to configure Spring Cloud Gateway to use single sign-on:
You can configure Spring Cloud Gateway to authenticate requests via single sign-
echo "https://${GATEWAY_URL}" ```
- You can open the output URL in a browser to explore the updated application. The Log In function will now work, allowing you to add items to the cart and place orders. After you sign in, the customer information button will display the signed-in username.
+ You can open the output URL in a browser to explore the updated application. The Log In function is now operational, allowing you to add items to the cart and place orders. After you sign in, the customer information button displays the signed-in username.
-## Configure single sign-on for API Portal
+## Configure single sign-on for API portal
-You can configure API Portal to use single sign-on to require authentication before exploring APIs. Use the following commands to configure single sign-on for API Portal:
+You can configure API portal for VMware Tanzu to use single sign-on to require authentication before exploring APIs. Use the following commands to configure single sign-on for API portal:
```azurecli PORTAL_URL=$(az spring api-portal show \
az spring api-portal update \
--issuer-uri <issuer-uri> ```
-Use the following commands to retrieve the URL for API Portal:
+Use the following commands to retrieve the URL for API portal:
```azurecli PORTAL_URL=$(az spring api-portal show \
PORTAL_URL=$(az spring api-portal show \
echo "https://${PORTAL_URL}" ```
-You can open the output URL in a browser to explore the application APIs. This time, you'll be directed to sign on before exploring APIs.
+You can open the output URL in a browser to explore the application APIs. You're directed to sign on before exploring APIs.
storage-mover Deployment Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/deployment-planning.md
When you deploy an Azure storage mover resource, you also need to choose a regio
:::image type="content" source="media/across-articles/data-vs-management-path.png" alt-text="A diagram illustrating a migration's path by showing two arrows. The first arrow represents data traveling to a storage account from the source and agent, and a second arrow represents the management and control info to the storage mover resource and service." lightbox="media/across-articles/data-vs-management-path-large.png":::
-In most cases, deploying only a single storage mover resource is the best option, even when you need to migrate files located in other countries. One or more migration agents are registered to a storage mover resource. An agent can only be used by the storage mover to which it's registered. The agents themselves should be located close to the source storage, even if that means registering agents deployed in other countries to a storage mover resource located across the globe.
+In most cases, deploying only a single storage mover resource is the best option, even when you need to migrate files located in other countries/regions. One or more migration agents are registered to a storage mover resource. An agent can only be used by the storage mover to which it's registered. The agents themselves should be located close to the source storage, even if that means registering agents deployed in other countries/regions to a storage mover resource located across the globe.
Only deploy multiple storage mover resources if you have distinct sets of migration agents. Having separate storage mover resources and agents allows you to keep permissions separate for the admins managing their part of the source or target storage.
storage Storage Custom Domain Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-custom-domain-name.md
Title: Map a custom domain to an Azure Blob Storage endpoint description: Map a custom domain to a Blob Storage or web endpoint in an Azure storage account.-++ Last updated 02/12/2021-+
storage Storage Files Identity Ad Ds Mount File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-mount-file-share.md
Previously updated : 01/24/2023 Last updated : 04/03/2023 recommendations: false
Sign in to the client using the credentials of the identity that you granted per
Before you can mount the Azure file share, make sure you've gone through the following prerequisites: -- If you're mounting the file share from a client that has previously connected to the file share using your storage account key, make sure that you've disconnected the share, removed the persistent credentials of the storage account key, and are currently using AD DS credentials for authentication. For instructions on how to remove cached credentials with storage account key and delete existing SMB connections before initializing new connection with Azure AD or AD credentials, follow the two-step process on the [FAQ page](./storage-files-faq.md#ad-ds--azure-ad-ds-authentication).
+- If you're mounting the file share from a client that has previously connected to the file share using your storage account key, make sure that you've disconnected the share, removed the persistent credentials of the storage account key, and are currently using AD DS credentials for authentication. For instructions on how to remove cached credentials with storage account key and delete existing SMB connections before initializing a new connection with AD DS or Azure AD credentials, follow the two-step process on the [FAQ page](./storage-files-faq.md#ad-ds--azure-ad-ds-authentication).
- Your client must have line of sight to your AD DS. If your machine or VM is outside of the network managed by your AD DS, you'll need to enable VPN to reach AD DS for authentication. ## Mount the file share from a domain-joined VM Run the PowerShell script below or [use the Azure portal](storage-files-quick-create-use-windows.md#map-the-azure-file-share-to-a-windows-drive) to persistently mount the Azure file share and map it to drive Z: on Windows. If Z: is already in use, replace it with an available drive letter. The script will check to see if this storage account is accessible via TCP port 445, which is the port SMB uses. Remember to replace the placeholder values with your own values. For more information, see [Use an Azure file share with Windows](storage-how-to-use-files-windows.md).
-Always mount Azure file shares using file.core.windows.net, even if you set up a private endpoint for your share. Using CNAME for file share mount isn't supported for identity-based authentication.
+Mount Azure file shares using `file.core.windows.net`, even if you set up a private endpoint for your share.
```powershell $connectTestResult = Test-NetConnection -ComputerName <storage-account-name>.file.core.windows.net -Port 445
You can also use the `net-use` command from a Windows prompt to mount the file s
net use Z: \\<YourStorageAccountName>.file.core.windows.net\<FileShareName> ```
-If you run into issues mounting with AD DS credentials, refer to [Unable to mount Azure file shares with AD credentials](files-troubleshoot-smb-authentication.md#unable-to-mount-azure-file-shares-with-ad-credentials).
+If you run into issues, refer to [Unable to mount Azure file shares with AD credentials](files-troubleshoot-smb-authentication.md#unable-to-mount-azure-file-shares-with-ad-credentials).
## Mount the file share from a non-domain-joined VM Non-domain-joined VMs can access Azure file shares if they have line-of-sight to the domain controllers. The user accessing the file share must have an identity and credentials in the AD domain.
-To mount a file share from a non-domain-joined VM, the user must either:
--- Provide explicit credentials such as **DOMAINNAME\username** where **DOMAINNAME** is the AD domain and **username** is the identityΓÇÖs user name, or-- Use the notation **username@domainFQDN**, where **domainFQDN** is the fully qualified domain name.-
-Using one of these approaches will allow the client to contact the domain controller to request and receive Kerberos tickets.
+To mount a file share from a non-domain-joined VM, use the notation **username@domainFQDN**, where **domainFQDN** is the fully qualified domain name. This allows the client to contact the domain controller to request and receive Kerberos tickets. You can get the value of **domainFQDN** by running `(Get-ADDomain).Dnsroot` in Active Directory PowerShell.
For example:
-```
-net use Z: \\<YourStorageAccountName>.file.core.windows.net\<FileShareName> /user:<DOMAINNAME\username>
-```
-
-or
- ``` net use Z: \\<YourStorageAccountName>.file.core.windows.net\<FileShareName> /user:<username@domainFQDN> ```
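As a minimal PowerShell sketch of the same flow (assuming the Active Directory PowerShell module from RSAT is installed; the storage account, share, and user names remain placeholders):

```powershell
# Look up the domain FQDN, then mount the share using the username@domainFQDN notation.
$domainFQDN = (Get-ADDomain).DnsRoot
net use Z: "\\<YourStorageAccountName>.file.core.windows.net\<FileShareName>" /user:"<username>@$domainFQDN"
```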
synapse-analytics Gateway Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/gateway-ip-addresses.md
Periodically, we will retire Gateways using old hardware and migrate the traffic
| Region name | Gateway IP addresses | Gateway IP address subnets | | | | |
-| Australia Central | 20.36.105.0, 20.36.104.6, 20.36.104.7 | 20.36.105.32/29 |
+| Australia Central | 20.36.104.6, 20.36.104.7 | 20.36.105.32/29 |
| Australia Central 2 | 20.36.113.0, 20.36.112.6 | 20.36.113.32/29 | | Australia East | 13.75.149.87, 40.79.161.1, 13.70.112.9 | 13.70.112.32/29, 40.79.160.32/29, 40.79.168.32/29 |
-| Australia Southeast | 191.239.192.109, 13.73.109.251, 13.77.48.10, 13.77.49.32 | 13.77.49.32/29 |
+| Australia Southeast | 13.73.109.251, 13.77.48.10, 13.77.49.32 | 13.77.49.32/29 |
| Brazil South | 191.233.200.14, 191.234.144.16, 191.234.152.3 | 191.233.200.32/29, 191.234.144.32/29 |
-| Canada Central | 40.85.224.249, 52.246.152.0, 20.38.144.1 | 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29 |
-| Canada East | 40.86.226.166, 52.242.30.154, 40.69.105.9 , 40.69.105.10 | 40.69.105.32/29|
-| Central US | 13.67.215.62, 52.182.137.15, 104.208.21.1, 13.89.169.20 | 104.208.21.192/29, 13.89.168.192/29, 52.182.136.192/29 |
+| Canada Central | 52.246.152.0, 20.38.144.1 | 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29 |
+| Canada East | 40.69.105.9, 40.69.105.10 | 40.69.105.32/29|
+| Central US | 104.208.21.1, 13.89.169.20 | 104.208.21.192/29, 13.89.168.192/29, 52.182.136.192/29 |
| China East | 139.219.130.35 | 52.130.112.136/29 | | China East 2 | 40.73.82.1 | 52.130.120.88/29 |
-| China North | 139.219.15.17 | 52.130.128.88/29 |
+| China North | | 52.130.128.88/29 |
| China North 2 | 40.73.50.0 | 52.130.40.64/29 |
-| East Asia | 52.175.33.150, 13.75.32.4, 13.75.32.14, 20.205.77.200, 20.205.83.224 | 13.75.32.192/29, 13.75.33.192/29 |
+| East Asia | 13.75.32.4, 13.75.32.14, 20.205.77.200, 20.205.83.224 | 13.75.32.192/29, 13.75.33.192/29 |
| East US | 40.121.158.30, 40.79.153.12, 40.78.225.32 | 20.42.65.64/29, 20.42.73.0/29, 52.168.116.64/29 |
-| East US 2 | 40.79.84.180, 52.177.185.181, 52.167.104.0, 191.239.224.107, 104.208.150.3, 40.70.144.193 | 104.208.150.192/29, 40.70.144.192/29, 52.167.104.192/29 |
-| France Central | 40.79.137.0, 40.79.129.1, 40.79.137.8, 40.79.145.12 | 40.79.136.32/29, 40.79.144.32/29 |
+| East US 2 | 40.79.84.180, 52.177.185.181, 52.167.104.0, 104.208.150.3, 40.70.144.193 | 104.208.150.192/29, 40.70.144.192/29, 52.167.104.192/29 |
+| France Central | 40.79.129.1, 40.79.137.8, 40.79.145.12 | 40.79.136.32/29, 40.79.144.32/29 |
| France South | 40.79.177.0, 40.79.177.10 ,40.79.177.12 | 40.79.176.40/29, 40.79.177.32/29 | | Germany West Central | 51.116.240.0, 51.116.248.0, 51.116.152.0 | 51.116.152.32/29, 51.116.240.32/29, 51.116.248.32/29 |
+| Germany North | 51.116.56.0 | |
| Central India | 104.211.96.159, 104.211.86.30 , 104.211.86.31, 40.80.48.32, 20.192.96.32 | 104.211.86.32/29, 20.192.96.32/29 | | South India | 104.211.224.146 | 40.78.192.32/29, 40.78.193.32/29 | | West India | 104.211.160.80, 104.211.144.4 | 104.211.144.32/29, 104.211.145.32/29 |
-| Japan East | 13.78.61.196, 40.79.184.8, 13.78.106.224, 40.79.192.5, 13.78.104.32, 40.79.184.32 | 13.78.104.32/29, 40.79.184.32/29, 40.79.192.32/29 |
-| Japan West | 104.214.148.156, 40.74.100.192, 40.74.97.10 | 40.74.96.32/29 |
+| Japan East | 40.79.184.8, 40.79.192.5, 13.78.104.32, 40.79.184.32 | 13.78.104.32/29, 40.79.184.32/29, 40.79.192.32/29 |
+| Japan West | 104.214.148.156, 40.74.97.10 | 40.74.96.32/29 |
| Korea Central | 52.231.32.42, 52.231.17.22 ,52.231.17.23, 20.44.24.32, 20.194.64.33 | 20.194.64.32/29,20.44.24.32/29, 52.231.16.32/29 |
-| Korea South | 52.231.200.86, 52.231.151.96 | |
-| North Central US | 23.96.178.199, 23.98.55.75, 52.162.104.33, 52.162.105.9 | 52.162.105.192/29 |
-| North Europe | 40.113.93.91, 52.138.224.1, 13.74.104.113 | 13.69.233.136/29, 13.74.105.192/29, 52.138.229.72/29 |
+| Korea South | 52.231.151.96 | |
+| North Central US | 52.162.104.33, 52.162.105.9 | 52.162.105.192/29 |
+| North Europe | 52.138.224.1, 13.74.104.113 | 13.69.233.136/29, 13.74.105.192/29, 52.138.229.72/29 |
| Norway East | 51.120.96.0, 51.120.96.33, 51.120.104.32, 51.120.208.32 | 51.120.96.32/29 | | Norway West | 51.120.216.0 | 51.120.217.32/29 | | South Africa North | 102.133.152.0, 102.133.120.2, 102.133.152.32 | 102.133.120.32/29, 102.133.152.32/29, 102.133.248.32/29| | South Africa West | 102.133.24.0 | 102.133.25.32/29 |
-| South Central US | 13.66.62.124, 104.214.16.32, 20.45.121.1, 20.49.88.1 | 20.45.121.32/29, 20.49.88.32/29, 20.49.89.32/29, 40.124.64.136/29 |
+| South Central US | 104.214.16.32, 20.45.121.1, 20.49.88.1 | 20.45.121.32/29, 20.49.88.32/29, 20.49.89.32/29, 40.124.64.136/29 |
| South East Asia | 104.43.15.0, 40.78.232.3, 13.67.16.193 | 13.67.16.192/29, 23.98.80.192/29, 40.78.232.192/29|
-| Switzerland North | 51.107.56.0, 51.107.57.0 | 51.107.56.32/29, 51.103.203.192/29, 20.208.19.192/29, 51.107.242.32/27 |
-| Switzerland West | 51.107.152.0, 51.107.153.0 | 51.107.153.32/29 |
+| Switzerland North | 51.107.56.0 | 51.107.56.32/29, 51.103.203.192/29, 20.208.19.192/29, 51.107.242.32/27 |
+| Switzerland West | 51.107.152.0 | 51.107.153.32/29 |
| UAE Central | 20.37.72.64 | 20.37.72.96/29, 20.37.73.96/29 | | UAE North | 65.52.248.0 | 40.120.72.32/29, 65.52.248.32/29 | | UK South | 51.140.184.11, 51.105.64.0, 51.140.144.36, 51.105.72.32 | 51.105.64.32/29, 51.105.72.32/29, 51.140.144.32/29 | | UK West | 51.141.8.11, 51.140.208.96, 51.140.208.97 | 51.140.208.96/29, 51.140.209.32/29 | | West Central US | 13.78.145.25, 13.78.248.43, 13.71.193.32, 13.71.193.33 | 13.71.193.32/29 |
-| West Europe | 40.68.37.158, 104.40.168.105, 52.236.184.163 | 104.40.169.32/29, 13.69.112.168/29, 52.236.184.32/29 |
+| West Europe | 104.40.168.105, 52.236.184.163 | 104.40.169.32/29, 13.69.112.168/29, 52.236.184.32/29 |
| West US | 104.42.238.205, 13.86.216.196 | 13.86.217.224/29 |
-| West US 2 | 13.66.226.202, 40.78.240.8, 40.78.248.10 | 13.66.136.192/29, 40.78.240.192/29, 40.78.248.192/29 |
+| West US 2 | 40.78.240.8, 40.78.248.10 | 13.66.136.192/29, 40.78.240.192/29, 40.78.248.192/29 |
| West US 3 | 20.150.168.0, 20.150.184.2 | 20.150.168.32/29, 20.150.176.32/29, 20.150.184.32/29 |
virtual-desktop Msixmgr Tool Syntax Description https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/msixmgr-tool-syntax-description.md
+
+ Title: MSIXMGR tool parameters - Azure Virtual Desktop
+description: This article contains the command line syntax to help you understand and get the most from the MSIXMGR tool. In this article, we'll show you the syntax of all the parameters used by the MSIXMGR tool.
+++ Last updated : 04/04/2023++
+# MSIXMGR tool parameters
+
+This article describes the command-line syntax and parameters for the MSIXMGR tool to help you understand and get the most from it.
+
+## Prerequisites
+
+Before you can follow the instructions in this article, you'll need to do the following things:
+
+- [Download the MSIXMGR tool](https://aka.ms/msixmgr)
+- Get an MSIX-packaged application (.MSIX file)
+- Get administrative permissions on the machine where you'll create the MSIX image
+- [Set up MSIXMGR tool](/azure/virtual-desktop/app-attach-msixmgr)
+
+## Parameters
+
+### -AddPackage or -p
+
+Adds the package at the specified file path.
+
+```
+-AddPackage [Path to the MSIX package]
+```
+
+#### Example
+
+```
+msixmgr.exe -AddPackage "C:\SomeDirectory\myapp.msix"
+```
+
+### -RemovePackage or -x
+
+Removes the package with the specified package full name.
+
+```
+-RemovePackage [Package name]
+```
+
+#### Example
+
+```
+msixmgr.exe -RemovePackage myapp_0.0.0.1_x64__8wekyb3d8bbwe
+```
+
+### -FindPackage
+
+Finds a package with the specified package full name.
+
+```
+-FindPackage [Package name]
+```
+
+#### Example
+
+```
+msixmgr.exe -FindPackage myapp_0.0.0.1_x64__8wekyb3d8bbwe
+```
+
+### -ApplyACLs
+
+Applies ACLs to a package folder (an unpacked package).
+
+```
+-ApplyACLs -packagePath [Path to the package folder]
+```
+
+#### Example
+
+```
+msixmgr.exe -ApplyACLs -packagePath "C:\SomeDirectory\name_version_arch_pub"
+```
+
+### -MountImage
+
+Mounts the VHD, VHDX, or CIM image.
+
+```
+-MountImage -imagePath [Path to the MSIX image] -fileType [VHD | VHDX | CIM]
+```
+
+#### Example
+
+```
+msixmgr.exe -MountImage -imagePath "C:\SomeDirectory\myapp.cim" -fileType CIM
+```
+
+|Optional parameters|Description|Example|
+| -- | -- | -- |
+|-readOnly|Boolean (true or false) indicating whether the image should be mounted as read-only. If not specified, the image is mounted as read-only by default. |`msixmgr.exe -MountImage -imagePath "C:\SomeDirectory\myapp.cim" -filetype CIM -readOnly false`|
+
+### -quietUX
+
+Quiet mode, no user interaction. Use in conjunction with other parameters.
+
+```
+-quietUX
+```
+
+#### Example
+
+```
+msixmgr.exe -AddPackage "C:\SomeDirectory\myapp.msix" -quietUX
+```
+
+### -UnmountImage
+
+Unmounts the VHD, VHDX, or CIM image.
+
+```
+-UnmountImage -imagePath [Path to the MSIX image] -fileType [VHD | VHDX | CIM]
+```
+
+#### Example
+
+```
+msixmgr.exe -UnmountImage -imagePath "C:\SomeDirectory\myapp.vhdx" -fileType VHDX
+```
+
+|Optional parameters|Description|Example|
+| -- | -- | -- |
+|-volumeId|Specifies the GUID (without curly braces) associated with the image to unmount. This is an optional parameter that applies only to CIM files. |`msixmgr.exe -UnmountImage -volumeId 0ea000fe-0021-465a-887b-6dc94f15e86e -filetype CIM`|
+
+### -Unpack
+
+Unpacks a package (`.appx`, `.msix`, `.appxbundle`, `.msixbundle`) and extracts its contents to a folder.
+
+```
+-Unpack -packagePath [Path to package to unpack OR path to a directory containing multiple packages to unpack] -destination [Directory to place the resulting package folder(s) in]
+```
+
+> [!NOTE]
+> If you're using VHD or VHDX, we recommend that the image size is four times the size of the MSIX package.
+
+#### Example
+
+```
+msixmgr.exe -Unpack -packagePath "C:\SomeDirectory\myapp.msix" -destination "C:\Apps\myapp"
+```
+
+|Optional parameters|Description|Example|
+| -- | -- | -- |
+|-applyacls|Applies ACLs to the resulting package folder(s) and their parent folder. |`msixmgr.exe -Unpack -packagePath "C:\SomeDirectory\myapp.msix" -destination "C:\Apps\myapp" -applyACLs` |
+|-create|Creates a new image with the specified -filetype and unpacks the packages to that image. |`msixmgr.exe -Unpack -packagePath "C:\SomeDirectory\myapp.msix" -destination "C:\Apps\myapp" -applyACLs -create -fileType VHDX -vhdSize 200` |
+|-fileType|The type of file to unpack packages to. Valid file types include `VHD`, `VHDX`, `CIM`. This is a required parameter when unpacking to CIM files. |`msixmgr.exe -Unpack -packagePath "C:\SomeDirectory\myapp.msix" -destination "C:\Apps\myapp" -applyACLs -create -fileType CIM -rootDirectory apps` |
+|-rootDirectory|Specifies root directory on image to unpack packages to. Required parameter for unpacking to new and existing CIM files. |`msixmgr.exe -Unpack -packagePath "C:\SomeDirectory\myapp.msix" -destination "C:\Apps\myapp" -applyACLs -create -filetype CIM -rootDirectory apps` |
+|-validateSignature|Validates a package's signature file before unpacking the package. This parameter requires that the package's certificate is installed on the machine.<br /><br />For more information, see [Certificate Stores](/windows-hardware/drivers/install/certificate-stores).|`msixmgr.exe -Unpack -packagePath "C:\SomeDirectory\myapp.msix" -destination "C:\Apps\Myapp" -validateSignature -applyACLs`|
+
+### -?
+
+Display help text at the command prompt.
+
+#### Example
+
+```
+msixmgr.exe -?
+```
+
+## Next steps
+
+To learn more about MSIX app attach, check out these articles:
+
+- [What is MSIX app attach?](/azure/virtual-desktop/what-is-app-attach)
+- [Set up MSIX app attach with the Azure portal](/azure/virtual-desktop/app-attach-azure-portal)
+- [Set up MSIX app attach using PowerShell](/azure/virtual-desktop/app-attach-powershell)
+- [Create PowerShell scripts for MSIX app attach](/azure/virtual-desktop/app-attach)
+- [Prepare an MSIX image for Azure Virtual Desktop](/azure/virtual-desktop/app-attach-image-prep)
+- [Set up a file share for MSIX app attach](/azure/virtual-desktop/app-attach-file-share)
+
+If you have questions about MSIX app attach, see our [App attach FAQ](/azure/virtual-desktop/app-attach-faq) and [App attach glossary](/azure/virtual-desktop/app-attach-glossary).
virtual-machines Disks Enable Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-performance.md
$sourceURI=diskOrSnapshotURI
Set-AzContext -SubscriptionName <<yourSubscriptionName>>
-$diskConfig = New-AzDiskConfig -Location $region -CreateOption Copy -DiskSizeGB $size -SkuName $sku -PerfromancePlus $true -SourceResourceID $sourceURI
+$diskConfig = New-AzDiskConfig -Location $region -CreateOption Copy -DiskSizeGB $size -SkuName $sku -PerformancePlus $true -SourceResourceID $sourceURI
-$dataDisk = New-AzDisk -ResourceGroupName $myRG -DiskName $myDisk
+$dataDisk = New-AzDisk -ResourceGroupName $myRG -DiskName $myDisk -Disk $diskconfig
```
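After the disk is created, attaching it to a VM follows the usual pattern. The following is only a sketch under assumed placeholder names; the VM name and LUN aren't part of the original example:

```powershell
# Attach the newly created data disk to an existing VM (placeholder VM name and LUN).
$vm = Get-AzVM -ResourceGroupName $myRG -Name <yourVMName>
$vm = Add-AzVMDataDisk -VM $vm -Name $myDisk -CreateOption Attach -ManagedDiskId $dataDisk.Id -Lun 0
Update-AzVM -ResourceGroupName $myRG -VM $vm
```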
$dataDisk = New-AzDisk -ResourceGroupName $myRG -DiskName $myDisk
- [Create an incremental snapshot for managed disks](disks-incremental-snapshots.md) - [Expand virtual hard disks on a Linux VM](linux/expand-disks.md)-- [How to expand virtual hard disks attached to a Windows virtual machine](windows/expand-os-disk.md)
+- [How to expand virtual hard disks attached to a Windows virtual machine](windows/expand-os-disk.md)
virtual-machines N Series Driver Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/n-series-driver-setup.md
Then run installation commands specific for your distribution.
1. Download and install the CUDA drivers from the NVIDIA website. > [!NOTE]
- > The example below shows the CUDA package path for Ubuntu 16.04. Replace the path specific to the version you plan to use.
+ > The example below shows the CUDA package path for Ubuntu 20.04. Replace the path specific to the version you plan to use.
>
- > Visit the [Nvidia Download Center](https://developer.download.nvidia.com/compute/cuda/repos/) for the full path specific to each version.
+ > Visit the [NVIDIA Download Center](https://developer.download.nvidia.com/compute/cuda/repos/) or the [NVIDIA CUDA Resources page](https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=deb_network) for the full path specific to each version.
> ```bash
- CUDA_REPO_PKG=cuda-repo-ubuntu1604_10.0.130-1_amd64.deb
- wget -O /tmp/${CUDA_REPO_PKG} https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/${CUDA_REPO_PKG}
-
- sudo dpkg -i /tmp/${CUDA_REPO_PKG}
- sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/3bf863cc.pub
- rm -f /tmp/${CUDA_REPO_PKG}
-
+ wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb
+ sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
- sudo apt-get install cuda-drivers
+ sudo apt-get -y install cuda-drivers
+ ``` The installation can take several minutes.
-
-2. To optionally install the complete CUDA toolkit, type:
-
- ```bash
- sudo apt-get install cuda
- ```
-
-3. Reboot the VM and proceed to verify the installation.
+2. Reboot the VM and proceed to verify the installation.
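A common way to check the result after the reboot (not part of the original steps, but standard for NVIDIA drivers) is to run `nvidia-smi`, which lists the detected GPUs and the driver version:

```bash
nvidia-smi
```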
#### CUDA driver updates
sudo apt-get install cuda-drivers
sudo reboot ``` + ### CentOS or Red Hat Enterprise Linux 1. Update the kernel (recommended). If you choose not to update the kernel, ensure that the versions of `kernel-devel` and `dkms` are appropriate for your kernel.
virtual-network Create Peering Different Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-peering-different-subscriptions.md
This tutorial peers virtual networks in the same region. You can also peer virtu
- An Azure account with permissions in both subscriptions or an account in each subscription with the proper permissions to create a virtual network peering. For a list of permissions, see [Virtual network peering permissions](virtual-network-manage-peering.md#permissions).
- - If the virtual networks are in different subscriptions and Active Directory tenants, add the user from each tenant as a guest in the opposite tenant. For more information about guest users, see [Add Azure Active Directory B2B collaboration users in the Azure portal](../active-directory/external-identities/add-users-administrator.md?toc=%2fazure%2fvirtual-network%2ftoc.json#add-guest-users-to-the-directory).
+ - If the virtual networks are in different subscriptions and Active Directory tenants, and you intend to separate the duty of managing the network belonging to each tenant, then add the user from each tenant as a guest in the opposite tenant and assign them the Reader role on the virtual network (a role-assignment sketch follows this list).
+
+ - If the virtual networks are in different subscriptions and Active Directory tenants, and you don't intend to separate the duty of managing the network belonging to each tenant, then add a user from one tenant as a guest in the opposite tenant and assign them the correct permissions to establish a network peering. This user can then initiate and connect the network peering from each subscription.
+
+ - For more information about guest users, see [Add Azure Active Directory B2B collaboration users in the Azure portal](../active-directory/external-identities/add-users-administrator.md?toc=%2fazure%2fvirtual-network%2ftoc.json#add-guest-users-to-the-directory).
- Each user must accept the guest user invitation from the opposite Azure Active Directory tenant.
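As a minimal sketch of that Reader role assignment with the Azure CLI (the guest user object ID and virtual network resource ID are placeholders, not values from this article):

```azurecli
az role assignment create \
    --assignee <guest-user-object-id> \
    --role "Reader" \
    --scope <virtual-network-resource-id>
```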
vpn-gateway Vpn Gateway Vpn Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-vpn-faq.md
You can connect to multiple sites by using Windows PowerShell and the Azure REST
### Is there an additional cost for setting up a VPN gateway as active-active?
-No.
+No. However, you're charged for any additional public IP addresses. See [IP Address Pricing](https://azure.microsoft.com/pricing/details/ip-addresses/).
### What are my cross-premises connection options?