Updates from: 02/25/2023 02:13:03
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Application Provisioning Quarantine Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-quarantine-status.md
A job can go into quarantine regardless of failure counts for issues such as adm
The logic documented here may differ for certain connectors to ensure the best customer experience, but we generally have the following retry cycles after a failure:
-After the first failure, the first retry happens within the next 2 hours (usually in the next sync cycle).
-- The second retry happens 6 hours after the first failure.
-- The third retry happens 12 hours after the first failure.
-- The fourth retry happens 24 hours after the first failure.
-- The fifth retry happens 48 hours after the first failure.
-- The sixth retry happens 72 hours after the first failure.
-- The seventh retry happens 96 hours after the first failure.
-- The eighth retry happens 120 hours after the first failure.
-
-This cycle is repeated every 24 hours until the 30th day when retries are stopped and the job is disabled.
+After the failure, the first retry happens in 6 hours.
+- The second retry happens 12 hours after the first failure.
+- The third retry happens 24 hours after the first failure.
+- The fourth retry happens 48 hours after the first failure.
+- The fifth retry happens 96 hours after the first failure.
+- The sixth retry happens 192 hours after the first failure.
+- The seventh retry happens 384 hours after the first failure.
+- The eighth retry happens 768 hours after the first failure.
+
+Retries stop after the eighth retry, and the escrow entry is removed. The job continues unless it hits the escrow thresholds from the section above.
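As an illustration only (not part of any Azure SDK), the schedule above is a doubling backoff starting at 6 hours after the failure; a short sketch reproduces the documented hours-after-failure values:

```python
# Sketch of the documented retry schedule: each retry is scheduled at double
# the previous interval after the initial failure, starting at 6 hours, and
# retries stop after the 8th attempt.
def retry_schedule(first_delay_hours=6, max_retries=8):
    """Return hours-after-failure for each retry (exponential backoff)."""
    return [first_delay_hours * 2 ** i for i in range(max_retries)]

print(retry_schedule())  # [6, 12, 24, 48, 96, 192, 384, 768]
```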
## How do I get my application out of quarantine?
active-directory Partner List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/partner-list.md
Microsoft verified partners can help you onboard Microsoft Entra Permissions Man
* **Onboarding and Deployment Support** Partners can guide you through the entire onboarding and deployment process for
- ermissions Management across AWS, Azure, and GCP.
+ Permissions Management across AWS, Azure, and GCP.
## Partner list
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md
To apply this grant control, the device must be registered in Azure AD, which re
The following client apps support this setting. This list isn't exhaustive and is subject to change:
- Microsoft Azure Information Protection
-- Microsoft Bookings
- Microsoft Cortana
- Microsoft Dynamics 365
- Microsoft Edge
The following client apps support this setting, this list isn't exhaustive and i
- Microsoft PowerPoint
- Microsoft SharePoint
- Microsoft Skype for Business
-- Microsoft StaffHub
- Microsoft Stream
- Microsoft Teams
- Microsoft To-Do
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
Networks and network services used by clients connecting to identity and resourc
### Supported location policies
-CAE only has insight into [IP-based named locations](../conditional-access/location-condition.md#ip-address-ranges). CAE doesn't have insight into other location conditions like [MFA trusted IPs](../authentication/howto-mfa-mfasettings.md#trusted-ips) or country-based locations. When a user comes from an MFA trusted IP, trusted location that includes MFA Trusted IPs, or country location, CAE won't be enforced after that user moves to a different location. In those cases, Azure AD will issue a one-hour access token without instant IP enforcement check.
+CAE only has insight into [IP-based named locations](../conditional-access/location-condition.md#ipv4-and-ipv6-address-ranges). CAE doesn't have insight into other location conditions like [MFA trusted IPs](../authentication/howto-mfa-mfasettings.md#trusted-ips) or country-based locations. When a user comes from an MFA trusted IP, trusted location that includes MFA Trusted IPs, or country location, CAE won't be enforced after that user moves to a different location. In those cases, Azure AD will issue a one-hour access token without instant IP enforcement check.
> [!IMPORTANT]
> If you want your location policies to be enforced in real time by continuous access evaluation, use only the [IP based Conditional Access location condition](../conditional-access/location-condition.md) and configure all IP addresses, **including both IPv4 and IPv6**, that can be seen by your identity provider and resources provider. Do not use country location conditions or the trusted IPs feature that is available in Azure AD Multi-Factor Authentication's service settings page.
active-directory Howto Conditional Access Policy Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-location.md
Previously updated : 08/22/2022 Last updated : 02/23/2023
# Conditional Access: Block access by location
-With the location condition in Conditional Access, you can control access to your cloud apps based on the network location of a user. The location condition is commonly used to block access from countries/regions where your organization knows traffic shouldn't come from.
+With the location condition in Conditional Access, you can control access to your cloud apps based on the network location of a user. The location condition is commonly used to block access from countries/regions where your organization knows traffic shouldn't come from. For more information about IPv6 support, see the article [IPv6 support in Azure Active Directory](/troubleshoot/azure/active-directory/azure-ad-ipv6-support).
> [!NOTE]
> Conditional Access policies are enforced after first-factor authentication is completed. Conditional Access isn't intended to be an organization's first line of defense for scenarios like denial-of-service (DoS) attacks, but it can use signals from these events to determine access.
active-directory Location Condition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/location-condition.md
Title: Location condition in Azure Active Directory Conditional Access
-description: Use the location condition to control access based on user physical or network location.
+description: Learn about creating location-based Conditional Access policies using Azure AD.
Previously updated : 01/09/2023 Last updated : 02/23/2023
# Using the location condition in a Conditional Access policy
-As explained in the [overview article](overview.md) Conditional Access policies are at their most basic an if-then statement combining signals, to make decisions, and enforce organization policies. One of those signals that can be incorporated into the decision-making process is location.
+At their most basic, Conditional Access policies are if-then statements that combine signals to make decisions and enforce organization policies. One of those signals is location.
![Conceptual Conditional signal plus decision to get enforcement](./media/location-condition/conditional-access-signal-decision-enforcement.png)
Organizations can use this location for common tasks like:
- Requiring multifactor authentication for users accessing a service when they're off the corporate network.
- Blocking access for users accessing a service from specific countries or regions.
-The location is determined by the public IP address a client provides to Azure Active Directory or GPS coordinates provided by the Microsoft Authenticator app. Conditional Access policies by default apply to all IPv4 and IPv6 addresses.
+The location is found using the public IP address a client provides to Azure Active Directory or GPS coordinates provided by the Microsoft Authenticator app. Conditional Access policies by default apply to all IPv4 and IPv6 addresses. For more information about IPv6 support, see the article [IPv6 support in Azure Active Directory](/troubleshoot/azure/active-directory/azure-ad-ipv6-support).
## Named locations
-Locations are named in the Azure portal under **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**. These named network locations may include locations like an organization's headquarters network ranges, VPN network ranges, or ranges that you wish to block. Named locations can be defined by IPv4/IPv6 address ranges or by countries.
+Locations exist in the Azure portal under **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**. These named network locations may include locations like an organization's headquarters network ranges, VPN network ranges, or ranges that you wish to block. Named locations are defined by IPv4 and IPv6 address ranges or by countries.
![Named locations in the Azure portal](./media/location-condition/new-named-location.png)
-### IP address ranges
+### IPv4 and IPv6 address ranges
-To define a named location by IPv4/IPv6 address ranges, you'll need to provide:
+To define a named location by IPv4/IPv6 address ranges, you need to provide:
-- A **Name** for the location
-- One or more IP ranges
-- Optionally **Mark as trusted location**
+- A **Name** for the location.
+- One or more IP ranges.
+- Optionally **Mark as trusted location**.
![New IP locations in the Azure portal](./media/location-condition/new-trusted-location.png)

Named locations defined by IPv4/IPv6 address ranges are subject to the following limitations:
-- Configure up to 195 named locations
-- Configure up to 2000 IP ranges per named location
-- Both IPv4 and IPv6 ranges are supported
-- Private IP ranges can't be configured
+- Configure up to 195 named locations.
+- Configure up to 2000 IP ranges per named location.
+- Both IPv4 and IPv6 ranges are supported.
+- Private IP ranges can't be configured.
- The number of IP addresses contained in a range is limited. Only CIDR masks greater than /8 are allowed when defining an IP range.

#### Trusted locations
-Administrators can name locations defined by IP address ranges to be trusted named locations.
+Locations such as your organization's public network ranges can be marked as trusted. This marking is used by features in several ways.
-Sign-ins from trusted named locations improve the accuracy of Azure AD Identity Protection's risk calculation, lowering a user's sign-in risk when they authenticate from a location marked as trusted. Additionally, trusted named locations can be targeted in Conditional Access policies. For example, you may [restrict multifactor authentication registration to trusted locations](howto-conditional-access-policy-registration.md).
+- Conditional Access policies can include or exclude these locations.
+- Sign-ins from trusted named locations improve the accuracy of Azure AD Identity Protection's risk calculation, lowering a user's sign-in risk when they authenticate from a location marked as trusted.
+
+> [!WARNING]
+> Even if you know a network and mark it as trusted, that doesn't mean you should exclude it from having policies applied. Verify explicitly is a core principle of a Zero Trust architecture. To find out more about Zero Trust and other ways to align your organization to the guiding principles, see the [Zero Trust Guidance Center](/security/zero-trust/).
### Countries

Organizations can determine country location by IP address or GPS coordinates.
-To define a named location by country, you'll need to provide:
+To define a named location by country, you need to provide:
-- A **Name** for the location
-- Choose to determine location by IP address or GPS coordinates
-- Add one or more countries
-- Optionally choose to **Include unknown countries/regions**
+- A **Name** for the location.
+- Choose to determine location by IP address or GPS coordinates.
+- Add one or more countries.
+- Optionally choose to **Include unknown countries/regions**.
![Country as a location in the Azure portal](./media/location-condition/new-named-location-country-region.png)
-If you select **Determine location by IP address (IPv4 only)**, the system will collect the IP address of the device the user is signing into. When a user signs in, Azure AD resolves the user's IPv4 address to a country or region, and the mapping is updated periodically. Organizations can use named locations defined by countries to block traffic from countries where they don't do business.
-
-> [!NOTE]
-> Sign-ins from IPv6 addresses cannot be mapped to countries or regions, and are considered unknown areas. Only IPv4 addresses can be mapped to countries or regions.
+If you select **Determine location by IP address**, the system collects the IP address of the device the user is signing into. When a user signs in, Azure AD resolves the user's IPv4 or [IPv6](/troubleshoot/azure/active-directory/azure-ad-ipv6-support) address to a country or region, and the mapping updates periodically. Organizations can use named locations defined by countries to block traffic from countries where they don't do business.
-If you select **Determine location by GPS coordinates**, the user will need to have the Microsoft Authenticator app installed on their mobile device. Every hour, the system will contact the user's Microsoft Authenticator app to collect the GPS location of the user's mobile device.
+If you select **Determine location by GPS coordinates**, the user needs to have the Microsoft Authenticator app installed on their mobile device. Every hour, the system contacts the user's Microsoft Authenticator app to collect the GPS location of the user's mobile device.
-The first time the user is required to share their location from the Microsoft Authenticator app, the user will receive a notification in the app. The user will need to open the app and grant location permissions.
+The first time the user must share their location from the Microsoft Authenticator app, the user receives a notification in the app. The user needs to open the app and grant location permissions.
-Every hour the user is accessing resources covered by the policy they will need to approve a push notification from the app.
+Every hour that the user accesses resources covered by the policy, they need to approve a push notification from the app.
Every time the user shares their GPS location, the app does jailbreak detection (using the same logic as the Intune MAM SDK). If the device is jailbroken, the location isn't considered valid, and the user isn't granted access.
> [!NOTE]
->A Conditional Access policy with GPS-based named locations in report-only mode prompts users to share their GPS location, even though they aren't blocked from signing in.
+> A Conditional Access policy with GPS-based named locations in report-only mode prompts users to share their GPS location, even though they aren't blocked from signing in.
GPS location doesn't work with [passwordless authentication methods](../authentication/concept-authentication-passwordless.md).
-Multiple conditional access policies applications may prompt users for their GPS location before all Conditional Access policies are applied. Because of the way Conditional Access policies are applied, a user may be denied access if they pass the location check but fail another policy. For more information about policy enforcement, see the article [Building a Conditional Access policy](concept-conditional-access-policies.md).
+Multiple Conditional Access policies may prompt users for their GPS location before all are applied. Because of the way Conditional Access policies are applied, a user may be denied access if they pass the location check but fail another policy. For more information about policy enforcement, see the article [Building a Conditional Access policy](concept-conditional-access-policies.md).
> [!IMPORTANT]
> Users may receive prompts every hour letting them know that Azure AD is checking their location in the Authenticator app. The preview should only be used to protect very sensitive apps where this behavior is acceptable or where access needs to be restricted to a specific country/region.

#### Include unknown countries/regions
-Some IP addresses aren't mapped to a specific country or region, including all IPv6 addresses. To capture these IP locations, check the box **Include unknown countries/regions** when defining a geographic location. This option allows you to choose if these IP addresses should be included in the named location. Use this setting when the policy using the named location should apply to unknown locations.
-
-### Configure MFA trusted IPs
-
-You can also configure IP address ranges representing your organization's local intranet in the [multifactor authentication service settings](https://account.activedirectory.windowsazure.com/usermanagement/mfasettings.aspx). This feature enables you to configure up to 50 IP address ranges. The IP address ranges are in CIDR format. For more information, see [Trusted IPs](../authentication/howto-mfa-mfasettings.md#trusted-ips).
-
-If you have Trusted IPs configured, they show up as **MFA Trusted IPs** in the list of locations for the location condition.
-
-#### Skipping multifactor authentication
-
-On the multifactor authentication service settings page, you can identify corporate intranet users by selecting **Skip multifactor authentication for requests from federated users on my intranet**. This setting indicates that the inside corporate network claim, which is issued by AD FS, should be trusted and used to identify the user as being on the corporate network. For more information, see [Enable the Trusted IPs feature by using Conditional Access](../authentication/howto-mfa-mfasettings.md#enable-the-trusted-ips-feature-by-using-conditional-access).
-
-After checking this option, including the named location **MFA Trusted IPs** will apply to any policies with this option selected.
-
-For mobile and desktop applications, which have long lived session lifetimes, Conditional Access is periodically reevaluated. The default is once an hour. When the inside corporate network claim is only issued at the time of the initial authentication, Azure AD may not have a list of trusted IP ranges. In this case, it's more difficult to determine if the user is still on the corporate network:
-
-1. Check if the user's IP address is in one of the trusted IP ranges.
-1. Check whether the first three octets of the user's IP address match the first three octets of the IP address of the initial authentication. The IP address is compared with the initial authentication when the inside corporate network claim was originally issued and the user location was validated.
-
-If both steps fail, a user is considered to be no longer on a trusted IP.
+Some IP addresses don't map to a specific country or region. To capture these IP locations, check the box **Include unknown countries/regions** when defining a geographic location. This option allows you to choose if these IP addresses should be included in the named location. Use this setting when the policy using the named location should apply to unknown locations.
## Location condition in policy
When you configure the location condition, you can distinguish between:
### Any location
-By default, selecting **Any location** causes a policy to be applied to all IP addresses, which means any address on the Internet. This setting isn't limited to IP addresses you've configured as named location. When you select **Any location**, you can still exclude specific locations from a policy. For example, you can apply a policy to all locations except trusted locations to set the scope to all locations, except the corporate network.
+By default, selecting **Any location** causes a policy to apply to all IP addresses, which means any address on the Internet. This setting isn't limited to IP addresses you've configured as named locations. When you select **Any location**, you can still exclude specific locations from a policy. For example, you can apply a policy to all locations except trusted locations to set the scope to all locations, except the corporate network.
### All trusted locations

This option applies to:
-- All locations that have been marked as trusted location
-- **MFA Trusted IPs** (if configured)
-
-### Selected locations
+- All locations marked as trusted locations.
+- **MFA Trusted IPs**, if configured.
-With this option, you can select one or more named locations. For a policy with this setting to apply, a user needs to connect from any of the selected locations. When you **Select** the named network selection control that shows the list of named networks opens. The list also shows if the network location has been marked as trusted. The named location called **MFA Trusted IPs** is used to include the IP settings that can be configured in the multifactor authentication service setting page.
-
-## IPv6 traffic
+#### Multifactor authentication trusted IPs
-By default, Conditional Access policies will apply to all IPv6 traffic. You can exclude specific IPv6 address ranges from a Conditional Access policy if you don't want policies to be enforced for specific IPv6 ranges. For example, if you want to not enforce a policy for uses on your corporate network, and your corporate network is hosted on public IPv6 ranges.
+Using the trusted IPs section of multifactor authentication's service settings is no longer recommended. This control only accepts IPv4 addresses and should only be used for specific scenarios covered in the article [Configure Azure AD Multi-Factor Authentication settings](../authentication/howto-mfa-mfasettings.md#trusted-ips).
-### Identifying IPv6 traffic in the Azure AD Sign-in activity reports
+If you have these trusted IPs configured, they show up as **MFA Trusted IPs** in the list of locations for the location condition.
-You can discover IPv6 traffic in your tenant by going the [Azure AD sign-in activity reports](../reports-monitoring/concept-sign-ins.md). After you have the activity report open, add the "IP address" column. This column will give you to identify the IPv6 traffic.
+### Selected locations
-You can also find the client IP by clicking a row in the report, and then going to the ΓÇ£LocationΓÇ¥ tab in the sign-in activity details.
+With this option, you can select one or more named locations. For a policy with this setting to apply, a user needs to connect from any of the selected locations. When you select **Select**, the named network selection control opens and shows the list of named networks. The list also shows if the network location is marked as trusted.
-### When will my tenant have IPv6 traffic?
+## IPv6 traffic
-Azure Active Directory (Azure AD) doesn't currently support direct network connections that use IPv6. However, there are some cases that authentication traffic is proxied through another service. In these cases, the IPv6 address will be used during policy evaluation.
+Conditional Access policies apply to all IPv4 **and** IPv6 traffic.
-Most of the IPv6 traffic that gets proxied to Azure AD comes from Microsoft Exchange Online. When available, Exchange will prefer IPv6 connections. **So if you have any Conditional Access policies for Exchange, that have been configured for specific IPv4 ranges, you'll want to make sure you've also added your organizations IPv6 ranges.** Not including IPv6 ranges will cause unexpected behavior for the following two cases:
+### Identifying IPv6 traffic with Azure AD Sign-in activity reports
-- When a mail client is used to connect to Exchange Online with legacy authentication, Azure AD may receive an IPv6 address. The initial authentication request goes to Exchange and is then proxied to Azure AD.
-- When Outlook Web Access (OWA) is used in the browser, it will periodically verify all Conditional Access policies continue to be satisfied. This check is used to catch cases where a user may have moved from an allowed IP address to a new location, like the coffee shop down the street. In this case, if an IPv6 address is used and if the IPv6 address isn't in a configured range, the user may have their session interrupted and be directed back to Azure AD to reauthenticate.
+You can discover IPv6 traffic in your tenant by going to the [Azure AD sign-in activity reports](../reports-monitoring/concept-sign-ins.md). After you have the activity report open, add the "IP address" column and add a colon (**:**) to the field. This filter helps distinguish IPv6 traffic from IPv4 traffic.
-If you're using Azure VNets, you'll have traffic coming from an IPv6 address. If you have VNet traffic blocked by a Conditional Access policy, check your Azure AD sign-in log. Once you've identified the traffic, you can get the IPv6 address being used and exclude it from your policy.
+You can also find the client IP by clicking a row in the report, and then going to the "Location" tab in the sign-in activity details.
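As an aside, the same IPv4/IPv6 split can be reproduced offline on a downloaded report, since only IPv6 addresses contain colons. This sketch uses Python's standard `ipaddress` module with illustrative addresses:

```python
import ipaddress

# Illustrative IP address values such as might appear in the sign-in
# report's "IP address" column (not real tenant data).
sign_in_ips = ["40.126.31.8", "2001:db8:4a7d:3f57::1", "203.0.113.25"]

# Split the column into IPv6 and IPv4 traffic.
ipv6 = [ip for ip in sign_in_ips if ipaddress.ip_address(ip).version == 6]
ipv4 = [ip for ip in sign_in_ips if ipaddress.ip_address(ip).version == 4]

print(ipv6)  # ['2001:db8:4a7d:3f57::1']
```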
-> [!NOTE]
-> If you want to specify an IP CIDR range for a single address, apply the /128 bit mask. If you see the IPv6 address 2001:db8:4a7d:3f57:a1e2:6b4a:8f3e:d17b and wanted to exclude that single address as a range, you would use 2001:db8:4a7d:3f57:a1e2:6b4a:8f3e:d17b/128.
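The /128 convention for a single address, and checking whether an address falls inside a configured range, can be sketched with Python's standard `ipaddress` module (the addresses below are illustrative documentation addresses):

```python
import ipaddress

# An address seen in the sign-in log (illustrative).
addr = ipaddress.ip_address("2001:db8:4a7d:3f57:a1e2:6b4a:8f3e:d17b")

# A /128 mask represents exactly one address; /48 is a broader range.
single_host = ipaddress.ip_network("2001:db8:4a7d:3f57:a1e2:6b4a:8f3e:d17b/128")
corp_range = ipaddress.ip_network("2001:db8:4a7d::/48")

print(addr in single_host)  # True
print(addr in corp_range)   # True
```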
## What you should know
Conditional Access policies are evaluated when:
- A user initially signs in to a web app, mobile or desktop application.
- A mobile or desktop application that uses modern authentication uses a refresh token to acquire a new access token. By default this check is once an hour.
-This check means for mobile and desktop applications using modern authentication, a change in location would be detected within an hour of changing the network location. For mobile and desktop applications that don't use modern authentication, the policy is applied on each token request. The frequency of the request can vary based on the application. Similarly, for web applications, the policy is applied at initial sign-in and is good for the lifetime of the session at the web application. Because of differences in session lifetimes across applications, the time between policy evaluation will also vary. Each time the application requests a new sign-in token, the policy is applied.
+This check means for mobile and desktop applications using modern authentication, a change in location is detected within an hour of changing the network location. For mobile and desktop applications that don't use modern authentication, the policy applies on each token request. The frequency of the request can vary based on the application. Similarly, for web applications, policies apply at initial sign-in and are good for the lifetime of the session at the web application. Because of differences in session lifetimes across applications, the time between policy evaluation varies. Each time the application requests a new sign-in token, the policy is applied.
-By default, Azure AD issues a token on an hourly basis. After moving off the corporate network, within an hour the policy is enforced for applications using modern authentication.
+By default, Azure AD issues a token on an hourly basis. After users move off the corporate network, within an hour the policy is enforced for applications using modern authentication.
### User IP address
-The IP address used in policy evaluation is the public IP address of the user. For devices on a private network, this IP address isn't the client IP of the user's device on the intranet, it's the address used by the network to connect to the public internet.
+The IP address used in policy evaluation is the public IPv4 or IPv6 address of the user. For devices on a private network, this IP address isn't the client IP of the user's device on the intranet, it's the address used by the network to connect to the public internet.
### Bulk uploading and downloading of named locations
When you create or update named locations, for bulk updates, you can upload or d
When you use a cloud hosted proxy or VPN solution, the IP address Azure AD uses while evaluating a policy is the IP address of the proxy. The X-Forwarded-For (XFF) header that contains the user's public IP address isn't used because there's no validation that it comes from a trusted source, so would present a method for faking an IP address.
-When a cloud proxy is in place, a policy that is used to require a hybrid Azure AD joined device can be used, or the inside corpnet claim from AD FS.
+When a cloud proxy is in place, a policy that requires a hybrid Azure AD joined device can be used, or the inside corpnet claim from AD FS.
### API support and PowerShell
active-directory Plan Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/plan-conditional-access.md
Will this policy apply to any application, user action, or authentication contex
* What application(s) will the policy apply to? * What user actions will be subject to this policy?
-* What authentication contexts does this policy will be applied to?
+* What authentication contexts will this policy be applied to?
**Conditions**
active-directory Resilience Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/resilience-defaults.md
If the required controls of a policy weren't previously satisfied, the policy is
- Country location (resolving new IP or GPS coordinates) - Authentication strengths
-When active, the Backup Authentication Service doesn't evaluate authentication methods required by [authentication strengths](../authentication/concept-authentication-strengths.md). If you used a non-phishing-resistant authentication method before an outage, during an outage you aren't be prompted for multifactor authentication even if accessing a resource protected by a Conditional Access policy with a phishing-resistant authentication strength.
+When active, the Backup Authentication Service doesn't evaluate authentication methods required by [authentication strengths](../authentication/concept-authentication-strengths.md). If you used a non-phishing-resistant authentication method before an outage, during an outage you aren't prompted for multifactor authentication even if accessing a resource protected by a Conditional Access policy with a phishing-resistant authentication strength.
## Resilience defaults enabled
active-directory Console Quickstart Portal Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/console-quickstart-portal-nodejs.md
> * [Node.js](https://nodejs.org/en/download/) > * [Visual Studio Code](https://code.visualstudio.com/download) or another code editor >
->
> ### Download and configure the sample app
>
> #### Step 1: Configure the application in Azure portal
active-directory Msal Net Aad B2c Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-aad-b2c-considerations.md
Previously updated : 05/07/2020 Last updated : 02/21/2023
This article applies to MSAL.NET 3.x. For MSAL.NET 2.x, see [Azure AD B2C specif
The authority format for Azure AD B2C is: `https://{azureADB2CHostname}/tfp/{tenant}/{policyName}`
-- `azureADB2CHostname` - The name of the Azure AD B2C tenant plus the host. For example, *contosob2c.b2clogin.com*.
-- `tenant` - The domain name or the directory (tenant) ID of the Azure AD B2C tenant. For example, *contosob2c.onmicrosoft.com* or a GUID, respectively.
-- `policyName` - The name of the user flow or custom policy to apply. For example, a sign-up/sign-in policy like *b2c_1_susi*.
+- `azureADB2CHostname` - The name of the Azure AD B2C tenant plus the host. For example, _contosob2c.b2clogin.com_.
+- `tenant` - The domain name or the directory (tenant) ID of the Azure AD B2C tenant. For example, _contosob2c.onmicrosoft.com_ or a GUID, respectively.
+- `policyName` - The name of the user flow or custom policy to apply. For example, a sign-up/sign-in policy like _b2c_1_susi_.
For more information about Azure AD B2C authorities, see [Set redirect URLs to b2clogin.com](../../active-directory-b2c/b2clogin.md).
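The authority string above is assembled from its three parts. A minimal sketch, using the illustrative host, tenant, and policy values from the example:

```python
def b2c_authority(host: str, tenant: str, policy: str) -> str:
    """Build an Azure AD B2C authority URL of the form
    https://{azureADB2CHostname}/tfp/{tenant}/{policyName}."""
    return f"https://{host}/tfp/{tenant}/{policy}"

# Values from the example above: tenant host, tenant domain,
# and a sign-up/sign-in user flow name.
authority = b2c_authority(
    "contosob2c.b2clogin.com", "contosob2c.onmicrosoft.com", "b2c_1_susi"
)
print(authority)
# https://contosob2c.b2clogin.com/tfp/contosob2c.onmicrosoft.com/b2c_1_susi
```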
catch (MsalUiRequiredException ex)
    .WithAccount(account)
    .WithParentActivityOrWindow(ParentActivityOrWindow)
    .ExecuteAsync();
-}
+}
```

In the preceding code snippet:
private async void EditProfileButton_Click(object sender, RoutedEventArgs e)
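The C# fragments above follow MSAL's standard pattern: attempt silent (cached) token acquisition first, and fall back to an interactive prompt only when the library throws `MsalUiRequiredException`. A language-neutral sketch of that control flow, using hypothetical stand-in functions rather than a real MSAL client:

```python
class UiRequiredError(Exception):
    """Stand-in for MSAL's MsalUiRequiredException."""

def acquire_token(acquire_silent, acquire_interactive):
    """Try a silent (cached) token first; prompt the user only when
    the library signals that interaction is required."""
    try:
        return acquire_silent()
    except UiRequiredError:
        # Cache miss, expired session, or policy change:
        # fall back to interactive sign-in.
        return acquire_interactive()

def _silent():  # hypothetical stand-in for AcquireTokenSilent
    raise UiRequiredError("no cached token for this account/policy")

def _interactive():  # hypothetical stand-in for AcquireTokenInteractive
    return {"access_token": "eyJ..."}  # placeholder token

token = acquire_token(_silent, _interactive)
```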
For more information on the ROPC flow, see [Sign in with resource owner password credentials grant](v2-oauth-ropc.md).
-The ROPC flow is **not recommended** because asking a user for their password in your application is not secure. For more information about this problem, see [What's the solution to the growing problem of passwords?](https://news.microsoft.com/features/whats-solution-growing-problem-passwords-says-microsoft/).
+The ROPC flow is **not recommended** because asking a user for their password in your application isn't secure. For more information about this problem, see [What's the solution to the growing problem of passwords?](https://news.microsoft.com/features/whats-solution-growing-problem-passwords-says-microsoft/).
By using username/password in an ROPC flow, you sacrifice several things:

- Core tenets of modern identity: The password can be phished or replayed because the shared secret can be intercepted. By definition, ROPC is incompatible with passwordless flows.
-- Users who need to do MFA won't be able to sign in (as there is no interaction).
+- Users who use multi-factor authentication (MFA) won't be able to sign in as there's no interaction.
- Users won't be able to use single sign-on (SSO).

### Configure the ROPC flow in Azure AD B2C
AcquireTokenByUsernamePassword(
    SecureString password)
```
-This `AcquireTokenByUsernamePassword` method takes the following parameters:
+The `AcquireTokenByUsernamePassword` method takes the following parameters:
-- The *scopes* for which to obtain an access token.
-- A *username*.
-- A SecureString *password* for the user.
+- The _scopes_ for which to obtain an access token.
+- A _username_.
+- A SecureString _password_ for the user.
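Under the hood, the ROPC grant is a single token request with `grant_type=password` (OAuth 2.0 resource owner password credentials). A minimal sketch of the form body those parameters translate to; the client ID, scopes, and credentials below are illustrative placeholders:

```python
def ropc_request_body(client_id, scopes, username, password):
    """Build the form body for an OAuth 2.0 resource owner password
    credentials (ROPC) token request."""
    return {
        "grant_type": "password",   # the ROPC grant type
        "client_id": client_id,
        "scope": " ".join(scopes),  # scopes are space-delimited
        "username": username,
        "password": password,
    }

body = ropc_request_body(
    "11111111-2222-3333-4444-555555555555",  # placeholder app (client) ID
    ["openid", "offline_access"],
    "user@contosob2c.onmicrosoft.com",       # placeholder local account
    "placeholder-password",
)
print(body["grant_type"])  # password
```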
### Limitations of the ROPC flow
The ROPC flow **only works for local accounts**, where your users have registere
## Google auth and embedded webview
-If you're using Google as an identity provider, we recommend you use the system browser as Google doesn't allow [authentication from embedded webviews](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). Currently, `login.microsoftonline.com` is a trusted authority with Google and will work with embedded webview. However, `b2clogin.com` is not a trusted authority with Google, so users will not be able to authenticate.
-
-We'll provide an update to this [issue](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/issues/688) if things change.
+If you're using Google as an identity provider, we recommend you use the system browser as Google doesn't allow [authentication from embedded webviews](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). Currently, `login.microsoftonline.com` is a trusted authority with Google and will work with embedded webview. However, `b2clogin.com` isn't a trusted authority with Google, so users won't be able to authenticate.
## Token caching in MSAL.NET
For more information about specifying which claims are returned by your user flo
More details about acquiring tokens interactively with MSAL.NET for Azure AD B2C applications are provided in the following sample.
-| Sample | Platform | Description|
-| | -- | --|
-|[active-directory-b2c-xamarin-native](https://github.com/Azure-Samples/active-directory-b2c-xamarin-native) | Xamarin iOS, Xamarin Android, UWP | A Xamarin Forms app that uses MSAL.NET to authenticate users via Azure AD B2C and then access a web API with the tokens returned.|
+| Sample | Platform | Description |
+| -- | | |
+| [active-directory-b2c-xamarin-native](https://github.com/Azure-Samples/active-directory-b2c-xamarin-native) | Xamarin iOS, Xamarin Android, UWP | A Xamarin Forms app that uses MSAL.NET to authenticate users via Azure AD B2C and then access a web API with the tokens returned. |
active-directory Workload Identity Federation Create Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust.md
The Microsoft Graph endpoint (`https://graph.microsoft.com`) exposes REST APIs t
Run the following method to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials) on your app (specified by the object ID of the app). The *issuer* identifies GitHub as the external token issuer. *subject* identifies the GitHub organization, repo, and environment for your GitHub Actions workflow. When the GitHub Actions workflow requests the Microsoft identity platform to exchange a GitHub token for an access token, the values in the federated identity credential are checked against the provided GitHub token.

```azurecli
-az rest --method POST --uri 'https://graph.microsoft.com/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials' --body '{"name":"Testing","issuer":"https://token.actions.githubusercontent.com/","subject":"repo:octo-org/octo-repo:environment:Production","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
+az rest --method POST --uri 'https://graph.microsoft.com/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials' --body '{"name":"Testing","issuer":"https://token.actions.githubusercontent.com","subject":"repo:octo-org/octo-repo:environment:Production","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
```

And you get the response:
  ],
  "description": "Testing",
  "id": "1aa3e6a7-464c-4cd2-88d3-90db98132755",
- "issuer": "https://token.actions.githubusercontent.com/",
+ "issuer": "https://token.actions.githubusercontent.com",
  "name": "Testing",
  "subject": "repo:octo-org/octo-repo:environment:Production"
}
And you get the response:
*name*: The name of your Azure application.
-*issuer*: The path to the GitHub OIDC provider: `https://token.actions.githubusercontent.com/`. This issuer will become trusted by your Azure application.
+*issuer*: The path to the GitHub OIDC provider: `https://token.actions.githubusercontent.com`. This issuer will become trusted by your Azure application.
*subject*: Before Azure will grant an access token, the request must match the conditions defined here.

- For Jobs tied to an environment: `repo:< Organization/Repository >:environment:< Name >`
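The JSON body passed to `az rest` above can be built and validated programmatically before the call. A minimal sketch using the same values as the example:

```python
import json

def federated_credential(name, org, repo, environment, description):
    """Build the request body for creating a GitHub Actions federated
    identity credential on an Azure AD application."""
    return {
        "name": name,
        # GitHub's OIDC issuer; checked against the GitHub token.
        "issuer": "https://token.actions.githubusercontent.com",
        # Subject must match the workflow's repo and environment exactly.
        "subject": f"repo:{org}/{repo}:environment:{environment}",
        "description": description,
        "audiences": ["api://AzureADTokenExchange"],
    }

body = federated_credential("Testing", "octo-org", "octo-repo",
                            "Production", "Testing")
print(json.dumps(body, indent=2))
```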
active-directory Howto Vm Sign In Azure Ad Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md
The following Linux distributions are currently supported for deployments in a s
| CentOS | CentOS 7, CentOS 8 |
| Debian | Debian 9, Debian 10, Debian 11 |
| openSUSE | openSUSE Leap 42.3, openSUSE Leap 15.1+ |
-| RedHat Enterprise Linux (RHEL) | RHEL 7.4 to RHEL 7.10, RHEL 8.3+ |
+| RedHat Enterprise Linux (RHEL) | RHEL 7.4 to RHEL 7.9, RHEL 8.3+ |
| SUSE Linux Enterprise Server (SLES) | SLES 12, SLES 15.1+ |
| Ubuntu Server | Ubuntu Server 16.04 to Ubuntu Server 22.04 |
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
The following service plans cannot be assigned together:
| Service Plan Name | GUID |
| --- | --- |
| EXCHANGE_B_STANDARD | 90927877-dcff-4af6-b346-2332c0b15bb7 |
-| EXCHANGE_L_STANDARD | d42bdbd6-c335-4231-ab3d-c8f348d5aff5 |
| EXCHANGE_S_ARCHIVE | da040e0a-b393-4bea-bb76-928b3fa1cf5a |
| EXCHANGE_S_DESKLESS | 4a82b400-a79f-41a4-b4e2-e94f5787b113 |
-| EXCHANGE_S_ENTERPRISE | efb87545-963c-4e0d-99df-69c6916d9eb0 |
| EXCHANGE_S_ESSENTIALS | 1126bef5-da20-4f07-b45e-ad25d2581aa8 |
| EXCHANGE_S_STANDARD | 9aaf7827-d63c-4b61-89c3-182f06f82e5c |
| EXCHANGE_S_STANDARD_MIDMARKET | fc52cc4b-ed7d-472d-bbe7-b081c23ecc56 |
active-directory 1 Secure Access Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/1-secure-access-posture.md
Previously updated : 02/03/2023 Last updated : 02/23/2023
As you consider the governance of external access, assess your organization's se
> [!NOTE] > A high degree of control over collaboration can lead to higher IT budgets, reduced productivity, and delayed business outcomes. When official collaboration channels are perceived as onerous, end users tend to evade official channels. An example is end users sending unsecured documents by email.
+## Before you begin
+
+This article is number 1 in a series of 10 articles. We recommend you review the articles in order. Go to the **Next steps** section to see the entire series.
+
## Scenario-based planning

IT teams can delegate partner access to empower employees to collaborate with partners. This delegation can occur while maintaining sufficient security to protect intellectual property.
IT teams can delegate access decisions to business owners through entitlement ma
## Next steps
-See the following articles to learn more about securing external access to resources. We recommend you follow the listed order.
+Use the following series of articles to learn about securing external access to resources. We recommend you follow the listed order.
1. [Determine your security posture for external access with Azure AD](1-secure-access-posture.md) (You're here)
2. [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md)
-3. [Create a security plan for external access](3-secure-access-plan.md)
+3. [Create a security plan for external access to resources](3-secure-access-plan.md)
4. [Secure external access with groups in Azure AD and Microsoft 365](4-secure-access-groups.md)
See the following articles to learn more about securing external access to resou
6. [Manage external access with Azure AD entitlement management](6-secure-access-entitlement-managment.md)
-7. [Manage external access with Conditional Access policies](7-secure-access-conditional-access.md)
+7. [Manage external access to resources with Conditional Access policies](7-secure-access-conditional-access.md)
8. [Control external access to resources in Azure AD with sensitivity labels](8-secure-access-sensitivity-labels.md)
-9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive with Azure AD](9-secure-access-teams-sharepoint.md)
+9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business with Azure AD](9-secure-access-teams-sharepoint.md)
+
+10. [Convert local guest accounts to Azure Active Directory B2B guest accounts](10-secure-local-guest.md)
active-directory 10 Secure Local Guest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/10-secure-local-guest.md
Previously updated : 02/22/2023 Last updated : 02/23/2023
-# Convert local guest accounts to Azure Active Directory B2B guest accounts
+# Convert local guest accounts to Azure Active Directory B2B guest accounts
With Azure Active Directory (Azure AD B2B), external users collaborate with their identities. Although organizations can issue local usernames and passwords to external users, this approach isn't recommended. Azure AD B2B has improved security, lower cost, and less complexity, compared to creating local accounts. In addition, if your organization issues local credentials that external users manage, you can use Azure AD B2B instead. Use the guidance in this document to make the transition.

Learn more: [Plan an Azure AD B2B collaboration deployment](secure-external-access-resources.md)
+## Before you begin
+
+This article is number 10 in a series of 10 articles. We recommend you review the articles in order. Go to the **Next steps** section to see the entire series.
+
## Identify external-facing applications

Before migrating local accounts to Azure AD B2B, confirm the applications and workloads external users can access. For example, for applications hosted on-premises, validate the application is integrated with Azure AD. On-premises applications are a good reason to create local accounts.
After mapping external local accounts to identities, add external identities or
## End user communications
-Notify external users about migration timing. Communicate expectations, such as when external users must stop using a current password to enable authenticate by home and corporate credentials. Communications can include email campaigns and announcements.
+Notify external users about migration timing. Communicate expectations, for instance when external users must stop using a current password to enable authentication by home and corporate credentials. Communications can include email campaigns and announcements.
## Migrate local guest accounts to Azure AD B2B
If external user local accounts were synced from on-premises, reduce their on-pr
## Next steps
-See the following articles on securing external access to resources. We recommend you take the actions in the listed order.
-
-1. [Determine your desired security posture for external access](1-secure-access-posture.md)
-1. [Discover your current state](2-secure-access-current-state.md)
-1. [Create a governance plan](3-secure-access-plan.md)
-1. [Use groups for security](4-secure-access-groups.md)
-1. [Transition to Azure AD B2B](5-secure-access-b2b.md)
-1. [Secure access with Entitlement Management](6-secure-access-entitlement-managment.md)
-1. [Secure access with Conditional Access policies](7-secure-access-conditional-access.md)
-1. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md)
-1. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
-1. [Convert local guest accounts to B2B](10-secure-local-guest.md) (You're here)
+Use the following series of articles to learn about securing external access to resources. We recommend you follow the listed order.
+
+1. [Determine your security posture for external access with Azure AD](1-secure-access-posture.md)
+
+2. [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md)
+
+3. [Create a security plan for external access to resources](3-secure-access-plan.md)
+
+4. [Secure external access with groups in Azure AD and Microsoft 365](4-secure-access-groups.md)
+
+5. [Transition to governed collaboration with Azure AD B2B collaboration](5-secure-access-b2b.md)
+
+6. [Manage external access with Azure AD entitlement management](6-secure-access-entitlement-managment.md)
+
+7. [Manage external access to resources with Conditional Access policies](7-secure-access-conditional-access.md)
+
+8. [Control external access to resources in Azure AD with sensitivity labels](8-secure-access-sensitivity-labels.md)
+
+9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business with Azure AD](9-secure-access-teams-sharepoint.md)
+
+10. [Convert local guest accounts to Azure Active Directory B2B guest accounts](10-secure-local-guest.md) (You're here)
active-directory 2 Secure Access Current State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/2-secure-access-current-state.md
Previously updated : 02/21/2023 Last updated : 02/23/2023
Users in your organization likely collaborate with users from other organization
* Collaborating with external users and organizations
* Granting access to external users
+## Before you begin
+
+This article is number 2 in a series of 10 articles. We recommend you review the articles in order. Go to the **Next steps** section to see the entire series.
+
## Determine who initiates external collaboration

Generally, users seeking external collaboration know the applications to use, and when access ends. Therefore, determine users with delegated permissions to invite external users, create access packages, and complete access reviews.
If your email and network plans are enabled, you can investigate content sharing
## Next steps
-* [Determine your security posture for external access](1-secure-access-posture.md)
-* [Create a security plan for external access](3-secure-access-plan.md)
-* [Securing external access with groups](4-secure-access-groups.md)
-* [Transition to governed collaboration with Azure Active Directory B2B collaboration](5-secure-access-b2b.md)
-* [Manage external access with entitlement management](6-secure-access-entitlement-managment.md)
-* [Manage external access with Conditional Access policies](7-secure-access-conditional-access.md)
-* [Control access with sensitivity labels](8-secure-access-sensitivity-labels.md)
-* [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md)
+Use the following series of articles to learn about securing external access to resources. We recommend you follow the listed order.
+
+1. [Determine your security posture for external access with Azure AD](1-secure-access-posture.md)
+
+2. [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md) (You're here)
+
+3. [Create a security plan for external access to resources](3-secure-access-plan.md)
+
+4. [Secure external access with groups in Azure AD and Microsoft 365](4-secure-access-groups.md)
+
+5. [Transition to governed collaboration with Azure AD B2B collaboration](5-secure-access-b2b.md)
+
+6. [Manage external access with Azure AD entitlement management](6-secure-access-entitlement-managment.md)
+
+7. [Manage external access to resources with Conditional Access policies](7-secure-access-conditional-access.md)
+
+8. [Control external access to resources in Azure AD with sensitivity labels](8-secure-access-sensitivity-labels.md)
+
+9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business with Azure AD](9-secure-access-teams-sharepoint.md)
+
+10. [Convert local guest accounts to Azure Active Directory B2B guest accounts](10-secure-local-guest.md)
+
active-directory 3 Secure Access Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/3-secure-access-plan.md
Previously updated : 02/21/2023 Last updated : 02/23/2023
Before you create an external-access security plan, review the following two art
* [Determine your security posture for external access with Azure AD](1-secure-access-posture.md)
* [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md)
+## Before you begin
+
+This article is number 3 in a series of 10 articles. We recommend you review the articles in order. Go to the **Next steps** section to see the entire series.
+
## Security plan documentation

For your security plan, document the following information:
Items in bold are recommended actions.
| Conditional Access policies| Conditional Access policies for access control|N/A|N/A|N/A|
| Other methods|N/A| Restrict SharePoint site access with security groups<br>Disallow direct sharing| **Restrict external invitations from a team**|N/A|
-### Next steps
+## Next steps
-* [Determine your security posture for external access](1-secure-access-posture.md)
-* [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md)
-* [Securing external access with groups](4-secure-access-groups.md)
-* [Transition to governed collaboration with Azure Active Directory B2B collaboration](5-secure-access-b2b.md)
-* [Manage external access with entitlement management](6-secure-access-entitlement-managment.md)
-* [Secure access with Conditional Access policies](7-secure-access-conditional-access.md)
-* [Control access with sensitivity labels](8-secure-access-sensitivity-labels.md)
-* [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md)
+Use the following series of articles to learn about securing external access to resources. We recommend you follow the listed order.
+
+1. [Determine your security posture for external access with Azure AD](1-secure-access-posture.md)
+
+2. [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md)
+
+3. [Create a security plan for external access to resources](3-secure-access-plan.md) (You're here)
+
+4. [Secure external access with groups in Azure AD and Microsoft 365](4-secure-access-groups.md)
+
+5. [Transition to governed collaboration with Azure AD B2B collaboration](5-secure-access-b2b.md)
+
+6. [Manage external access with Azure AD entitlement management](6-secure-access-entitlement-managment.md)
+
+7. [Manage external access to resources with Conditional Access policies](7-secure-access-conditional-access.md)
+
+8. [Control external access to resources in Azure AD with sensitivity labels](8-secure-access-sensitivity-labels.md)
+
+9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business with Azure AD](9-secure-access-teams-sharepoint.md)
+
+10. [Convert local guest accounts to Azure Active Directory B2B guest accounts](10-secure-local-guest.md)
active-directory 4 Secure Access Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/4-secure-access-groups.md
Groups have the following roles:
* **Members** – inherit permissions and access assigned to the group
* **Guests** – are members outside your organization
+## Before you begin
+
+This article is number 4 in a series of 10 articles. We recommend you review the articles in order. Go to the **Next steps** section to see the entire series.
+
## Group strategy

To develop a group strategy to secure external access to your resources, consider the security posture that you want.
Learn more:
* Can invite guests to join the group
* [Manage guest access in Microsoft 365 groups](/microsoft-365/admin/create-groups/manage-guest-access-in-groups)
* **Guests**
- * Are members from outside your organization.
+ * Are members from outside your organization
  * Have some limits to functionality in Teams

### Microsoft 365 Group settings
Select email alias, privacy, and whether to enable the group for teams.
After setup, add members, and configure settings for email usage, etc.
-### Next steps
+## Next steps
-See the following articles to learn more about securing external access to resources. We recommend you follow the listed order.
+Use the following series of articles to learn about securing external access to resources. We recommend you follow the listed order.
-1. [Determine your security posture for external access](1-secure-access-posture.md)
+1. [Determine your security posture for external access with Azure AD](1-secure-access-posture.md)
2. [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md)
-3. [Create a security plan for external access](3-secure-access-plan.md)
+3. [Create a security plan for external access to resources](3-secure-access-plan.md)
4. [Secure external access with groups in Azure AD and Microsoft 365](4-secure-access-groups.md) (You're here)
5. [Transition to governed collaboration with Azure AD B2B collaboration](5-secure-access-b2b.md)
-6. [Manage external access with Azure AD entitlement management](6-secure-access-entitlement-managment.md)
+6. [Manage external access with Azure AD entitlement management](6-secure-access-entitlement-managment.md)
+
+7. [Manage external access to resources with Conditional Access policies](7-secure-access-conditional-access.md)
-7. [Manage external access with Conditional Access policies](7-secure-access-conditional-access.md)
+8. [Control external access to resources in Azure AD with sensitivity labels](8-secure-access-sensitivity-labels.md)
-8. [Control access with sensitivity labels](8-secure-access-sensitivity-labels.md)
+9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business with Azure AD](9-secure-access-teams-sharepoint.md)
-9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md)
+10. [Convert local guest accounts to Azure Active Directory B2B guest accounts](10-secure-local-guest.md)
active-directory 5 Secure Access B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/5-secure-access-b2b.md
# Transition to governed collaboration with Azure Active Directory B2B collaboration
-For context and needed information we recommend you read the first four articles in the series of ten articles.
-
-* [Determine your security posture for external access](1-secure-access-posture.md)
-* [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md)
-* [Create a security plan for external access](3-secure-access-plan.md)
-* [Securing external access with groups](4-secure-access-groups.md)
Understanding collaboration helps secure external access to your resources. Use the information in this article to move external collaboration into Azure Active Directory B2B (Azure AD B2B) collaboration.

* See [B2B collaboration overview](../external-identities/what-is-b2b.md)
* Learn about [External Identities in Azure AD](../external-identities/external-identities-overview.md)
+## Before you begin
+
+This article is number 5 in a series of 10 articles. We recommend you review the articles in order. Go to the **Next steps** section to see the entire series.
+
## Control collaboration

You can limit the organizations your users collaborate with (inbound and outbound), and who in your organization can invite guests. Most organizations permit business units to decide collaboration, and delegate approval and oversight. For example, organizations in government, education, and finance often don't permit open collaboration. You can use Azure AD features to control collaboration.
For more information on governing applications, see:
* [Governing connected apps](/defender-cloud-apps/governance-actions)
* [Govern discovered apps](/defender-cloud-apps/governance-discovery)
-### Next steps
+## Next steps
-* [Determine your security posture for external access](1-secure-access-posture.md)
-* [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md)
-* [Create a security plan for external access](3-secure-access-plan.md)
-* [Securing external access with groups](4-secure-access-groups.md)
-* [Manage external access with Entitlement Management](6-secure-access-entitlement-managment.md)
-* [Manage external access with Conditional Access policies](7-secure-access-conditional-access.md)
-* [Control access with sensitivity labels](8-secure-access-sensitivity-labels.md)
-* [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md)
+Use the following series of articles to learn about securing external access to resources. We recommend you follow the listed order.
+
+1. [Determine your security posture for external access with Azure AD](1-secure-access-posture.md)
+
+2. [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md)
+
+3. [Create a security plan for external access to resources](3-secure-access-plan.md)
+
+4. [Secure external access with groups in Azure AD and Microsoft 365](4-secure-access-groups.md)
+
+5. [Transition to governed collaboration with Azure AD B2B collaboration](5-secure-access-b2b.md) (You're here)
+
+6. [Manage external access with Azure AD entitlement management](6-secure-access-entitlement-managment.md)
+
+7. [Manage external access to resources with Conditional Access policies](7-secure-access-conditional-access.md)
+
+8. [Control external access to resources in Azure AD with sensitivity labels](8-secure-access-sensitivity-labels.md)
+
+9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business with Azure AD](9-secure-access-teams-sharepoint.md)
+
+10. [Convert local guest accounts to Azure Active Directory B2B guest accounts](10-secure-local-guest.md)
active-directory 6 Secure Access Entitlement Managment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/6-secure-access-entitlement-managment.md
Previously updated : 01/31/2023 Last updated : 02/23/2023
Learn more:
* [What are access packages and what resources can I manage with them?](../governance/entitlement-management-overview.md#what-are-access-packages-and-what-resources-can-i-manage-with-them)
* [What is provisioning?](../governance/what-is-provisioning.md)
+## Before you begin
+
+This article is number 6 in a series of 10 articles. We recommend you review the articles in order. Go to the **Next steps** section to see the entire series.
+
## Enable entitlement management

The following key concepts are important to understand for entitlement management.
You can enforce reviews of guest-access packages to avoid inappropriate access f
Learn more: [Govern access for external users in entitlement management](../governance/entitlement-management-external-users.md)
-### Next steps
+## Next steps
-See the following articles to learn more about securing external access to resources. We recommend you follow the listed order.
+Use the following series of articles to learn about securing external access to resources. We recommend you follow the listed order.
-1. [Determine your security posture for external access](1-secure-access-posture.md)
+1. [Determine your security posture for external access with Azure AD](1-secure-access-posture.md)
2. [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md)
-3. [Create a security plan for external access](3-secure-access-plan.md)
+3. [Create a security plan for external access to resources](3-secure-access-plan.md)
-4. [Securing external access with groups](4-secure-access-groups.md)
+4. [Secure external access with groups in Azure AD and Microsoft 365](4-secure-access-groups.md)
-5. [Transition to governed collaboration with Azure AD B2B collaboration](5-secure-access-b2b.md)
+5. [Transition to governed collaboration with Azure AD B2B collaboration](5-secure-access-b2b.md)
6. [Manage external access with Azure AD entitlement management](6-secure-access-entitlement-managment.md) (You're here)
-7. [Manage external access with Conditional Access policies](7-secure-access-conditional-access.md)
+7. [Manage external access to resources with Conditional Access policies](7-secure-access-conditional-access.md)
+
+8. [Control external access to resources in Azure AD with sensitivity labels](8-secure-access-sensitivity-labels.md)
+
+9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business with Azure AD](9-secure-access-teams-sharepoint.md)
-8. [Control access with sensitivity labels](8-secure-access-sensitivity-labels.md)
+10. [Convert local guest accounts to Azure Active Directory B2B guest accounts](10-secure-local-guest.md)
-9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md)
active-directory 7 Secure Access Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/7-secure-access-conditional-access.md
Previously updated : 02/22/2023 Last updated : 02/23/2023
The following diagram illustrates signals to Conditional Access that trigger acc
![Diagram of Conditional Access signal input and resulting access processes.](media/secure-external-access//7-conditional-access-signals.png)
+## Before you begin
+
+This article is number 7 in a series of 10 articles. We recommend you review the articles in order. Go to the **Next steps** section to see the entire series.
+ ## Align a security plan with Conditional Access policies

In the third article of this 10-article set, there's guidance on creating a security plan. Use that plan to help create Conditional Access policies for external access. Part of the security plan includes:
Learn more: [Conditional Access templates (Preview)](../conditional-access/conce
## Next steps
-See the following articles on securing external access to resources. We recommend you take the actions in the listed order.
-
-1. [Determine your desired security posture for external access](1-secure-access-posture.md)
-1. [Discover your current state](2-secure-access-current-state.md)
-1. [Create a governance plan](3-secure-access-plan.md)
-1. [Use groups for security](4-secure-access-groups.md)
-1. [Transition to Azure AD B2B](5-secure-access-b2b.md)
-1. [Secure access with Entitlement Management](6-secure-access-entitlement-managment.md)
-1. [Secure access with Conditional Access policies](7-secure-access-conditional-access.md) (You're here)
-1. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md)
-1. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
+Use the following series of articles to learn about securing external access to resources. We recommend you follow the listed order.
+
+1. [Determine your security posture for external access with Azure AD](1-secure-access-posture.md)
+
+2. [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md)
+
+3. [Create a security plan for external access to resources](3-secure-access-plan.md)
+
+4. [Secure external access with groups in Azure AD and Microsoft 365](4-secure-access-groups.md)
+
+5. [Transition to governed collaboration with Azure AD B2B collaboration](5-secure-access-b2b.md)
+
+6. [Manage external access with Azure AD entitlement management](6-secure-access-entitlement-managment.md)
+
+7. [Manage external access to resources with Conditional Access policies](7-secure-access-conditional-access.md) (You're here)
+
+8. [Control external access to resources in Azure AD with sensitivity labels](8-secure-access-sensitivity-labels.md)
+
+9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business with Azure AD](9-secure-access-teams-sharepoint.md)
+
+10. [Convert local guest accounts to Azure Active Directory B2B guest accounts](10-secure-local-guest.md)
active-directory 8 Secure Access Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/8-secure-access-sensitivity-labels.md
Previously updated : 02/01/2023 Last updated : 02/23/2023
Use sensitivity labels to help control access to your content in Office 365 appl
See [Learn about sensitivity labels](/microsoft-365/compliance/sensitivity-labels?view=o365-worldwide&preserve-view=true).
+## Before you begin
+
+This article is number 8 in a series of 10 articles. We recommend you review the articles in order. Go to the **Next steps** section to see the entire series.
+ ## Assign classification and enforce protection settings

You can classify content without adding any protection settings. Content classification assignment stays with the content while it's used and shared. The classification generates usage reports with sensitive-content activity data.
After you determine use of sensitivity labels, see the following documentation f
## Next steps
-See the following articles to learn more about securing external access to resources. We recommend you follow the listed order.
+Use the following series of articles to learn about securing external access to resources. We recommend you follow the listed order.
-1. [Determine your security posture for external access](1-secure-access-posture.md)
+1. [Determine your security posture for external access with Azure AD](1-secure-access-posture.md)
2. [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md)
-3. [Create a security plan for external access](3-secure-access-plan.md)
+3. [Create a security plan for external access to resources](3-secure-access-plan.md)
-4. [Secure external access with groups in Azure AD and Microsoft 365](4-secure-access-groups.md)
+4. [Secure external access with groups in Azure AD and Microsoft 365](4-secure-access-groups.md)
-5. [Transition to governed collaboration with Azure AD B2B collaboration](5-secure-access-b2b.md)
+5. [Transition to governed collaboration with Azure AD B2B collaboration](5-secure-access-b2b.md)
-6. [Manage external access with Azure AD entitlement management](6-secure-access-entitlement-managment.md)
+6. [Manage external access with Azure AD entitlement management](6-secure-access-entitlement-managment.md)
-7. [Manage external access with Conditional Access policies](7-secure-access-conditional-access.md)
+7. [Manage external access to resources with Conditional Access policies](7-secure-access-conditional-access.md)
8. [Control external access to resources in Azure AD with sensitivity labels](8-secure-access-sensitivity-labels.md) (You're here)
-9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md)
+9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business with Azure AD](9-secure-access-teams-sharepoint.md)
+
+10. [Convert local guest accounts to Azure Active Directory B2B guest accounts](10-secure-local-guest.md)
active-directory 9 Secure Access Teams Sharepoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/9-secure-access-teams-sharepoint.md
Previously updated : 02/02/2023 Last updated : 02/23/2023
Use this article to determine and configure your organization's external collaboration using Microsoft Teams, OneDrive for Business, and SharePoint. A common challenge is balancing security and ease of collaboration for end users and external users. If an approved collaboration method is perceived as restrictive and onerous, end users evade the approved method. End users might email unsecured content, or set up external processes and applications, such as a personal Dropbox or OneDrive.
+## Before you begin
+
+This article is number 9 in a series of 10 articles. We recommend you review the articles in order. Go to the **Next steps** section to see the entire series.
+ ## External Identities settings and Azure Active Directory
-Sharing in Microsoft 365 is partially governed by the **External Identities, External collaboration** settings in Azure Active Directory (Azure AD). If external sharing is disabled or restricted in Azure AD, it overrides sharing settings configured in Microsoft 365. An exception is if Azure AD B2B integration isn't enabled. You can configure SharePoint and OneDrive to support ad-hoc sharing via one-time password (OTP). The following screenshot shows the External Identities, External collaboration settings dialog.
+Sharing in Microsoft 365 is partially governed by the **External Identities, External collaboration settings** in Azure Active Directory (Azure AD). If external sharing is disabled or restricted in Azure AD, it overrides sharing settings configured in Microsoft 365. An exception is if Azure AD B2B integration isn't enabled. You can configure SharePoint and OneDrive to support ad-hoc sharing via one-time password (OTP). The following screenshot shows the External Identities, External collaboration settings dialog.
![Screenshot of options and entries under External Identities, External collaboration settings.](media/secure-external-access/9-external-collaboration-settings.png)
Teams differentiates between external users (outside your organization) and gues
Learn more: [Use guest access and external access to collaborate with people outside your organization](/microsoftteams/communicate-with-users-from-other-organizations).
-The External Identities collaboration feaure in Azure AD controls permissions. You can increase restrictions in Teams, but restrictions can't be lower than Azure AD settings.
+The External Identities collaboration feature in Azure AD controls permissions. You can increase restrictions in Teams, but restrictions can't be lower than Azure AD settings.
Learn more:
Learn more:
## Next steps
-See the following articles to learn more about securing external access to resources. We recommend you follow the listed order.
+Use the following series of articles to learn about securing external access to resources. We recommend you follow the listed order.
1. [Determine your security posture for external access with Azure AD](1-secure-access-posture.md)

2. [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md)
-3. [Create a security plan for external access](3-secure-access-plan.md)
+3. [Create a security plan for external access to resources](3-secure-access-plan.md)
+
+4. [Secure external access with groups in Azure AD and Microsoft 365](4-secure-access-groups.md)
-4. [Secure external access with groups in Azure AD and Microsoft 365](4-secure-access-groups.md)
+5. [Transition to governed collaboration with Azure AD B2B collaboration](5-secure-access-b2b.md)
-5. [Transition to governed collaboration with Azure AD B2B collaboration](5-secure-access-b2b.md)
+6. [Manage external access with Azure AD entitlement management](6-secure-access-entitlement-managment.md)
-6. [Manage external access with Azure AD entitlement management](6-secure-access-entitlement-managment.md)
+7. [Manage external access to resources with Conditional Access policies](7-secure-access-conditional-access.md)
-7. [Manage external access with Conditional Access policies](7-secure-access-conditional-access.md)
+8. [Control external access to resources in Azure AD with sensitivity labels](8-secure-access-sensitivity-labels.md)
-8. [Control external access to resources in Azure AD with sensitivity labels](8-secure-access-sensitivity-labels.md)
+9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business with Azure AD](9-secure-access-teams-sharepoint.md) (You're here)
-9. [Secure external access to Microsoft Teams, SharePoint, and OneDrive with Azure AD](9-secure-access-teams-sharepoint.md) (You're here)
+10. [Convert local guest accounts to Azure Active Directory B2B guest accounts](10-secure-local-guest.md)
active-directory Data Protection Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/data-protection-considerations.md
For more information about Secret encryption at rest, see the following table.
* [Microsoft Service Trust Documents](https://servicetrust.microsoft.com/Documents/TrustDocuments) * [Microsoft Azure Trust Center](https://azure.microsoft.com/overview/trusted-cloud/)
-* [Where is my data? - Office 365 documentation](http://o365datacentermap.azurewebsites.net/)
* [Recover from deletions in Azure Active Directory](recover-from-deletions.md) ## Next steps
active-directory Users Default Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/users-default-permissions.md
Previously updated : 02/13/2023 Last updated : 02/17/2023
Permission | Setting explanation
- | **Guest user access restrictions** | Setting this option to **Guest users have the same access as members** grants all member user permissions to guest users by default.<p>Setting this option to **Guest user access is restricted to properties and memberships of their own directory objects** restricts guest access to only their own user profile by default. Access to other users is no longer allowed, even when they're searching by user principal name, object ID, or display name. Access to group information, including groups memberships, is also no longer allowed.<p>This setting doesn't prevent access to joined groups in some Microsoft 365 services like Microsoft Teams. To learn more, see [Microsoft Teams guest access](/MicrosoftTeams/guest-access).<p>Guest users can still be added to administrator roles regardless of this permission setting. **Guests can invite** | Setting this option to **Yes** allows guests to invite other guests. To learn more, see [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md).
-**Members can invite** | Setting this option to **Yes** allows non-admin members of your directory to invite guests. To learn more, see [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md).
-**Admins and users in the guest inviter role can invite** | Setting this option to **Yes** allows admins and users in the guest inviter role to invite guests. When you set this option to **Yes**, users in the guest inviter role will still be able to invite guests, regardless of the **Members can invite** setting. To learn more, see [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md).
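These portal settings correspond to the `allowInvitesFrom` property of the Microsoft Graph authorization policy. As a rough sketch (the helper function and the mapping comments are illustrative, not an official mapping table), a PATCH request against that policy could be assembled like this:

```python
import json

AUTH_POLICY_URL = "https://graph.microsoft.com/v1.0/policies/authorizationPolicy"

def build_invite_settings_patch(allow_invites_from):
    """Build a PATCH request (as a plain dict) that updates who can invite guests.

    Approximate mapping to the settings above (an assumption for illustration):
      "everyone"                         -> guests and members can invite
      "adminsGuestInvitersAndAllMembers" -> members can invite
      "adminsAndGuestInviters"           -> only admins and guest inviters
      "none"                             -> no one can invite
    """
    allowed = {"none", "adminsAndGuestInviters",
               "adminsGuestInvitersAndAllMembers", "everyone"}
    if allow_invites_from not in allowed:
        raise ValueError(f"unknown value: {allow_invites_from}")
    return {
        "method": "PATCH",
        "url": AUTH_POLICY_URL,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"allowInvitesFrom": allow_invites_from}),
    }

# Example: restrict invitations to admins and guest inviters
request = build_invite_settings_patch("adminsAndGuestInviters")
```

Send the resulting request with any HTTP client after acquiring a token that has permission to update the authorization policy.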
## Object ownership
active-directory Customize Workflow Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/customize-workflow-schedule.md
-# Customize the schedule of workflows (Preview)
+# Customize the schedule of workflows
Workflows created using Lifecycle Workflows can be fully customized to match the schedule that fits your organization's needs. By default, workflows are scheduled to run every 3 hours, but you can set the interval to be as frequent as 1 hour or as infrequent as 24 hours.
-## Customize the schedule of workflows using Microsoft Graph
+## Customize the schedule of workflows using the Azure portal
+
+Workflows created within Lifecycle Workflows follow the schedule that you define on the **Workflow Settings** page. To adjust the schedule, follow these steps:
+1. Sign in to the [Azure portal](https://portal.azure.com).
-First, to view the current schedule interval of your workflows, run the following get call:
+1. Select **Identity Governance** on the search bar near the top of the page.
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/settings
-```
+1. In the left menu, select **Lifecycle workflows (Preview)**.
+1. Select **Workflow settings (Preview)** from the Lifecycle workflows overview page.
-To customize a workflow in Microsoft Graph, use the following request and body:
-```http
-PATCH https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/settings
-Content-type: application/json
+1. On the workflow settings page, set the schedule of workflows to an interval between 1 and 24 hours.
+ :::image type="content" source="media/customize-workflow-schedule/workflow-schedule-settings.png" alt-text="Screenshot of the settings for workflow schedule.":::
+1. After setting the workflow schedule, select **Save**.
-{
-"workflowScheduleIntervalInHours":<Interval between 0-24>
-}
+## Customize the schedule of workflows using Microsoft Graph
-```
+To customize the workflow schedule by using the Microsoft Graph API, see [Update lifecycleManagementSettings (tenant settings for Lifecycle Workflows)](/graph/api/resources/identitygovernance-lifecyclemanagementsettings).
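For reference, the underlying Graph call can be sketched as a small request builder. The beta endpoint and the `workflowScheduleIntervalInHours` property come from the Lifecycle Workflows settings resource; the helper itself is hypothetical, and you still need a valid access token to send the request.

```python
import json

# Beta endpoint for Lifecycle Workflows tenant settings
SETTINGS_URL = ("https://graph.microsoft.com/beta/identityGovernance/"
                "lifecycleWorkflows/settings")

def build_schedule_patch(interval_hours):
    """Build a PATCH request (as a plain dict) that sets the workflow schedule.

    The interval must be between 1 and 24 hours, matching the limits
    of the Workflow Settings page described above.
    """
    if not 1 <= interval_hours <= 24:
        raise ValueError("interval_hours must be between 1 and 24")
    return {
        "method": "PATCH",
        "url": SETTINGS_URL,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"workflowScheduleIntervalInHours": interval_hours}),
    }

# Example: run workflows every 6 hours
request = build_schedule_patch(6)
```

Send the request with any HTTP client after acquiring a token with the appropriate Lifecycle Workflows permissions.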
## Next steps
active-directory Entitlement Management Verified Id Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-verified-id-settings.md
+
+ Title: Configure verified ID settings for an access package in entitlement management (Preview) - Azure AD
+description: Learn how to configure verified ID settings for an access package in entitlement management.
+
+documentationCenter: ''
++
+editor: HANKI
++
+ na
++ Last updated : 01/25/2023++++++
+# Configure verified ID settings for an access package in entitlement management (Preview)
+
+When setting up an access package policy, admins can specify whether it's for users in the directory, connected organizations, or any external user. Entitlement Management determines if the person requesting the access package is within the scope of the policy.
+
+Sometimes you might want users to present additional identity proofs during the request process, such as a training certification, work authorization, or citizenship status. As an access package manager, you can require that requestors present a verified ID containing those credentials from a trusted issuer. Approvers can then quickly view if a user's verifiable credentials were validated at the time that the user presented their credentials and submitted the access package request.
+
+As an access package manager, you can include verified ID requirements for an access package at any time by editing an existing policy or adding a new policy for requesting access.
+
+This article describes how to configure the verified ID requirement settings for an access package.
+
+## Prerequisites
+
+Before you begin, you must set up your tenant to use the [Microsoft Entra Verified ID service](../verifiable-credentials/decentralized-identifier-overview.md). You can find detailed instructions on how to do that here: [Configure your tenant for Microsoft Entra Verified ID](../verifiable-credentials/verifiable-credentials-configure-tenant.md).
+
+## Create an access package with verified ID requirements (Preview)
+
+To add a verified ID requirement to an access package, you must start from the access package's requests tab. Follow these steps to add a verified ID requirement to a new access package.
++
+**Prerequisite role**: Global administrator
+
+> [!NOTE]
+> Identity Governance administrator, User administrator, Catalog owner, or Access package manager will be able to add verified ID requirements to access packages soon.
+
+1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+
+1. In the left menu, select **Access packages** and then select **+ New access package**.
+
+1. On the **Requests** tab, scroll to the **Required Verified Ids** section.
+
+1. Select **+ Add issuer** and choose an issuer from the Entra Verified ID network. If you want to issue your own credentials to users, see: [Issue Microsoft Entra Verified ID credentials from an application](../verifiable-credentials/verifiable-credentials-configure-issuer.md).
+ :::image type="content" source="media/entitlement-management-verified-id-settings/select-issuer.png" alt-text="Screenshot of selecting an issuer for Entra verified credentials.":::
+
+1. Select the **credential type(s)** you want users to present during the request process.
+ :::image type="content" source="media/entitlement-management-verified-id-settings/issuer-credentials.png" alt-text="Screenshot of credential types for entra verified IDs.":::
+ > [!NOTE]
+ > If you select multiple credential types from one issuer, users will be required to present credentials of all selected types. Similarly, if you include multiple issuers, users will be required to present credentials from each of the issuers you include in the policy. To give users the option of presenting different credentials from various issuers, configure separate policies for each issuer/credential type you'll accept.
+1. Select **Add** to add the verified ID requirement to the access package policy.
+
+1. Once you have finished configuring the rest of the settings, you can review your selections on the **Review + create** tab. You can see all verified ID requirements for this access package policy in the **Verified IDs** section.
+ :::image type="content" source="media/entitlement-management-verified-id-settings/verified-ids-list.png" alt-text="Screenshot of a list of verified IDs.":::
++
+## Request an access package with verified ID requirements (Preview)
+
+Once an access package is configured with a verified ID requirement, end users who are within the scope of the policy are able to request access using the My Access portal. Similarly, approvers are able to see the claims of the verifiable credentials presented by requestors when reviewing requests for approval.
+
+The requestor steps are as follows:
+
+1. Go to [myaccess.microsoft.com](https://myaccess.microsoft.com) and sign in.
+
+1. Search for the access package you want to request access to (you can browse the listed packages or use the search bar at the top of the page) and select **Request**.
+
+1. If the access package requires you to present a verified ID, you should see a grey information banner as shown here:
+ :::image type="content" source="media/entitlement-management-verified-id-settings/present-verified-id-access-package.png" alt-text="Screenshot of the present verified ID for access package option.":::
+1. Select **Request Access**. You should now see a QR code. Use your phone to scan the QR code. This launches Microsoft Authenticator, where you'll be prompted to share your credentials.
+ :::image type="content" source="media/entitlement-management-verified-id-settings/verified-id-qr-code.png" alt-text="Screenshot of use QR code for verified IDs.":::
+1. After you share your credentials, My Access will automatically take you to the next step of the request process.
++
+## Next steps
+
+[Delegate access governance to access package managers](entitlement-management-delegate-managers.md)
active-directory Configure Permission Classifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-permission-classifications.md
Previously updated : 10/23/2021 Last updated : 2/24/2023
+zone_pivot_groups: enterprise-apps-all
#customer intent: As an admin, I want to configure permission classifications for applications in Azure AD

# Configure permission classifications
-In this article you'll learn how to configure permissions classifications in Azure Active Directory (Azure AD). Permission classifications allow you to identify the impact that different permissions have according to your organization's policies and risk evaluations. For example, you can use permission classifications in consent policies to identify the set of permissions that users are allowed to consent to.
+In this article, you learn how to configure permissions classifications in Azure Active Directory (Azure AD). Permission classifications allow you to identify the impact that different permissions have according to your organization's policies and risk evaluations. For example, you can use permission classifications in consent policies to identify the set of permissions that users are allowed to consent to.
Currently, only the "Low impact" permission classification is supported. Only delegated permissions that don't require admin consent can be classified as "Low impact".
-The minimum permissions needed to do basic sign in are `openid`, `profile`, `email`, and `offline_access`, which are all delegated permissions on the Microsoft Graph. With these permissions an app can read details of the signed-in user's profile, and can maintain this access even when the user is no longer using the app.
+The minimum permissions needed to do basic sign-in are `openid`, `profile`, `email`, and `offline_access`, which are all delegated permissions on the Microsoft Graph. With these permissions an app can read details of the signed-in user's profile, and can maintain this access even when the user is no longer using the app.
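That minimum scope set can be illustrated with a small helper that builds a Microsoft identity platform v2.0 authorization URL requesting only those four delegated permissions. This is a sketch: the authorize endpoint is the standard v2.0 one, but the client ID and redirect URI below are placeholders, not values from this article.

```python
from urllib.parse import urlencode

AUTHORIZE_ENDPOINT = ("https://login.microsoftonline.com/common/"
                      "oauth2/v2.0/authorize")

# The four delegated permissions needed for basic sign-in
MIN_SIGN_IN_SCOPES = ["openid", "profile", "email", "offline_access"]

def build_sign_in_url(client_id, redirect_uri):
    """Build an authorization URL requesting only the minimum sign-in scopes."""
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "scope": " ".join(MIN_SIGN_IN_SCOPES),
    }
    return f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}"

# Placeholder values for illustration only
url = build_sign_in_url("00000000-0000-0000-0000-000000000000",
                        "https://localhost/callback")
```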
## Prerequisites

To configure permission classifications, you need:

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+- One of the following roles: An administrator, or owner of the service principal.
## Manage permission classifications
-# [Portal](#tab/azure-portal)
Follow these steps to classify permissions using the Azure portal:
In this example, we've classified the minimum set of permission required for sin
:::image type="content" source="media/configure-permission-classifications/permission-classifications.png" alt-text="Permission classifications":::
-# [PowerShell](#tab/azure-powershell)
-You can use the latest Azure AD PowerShell Preview module, [AzureADPreview](/powershell/module/azuread/?preserve-view=true&view=azureadps-2.0-preview), to classify permissions. Permission classifications are configured on the **ServicePrincipal** object of the API that publishes the permissions.
-#### List the current permission classifications for an API
+
+You can use the latest [Azure AD PowerShell](/powershell/module/azuread/?preserve-view=true&view=azureadps-2.0), to classify permissions. Permission classifications are configured on the **ServicePrincipal** object of the API that publishes the permissions.
+
+Run the following command to connect to Azure AD PowerShell. Sign in with one of the roles listed in the prerequisites section of this article.
+
+```powershell
+Connect-AzureAD
+```
+
+### List the current permission classifications
1. Retrieve the **ServicePrincipal** object for the API. Here we retrieve the ServicePrincipal object for the Microsoft Graph API:
You can use the latest Azure AD PowerShell Preview module, [AzureADPreview](/pow
-ServicePrincipalId $api.ObjectId | Format-Table Id, PermissionName, Classification ```
-#### Classify a permission as "Low impact"
+### Classify a permission as "Low impact"
1. Retrieve the **ServicePrincipal** object for the API. Here we retrieve the ServicePrincipal object for the Microsoft Graph API:
You can use the latest Azure AD PowerShell Preview module, [AzureADPreview](/pow
-Classification "low" ```
-#### Remove a delegated permission classification
+### Remove a delegated permission classification
1. Retrieve the **ServicePrincipal** object for the API. Here we retrieve the ServicePrincipal object for the Microsoft Graph API:
You can use the latest Azure AD PowerShell Preview module, [AzureADPreview](/pow
-ServicePrincipalId $api.ObjectId ` -Id $classificationToRemove.Id ``` -
-## Next steps
+You can use [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started?preserve-view=true&view=graph-powershell-1.0), to classify permissions. Permission classifications are configured on the **ServicePrincipal** object of the API that publishes the permissions.
+
+Run the following command to connect to Microsoft Graph PowerShell. To consent to the required scopes, sign in with one of the roles listed in the prerequisite section of this article.
+
+```powershell
+Connect-MgGraph -Scopes "Application.ReadWrite.All", "Directory.ReadWrite.All", "DelegatedPermissionGrant.ReadWrite.All"
+```
+
+### List current permission classifications for an API
+
+1. Retrieve the servicePrincipal object for the API:
+
+ ```powershell
+ $api = Get-MgServicePrincipal -Filter "displayName eq 'Microsoft Graph'"
+ ```
+
+1. Read the delegated permission classifications for the API:
+
+ ```powershell
+ Get-MgServicePrincipalDelegatedPermissionClassification -ServicePrincipalId $api.Id
+ ```
+
+### Classify a permission as "Low impact"
+
+1. Retrieve the servicePrincipal object for the API:
+
+ ```powershell
+ $api = Get-MgServicePrincipal -Filter "displayName eq 'Microsoft Graph'"
+ ```
+
+1. Find the delegated permission you would like to classify:
+
+ ```powershell
+ $delegatedPermission = $api.Oauth2PermissionScopes | Where-Object {$_.Value -eq "openid"}
+ ```
+
+1. Set the permission classification:
+
+ ```powershell
+    $params = @{
+        PermissionId = $delegatedPermission.Id
+        PermissionName = $delegatedPermission.Value
+        Classification = "Low"
+    }
+
+    New-MgServicePrincipalDelegatedPermissionClassification -ServicePrincipalId $api.Id -BodyParameter $params
+    ```
-To learn more:
+
+### Remove a delegated permission classification
+
+1. Retrieve the servicePrincipal object for the API:
+
+ ```powershell
+ $api = Get-MgServicePrincipal -Filter "displayName eq 'Microsoft Graph'"
+ ```
+
+1. Find the delegated permission classification you wish to remove:
+
+ ```powershell
+ $classifications= Get-MgServicePrincipalDelegatedPermissionClassification -ServicePrincipalId $api.Id
+
+ $classificationToRemove = $classifications | Where-Object {$_.PermissionName -eq "openid"}
+ ```
+
+1. Delete the permission classification:
+
+```powershell
+Remove-MgServicePrincipalDelegatedPermissionClassification -DelegatedPermissionClassificationId $classificationToRemove.Id -ServicePrincipalId $api.id
+```
++
+To configure permissions classifications for an enterprise application, sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) with one of the roles listed in the prerequisite section.
+
+You need to consent to the following permissions:
+
+`Application.ReadWrite.All`, `Directory.ReadWrite.All`, `DelegatedPermissionGrant.ReadWrite.All`.
+
+Run the following queries on Microsoft Graph explorer to add a delegated permissions classification for an application.
+
+1. List current permission classifications for an API.
+
+ ```http
+ GET https://graph.microsoft.com/v1.0/servicePrincipals(appId='00000003-0000-0000-c000-000000000000')/delegatedPermissionClassifications
+ ```
+
+1. Add a delegated permission classification for an API. In the following example, we classify the permission as "low impact".
+
+ ```http
+ POST https://graph.microsoft.com/v1.0/servicePrincipals(appId='00000003-0000-0000-c000-000000000000')/delegatedPermissionClassifications
+ Content-type: application/json
+
+ {
+ "permissionId": "b4e74841-8e56-480b-be8b-910348b18b4c",
+ "classification": "low"
+ }
+ ```
+
+Run the following query on Microsoft Graph explorer to remove a delegated permissions classification for an API.
+
+```http
+DELETE https://graph.microsoft.com/v1.0/servicePrincipals(appId='00000003-0000-0000-c000-000000000000')/delegatedPermissionClassifications/QUjntFaOC0i-i5EDSLGLTAE
+```
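The Graph Explorer queries above share one collection URL, so they can be wrapped in a small helper. This is a sketch, not an official SDK: only the endpoint shape (the `appId` alternate-key syntax) and the request body come from the queries above.

```python
import json

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def classifications_url(app_id, classification_id=None):
    """URL of an API's delegatedPermissionClassifications collection,
    addressing the service principal by its appId (alternate-key syntax).
    Pass classification_id to target a single classification, e.g. for DELETE.
    """
    url = (f"{GRAPH_BASE}/servicePrincipals(appId='{app_id}')"
           "/delegatedPermissionClassifications")
    if classification_id:
        url += f"/{classification_id}"
    return url

def low_impact_body(permission_id):
    """JSON body that classifies a delegated permission as low impact."""
    return json.dumps({"permissionId": permission_id, "classification": "low"})

# Microsoft Graph's well-known appId, as used in the queries above
graph_url = classifications_url("00000003-0000-0000-c000-000000000000")
```

Pair `classifications_url` with GET to list, POST plus `low_impact_body` to add, and DELETE (with a classification ID) to remove, using any HTTP client and a token covering the permissions listed earlier.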
+++
+## Next steps
-- Go to [Permissions and consent in the Microsoft identity platform](../develop/v2-permissions-and-consent.md)
+- [Manage app consent policies](manage-app-consent-policies.md)
+- [Permissions and consent in the Microsoft identity platform](../develop/v2-permissions-and-consent.md)
active-directory Disable User Sign In Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/disable-user-sign-in-portal.md
Previously updated : 09/06/2022 Last updated : 2/23/2023
+zone_pivot_groups: enterprise-apps-all
+ #customer intent: As an admin, I want to disable user sign-in for an application so that no user can sign in to it in Azure Active Directory.

# Disable user sign-in for an application
In this article, you'll learn how to prevent users from signing in to an applica
To disable user sign-in, you need:

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+- One of the following roles: An administrator, or owner of the service principal.
## Disable how a user signs in
+
1. Sign in to the [Azure portal](https://portal.azure.com) as the global administrator for your directory.
1. Search for and select **Azure Active Directory**.
1. Select **Enterprise applications**.
To disable user sign-in, you need:
1. Select **No** for **Enabled for users to sign-in?**. 1. Select **Save**.
-## Use Azure AD PowerShell to disable an unlisted app
-Ensure you've installed the AzureAD module (use the command Install-Module -Name AzureAD). In case you're prompted to install a NuGet module or the new Azure Active Directory V2 PowerShell module, type Y and press ENTER.
-You may know the AppId of an app that doesn't appear on the Enterprise apps list. For example, you may have deleted the app or the service principal hasn't yet been created due to the app being pre-authorized by Microsoft), you can manually create the service principal for the app and then disable it by using the following cmdlet.
+You may know the AppId of an app that doesn't appear on the Enterprise apps list. For example, you may have deleted the app, or the service principal hasn't yet been created because the app is pre-authorized by Microsoft. In such cases, you can manually create the service principal for the app and then disable it by using the following Azure AD PowerShell cmdlets.
+
+Ensure you've installed the AzureAD module (use the command `Install-Module -Name AzureAD`). In case you're prompted to install a NuGet module or the new Azure AD V2 PowerShell module, type Y and press ENTER.
```PowerShell
# Connect to Azure AD PowerShell (sign in with an account that can manage service principals)
Connect-AzureAD

# The AppId of the app to be disabled
$appId = "{AppId}"

# Check if a service principal already exists for the app
$servicePrincipal = Get-AzureADServicePrincipal -Filter "appId eq '$appId'"

# If the service principal exists already, disable it; otherwise, create it in a disabled state
if ($servicePrincipal) { Set-AzureADServicePrincipal -ObjectId $servicePrincipal.ObjectId -AccountEnabled $false }
else { $servicePrincipal = New-AzureADServicePrincipal -AppId $appId -AccountEnabled $false }
```
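To verify the result, you can query the service principal again and inspect its enabled state. This is a sketch, assuming the `$appId` variable and AzureAD session from the block above.

```powershell
# Verification sketch: AccountEnabled should read False after the steps above
$servicePrincipal = Get-AzureADServicePrincipal -Filter "appId eq '$appId'"
$servicePrincipal.AccountEnabled
```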
+You may know the AppId of an app that doesn't appear on the Enterprise apps list. For example, you may have deleted the app, or the service principal hasn't yet been created because the app is pre-authorized by Microsoft. In such cases, you can manually create the service principal for the app and then disable it by using the following Microsoft Graph PowerShell cmdlets.
+
+Ensure you've installed the Microsoft Graph module (use the command `Install-Module Microsoft.Graph`).
+
+```powershell
+# Connect to Microsoft Graph PowerShell
+Connect-MgGraph -Scopes "Application.ReadWrite.All"
+
+# The AppId of the app to be disabled
+$appId = "{AppId}"
+
+# Check if a service principal already exists for the app
+$servicePrincipal = Get-MgServicePrincipal -Filter "appId eq '$appId'"
+
+# If the service principal exists already, disable it; otherwise, create it in a disabled state
+if ($servicePrincipal) { Update-MgServicePrincipal -ServicePrincipalId $servicePrincipal.Id -AccountEnabled:$false }
+else { $servicePrincipal = New-MgServicePrincipal -AppId $appId -AccountEnabled:$false }
+```
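If you later need to restore sign-in, the same cmdlet can re-enable the service principal. This is a sketch, assuming the `$servicePrincipal` variable from the block above.

```powershell
# Re-enable user sign-in for the service principal
Update-MgServicePrincipal -ServicePrincipalId $servicePrincipal.Id -AccountEnabled:$true
```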
+You may know the AppId of an app that doesn't appear on the Enterprise apps list. For example, you may have deleted the app, or the service principal hasn't yet been created because the app is pre-authorized by Microsoft. In such cases, you can manually create the service principal for the app and then disable it by using Microsoft Graph explorer.
+
+To disable sign-in to an application, sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) with one of the roles listed in the prerequisite section.
+
+You'll need to consent to the `Application.ReadWrite.All` permission.
+
+Run the following query to disable user sign-in to an application.
+
+```http
+PATCH https://graph.microsoft.com/v1.0/servicePrincipals/2a8f9e7a-af01-413a-9592-c32ec0e5c1a7
+
+Content-type: application/json
+
+{
+ "accountEnabled": false
+}
+```
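To confirm the change, you can read back the `accountEnabled` property. This is a sketch using the same example service principal ID as the PATCH request above.

```http
GET https://graph.microsoft.com/v1.0/servicePrincipals/2a8f9e7a-af01-413a-9592-c32ec0e5c1a7?$select=accountEnabled
```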
## Next steps
active-directory How To View Managed Identity Service Principal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-view-managed-identity-service-principal-powershell.md
na Previously updated : 01/11/2022 Last updated : 02/15/2022
In this article, you learn how to view the service principal of a managed identi
The following command demonstrates how to view the service principal of a VM or application with system-assigned identity enabled. Replace `<Azure resource name>` with your own value.
-```azurepowershell-interactive
+```powershell
Get-AzADServicePrincipal -DisplayName <Azure resource name>
```
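For example, with a hypothetical resource named `myVM`, you can select just the identifying properties of the returned service principal (a sketch):

```powershell
# Hypothetical example: show the service principal's display name, app ID, and object ID
Get-AzADServicePrincipal -DisplayName "myVM" | Select-Object DisplayName, AppId, Id
```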
active-directory Overview Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-recommendations.md
na Previously updated : 02/16/2023 Last updated : 02/24/2023
Each recommendation contains a description, a summary of the value of addressing
Each recommendation provides the same set of details that explain what the recommendation is, why it's important, and how to fix it.
-The **Status** of a recommendation can be updated manually or automatically. If all resources are addressed according to the action plan, the status will automatically change to *Completed* the next time the recommendations service runs. The recommendation service runs every 24-48 hours, depending on the recommendation.
+The **Status** of a recommendation can be updated manually or automatically by the system. If all resources are addressed according to the action plan, the status automatically changes to *Completed* the next time the recommendations service runs. The recommendation service runs every 24-48 hours, depending on the recommendation.
-![Screenshot of the Mark as options.](./media/overview-recommendations/recommendations-object.png)
+![Screenshot of the Mark as options.](./media/overview-recommendations/recommendation-mark-as-options.png)
The **Priority** of a recommendation could be low, medium, or high. These values are determined by several factors, such as security implications, health concerns, or potential breaking changes.
The recommendations listed in the following table are available to all Azure AD
1. The recommendation service automatically marks the recommendation as complete, but if you need to manually change the status of a recommendation, select **Mark as** from the top of the page and select a status.
- ![Screenshot of the Mark as options, to highlight the difference from the resource menu.](./media/overview-recommendations/recommendations-object.png)
+ ![Screenshot of the Mark as options, to highlight the difference from the resource menu.](./media/overview-recommendations/recommendation-mark-as-options.png)
- - Mark a recommendation as **Completed** if all impacted resources have been addressed.
- - Active resources may still appear in the list of resources for manually completed recommendations. If the resource is completed, the service will update the status the next time the service runs.
- - If the service identifies an active resource for a manually completed recommendation the next time the service runs, the recommendation will automatically change back to **Active**.
- - Completing a recommendation is the only action collected in the audit log. To view these logs, go to **Azure AD** > **Audit logs** and filter the service to "Azure AD recommendations."
- Mark a recommendation as **Dismissed** if you think the recommendation is irrelevant or the data is wrong.
- - Azure AD will ask for a reason why you dismissed the recommendation so we can improve the service.
+ - Azure AD asks for a reason why you dismissed the recommendation so we can improve the service.
- Mark a recommendation as **Postponed** if you want to address the recommendation at a later time.
- - The recommendation will become **Active** when the selected date occurs.
+ - The recommendation becomes **Active** when the selected date occurs.
- You can reactivate a completed or postponed recommendation to keep it top of mind and reassess the resources.
+ - Recommendations change to **Completed** if all impacted resources have been addressed.
+ - If the service identifies an active resource for a completed recommendation the next time the service runs, the recommendation will automatically change back to **Active**.
+ - Completing a recommendation is the only action collected in the audit log. To view these logs, go to **Azure AD** > **Audit logs** and filter the service to "Azure AD recommendations."
Continue to monitor the recommendations in your tenant for changes.
active-directory Recommendation Migrate To Authenticator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-to-authenticator.md
Previously updated : 02/07/2023 Last updated : 02/24/2023
-# Azure AD recommendation: Migrate to Microsoft Authenticator
+# Azure AD recommendation: Migrate to Microsoft Authenticator (preview)
[Azure AD recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices.
-This article covers the recommendation to migrate users to the Microsoft Authenticator app. This recommendation is called `useAuthenticatorApp` in the recommendations API in Microsoft Graph.
+This article covers the recommendation to migrate users to the Microsoft Authenticator app, which is currently a preview recommendation. This recommendation is called `useAuthenticatorApp` in the recommendations API in Microsoft Graph.
## Description
active-directory Configure Cmmc Level 2 Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-access-control.md
The following table provides a list of practice statement and objectives, and Az
| AC.L2-3.1.9<br><br>**Practice statement:** Provide privacy and security notices consistent with applicable CUI rules.<br><br>**Objectives:**<br>Determine if:<br>[a.] privacy and security notices required by CUI-specified rules are identified, consistent, and associated with the specific CUI category; and<br>[b.] privacy and security notices are displayed. | With Azure AD, you can deliver notification or banner messages for all apps that require and record acknowledgment before granting access. You can granularly target these terms of use policies to specific users (Member or Guest). You can also customize them per application via conditional access policies.<br><br>**Conditional access** <br>[What is conditional access in Azure AD?](../conditional-access/overview.md)<br><br>**Terms of use**<br>[Azure Active Directory terms of use](../conditional-access/terms-of-use.md)<br>[View report of who has accepted and declined](../conditional-access/terms-of-use.md) | | AC.L2-3.1.10<br><br>**Practice statement:** Use session lock with pattern-hiding displays to prevent access and viewing of data after a period of inactivity.<br><br>**Objectives:**<br>Determine if:<br>[a.] the period of inactivity after which the system initiates a session lock is defined;<br>[b.] access to the system and viewing of data is prevented by initiating a session lock after the defined period of inactivity; and<br>[c.] previously visible information is concealed via a pattern-hiding display after the defined period of inactivity. | Implement device lock by using a conditional access policy to restrict access to compliant or hybrid Azure AD joined devices. Configure policy settings on the device to enforce device lock at the OS level with MDM solutions such as Intune. Endpoint Manager or group policy objects can also be considered in hybrid deployments. 
For unmanaged devices, configure the Sign-In Frequency setting to force users to reauthenticate.<br>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[User sign-in frequency](../conditional-access/howto-conditional-access-session-lifetime.md)<br><br>Configure devices for maximum minutes of inactivity until the screen locks ([Android](/mem/intune/configuration/device-restrictions-android), [iOS](/mem/intune/configuration/device-restrictions-ios), [Windows 10](/mem/intune/configuration/device-restrictions-windows-10)).| | AC.L2-3.1.11<br><br>**Practice statement:** Terminate (automatically) a user session after a defined condition.<br><br>**Objectives:**<br>Determine if:<br>[a.] conditions requiring a user session to terminate are defined; and<br>[b.] a user session is automatically terminated after any of the defined conditions occur. | Enable Continuous Access Evaluation (CAE) for all supported applications. For application that don't support CAE, or for conditions not applicable to CAE, implement policies in Microsoft Defender for Cloud Apps to automatically terminate sessions when conditions occur. Additionally, configure Azure Active Directory Identity Protection to evaluate user and sign-in Risk. Use conditional access with Identity protection to allow user to automatically remediate risk.<br>[Continuous access evaluation in Azure AD](../conditional-access/concept-continuous-access-evaluation.md)<br>[Control cloud app usage by creating policies](/defender-cloud-apps/control-cloud-apps-with-policies)<br>[What is Azure Active Directory Identity Protection?](../identity-protection/overview-identity-protection.md)
-|AC.L2-3.1.12<br><br>**Practice statement:** Monitor and control remote access sessions.<br><br>**Objectives:**<br>Determine if:<br>[a.] remote access sessions are permitted;<br>[b.] the types of permitted remote access are identified;<br>[c.] remote access sessions are controlled; and<br>[d.] remote access sessions are monitored. | In todayΓÇÖs world, users access cloud-based applications almost exclusively remotely from unknown or untrusted networks. It's critical to securing this pattern of access to adopt zero trust principals. To meet these controls requirements in a modern cloud world we must verify each access request explicitly, implement least privilege and assume breach.<br><br>Configure named locations to delineate internal vs external networks. Configure conditional access app control to route access via Microsoft Defender for Cloud Apps. Configure Defender for Cloud Apps to control and monitor all sessions.<br>[Zero Trust Deployment Guide for Microsoft Azure Active Directory](/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/)<br>[Location condition in Azure Active Directory Conditional Access](../conditional-access/location-condition.md)<br>[Deploy Cloud App Security Conditional Access App Control for Azure AD apps](/cloud-app-security/proxy-deployment-aad.md)<br>[What is Microsoft Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security.md)<br>[Monitor alerts raised in Microsoft Defender for Cloud Apps](/cloud-app-security/monitor-alerts.md) |
+|AC.L2-3.1.12<br><br>**Practice statement:** Monitor and control remote access sessions.<br><br>**Objectives:**<br>Determine if:<br>[a.] remote access sessions are permitted;<br>[b.] the types of permitted remote access are identified;<br>[c.] remote access sessions are controlled; and<br>[d.] remote access sessions are monitored. | In today's world, users access cloud-based applications almost exclusively remotely from unknown or untrusted networks. It's critical to secure this pattern of access by adopting zero trust principles. To meet these control requirements in a modern cloud world, we must verify each access request explicitly, implement least privilege, and assume breach.<br><br>Configure named locations to delineate internal vs external networks. Configure conditional access app control to route access via Microsoft Defender for Cloud Apps. Configure Defender for Cloud Apps to control and monitor all sessions.<br>[Zero Trust Deployment Guide for Microsoft Azure Active Directory](https://www.microsoft.com/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/)<br>[Location condition in Azure Active Directory Conditional Access](../conditional-access/location-condition.md)<br>[Deploy Cloud App Security Conditional Access App Control for Azure AD apps](/cloud-app-security/proxy-deployment-aad)<br>[What is Microsoft Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security)<br>[Monitor alerts raised in Microsoft Defender for Cloud Apps](/cloud-app-security/monitor-alerts) |
| AC.L2-3.1.13<br><br>**Practice statement:** Employ cryptographic mechanisms to protect the confidentiality of remote access sessions.<br><br>**Objectives:**<br>Determine if:<br>[a.] cryptographic mechanisms to protect the confidentiality of remote access sessions are identified; and<br>[b.] cryptographic mechanisms to protect the confidentiality of remote access sessions are implemented. | All Azure AD customer-facing web services are secured with the Transport Layer Security (TLS) protocol and are implemented using FIPS-validated cryptography.<br>[Azure Active Directory Data Security Considerations (microsoft.com)](https://azure.microsoft.com/resources/azure-active-directory-data-security-considerations/) |
-| AC.L2-3.1.14<br><br>**Practice statement:** Route remote access via managed access control points.<br><br>**Objectives:**<br>Determine if:<br>[a.] managed access control points are identified and implemented; and<br>[b.] remote access is routed through managed network access control points. | Configure named locations to delineate internal vs external networks. Configure conditional access app control to route access via Microsoft Defender for Cloud Apps. Configure Defender for Cloud Apps to control and monitor all sessions. Secure devices used by privileged accounts as part of the privileged access story.<br>[Location condition in Azure Active Directory Conditional Access](../conditional-access/location-condition.md)<br>[Session controls in Conditional Access policy](../conditional-access/concept-conditional-access-session.md)<br>[Securing privileged access overview](/security/compass/overview.md) |
-| AC.L2-3.1.15<br><br>**Practice statement:** Authorize remote execution of privileged commands and remote access to security-relevant information.<br><br>**Objectives:**<br>Determine if:<br>[a.] privileged commands authorized for remote execution are identified;<br>[b.] security-relevant information authorized to be accessed remotely is identified;<br>[c.] the execution of the identified privileged commands via remote access is authorized; and<br>[d.] access to the identified security-relevant information via remote access is authorized. | Conditional Access is the Zero Trust control plane to target policies for access to your apps when combined with authentication context. You can apply different policies in those apps. Secure devices used by privileged accounts as part of the privileged access story. Configure conditional access policies to require the use of these secured devices by privileged users when performing privileged commands.<br>[Cloud apps, actions, and authentication context in Conditional Access policy](../conditional-access/concept-conditional-access-cloud-apps.md)<br>[Securing privileged access overview](/security/compass/overview.md)<br>[Filter for devices as a condition in Conditional Access policy](../conditional-access/concept-condition-filters-for-devices.md) |
-| AC.L2-3.1.18<br><br>**Practice statement:** Control connection of mobile devices.<br><br>**Objectives:**<br>Determine if:<br>[a.] mobile devices that process, store, or transmit CUI are identified;<br>[b.] mobile device connections are authorized; and<br>[c.] mobile device connections are monitored and logged. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to enforce mobile device configuration and connection profile. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management.md) |
+| AC.L2-3.1.14<br><br>**Practice statement:** Route remote access via managed access control points.<br><br>**Objectives:**<br>Determine if:<br>[a.] managed access control points are identified and implemented; and<br>[b.] remote access is routed through managed network access control points. | Configure named locations to delineate internal vs external networks. Configure conditional access app control to route access via Microsoft Defender for Cloud Apps. Configure Defender for Cloud Apps to control and monitor all sessions. Secure devices used by privileged accounts as part of the privileged access story.<br>[Location condition in Azure Active Directory Conditional Access](../conditional-access/location-condition.md)<br>[Session controls in Conditional Access policy](../conditional-access/concept-conditional-access-session.md)<br>[Securing privileged access overview](/security/compass/overview) |
+| AC.L2-3.1.15<br><br>**Practice statement:** Authorize remote execution of privileged commands and remote access to security-relevant information.<br><br>**Objectives:**<br>Determine if:<br>[a.] privileged commands authorized for remote execution are identified;<br>[b.] security-relevant information authorized to be accessed remotely is identified;<br>[c.] the execution of the identified privileged commands via remote access is authorized; and<br>[d.] access to the identified security-relevant information via remote access is authorized. | Conditional Access is the Zero Trust control plane to target policies for access to your apps when combined with authentication context. You can apply different policies in those apps. Secure devices used by privileged accounts as part of the privileged access story. Configure conditional access policies to require the use of these secured devices by privileged users when performing privileged commands.<br>[Cloud apps, actions, and authentication context in Conditional Access policy](../conditional-access/concept-conditional-access-cloud-apps.md)<br>[Securing privileged access overview](/security/compass/overview)<br>[Filter for devices as a condition in Conditional Access policy](../conditional-access/concept-condition-filters-for-devices.md) |
+| AC.L2-3.1.18<br><br>**Practice statement:** Control connection of mobile devices.<br><br>**Objectives:**<br>Determine if:<br>[a.] mobile devices that process, store, or transmit CUI are identified;<br>[b.] mobile device connections are authorized; and<br>[c.] mobile device connections are monitored and logged. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to enforce mobile device configuration and connection profile. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management) |
| AC.L2-3.1.19<br><br>**Practice statement:** Encrypt CUI on mobile devices and mobile computing platforms.<br><br>**Objectives:**<br>Determine if:<br>[a.] mobile devices and mobile computing platforms that process, store, or transmit CUI are identified; and<br>[b.] encryption is employed to protect CUI on identified mobile devices and mobile computing platforms. | **Managed Device**<br>Configure conditional access policies to enforce compliant or HAADJ device and to ensure managed devices are configured appropriately via device management solution to encrypt CUI.<br><br>**Unmanaged Device**<br>Configure conditional access policies to require app protection policies.<br>[Grant controls in Conditional Access policy - Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require app protection policy](../conditional-access/concept-conditional-access-grant.md) |
-| AC.L2-3.1.21<br><br>**Practice statement:** Limit use of portable storage devices on external systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] the use of portable storage devices containing CUI on external systems is identified and documented;<br>[b.] limits on the use of portable storage devices containing CUI on external systems are defined; and<br>[c.] the use of portable storage devices containing CUI on external systems is limited as defined. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to control the use of portable storage devices on systems. Configure policy settings on the Windows device to completely prohibit or restrict use of portable storage at the OS level. For all other devices where you may be unable to granularly control access to portable storage block download entirely with Microsoft Defender for Cloud Apps. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[Configure authentication session management - Azure Active Directory](../conditional-access/howto-conditional-access-session-lifetime.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br>[Restrict USB devices using administrative templates in Microsoft Intune](/mem/intune/configuration/administrative-templates-restrict-usb.md)<br><br>**Microsoft Defender for Cloud Apps**<br>[Create session policies in Defender for Cloud Apps](/defender-cloud-apps/session-policy-aad.md)
+| AC.L2-3.1.21<br><br>**Practice statement:** Limit use of portable storage devices on external systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] the use of portable storage devices containing CUI on external systems is identified and documented;<br>[b.] limits on the use of portable storage devices containing CUI on external systems are defined; and<br>[c.] the use of portable storage devices containing CUI on external systems is limited as defined. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to control the use of portable storage devices on systems. Configure policy settings on the Windows device to completely prohibit or restrict use of portable storage at the OS level. For all other devices where you may be unable to granularly control access to portable storage block download entirely with Microsoft Defender for Cloud Apps. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[Configure authentication session management - Azure Active Directory](../conditional-access/howto-conditional-access-session-lifetime.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br>[Restrict USB devices using administrative templates in Microsoft Intune](/mem/intune/configuration/administrative-templates-restrict-usb)<br><br>**Microsoft Defender for Cloud Apps**<br>[Create session policies in Defender for Cloud Apps](/defender-cloud-apps/session-policy-aad)
### Next steps
active-directory Configure Cmmc Level 2 Additional Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-additional-controls.md
The following table provides a list of practice statement and objectives, and Az
| CMMC practice statement and objectives | Azure AD guidance and recommendations | | - | - |
-| CM.L2-3.4.2<br><br>**Practice statement:** Establish and enforce security configuration settings for information technology products employed in organizational systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] security configuration settings for information technology products employed in the system are established and included in the baseline configuration; and<br>[b.] security configuration settings for information technology products employed in the system are enforced. | Adopt a zero-trust security posture. Use conditional access policies to restrict access to compliant devices. Configure policy settings on the device to enforce security configuration settings on the device with MDM solutions such as Microsoft Intune. Microsoft Endpoint Configuration Manager(MECM) or group policy objects can also be considered in hybrid deployments and combined with conditional access require hybrid Azure AD joined device.<br><br>**Zero-trust**<br>[Securing identity with Zero Trust](/security/zero-trust/identity.md)<br><br>**Conditional access**<br>[What is conditional access in Azure AD?](../conditional-access/overview.md)<br>[Grant controls in Conditional Access policy](../conditional-access/concept-conditional-access-grant.md)<br><br>**Device policies**<br>[What is Microsoft Intune?](/mem/intune/fundamentals/what-is-intune.md)<br>[What is Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security.md)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management.md)<br>[Microsoft Endpoint Manager overview](/mem/endpoint-manager-overview.md) |
+| CM.L2-3.4.2<br><br>**Practice statement:** Establish and enforce security configuration settings for information technology products employed in organizational systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] security configuration settings for information technology products employed in the system are established and included in the baseline configuration; and<br>[b.] security configuration settings for information technology products employed in the system are enforced. | Adopt a zero-trust security posture. Use conditional access policies to restrict access to compliant devices. Configure policy settings on the device to enforce security configuration settings on the device with MDM solutions such as Microsoft Intune. Microsoft Endpoint Configuration Manager(MECM) or group policy objects can also be considered in hybrid deployments and combined with conditional access require hybrid Azure AD joined device.<br><br>**Zero-trust**<br>[Securing identity with Zero Trust](/security/zero-trust/identity)<br><br>**Conditional access**<br>[What is conditional access in Azure AD?](../conditional-access/overview.md)<br>[Grant controls in Conditional Access policy](../conditional-access/concept-conditional-access-grant.md)<br><br>**Device policies**<br>[What is Microsoft Intune?](/mem/intune/fundamentals/what-is-intune)<br>[What is Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management)<br>[Microsoft Endpoint Manager overview](/mem/endpoint-manager-overview) |
| CM.L2-3.4.5<br><br>**Practice statement:** Define, document, approve, and enforce physical and logical access restrictions associated with changes to organizational systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] physical access restrictions associated with changes to the system are defined;<br>[b.] physical access restrictions associated with changes to the system are documented;<br>[c.] physical access restrictions associated with changes to the system are approved;<br>[d.] physical access restrictions associated with changes to the system are enforced;<br>[e.] logical access restrictions associated with changes to the system are defined;<br>[f.] logical access restrictions associated with changes to the system are documented;<br>[g.] logical access restrictions associated with changes to the system are approved; and<br>[h.] logical access restrictions associated with changes to the system are enforced. | Azure Active Directory (Azure AD) is a cloud-based identity and access management service. Customers don't have physical access to the Azure AD datacenters. As such, each physical access restriction is satisfied by Microsoft and inherited by the customers of Azure AD. Implement Azure AD role based access controls. Eliminate standing privileged access, provide just in time access with approval workflows with Privileged Identity Management.<br>[Overview of Azure Active Directory role-based access control (RBAC)](../roles/custom-overview.md)<br>[What is Privileged Identity Management?](../privileged-identity-management/pim-configure.md)<br>[Approve or deny requests for Azure AD roles in PIM](../privileged-identity-management/azure-ad-pim-approval-workflow.md) |
-| CM.L2-3.4.6<br><br>**Practice statement:** Employ the principle of least functionality by configuring organizational systems to provide only essential capabilities.<br><br>**Objectives:**<br>Determine if:<br>[a.] essential system capabilities are defined based on the principle of least functionality; and<br>[b.] the system is configured to provide only the defined essential capabilities. | Configure device management solutions (Such as Microsoft Intune) to implement a custom security baseline applied to organizational systems to remove non-essential applications and disable unnecessary services. Leave only the fewest capabilities necessary for the systems to operate effectively. Configure conditional access to restrict access to compliant or hybrid Azure AD joined devices. <br>[What is Microsoft Intune](/mem/intune/fundamentals/what-is-intune.md)<br>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md) |
-| CM.L2-3.4.7<br><br>**Practice statement:** Restrict, disable, or prevent the use of nonessential programs, functions, ports, protocols, and services.<br><br>**Objectives:**<br>Determine if:<br>[a.]essential programs are defined;<br>[b.] the use of nonessential programs is defined;<br>[c.] the use of nonessential programs is restricted, disabled, or prevented as defined;<br>[d.] essential functions are defined;<br>[e.] the use of nonessential functions is defined;<br>[f.] the use of nonessential functions is restricted, disabled, or prevented as defined;<br>[g.] essential ports are defined;<br>[h.] the use of nonessential ports is defined;<br>[i.] the use of nonessential ports is restricted, disabled, or prevented as defined;<br>[j.] essential protocols are defined;<br>[k.] the use of nonessential protocols is defined;<br>[l.] the use of nonessential protocols is restricted, disabled, or prevented as defined;<br>[m.] essential services are defined;<br>[n.] the use of nonessential services is defined; and<br>[o.] the use of nonessential services is restricted, disabled, or prevented as defined. | Use Application Administrator role to delegate authorized use of essential applications. Use App Roles or group claims to manage least privilege access within application. Configure user consent to require admin approval and don't allow group owner consent. Configure Admin consent request workflows to enable users to request access to applications that require admin consent. Use Microsoft Defender for Cloud Apps to identify unsanctioned/unknown application use. Use this telemetry to then determine essential/non-essential apps.<br>[Azure AD built-in roles - Application Administrator](../roles/permissions-reference.md)<br>[Azure AD App Roles - App Roles vs. 
Groups ](../develop/howto-add-app-roles-in-azure-ad-apps.md)<br>[Configure how users consent to applications](../manage-apps/configure-user-consent.md?tabs=azure-portal.md)<br>[Configure group owner consent to apps accessing group data](../manage-apps/configure-user-consent-groups.md?tabs=azure-portal.md)<br>[Configure the admin consent workflow](../manage-apps/configure-admin-consent-workflow.md)<br>[What is Defender for Cloud Apps?](/defender-cloud-apps/what-is-defender-for-cloud-apps.d)<br>[Discover and manage Shadow IT tutorial](/defender-cloud-apps/tutorial-shadow-it.md) |
-| CM.L2-3.4.8<br><br>**Practice statement:** Apply deny-by-exception (blocklist) policy to prevent the use of unauthorized software or deny-all, permit-by-exception (allowlist) policy to allow the execution of authorized software.<br><br>**Objectives:**<br>Determine if:<br>[a.] a policy specifying whether allowlist or blocklist is to be implemented is specified;<br>[b.] the software allowed to execute under allowlist or denied use under blocklist is specified; and<br>[c.] allowlist to allow the execution of authorized software or blocklist to prevent the use of unauthorized software is implemented as specified.<br><br>CM.L2-3.4.9<br><br>**Practice statement:** Control and monitor user-installed software.<br><br>**Objectives:**<br>Determine if:<br>[a.] a policy for controlling the installation of software by users is established;<br>[b.] installation of software by users is controlled based on the established policy; and<br>[c.] installation of software by users is monitored. | Configure MDM/configuration management policy to prevent the use of unauthorized software. Configure conditional access grant controls to require compliant or hybrid joined device to incorporate device compliance with MDM/configuration management policy into the conditional access authorization decision.<br>[What is Microsoft Intune](/mem/intune/fundamentals/what-is-intune.md)<br>[Conditional Access - Require compliant or hybrid joined devices](../conditional-access/howto-conditional-access-policy-compliant-device.md) |
+| CM.L2-3.4.6<br><br>**Practice statement:** Employ the principle of least functionality by configuring organizational systems to provide only essential capabilities.<br><br>**Objectives:**<br>Determine if:<br>[a.] essential system capabilities are defined based on the principle of least functionality; and<br>[b.] the system is configured to provide only the defined essential capabilities. | Configure device management solutions (such as Microsoft Intune) to implement a custom security baseline applied to organizational systems to remove non-essential applications and disable unnecessary services. Leave only the fewest capabilities necessary for the systems to operate effectively. Configure Conditional Access to restrict access to compliant or hybrid Azure AD joined devices.<br>[What is Microsoft Intune](/mem/intune/fundamentals/what-is-intune)<br>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md) |
+| CM.L2-3.4.7<br><br>**Practice statement:** Restrict, disable, or prevent the use of nonessential programs, functions, ports, protocols, and services.<br><br>**Objectives:**<br>Determine if:<br>[a.] essential programs are defined;<br>[b.] the use of nonessential programs is defined;<br>[c.] the use of nonessential programs is restricted, disabled, or prevented as defined;<br>[d.] essential functions are defined;<br>[e.] the use of nonessential functions is defined;<br>[f.] the use of nonessential functions is restricted, disabled, or prevented as defined;<br>[g.] essential ports are defined;<br>[h.] the use of nonessential ports is defined;<br>[i.] the use of nonessential ports is restricted, disabled, or prevented as defined;<br>[j.] essential protocols are defined;<br>[k.] the use of nonessential protocols is defined;<br>[l.] the use of nonessential protocols is restricted, disabled, or prevented as defined;<br>[m.] essential services are defined;<br>[n.] the use of nonessential services is defined; and<br>[o.] the use of nonessential services is restricted, disabled, or prevented as defined. | Use the Application Administrator role to delegate authorized use of essential applications. Use App Roles or group claims to manage least privilege access within the application. Configure user consent to require admin approval and don't allow group owner consent. Configure admin consent request workflows to enable users to request access to applications that require admin consent. Use Microsoft Defender for Cloud Apps to identify unsanctioned/unknown application use. Use this telemetry to then determine essential/non-essential apps.<br>[Azure AD built-in roles - Application Administrator](../roles/permissions-reference.md)<br>[Azure AD App Roles - App Roles vs. Groups](../develop/howto-add-app-roles-in-azure-ad-apps.md)<br>[Configure how users consent to applications](../manage-apps/configure-user-consent.md?tabs=azure-portal)<br>[Configure group owner consent to apps accessing group data](../manage-apps/configure-user-consent-groups.md?tabs=azure-portal)<br>[Configure the admin consent workflow](../manage-apps/configure-admin-consent-workflow.md)<br>[What is Defender for Cloud Apps?](/defender-cloud-apps/what-is-defender-for-cloud-apps)<br>[Discover and manage Shadow IT tutorial](/defender-cloud-apps/tutorial-shadow-it) |
+| CM.L2-3.4.8<br><br>**Practice statement:** Apply deny-by-exception (blocklist) policy to prevent the use of unauthorized software or deny-all, permit-by-exception (allowlist) policy to allow the execution of authorized software.<br><br>**Objectives:**<br>Determine if:<br>[a.] a policy specifying whether allowlist or blocklist is to be implemented is specified;<br>[b.] the software allowed to execute under allowlist or denied use under blocklist is specified; and<br>[c.] allowlist to allow the execution of authorized software or blocklist to prevent the use of unauthorized software is implemented as specified.<br><br>CM.L2-3.4.9<br><br>**Practice statement:** Control and monitor user-installed software.<br><br>**Objectives:**<br>Determine if:<br>[a.] a policy for controlling the installation of software by users is established;<br>[b.] installation of software by users is controlled based on the established policy; and<br>[c.] installation of software by users is monitored. | Configure MDM/configuration management policy to prevent the use of unauthorized software. Configure Conditional Access grant controls to require a compliant or hybrid joined device, incorporating device compliance with MDM/configuration management policy into the Conditional Access authorization decision.<br>[What is Microsoft Intune](/mem/intune/fundamentals/what-is-intune)<br>[Conditional Access - Require compliant or hybrid joined devices](../conditional-access/howto-conditional-access-policy-compliant-device.md) |
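The grant controls above can also be managed programmatically through the Microsoft Graph conditional access API. A minimal sketch, not the documented procedure: the policy name, the report-only state, and how you obtain the access token are assumptions, and the call requires the `Policy.ReadWrite.ConditionalAccess` permission.

```python
import json
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def compliant_device_policy(name: str) -> dict:
    """Build a conditionalAccessPolicy payload that grants access only from
    compliant or hybrid Azure AD joined devices (CM.L2-3.4.8 / 3.4.9)."""
    return {
        "displayName": name,
        # Report-only first, so the policy's effect can be observed safely.
        "state": "enabledForReportingButNotEnforced",
        "conditions": {
            "users": {"includeUsers": ["All"]},
            "applications": {"includeApplications": ["All"]},
        },
        "grantControls": {
            "operator": "OR",
            "builtInControls": ["compliantDevice", "domainJoinedDevice"],
        },
    }

def create_policy(token: str, policy: dict) -> None:
    """POST the payload to /identity/conditionalAccess/policies."""
    req = urllib.request.Request(
        f"{GRAPH}/identity/conditionalAccess/policies",
        data=json.dumps(policy).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)

# Usage (token acquisition not shown):
# create_policy(token, compliant_device_policy("Require compliant or hybrid joined device"))
```

Starting in report-only mode lets you validate impact before switching the policy state to `enabled`.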
## Incident Response (IR)
The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations.
| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
| MA.L2-3.7.5<br><br>**Practice statement:** Require multifactor authentication to establish nonlocal maintenance sessions via external network connections and terminate such connections when nonlocal maintenance is complete.<br><br>**Objectives:**<br>Determine if:<br>[a.] multifactor authentication is used to establish nonlocal maintenance sessions via external network connections; and<br>[b.] nonlocal maintenance sessions established via external network connections are terminated when nonlocal maintenance is complete. | Accounts assigned administrative rights are targeted by attackers, including accounts used to establish non-local maintenance sessions. Requiring multifactor authentication (MFA) on those accounts is an easy way to reduce the risk of those accounts being compromised.<br>[Conditional Access - Require MFA for administrators](../conditional-access/howto-conditional-access-policy-admin-mfa.md) |
-| MP.L2-3.8.7<br><br>**Practice statement:** Control the use of removable media on system components.<br><br>**Objectives:**<br>Determine if:<br>[a.] the use of removable media on system components is controlled. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to control the use of removable media on systems. Deploy and manage Removable Storage Access Control using Intune or Group Policy. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](/conditional-access/concept-conditional-access-grant#require-hybrid-azure-ad-joined-device.md)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br><br>**Removable storage access control**<br>[Deploy and manage Removable Storage Access Control using Intune](/microsoft-365/security/defender-endpoint/deploy-manage-removable-storage-intune?view=o365-worldwide&preserve-view=true)<br>[Deploy and manage Removable Storage Access Control using group policy](/microsoft-365/security/defender-endpoint/deploy-manage-removable-storage-group-policy?view=o365-worldwide&preserve-view=true) |
+| MP.L2-3.8.7<br><br>**Practice statement:** Control the use of removable media on system components.<br><br>**Objectives:**<br>Determine if:<br>[a.] the use of removable media on system components is controlled. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to control the use of removable media on systems. Deploy and manage Removable Storage Access Control using Intune or Group Policy. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md#require-hybrid-azure-ad-joined-device)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br><br>**Removable storage access control**<br>[Deploy and manage Removable Storage Access Control using Intune](/microsoft-365/security/defender-endpoint/deploy-manage-removable-storage-intune?view=o365-worldwide&preserve-view=true)<br>[Deploy and manage Removable Storage Access Control using group policy](/microsoft-365/security/defender-endpoint/deploy-manage-removable-storage-group-policy?view=o365-worldwide&preserve-view=true) |
## Personnel Security (PS)
The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations.
| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
-| SC.L2-3.13.3<br><br>**Practice statement:** Separate user functionality form system management functionality. <br><br>**Objectives:**<br>Determine if:<br>[a.] user functionality is identified;<br>[b.] system management functionality is identified; and<br>[c.] user functionality is separated from system management functionality. | Maintain separate user accounts in Azure Active Directory for everyday productivity use and administrative or system/privileged management. Privileged accounts should be cloud-only or managed accounts and not synchronized from on-premises to protect the cloud environment from on-premises compromise. System/privileged access should only be permitted from a security hardened privileged access workstation (PAW). Configure Conditional Access device filters to restrict access to administrative applications from PAWs that are enabled using Azure Virtual Desktops.<br>[Why are privileged access devices important](/security/compass/privileged-access-devices.md)<br>[Device Roles and Profiles](/security/compass/privileged-access-devices.md)<br>[Filter for devices as a condition in Conditional Access policy](../conditional-access/concept-condition-filters-for-devices.md)<br>[Azure Virtual Desktop](https://azure.microsoft.com/products/virtual-desktop/) |
-| SC.L2-3.13.4<br><br>**Practice statement:** Prevent unauthorized and unintended information transfer via shared system resources.<br><br>**Objectives:**<br>Determine if:<br>[a.] unauthorized and unintended information transfer via shared system resources is prevented. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to ensure devices are compliant with system hardening procedures. Include compliance with company policy regarding software patches to prevent attackers from exploiting flaws.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md) |
-| SC.L2-3.13.13<br><br>**Practice statement:** Control and monitor the use of mobile code.<br><br>**Objectives:**<br>Determine if:<br>[a.] use of mobile code is controlled; and<br>[b.] use of mobile code is monitored. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to disable the use of mobile code. Where use of mobile code is required monitor the use with endpoint security such as Microsoft Defender for Endpoint.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br><br>**Defender for Endpoint**<br>[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide&preserve-view=true) |
+| SC.L2-3.13.3<br><br>**Practice statement:** Separate user functionality from system management functionality.<br><br>**Objectives:**<br>Determine if:<br>[a.] user functionality is identified;<br>[b.] system management functionality is identified; and<br>[c.] user functionality is separated from system management functionality. | Maintain separate user accounts in Azure Active Directory for everyday productivity use and administrative or system/privileged management. Privileged accounts should be cloud-only or managed accounts and not synchronized from on-premises to protect the cloud environment from on-premises compromise. System/privileged access should only be permitted from a security-hardened privileged access workstation (PAW). Configure Conditional Access device filters to restrict access to administrative applications from PAWs that are enabled using Azure Virtual Desktop.<br>[Why are privileged access devices important](/security/compass/privileged-access-devices)<br>[Device Roles and Profiles](/security/compass/privileged-access-devices)<br>[Filter for devices as a condition in Conditional Access policy](../conditional-access/concept-condition-filters-for-devices.md)<br>[Azure Virtual Desktop](https://azure.microsoft.com/products/virtual-desktop/) |
+| SC.L2-3.13.4<br><br>**Practice statement:** Prevent unauthorized and unintended information transfer via shared system resources.<br><br>**Objectives:**<br>Determine if:<br>[a.] unauthorized and unintended information transfer via shared system resources is prevented. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to ensure devices are compliant with system hardening procedures. Include compliance with company policy regarding software patches to prevent attackers from exploiting flaws.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started) |
+| SC.L2-3.13.13<br><br>**Practice statement:** Control and monitor the use of mobile code.<br><br>**Objectives:**<br>Determine if:<br>[a.] use of mobile code is controlled; and<br>[b.] use of mobile code is monitored. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to disable the use of mobile code. Where use of mobile code is required, monitor its use with endpoint security such as Microsoft Defender for Endpoint.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br><br>**Defender for Endpoint**<br>[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide&preserve-view=true) |
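The device filter referenced for SC.L2-3.13.3 uses the Conditional Access filter-for-devices rule language over device properties. One common pattern, sketched below under stated assumptions (the extension attribute name and the `PAW` tag are hypothetical placeholders your organization would define), is a *block* policy whose filter excludes tagged PAWs, so only those workstations can reach administrative applications:

```python
def paw_device_filter(attribute: str = "extensionAttribute1",
                      value: str = "PAW") -> dict:
    """Devices condition for a Conditional Access *block* policy.

    mode="exclude" means the block applies to every device EXCEPT those
    matching the rule, so only devices tagged as PAWs retain access.
    The attribute name and tag value here are illustrative only.
    """
    rule = f'device.{attribute} -eq "{value}"'
    return {"devices": {"deviceFilter": {"mode": "exclude", "rule": rule}}}

# The returned fragment is merged into the "conditions" object of a
# conditionalAccessPolicy payload whose grantControls block denies access.
```

Keeping the tag in a device extension attribute (rather than, say, display name) makes the rule resilient to device renames.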
## System and Information Integrity (SI)
The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations.
| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
-| SI.L2-3.14.7<br><br>**Practice statement:**<br><br>**Objectives:** Identify unauthorized use of organizational systems.<br>Determine if:<br>[a.] authorized use of the system is defined; and<br>[b.] unauthorized use of the system is identified. | Consolidate telemetry: Azure AD logs to stream to SIEM, such as Azure Sentinel Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM), or group policy objects (GPO) to require Intrusion Detection/Protection (IDS/IPS) such as Microsoft Defender for Endpoint is installed and in use. Use telemetry provided by the IDS/IPS to identify unusual activities or conditions related to inbound and outbound communications traffic or unauthorized use.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br><br>**Defender for Endpoint**<br>[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide&preserve-view=true) |
+| SI.L2-3.14.7<br><br>**Practice statement:** Identify unauthorized use of organizational systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] authorized use of the system is defined; and<br>[b.] unauthorized use of the system is identified. | Consolidate telemetry: stream Azure AD logs to a SIEM, such as Azure Sentinel. Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM), or group policy objects (GPO) to require that intrusion detection/prevention (IDS/IPS) software, such as Microsoft Defender for Endpoint, is installed and in use. Use telemetry provided by the IDS/IPS to identify unusual activities or conditions related to inbound and outbound communications traffic or unauthorized use.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br><br>**Defender for Endpoint**<br>[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide&preserve-view=true) |
### Next steps
active-directory Configure Cmmc Level 2 Identification And Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-identification-and-authentication.md
The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations.
| IA.L2-3.5.3<br><br>**Practice statement:** Use multifactor authentication for local and network access to privileged accounts and for network access to non-privileged accounts.<br><br>**Objectives:**<br>Determine if:<br>[a.] privileged accounts are identified;<br>[b.] multifactor authentication is implemented for local access to privileged accounts;<br>[c.] multifactor authentication is implemented for network access to privileged accounts; and<br>[d.] multifactor authentication is implemented for network access to non-privileged accounts. | The following items are definitions for the terms used for this control area:<li>**Local Access** - Access to an organizational information system by a user (or process acting on behalf of a user) communicating through a direct connection without the use of a network.<li>**Network Access** - Access to an information system by a user (or a process acting on behalf of a user) communicating through a network (for example, local area network, wide area network, Internet).<li>**Privileged User** - A user that's authorized (and therefore, trusted) to perform security-relevant functions that ordinary users aren't authorized to perform.<br><br>Breaking down the previous requirement means:<li>All users require MFA for network/remote access.<li>Only privileged users require MFA for local access. If regular user accounts have administrative rights only on their computers, they're not a "privileged account" and don't require MFA for local access.<br><br>You're responsible for configuring Conditional Access to require multifactor authentication. Enable Azure AD Authentication methods that meet AAL2 and higher.<br>[Grant controls in Conditional Access policy - Azure Active Directory](../conditional-access/concept-conditional-access-grant.md)<br>[Achieve NIST authenticator assurance levels with Azure Active Directory](./nist-overview.md)<br>[Authentication methods and features - Azure Active Directory](../authentication/concept-authentication-methods.md) |
| IA.L2-3.5.4<br><br>**Practice statement:** Employ replay-resistant authentication mechanisms for network access to privileged and non-privileged accounts.<br><br>**Objectives:**<br>Determine if:<br>[a.] replay-resistant authentication mechanisms are implemented for network account access to privileged and non-privileged accounts. | All Azure AD Authentication methods at AAL2 and above are replay resistant.<br>[Achieve NIST authenticator assurance levels with Azure Active Directory](./nist-overview.md) |
| IA.L2-3.5.5<br><br>**Practice statement:** Prevent reuse of identifiers for a defined period.<br><br>**Objectives:**<br>Determine if:<br>[a.] a period within which identifiers can't be reused is defined; and<br>[b.] reuse of identifiers is prevented within the defined period. | All user, group, and device object globally unique identifiers (GUIDs) are guaranteed unique and non-reusable for the lifetime of the Azure AD tenant.<br>[user resource type - Microsoft Graph v1.0](/graph/api/resources/user?view=graph-rest-1.0&preserve-view=true)<br>[group resource type - Microsoft Graph v1.0](/graph/api/resources/group?view=graph-rest-1.0&preserve-view=true)<br>[device resource type - Microsoft Graph v1.0](/graph/api/resources/device?view=graph-rest-1.0&preserve-view=true) |
-| IA.L2-3.5.6<br><br>**Practice statement:** Disable identifiers after a defined period of inactivity.<br><br>**Objectives:**<br>Determine if:<br>[a.] a period of inactivity after which an identifier is disabled is defined; and<br>[b.] identifiers are disabled after the defined period of inactivity. | Implement account management automation with Microsoft Graph and Azure AD PowerShell. Use Microsoft Graph to monitor sign-in activity and Azure AD PowerShell to take action on accounts within the required time frame.<br><br>**Determine inactivity**<br>[Manage inactive user accounts in Azure AD](../reports-monitoring/howto-manage-inactive-user-accounts.md)<br>[Manage stale devices in Azure AD](../devices/manage-stale-devices.md)<br><br>**Remove or disable accounts**<br>[Working with users in Microsoft Graph](/graph/api/resources/users.md)<br>[Get a user](/graph/api/user-get?tabs=http)<br>[Update user](/graph/api/user-update?tabs=http)<br>[Delete a user](/graph/api/user-delete?tabs=http)<br><br>**Work with devices in Microsoft Graph**<br>[Get device](/graph/api/device-get?tabs=http)<br>[Update device](/graph/api/device-update?tabs=http)<br>[Delete device](/graph/api/device-delete?tabs=http)<br><br>**[Use Azure AD PowerShell](/powershell/module/azuread/)**<br>[Get-AzureADUser](/powershell/module/azuread/get-azureaduser.md)<br>[Set-AzureADUser](/powershell/module/azuread/set-azureaduser)<br>[Get-AzureADDevice](/powershell/module/azuread/get-azureaddevice.md)<br>[Set-AzureADDevice](/powershell/module/azuread/set-azureaddevice.md) |
+| IA.L2-3.5.6<br><br>**Practice statement:** Disable identifiers after a defined period of inactivity.<br><br>**Objectives:**<br>Determine if:<br>[a.] a period of inactivity after which an identifier is disabled is defined; and<br>[b.] identifiers are disabled after the defined period of inactivity. | Implement account management automation with Microsoft Graph and Azure AD PowerShell. Use Microsoft Graph to monitor sign-in activity and Azure AD PowerShell to take action on accounts within the required time frame.<br><br>**Determine inactivity**<br>[Manage inactive user accounts in Azure AD](../reports-monitoring/howto-manage-inactive-user-accounts.md)<br>[Manage stale devices in Azure AD](../devices/manage-stale-devices.md)<br><br>**Remove or disable accounts**<br>[Working with users in Microsoft Graph](/graph/api/resources/user)<br>[Get a user](/graph/api/user-get?tabs=http)<br>[Update user](/graph/api/user-update?tabs=http)<br>[Delete a user](/graph/api/user-delete?tabs=http)<br><br>**Work with devices in Microsoft Graph**<br>[Get device](/graph/api/device-get?tabs=http)<br>[Update device](/graph/api/device-update?tabs=http)<br>[Delete device](/graph/api/device-delete?tabs=http)<br><br>**[Use Azure AD PowerShell](/powershell/module/azuread/)**<br>[Get-AzureADUser](/powershell/module/azuread/get-azureaduser)<br>[Set-AzureADUser](/powershell/module/azuread/set-azureaduser)<br>[Get-AzureADDevice](/powershell/module/azuread/get-azureaddevice)<br>[Set-AzureADDevice](/powershell/module/azuread/set-azureaddevice) |
| IA.L2-3.5.7<br><br>**Practice statement:**<br><br>**Objectives:** Enforce a minimum password complexity and change of characters when new passwords are created.<br>Determine if:<br>[a.] password complexity requirements are defined;<br>[b.] password change of character requirements are defined;<br>[c.] minimum password complexity requirements as defined are enforced when new passwords are created; and<br>[d.] minimum password change of character requirements as defined are enforced when new passwords are created.<br><br>IA.L2-3.5.8<br><br>**Practice statement:** Prohibit password reuse for a specified number of generations.<br><br>**Objectives:**<br>Determine if:<br>[a.] the number of generations during which a password cannot be reused is specified; and<br>[b.] reuse of passwords is prohibited during the specified number of generations. | We **strongly encourage** passwordless strategies. This control is only applicable to password authenticators, so removing passwords as an available authenticator renders this control not applicable.<br><br>Per NIST SP 800-63 B Section 5.1.1: Maintain a list of commonly used, expected, or compromised passwords.<br><br>With Azure AD password protection, default global banned password lists are automatically applied to all users in an Azure AD tenant. To support your business and security needs, you can define entries in a custom banned password list. When users change or reset their passwords, these banned password lists are checked to enforce the use of strong passwords.<br>For customers that require strict password character change, password reuse and complexity requirements use hybrid accounts configured with Password-Hash-Sync. This action ensures the passwords synchronized to Azure AD inherit the restrictions configured in Active Directory password policies. 
Further protect on-premises passwords by configuring on-premises Azure AD Password Protection for Active Directory Domain Services.<br>[NIST Special Publication 800-63 B](https://pages.nist.gov/800-63-3/sp800-63b.html)<br>[NIST Special Publication 800-53 Revision 5 (IA-5, Control enhancement 1)](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r5.pdf)<br>[Eliminate bad passwords using Azure AD password protection](../authentication/concept-password-ban-bad.md)<br>[What is password hash synchronization with Azure AD?](../hybrid/whatis-phs.md) |
-| IA.L2-3.5.9<br><br>**Practice statement:** Allow temporary password use for system logons with an immediate change to a permanent password.<br><br>**Objectives:**<br>Determine if:<br>[a.] an immediate change to a permanent password is required when a temporary password is used for system sign-on. | An Azure AD user initial password is a temporary single use password that once successfully used is immediately required to be changed to a permanent password. Microsoft strongly encourages the adoption of passwordless authentication methods. Users can bootstrap Passwordless authentication methods using Temporary Access Pass (TAP). TAP is a time and use limited passcode issued by an admin that satisfies strong authentication requirements. Use of passwordless authentication along with the time and use limited TAP completely eliminates the use of passwords (and their reuse).<br>[Add or delete users - Azure Active Directory](../fundamentals/add-users-azure-active-directory.md)<br>[Configure a Temporary Access Pass in Azure AD to register Passwordless authentication methods](../authentication/howto-authentication-temporary-access-pass.md)<br>[Passwordless authentication](/security/business/solutions/passwordless-authentication?ef_id=369464fc2ba818d0bd6507de2cde3d58:G:s&OCID=AIDcmmdamuj0pc_SEM_369464fc2ba818d0bd6507de2cde3d58:G:s&msclkid=369464fc2ba818d0bd6507de2cde3d58) |
+| IA.L2-3.5.9<br><br>**Practice statement:** Allow temporary password use for system logons with an immediate change to a permanent password.<br><br>**Objectives:**<br>Determine if:<br>[a.] an immediate change to a permanent password is required when a temporary password is used for system sign-on. | An Azure AD user initial password is a temporary single use password that once successfully used is immediately required to be changed to a permanent password. Microsoft strongly encourages the adoption of passwordless authentication methods. Users can bootstrap Passwordless authentication methods using Temporary Access Pass (TAP). TAP is a time and use limited passcode issued by an admin that satisfies strong authentication requirements. Use of passwordless authentication along with the time and use limited TAP completely eliminates the use of passwords (and their reuse).<br>[Add or delete users - Azure Active Directory](../fundamentals/add-users-azure-active-directory.md)<br>[Configure a Temporary Access Pass in Azure AD to register Passwordless authentication methods](../authentication/howto-authentication-temporary-access-pass.md)<br>[Passwordless authentication](/azure/active-directory/authentication/concept-authentication-passwordless) |
| IA.L2-3.5.10<br><br>**Practice statement:** Store and transmit only cryptographically protected passwords.<br><br>**Objectives:**<br>Determine if:<br>[a.] passwords are cryptographically protected in storage; and<br>[b.] passwords are cryptographically protected in transit. | **Secret Encryption at Rest**:<br>In addition to disk level encryption, when at rest, secrets stored in the directory are encrypted using the Distributed Key Manager (DKM). The encryption keys are stored in the Azure AD core store and in turn are encrypted with a scale unit key. The key is stored in a container that is protected with directory ACLs, for highest privileged users and specific services. The symmetric key is typically rotated every six months. Access to the environment is further protected with operational controls and physical security.<br><br>**Encryption in Transit**:<br>To assure data security, directory data in Azure AD is signed and encrypted while in transit between data centers within a scale unit. The data is encrypted and decrypted by the Azure AD core store tier, which resides inside secured server hosting areas of the associated Microsoft data centers.<br><br>Customer-facing web services are secured with the Transport Layer Security (TLS) protocol.<br>For more information, [download](https://azure.microsoft.com/resources/azure-active-directory-data-security-considerations/) *Data Protection Considerations - Data Security*; see page 15 for more details.<br>[Demystifying Password Hash Sync (microsoft.com)](https://www.microsoft.com/security/blog/2019/05/30/demystifying-password-hash-sync/)<br>[Azure Active Directory Data Security Considerations](https://aka.ms/aaddatawhitepaper) |
| IA.L2-3.5.11<br><br>**Practice statement:** Obscure feedback of authentication information.<br><br>**Objectives:**<br>Determine if:<br>[a.] authentication information is obscured during the authentication process. | By default, Azure AD obscures all authenticator feedback. |
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
Previously updated : 02/07/2023 Last updated : 02/24/2023

# Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)
With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Net
> [!NOTE]
> Azure CNI Overlay is currently **_unavailable_** in the following regions:
-> - East US 2
> - South Central US
> - West US
-> - West US 2
## Overview of overlay networking
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md
By default, AKS clusters use [kubenet][kubenet] and create a virtual network and subnet. With *kubenet*, nodes get an IP address from a virtual network subnet. Network address translation (NAT) is then configured on the nodes, and pods receive an IP address "hidden" behind the node IP. This approach reduces the number of IP addresses that you need to reserve in your network space for pods to use.
-With [Azure Container Networking Interface (CNI)][cni-networking], every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be unique across your network space and must be planned in advance. Each node has a configuration parameter for the maximum number of pods that it supports. The equivalent number of IP addresses per node are then reserved up front for that node. This approach requires more planning, and often leads to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow.
+With [Azure Container Networking Interface (CNI)][cni-networking], every pod gets an IP address from the subnet and can be accessed directly. Systems in the same virtual network as the AKS cluster see the pod IP as the source address for any traffic from the pod. Systems outside the AKS cluster virtual network see the node IP as the source address for any traffic from the pod. These IP addresses must be unique across your network space and must be planned in advance. Each node has a configuration parameter for the maximum number of pods that it supports. The equivalent number of IP addresses per node are then reserved up front for that node. This approach requires more planning, and often leads to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow.
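The up-front reservation described above lends itself to a quick sizing calculation. The sketch below is illustrative only: the function name is made up, the 30-pods-per-node figure is an assumed default, and the five held-back addresses per subnet reflect general Azure subnet behavior rather than anything stated here.

```python
import math

def subnet_prefix_for_cni(node_count, max_pods_per_node):
    # Azure CNI reserves one IP for the node itself plus max_pods IPs
    # up front for each node; Azure holds back 5 addresses per subnet.
    ips_needed = node_count * (1 + max_pods_per_node) + 5
    # Smallest prefix length whose subnet can hold ips_needed addresses.
    return 32 - math.ceil(math.log2(ips_needed))

# 50 nodes at an assumed 30 pods per node need a /21 or larger subnet.
print(subnet_prefix_for_cni(50, 30))
```

Running the arithmetic like this before creating the cluster is one way to avoid the IP exhaustion the paragraph warns about.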
This article shows you how to use Azure CNI networking to create and use a virtual network subnet for an AKS cluster. For more information on network options and considerations, see [Network concepts for Kubernetes and AKS][aks-network-concepts].
When you create an AKS cluster, the following parameters are configurable for Az
**Azure Network Plugin**: When the Azure network plugin is used, the internal LoadBalancer service with `externalTrafficPolicy=Local` can't be accessed from VMs that have an IP in the cluster CIDR but aren't part of the AKS cluster.
-**Kubernetes service address range**: This parameter is the set of virtual IPs that Kubernetes assigns to internal [services][services] in your cluster. You can use any private address range that satisfies the following requirements:
+**Kubernetes service address range**: This parameter is the set of virtual IPs that Kubernetes assigns to internal [services][services] in your cluster. This range can't be updated after you create your cluster. You can use any private address range that satisfies the following requirements:
* Must not be within the virtual network IP address range of your cluster
* Must not overlap with any other virtual networks with which the cluster virtual network peers
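The overlap requirements above can be pre-checked with Python's standard `ipaddress` module. This is a hedged sketch: the function name and the CIDR values are illustrative, not values from the article.

```python
import ipaddress

def valid_service_range(service_cidr, vnet_cidrs):
    # A candidate Kubernetes service address range must not overlap the
    # cluster virtual network or any virtual network it peers with.
    svc = ipaddress.ip_network(service_cidr)
    return not any(svc.overlaps(ipaddress.ip_network(c)) for c in vnet_cidrs)

# Hypothetical vnet CIDRs for a cluster and one peered network.
print(valid_service_range("10.0.0.0/16", ["10.224.0.0/16", "10.225.0.0/16"]))  # True
print(valid_service_range("10.224.0.0/24", ["10.224.0.0/16"]))                 # False
```

Because the service range can't be changed after cluster creation, a check like this is cheap insurance.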
aks Configure Kubenet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet.md
This article shows you how to use *kubenet* networking to create and use a virtu
* The virtual network for the AKS cluster must allow outbound internet connectivity.
* Don't create more than one AKS cluster in the same subnet.
-* AKS clusters may not use `169.254.0.0/16`, `172.30.0.0/16`, `172.31.0.0/16`, or `192.0.2.0/24` for the Kubernetes service address range, pod address range or cluster virtual network address range.
+* AKS clusters may not use `169.254.0.0/16`, `172.30.0.0/16`, `172.31.0.0/16`, or `192.0.2.0/24` for the Kubernetes service address range, pod address range, or cluster virtual network address range. This range can't be updated after you create your cluster.
* The cluster identity used by the AKS cluster must have at least the [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor) role on the subnet within your virtual network. The Azure CLI performs the role assignment automatically. If you're using an ARM template or another client, you need to do the role assignment manually. You must also have the appropriate permissions, such as subscription owner, to create a cluster identity and assign it permissions. If you wish to define a [custom role](../role-based-access-control/custom-roles.md) instead of using the built-in Network Contributor role, the following permissions are required:
  * `Microsoft.Network/virtualNetworks/subnets/join/action`
  * `Microsoft.Network/virtualNetworks/subnets/read`
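The reserved ranges listed above can be validated the same way as any other CIDR constraint, again using the standard `ipaddress` module. The function name is hypothetical; the reserved list is taken directly from the bullet above.

```python
import ipaddress

# Ranges AKS disallows for the service, pod, or vnet address range.
RESERVED = ["169.254.0.0/16", "172.30.0.0/16", "172.31.0.0/16", "192.0.2.0/24"]

def allowed_cluster_range(cidr):
    # Reject any candidate range that overlaps a disallowed range.
    net = ipaddress.ip_network(cidr)
    return not any(net.overlaps(ipaddress.ip_network(r)) for r in RESERVED)

print(allowed_cluster_range("10.244.0.0/16"))   # True
print(allowed_cluster_range("172.30.10.0/24"))  # False
```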
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md
Starting with version 1.2.0, Azure CNI sets Transparent mode as default for sing
### Bridge mode
-As the name suggests, bridge mode Azure CNI, in a "just in time" fashion, will create a L2 bridge named "azure0". All the host side pod `veth` pair interfaces will be connected to this bridge. So Pod-Pod intra VM communication and the remaining traffic goes through this bridge. The bridge in question is a layer 2 virtual device that on its own cannot receive or transmit anything unless you bind one or more real devices to it. For this reason, eth0 of the Linux VM has to be converted into a subordinate to "azure0" bridge. This creates a complex network topology within the Linux VM and as a symptom CNI had to take care of other networking functions like DNS server update and so on.
+As the name suggests, bridge mode Azure CNI, in a "just in time" fashion, creates an L2 bridge named "azure0". All the host-side pod `veth` pair interfaces are connected to this bridge, so pod-to-pod intra-VM communication and the remaining traffic go through it. The bridge is a layer 2 virtual device that on its own can't receive or transmit anything unless you bind one or more real devices to it. For this reason, eth0 of the Linux VM has to be converted into a subordinate of the "azure0" bridge. This creates a complex network topology within the Linux VM, and as a result, CNI had to take care of other networking functions such as DNS server updates.
:::image type="content" source="media/faq/bridge-mode.png" alt-text="Bridge mode topology":::
aks Ingress Basic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-basic.md
Previously updated : 05/17/2022 Last updated : 02/23/2023

# Create an ingress controller in Azure Kubernetes Service (AKS)
An ingress controller is a piece of software that provides reverse proxy, config
This article shows you how to deploy the [NGINX ingress controller][nginx-ingress] in an Azure Kubernetes Service (AKS) cluster. Two applications are then run in the AKS cluster, each of which is accessible over the single IP address.

> [!NOTE]
-> There are two open source ingress controllers for Kubernetes based on Nginx: one is maintained by the Kubernetes community ([kubernetes/ingress-nginx][nginx-ingress]), and one is maintained by NGINX, Inc. ([nginxinc/kubernetes-ingress]). This article will be using the Kubernetes community ingress controller.
+> There are two open source ingress controllers for Kubernetes based on Nginx: one is maintained by the Kubernetes community ([kubernetes/ingress-nginx][nginx-ingress]), and one is maintained by NGINX, Inc. ([nginxinc/kubernetes-ingress]). This article will be using the Kubernetes community ingress controller.
## Before you begin
-This article uses [Helm 3][helm] to install the NGINX ingress controller on a [supported version of Kubernetes][aks-supported versions]. Make sure that you're using the latest release of Helm and have access to the *ingress-nginx* Helm repository. The steps outlined in this article may not be compatible with previous versions of the Helm chart, NGINX ingress controller, or Kubernetes.
-
-### [Azure CLI](#tab/azure-cli)
-
-This article also requires that you're running the Azure CLI version 2.0.64 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
-
-In addition, this article assumes you have an existing AKS cluster with an integrated Azure Container Registry (ACR). For more information on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr].
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-This article also requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
-
-In addition, this article assumes you have an existing AKS cluster with an integrated Azure Container Registry (ACR). For more information on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr-ps].
--
+* This article uses [Helm 3][helm] to install the NGINX ingress controller on a [supported version of Kubernetes][aks-supported versions]. Make sure that you're using the latest release of Helm and have access to the *ingress-nginx* Helm repository. The steps outlined in this article may not be compatible with previous versions of the Helm chart, NGINX ingress controller, or Kubernetes.
+* This article assumes you have an existing AKS cluster with an integrated Azure Container Registry (ACR). For more information on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr-ps].
+* The Kubernetes API health endpoint, `healthz`, was deprecated in Kubernetes v1.16. You can replace this endpoint with the `livez` and `readyz` endpoints instead. See [Kubernetes API endpoints for health](https://kubernetes.io/docs/reference/using-api/health-checks/#api-endpoints-for-health) to determine which endpoint to use for your scenario.
+* If you're using Azure CLI, this article requires that you're running the Azure CLI version 2.0.64 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+* If you're using Azure PowerShell, this article requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
## Basic configuration
-To create a basic NGINX ingress controller without customizing the defaults, you'll use Helm.
+To create a basic NGINX ingress controller without customizing the defaults, you'll use Helm. The following configuration uses the default configuration for simplicity. You can add parameters for customizing the deployment, like `--set controller.replicaCount=3`.
### [Azure CLI](#tab/azure-cli)
helm install ingress-nginx ingress-nginx/ingress-nginx `
-The above configuration uses the default configuration for simplicity. You can add parameters for customizing the deployment, for example, `--set controller.replicaCount=3`. The next section will show a highly customized example of the ingress controller.
## Customized configuration

As an alternative to the basic configuration presented in the above section, the next set of steps will show how to deploy a customized ingress controller. You'll have the option of using an internal static IP address or a dynamic public IP address.
az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$DEFAULTBACKEND_IM
To control image versions, you'll want to import them into your own Azure Container Registry. The [NGINX ingress controller Helm chart][ingress-nginx-helm-chart] relies on three container images. Use `Import-AzContainerRegistryImage` to import those images into your ACR. - ```azurepowershell-interactive $RegistryName = "<REGISTRY_NAME>" $ResourceGroup = (Get-AzContainerRegistry | Where-Object {$_.name -eq $RegistryName} ).ResourceGroupName
Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName
> [!NOTE] > In addition to importing container images into your ACR, you can also import Helm charts into your ACR. For more information, see [Push and pull Helm charts to an Azure Container Registry][acr-helm].
-### Use an internal IP address
-
-By default, an NGINX ingress controller is created with a dynamic public IP address assignment. A common configuration requirement is to use an internal, private network and IP address. This approach allows you to restrict access to your services to internal users, with no external access.
+### Create an ingress controller
-Create a file named `internal-ingress.yaml` using the following example manifest:
+To create the ingress controller, use Helm to install *ingress-nginx*. The ingress controller needs to be scheduled on a Linux node. Windows Server nodes shouldn't run the ingress controller. A node selector is specified using the `--set nodeSelector` parameter to tell the Kubernetes scheduler to run the NGINX ingress controller on a Linux-based node.
-```yaml
-controller:
- service:
- loadBalancerIP: 10.224.0.42
- annotations:
- service.beta.kubernetes.io/azure-load-balancer-internal: "true"
-```
+For added redundancy, two replicas of the NGINX ingress controllers are deployed with the `--set controller.replicaCount` parameter. To fully benefit from running replicas of the ingress controller, make sure there's more than one node in your AKS cluster.
-This example assigns *10.224.0.42* to the *loadBalancerIP* resource. Provide your own internal IP address for use with the ingress controller. Make sure that this IP address isn't already in use within your virtual network. Also, if you're using an existing virtual network and subnet, you must configure your AKS cluster with the correct permissions to manage the virtual network and subnet. For more information, see [Use kubenet networking with your own IP address ranges in Azure Kubernetes Service (AKS)][aks-configure-kubenet-networking] or [Configure Azure CNI networking in Azure Kubernetes Service (AKS)][aks-configure-advanced-networking].
+The following example creates a Kubernetes namespace for the ingress resources named *ingress-basic* and is intended to work within that namespace. Specify a namespace for your own environment as needed. If your AKS cluster isn't Kubernetes role-based access control enabled, add `--set rbac.create=false` to the Helm commands.
-When you deploy the *nginx-ingress* chart with Helm, add the `-f internal-ingress.yaml` parameter.
+> [!NOTE]
+> If you would like to enable [client source IP preservation][client-source-ip] for requests to containers in your cluster, add `--set controller.service.externalTrafficPolicy=Local` to the Helm install command. The client source IP is stored in the request header under *X-Forwarded-For*. When you're using an ingress controller with client source IP preservation enabled, TLS pass-through won't work.
### [Azure CLI](#tab/azure-cli)
helm install ingress-nginx ingress-nginx/ingress-nginx \
--set defaultBackend.image.registry=$ACR_URL \ --set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \ --set defaultBackend.image.tag=$DEFAULTBACKEND_TAG \
- --set defaultBackend.image.digest="" \
- -f internal-ingress.yaml
+ --set defaultBackend.image.digest=""
```

### [Azure PowerShell](#tab/azure-powershell)
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$AcrUrl = (Get-AzContainerRegistry -ResourceGroupName $ResourceGroup -Name $RegistryName).LoginServer # Use Helm to deploy an NGINX ingress controller
-helm install nginx-ingress ingress-nginx/ingress-nginx `
+helm install ingress-nginx ingress-nginx/ingress-nginx `
--namespace ingress-basic ` --create-namespace ` --set controller.replicaCount=2 `
helm install nginx-ingress ingress-nginx/ingress-nginx `
--set defaultBackend.image.registry=$AcrUrl ` --set defaultBackend.image.image=$DefaultBackendImage ` --set defaultBackend.image.tag=$DefaultBackendTag `
- --set defaultBackend.image.digest="" `
- -f internal-ingress.yaml
+ --set defaultBackend.image.digest=""
```
+### Create an ingress controller using an internal IP address
-### Create an ingress controller
-
-To create the ingress controller, use Helm to install *nginx-ingress*. The ingress controller needs to be scheduled on a Linux node. Windows Server nodes shouldn't run the ingress controller. A node selector is specified using the `--set nodeSelector` parameter to tell the Kubernetes scheduler to run the NGINX ingress controller on a Linux-based node.
-
-For added redundancy, two replicas of the NGINX ingress controllers are deployed with the `--set controller.replicaCount` parameter. To fully benefit from running replicas of the ingress controller, make sure there's more than one node in your AKS cluster.
-
-The following example creates a Kubernetes namespace for the ingress resources named *ingress-basic* and is intended to work within that namespace. Specify a namespace for your own environment as needed. If your AKS cluster isn't Kubernetes role-based access control enabled, add `--set rbac.create=false` to the Helm commands.
+By default, an NGINX ingress controller is created with a dynamic public IP address assignment. A common configuration requirement is to use an internal, private network and IP address. This approach allows you to restrict access to your services to internal users, with no external access.
-> [!NOTE]
-> If you would like to enable [client source IP preservation][client-source-ip] for requests to containers in your cluster, add `--set controller.service.externalTrafficPolicy=Local` to the Helm install command. The client source IP is stored in the request header under *X-Forwarded-For*. When you're using an ingress controller with client source IP preservation enabled, TLS pass-through won't work.
+Use the `--set controller.service.loadBalancerIP` and `--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"=true` parameters to assign an internal IP address to your ingress controller. Provide your own internal IP address for use with the ingress controller. Make sure that this IP address isn't already in use within your virtual network. If you're using an existing virtual network and subnet, you must configure your AKS cluster with the correct permissions to manage the virtual network and subnet. For more information, see [Use kubenet networking with your own IP address ranges in Azure Kubernetes Service (AKS)][aks-configure-kubenet-networking] or [Configure Azure CNI networking in Azure Kubernetes Service (AKS)][aks-configure-advanced-networking].
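Before handing an address to `--set controller.service.loadBalancerIP`, it's worth confirming it actually sits inside the cluster subnet and isn't allocated. The sketch below is illustrative: the function name and the in-use set are assumptions, and a real check would query Azure rather than a local set.

```python
import ipaddress

def usable_internal_lb_ip(ip, subnet_cidr, in_use):
    # The internal load balancer IP must fall inside the cluster subnet
    # and must not already be allocated to something else.
    addr = ipaddress.ip_address(ip)
    net = ipaddress.ip_network(subnet_cidr)
    return addr in net and ip not in in_use

# The article's example address against a hypothetical subnet.
print(usable_internal_lb_ip("10.224.0.42", "10.224.0.0/16", {"10.224.0.4"}))  # True
```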
### [Azure CLI](#tab/azure-cli)
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
ACR_URL=<REGISTRY_URL> # Use Helm to deploy an NGINX ingress controller
-helm install nginx-ingress ingress-nginx/ingress-nginx \
+helm install ingress-nginx ingress-nginx/ingress-nginx \
--version 4.1.3 \ --namespace ingress-basic \ --create-namespace \
helm install nginx-ingress ingress-nginx/ingress-nginx \
--set controller.image.tag=$CONTROLLER_TAG \ --set controller.image.digest="" \ --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
+ --set controller.service.loadBalancerIP=10.224.0.42 \
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"=true \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \ --set controller.admissionWebhooks.patch.image.registry=$ACR_URL \ --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \
helm install nginx-ingress ingress-nginx/ingress-nginx \
--set defaultBackend.image.registry=$ACR_URL \ --set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \ --set defaultBackend.image.tag=$DEFAULTBACKEND_TAG \
- --set defaultBackend.image.digest=""
- --set defaultBackend.image.digest=""
+ --set defaultBackend.image.digest=""
```

### [Azure PowerShell](#tab/azure-powershell)
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$AcrUrl = (Get-AzContainerRegistry -ResourceGroupName $ResourceGroup -Name $RegistryName).LoginServer # Use Helm to deploy an NGINX ingress controller
-helm install nginx-ingress ingress-nginx/ingress-nginx `
+helm install ingress-nginx ingress-nginx/ingress-nginx `
--namespace ingress-basic ` --create-namespace ` --set controller.replicaCount=2 `
helm install nginx-ingress ingress-nginx/ingress-nginx `
--set controller.image.tag=$ControllerTag ` --set controller.image.digest="" ` --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux `
+ --set controller.service.loadBalancerIP=10.224.0.42 `
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"=true `
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz ` --set controller.admissionWebhooks.patch.image.registry=$AcrUrl ` --set controller.admissionWebhooks.patch.image.image=$PatchImage `
helm install nginx-ingress ingress-nginx/ingress-nginx `
--set defaultBackend.image.registry=$AcrUrl ` --set defaultBackend.image.image=$DefaultBackendImage ` --set defaultBackend.image.tag=$DefaultBackendTag `
- --set defaultBackend.image.digest=""
+ --set defaultBackend.image.digest=""
```
kubectl get services --namespace ingress-basic -o wide -w ingress-nginx-controll
When the Kubernetes load balancer service is created for the NGINX ingress controller, an IP address is assigned under *EXTERNAL-IP*, as shown in the following example output:
-```
+```console
NAME                       TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
ingress-nginx-controller   LoadBalancer   10.0.65.205   EXTERNAL-IP   80:30957/TCP,443:32414/TCP   1m    app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
```
No ingress rules have been created yet, so the NGINX ingress controller's defaul
To see the ingress controller in action, run two demo applications in your AKS cluster. In this example, you use `kubectl apply` to deploy two instances of a simple *Hello world* application.
-Create an `aks-helloworld-one.yaml` file and copy in the following example YAML:
-
-```yml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: aks-helloworld-one
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: aks-helloworld-one
- template:
+1. Create an `aks-helloworld-one.yaml` file and copy in the following example YAML:
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
metadata:
- labels:
- app: aks-helloworld-one
+ name: aks-helloworld-one
spec:
- containers:
- - name: aks-helloworld-one
- image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
- ports:
- - containerPort: 80
- env:
- - name: TITLE
- value: "Welcome to Azure Kubernetes Service (AKS)"
-
-apiVersion: v1
-kind: Service
-metadata:
- name: aks-helloworld-one
-spec:
- type: ClusterIP
- ports:
- - port: 80
- selector:
- app: aks-helloworld-one
-```
+ replicas: 1
+ selector:
+ matchLabels:
+ app: aks-helloworld-one
+ template:
+ metadata:
+ labels:
+ app: aks-helloworld-one
+ spec:
+ containers:
+ - name: aks-helloworld-one
+ image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
+ ports:
+ - containerPort: 80
+ env:
+ - name: TITLE
+ value: "Welcome to Azure Kubernetes Service (AKS)"
+
+   ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: aks-helloworld-one
+ spec:
+ type: ClusterIP
+ ports:
+ - port: 80
+ selector:
+ app: aks-helloworld-one
+ ```
-Create an `aks-helloworld-two.yaml` file and copy in the following example YAML:
-
-```yml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: aks-helloworld-two
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: aks-helloworld-two
- template:
+2. Create an `aks-helloworld-two.yaml` file and copy in the following example YAML:
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
metadata:
- labels:
- app: aks-helloworld-two
+ name: aks-helloworld-two
spec:
- containers:
- - name: aks-helloworld-two
- image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
- ports:
- - containerPort: 80
- env:
- - name: TITLE
- value: "AKS Ingress Demo"
-
-apiVersion: v1
-kind: Service
-metadata:
- name: aks-helloworld-two
-spec:
- type: ClusterIP
- ports:
- - port: 80
- selector:
- app: aks-helloworld-two
-```
+ replicas: 1
+ selector:
+ matchLabels:
+ app: aks-helloworld-two
+ template:
+ metadata:
+ labels:
+ app: aks-helloworld-two
+ spec:
+ containers:
+ - name: aks-helloworld-two
+ image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
+ ports:
+ - containerPort: 80
+ env:
+ - name: TITLE
+ value: "AKS Ingress Demo"
+
+   ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: aks-helloworld-two
+ spec:
+ type: ClusterIP
+ ports:
+ - port: 80
+ selector:
+ app: aks-helloworld-two
+ ```
-Run the two demo applications using `kubectl apply`:
+3. Run the two demo applications using `kubectl apply`:
+
+ ```console
+ kubectl apply -f aks-helloworld-one.yaml --namespace ingress-basic
+ kubectl apply -f aks-helloworld-two.yaml --namespace ingress-basic
+ ```
-```console
-kubectl apply -f aks-helloworld-one.yaml --namespace ingress-basic
-kubectl apply -f aks-helloworld-two.yaml --namespace ingress-basic
-```
## Create an ingress route

Both applications are now running on your Kubernetes cluster. To route traffic to each application, create a Kubernetes ingress resource. The ingress resource configures the rules that route traffic to one of the two applications. In the following example, traffic to *EXTERNAL_IP/hello-world-one* is routed to the service named `aks-helloworld-one`. Traffic to *EXTERNAL_IP/hello-world-two* is routed to the `aks-helloworld-two` service. Traffic to *EXTERNAL_IP/static* is routed to the service named `aks-helloworld-one` for static assets.
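The example ingress manifest in this article pairs path regexes like `/hello-world-one(/|$)(.*)` with the `nginx.ingress.kubernetes.io/rewrite-target: /$2` annotation. That combination behaves like an ordinary regex substitution: only the second capture group survives, so the route prefix is stripped before the request reaches the backend. Python here is purely for illustration of the regex behavior.

```python
import re

PATTERN = r"/hello-world-one(/|$)(.*)"

# rewrite-target: /$2 keeps only the second capture group.
print(re.sub(PATTERN, r"/\2", "/hello-world-one/index.html"))  # /index.html
print(re.sub(PATTERN, r"/\2", "/hello-world-one"))             # /
```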
-Create a file named `hello-world-ingress.yaml` and copy in the following example YAML.
-
-```yaml
-apiVersion: networking.k8s.io/v1
-kind: Ingress
-metadata:
- name: hello-world-ingress
- annotations:
- nginx.ingress.kubernetes.io/ssl-redirect: "false"
- nginx.ingress.kubernetes.io/use-regex: "true"
- nginx.ingress.kubernetes.io/rewrite-target: /$2
-spec:
- ingressClassName: nginx
- rules:
- - http:
- paths:
- - path: /hello-world-one(/|$)(.*)
- pathType: Prefix
- backend:
- service:
- name: aks-helloworld-one
- port:
- number: 80
- - path: /hello-world-two(/|$)(.*)
- pathType: Prefix
- backend:
- service:
- name: aks-helloworld-two
- port:
- number: 80
- - path: /(.*)
- pathType: Prefix
- backend:
- service:
- name: aks-helloworld-one
- port:
- number: 80
-
-apiVersion: networking.k8s.io/v1
-kind: Ingress
-metadata:
- name: hello-world-ingress-static
- annotations:
- nginx.ingress.kubernetes.io/ssl-redirect: "false"
- nginx.ingress.kubernetes.io/rewrite-target: /static/$2
-spec:
- ingressClassName: nginx
- rules:
- - http:
- paths:
- - path: /static(/|$)(.*)
- pathType: Prefix
- backend:
- service:
- name: aks-helloworld-one
- port:
- number: 80
-```
-
-Create the ingress resource using the `kubectl apply` command.
+1. Create a file named `hello-world-ingress.yaml` and copy in the following example YAML:
-```console
-kubectl apply -f hello-world-ingress.yaml --namespace ingress-basic
-```
+ ```yaml
+ apiVersion: networking.k8s.io/v1
+ kind: Ingress
+ metadata:
+ name: hello-world-ingress
+ annotations:
+ nginx.ingress.kubernetes.io/ssl-redirect: "false"
+ nginx.ingress.kubernetes.io/use-regex: "true"
+ nginx.ingress.kubernetes.io/rewrite-target: /$2
+ spec:
+ ingressClassName: nginx
+ rules:
+ - http:
+ paths:
+ - path: /hello-world-one(/|$)(.*)
+ pathType: Prefix
+ backend:
+ service:
+ name: aks-helloworld-one
+ port:
+ number: 80
+ - path: /hello-world-two(/|$)(.*)
+ pathType: Prefix
+ backend:
+ service:
+ name: aks-helloworld-two
+ port:
+ number: 80
+ - path: /(.*)
+ pathType: Prefix
+ backend:
+ service:
+ name: aks-helloworld-one
+ port:
+ number: 80
+    ---
+ apiVersion: networking.k8s.io/v1
+ kind: Ingress
+ metadata:
+ name: hello-world-ingress-static
+ annotations:
+ nginx.ingress.kubernetes.io/ssl-redirect: "false"
+ nginx.ingress.kubernetes.io/rewrite-target: /static/$2
+ spec:
+ ingressClassName: nginx
+ rules:
+ - http:
+ paths:
+ - path: /static(/|$)(.*)
+ pathType: Prefix
+ backend:
+ service:
+ name: aks-helloworld-one
+ port:
+ number: 80
+ ```
+
+2. Create the ingress resource using the `kubectl apply` command.
+
+ ```console
+ kubectl apply -f hello-world-ingress.yaml --namespace ingress-basic
+ ```
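The two ingress resources above rely on NGINX's regex path matching: with `nginx.ingress.kubernetes.io/rewrite-target: /$2`, only the second capture group is forwarded to the backend service. A quick sketch of that rewrite behavior, illustrative only, using Python's `re` module with the pattern copied from the first ingress rule:

```python
import re

# Path pattern from the hello-world-ingress rule; rewrite-target /$2 means
# the backend receives "/" plus the second capture group.
pattern = re.compile(r"/hello-world-one(/|$)(.*)")

def rewrite(path):
    m = pattern.match(path)
    return "/" + m.group(2) if m else path

print(rewrite("/hello-world-one/index.html"))  # forwarded as /index.html
print(rewrite("/hello-world-one"))             # forwarded as /
```

The `/static(/|$)(.*)` rule in the second ingress works the same way, except it rewrites to `/static/$2` so static assets are served from the first application's `/static` path.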
## Test the ingress controller
Now add the */hello-world-two* path to the IP address, such as *EXTERNAL_IP/hell
### Test an internal IP address
-To test the routes for the ingress controller using an internal IP, create a test pod and attach a terminal session to it:
+1. Create a test pod and attach a terminal session to it.
-```console
-kubectl run -it --rm aks-ingress-test --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 --namespace ingress-basic
-```
+ ```console
+ kubectl run -it --rm aks-ingress-test --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 --namespace ingress-basic
+ ```
-Install `curl` in the pod using `apt-get`:
+2. Install `curl` in the pod using `apt-get`.
-```console
-apt-get update && apt-get install -y curl
-```
+ ```console
+ apt-get update && apt-get install -y curl
+ ```
-Now access the address of your Kubernetes ingress controller using `curl`, such as *http://10.224.0.42*. Provide your own internal IP address specified when you deployed the ingress controller.
+3. Access the address of your Kubernetes ingress controller using `curl`, such as *http://10.224.0.42*. Provide your own internal IP address specified when you deployed the ingress controller.
-```console
-curl -L http://10.224.0.42
-```
+ ```console
+ curl -L http://10.224.0.42
+ ```
-No path was provided with the address, so the ingress controller defaults to the */* route. The first demo application is returned, as shown in the following condensed example output:
+ No path was provided with the address, so the ingress controller defaults to the */* route. The first demo application is returned, as shown in the following condensed example output:
-```
-$ curl -L http://10.224.0.42
-
-<!DOCTYPE html>
-<html xmlns="http://www.w3.org/1999/xhtml">
-<head>
- <link rel="stylesheet" type="text/css" href="/static/default.css">
- <title>Welcome to Azure Kubernetes Service (AKS)</title>
-[...]
-```
+ ```console
+ <!DOCTYPE html>
+ <html xmlns="http://www.w3.org/1999/xhtml">
+ <head>
+ <link rel="stylesheet" type="text/css" href="/static/default.css">
+ <title>Welcome to Azure Kubernetes Service (AKS)</title>
+ [...]
+ ```
-Now add */hello-world-two* path to the address, such as *http://10.224.0.42/hello-world-two*. The second demo application with the custom title is returned, as shown in the following condensed example output:
+4. Add the */hello-world-two* path to the address, such as *http://10.224.0.42/hello-world-two*.
-```
-$ curl -L -k http://10.224.0.42/hello-world-two
-
-<!DOCTYPE html>
-<html xmlns="http://www.w3.org/1999/xhtml">
-<head>
- <link rel="stylesheet" type="text/css" href="/static/default.css">
- <title>AKS Ingress Demo</title>
-[...]
-```
+ ```console
+ curl -L -k http://10.224.0.42/hello-world-two
+ ```
+
+ The second demo application with the custom title is returned, as shown in the following condensed example output:
+
+ ```console
+ <!DOCTYPE html>
+ <html xmlns="http://www.w3.org/1999/xhtml">
+ <head>
+ <link rel="stylesheet" type="text/css" href="/static/default.css">
+ <title>AKS Ingress Demo</title>
+ [...]
+ ```
kubectl delete namespace ingress-basic
### Delete resources individually
-Alternatively, a more granular approach is to delete the individual resources created. List the Helm releases with the `helm list` command. Look for charts named *nginx-ingress* and *aks-helloworld*, as shown in the following example output:
+Alternatively, a more granular approach is to delete the individual resources created.
-```
-$ helm list --namespace ingress-basic
+1. List the Helm releases with the `helm list` command.
-NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
-nginx-ingress ingress-basic 1 2020-01-06 19:55:46.358275 -0600 CST deployed nginx-ingress-1.27.1 0.26.1
-```
+ ```console
+ helm list --namespace ingress-basic
+ ```
-Uninstall the releases with the `helm uninstall` command. The following example uninstalls the NGINX ingress deployment.
+ Look for charts named *ingress-nginx* and *aks-helloworld*, as shown in the following example output:
-```
-$ helm uninstall ingress-nginx --namespace ingress-basic
+ ```console
+ NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
+ ingress-nginx ingress-basic 1 2020-01-06 19:55:46.358275 -0600 CST deployed nginx-ingress-1.27.1 0.26.1
+ ```
-release "ingress-nginx" uninstalled
-```
+2. Uninstall the releases with the `helm uninstall` command.
-Next, remove the two sample applications:
+ ```console
+ helm uninstall ingress-nginx --namespace ingress-basic
+ ```
-```console
-kubectl delete -f aks-helloworld-one.yaml --namespace ingress-basic
-kubectl delete -f aks-helloworld-two.yaml --namespace ingress-basic
-```
+3. Remove the two sample applications.
-Remove the ingress route that directed traffic to the sample apps:
+ ```console
+ kubectl delete -f aks-helloworld-one.yaml --namespace ingress-basic
+ kubectl delete -f aks-helloworld-two.yaml --namespace ingress-basic
+ ```
-```console
-kubectl delete -f hello-world-ingress.yaml
-```
+4. Remove the ingress route that directed traffic to the sample apps.
-Finally, you can delete the itself namespace. Use the `kubectl delete` command and specify your namespace name:
+ ```console
+ kubectl delete -f hello-world-ingress.yaml
+ ```
-```console
-kubectl delete namespace ingress-basic
-```
+5. Delete the namespace using the `kubectl delete` command and specifying your namespace name.
+
+ ```console
+ kubectl delete namespace ingress-basic
+ ```
## Next steps
aks Internal Lb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/internal-lb.md
internal-app LoadBalancer 10.0.248.59 10.240.0.7 80:30555/TCP 2m
When you specify an IP address for the load balancer, the specified IP address must reside in the same subnet as the AKS cluster, but it can't already be assigned to a resource. For example, you shouldn't use an IP address in the range designated for the Kubernetes subnet within the AKS cluster.
+You can use the [`az network vnet subnet list`](https://learn.microsoft.com/cli/azure/network/vnet/subnet?view=azure-cli-latest#az-network-vnet-subnet-list) Azure CLI command or the [`Get-AzVirtualNetworkSubnetConfig`](https://learn.microsoft.com/powershell/module/az.network/get-azvirtualnetworksubnetconfig?view=azps-9.4.0) PowerShell cmdlet to get the subnets in your virtual network.
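As a quick sanity check before you pick an address, you can verify that a candidate IP falls inside the subnet's address prefix. The following sketch uses Python's `ipaddress` module with hypothetical values; substitute the `addressPrefix` returned by the commands above and the IP you intend to use:

```python
import ipaddress

# Hypothetical values -- replace with your subnet's addressPrefix and the
# IP you plan to assign to the internal load balancer.
subnet = ipaddress.ip_network("10.240.0.0/16")
candidate = ipaddress.ip_address("10.240.0.25")

# The IP must be inside the subnet (and not already assigned to a resource).
print(candidate in subnet)  # True
```

Remember that being inside the prefix is necessary but not sufficient: the address must also not already be assigned to another resource or reserved by Azure.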
+ For more information on subnets, see [Add a node pool with a unique subnet][unique-subnet]. If you want to use a specific IP address with the load balancer, there are two ways:
aks Quickstart Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-helm.md
description: A Helm chart for Kubernetes
dependencies:
- name: redis
-  version: 14.7.1
+  version: 17.3.17
  repository: https://charts.bitnami.com/bitnami
...
aks Trusted Access Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/trusted-access-feature.md
+
+ Title: Enable Azure resources to access Azure Kubernetes Service (AKS) clusters using Trusted Access
+description: Learn how to use the Trusted Access feature to enable Azure resources to access Azure Kubernetes Service (AKS) clusters.
+Last updated: 02/23/2023
+# Enable Azure resources to access Azure Kubernetes Service (AKS) clusters using Trusted Access (Preview)
+
+Many Azure services that integrate with Azure Kubernetes Service (AKS) need access to the Kubernetes API server. In order to avoid granting these services admin access or having to keep your AKS clusters public for network access, you can use the AKS Trusted Access feature.
+
+This feature allows services to connect securely to AKS and Kubernetes via the Azure backend without requiring a private endpoint. Instead of relying on identities with [Microsoft Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) permissions, this feature can use your system-assigned managed identity to authenticate with the managed services and applications you want to use on top of AKS.
+
+Trusted Access addresses the following scenarios:
+
+* Azure services may be unable to access the Kubernetes API server when the authorized IP range is enabled, or in private clusters unless you implement a private endpoint access model.
+
+* Providing an Azure service admin access to the Kubernetes API doesn't follow least-privilege best practices and could lead to privilege escalation or credential leakage.
+
+ * For example, you may have to implement high-privileged service-to-service permissions, which aren't ideal during audit reviews.
+
+This article shows you how to enable secure access from your Azure services to your Kubernetes API server in AKS using Trusted Access.
++
+## Trusted Access feature overview
+
+Trusted Access enables you to give explicit consent for the system-assigned managed identity of allowed resources to access your AKS clusters using an Azure resource *RoleBinding*. Your Azure resources access AKS clusters through the AKS regional gateway, authenticating with a system-assigned managed identity that's granted the appropriate Kubernetes permissions through an Azure resource *Role*. The Trusted Access feature allows you to access AKS clusters with different configurations, including but not limited to [private clusters](private-clusters.md), [clusters with local accounts disabled](managed-aad.md#disable-local-accounts), [Azure AD clusters](azure-ad-integration-cli.md), and [authorized IP range clusters](api-server-authorized-ip-ranges.md).
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* Resource types that support [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md).
+* Pre-defined Roles with appropriate [AKS permissions](concepts-identity.md).
+ * To learn about what Roles to use in various scenarios, see [AzureML access to AKS clusters with special configurations](https://github.com/Azure/AML-Kubernetes/blob/master/docs/azureml-aks-ta-support.md).
+* If you're using Azure CLI, the **aks-preview** extension version **0.5.74 or later** is required.
+
+First, install the aks-preview extension by running the following command:
+
+```azurecli
+az extension add --name aks-preview
+```
+
+Run the following command to update to the latest version of the extension released:
+
+```azurecli
+az extension update --name aks-preview
+```
+
+Then register the `TrustedAccessPreview` feature flag by using the [`az feature register`][az-feature-register] command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "TrustedAccessPreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [`az feature show`][az-feature-show] command:
+
+```azurecli-interactive
+az feature show --namespace "Microsoft.ContainerService" --name "TrustedAccessPreview"
+```
+
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [`az provider register`][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+## Create an AKS cluster
+
+[Create an AKS cluster](tutorial-kubernetes-deploy-cluster.md) in the same subscription as the Azure resource that you want to give access to the cluster.
+
+## Select the required Trusted Access Roles
+
+The Roles you select depend on the different Azure services. These services help create Roles and RoleBindings, which build the connection from the Azure service to AKS.
+
+## Create a Trusted Access RoleBinding
+
+After confirming which Role to use, use the Azure CLI to create a Trusted Access RoleBinding in an AKS cluster. The RoleBinding associates your selected Role with the Azure service.
+
+```azurecli
+# Create a Trusted Access RoleBinding in an AKS cluster
+
+az aks trustedaccess rolebinding create --resource-group <AKS resource group> --cluster-name <AKS cluster name> -n <rolebinding name> -s <connected service resource ID> --roles <roleName1, roleName2>
+
+# Sample command
+
+az aks trustedaccess rolebinding create \
+-g myResourceGroup \
+--cluster-name myAKSCluster -n test-binding \
+-s /subscriptions/000-000-000-000-000/resourceGroups/myResourceGroup/providers/Microsoft.MachineLearningServices/workspaces/MyMachineLearning \
+--roles Microsoft.Compute/virtualMachineScaleSets/test-node-reader,Microsoft.Compute/virtualMachineScaleSets/test-admin
+```
+++
+## Update an existing Trusted Access RoleBinding with new roles
+
+For an existing RoleBinding with associated source service, you can update the RoleBinding with new Roles.
+
+> [!NOTE]
+> The new RoleBinding can take up to 5 minutes to take effect because the add-on manager updates clusters every 5 minutes. Until the new RoleBinding takes effect, the old RoleBinding continues to work.
+>
+> You can use `az aks trustedaccess rolebinding list --resource-group <AKS resource group> --cluster-name <AKS cluster name>` to check the current RoleBindings.
+
+```azurecli
+# Update RoleBinding command
+
+az aks trustedaccess rolebinding update --resource-group <AKS resource group> --cluster-name <AKS cluster name> -n <existing rolebinding name> --roles <newRoleName1, newRoleName2>
+
+# Update RoleBinding command with sample resource group, cluster, and Roles
+
+az aks trustedaccess rolebinding update \
+--resource-group myResourceGroup \
+--cluster-name myAKSCluster -n test-binding \
+--roles Microsoft.Compute/virtualMachineScaleSets/test-node-reader,Microsoft.Compute/virtualMachineScaleSets/test-admin
+```
+++
+## Show the Trusted Access RoleBinding
+
+Use the Azure CLI to show a specific Trusted Access RoleBinding.
+
+```azurecli
+az aks trustedaccess rolebinding show --name <rolebinding name> --resource-group <AKS resource group> --cluster-name <AKS cluster name>
+```
+++
+## List all the Trusted Access RoleBindings for a cluster
+
+Use the Azure CLI to list all the Trusted Access RoleBindings for a cluster.
+
+```azurecli
+az aks trustedaccess rolebinding list --resource-group <AKS resource group> --cluster-name <AKS cluster name>
+```
+
+## Delete the Trusted Access RoleBinding for a cluster
+
+> [!WARNING]
+> Deleting an existing Trusted Access RoleBinding disconnects the AKS cluster from the Azure service.
+
+Use the Azure CLI to delete an existing Trusted Access RoleBinding.
+
+```azurecli
+az aks trustedaccess rolebinding delete --name <rolebinding name> --resource-group <AKS resource group> --cluster-name <AKS cluster name>
+```
+
+## Next steps
+
+For more information on AKS, see:
+
+* [Deploy and manage cluster extensions for AKS](cluster-extensions.md)
+* [Deploy AzureML extension on AKS or Arc Kubernetes cluster](../machine-learning/how-to-deploy-kubernetes-extension.md)
+
+<!-- LINKS -->
+
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-feature-show]: /cli/azure/feature#az-feature-show
+[az-provider-register]: /cli/azure/provider#az-provider-register
aks Use Pod Sandboxing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-sandboxing.md
+
+ Title: Pod Sandboxing (preview) with Azure Kubernetes Service (AKS)
+description: Learn about and deploy Pod Sandboxing (preview), also referred to as Kernel Isolation, on an Azure Kubernetes Service (AKS) cluster.
+Last updated: 02/23/2023
+# Pod Sandboxing (preview) with Azure Kubernetes Service (AKS)
+
+To help secure and protect your container workloads from untrusted or potentially malicious code, AKS now includes a mechanism called Pod Sandboxing (preview). Pod Sandboxing provides an isolation boundary between the container application and the shared kernel and compute resources of the container host, such as CPU, memory, and networking. Pod Sandboxing complements other security measures or data protection controls with your overall architecture to help you meet regulatory, industry, or governance compliance requirements for securing sensitive information.
+
+This article helps you understand this new feature, and how to implement it.
+
+## Prerequisites
+
+- The Azure CLI version 2.44.1 or later. Run `az --version` to find the version, and run `az upgrade` to upgrade the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+
+- The `aks-preview` Azure CLI extension version 0.5.123 or later to select the [Mariner operating system][mariner-cluster-config] generation 2 SKU.
+
+- Register the `KataVMIsolationPreview` feature in your Azure subscription.
+
+- AKS supports Pod Sandboxing (preview) on version 1.24.0 and higher.
+
+- To manage a Kubernetes cluster, use the Kubernetes command-line client [kubectl][kubectl]. Azure Cloud Shell comes with `kubectl`. You can install kubectl locally using the [az aks install-cli][az-aks-install-cmd] command.
+
+### Install the aks-preview Azure CLI extension
++
+To install the aks-preview extension, run the following command:
+
+```azurecli
+az extension add --name aks-preview
+```
+
+Run the following command to update to the latest version of the extension released:
+
+```azurecli
+az extension update --name aks-preview
+```
+
+### Register the KataVMIsolationPreview feature flag
+
+Register the `KataVMIsolationPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "KataVMIsolationPreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
+
+```azurecli-interactive
+az feature show --namespace "Microsoft.ContainerService" --name "KataVMIsolationPreview"
+```
+
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace "Microsoft.ContainerService"
+```
+
+## Limitations
+
+The following constraints apply to this preview of Pod Sandboxing:
+
+* Kata containers may not reach the IOPS performance limits that traditional containers can reach on Azure Files and high-performance local SSDs.
+
+* [Microsoft Defender for Containers][defender-for-containers] doesn't support assessing Kata runtime pods.
+
+* [Container Insights][container-insights] doesn't support monitoring of Kata runtime pods in the preview release.
+
+* [Kata][kata-network-limitations] host-network isn't supported.
+
+* AKS does not support [Container Storage Interface drivers][csi-storage-driver] and [Secrets Store CSI driver][csi-secret-store driver] in this preview release.
+
+## How it works
+
+To achieve this functionality on AKS, [Kata Containers][kata-containers-overview] running on the Mariner AKS Container Host (MACH) stack deliver hardware-enforced isolation. Pod Sandboxing extends the benefits of hardware isolation, such as a separate kernel for each Kata pod. Hardware isolation allocates resources for each pod and doesn't share them with other Kata Containers or namespace containers running on the same host.
+
+The solution architecture is based on the following components:
+
+* [Mariner][mariner-overview] AKS Container Host
+* Microsoft Hyper-V Hypervisor
+* Azure-tuned Dom0 Linux Kernel
+* Open-source [Cloud-Hypervisor][cloud-hypervisor] Virtual Machine Monitor (VMM)
+* Integration with [Kata Container][kata-container] framework
+
+Deploying Pod Sandboxing using Kata Containers is similar to the standard containerd workflow to deploy containers. The deployment includes kata-runtime options that you can define in the pod template.
+
+To use this feature with a pod, the only difference is to add **runtimeClassName** *kata-mshv-vm-isolation* to the pod spec.
+
+When a pod uses the *kata-mshv-vm-isolation* runtimeClass, a VM is created to serve as the pod sandbox and host the containers. If the [Container resource manifest][container-resource-manifest] (`containers[].resources.limits`) doesn't specify CPU and memory limits, the VM defaults to one CPU core and 2 GB of memory. When you specify limits, the VM is allocated one CPU core plus the `containers[].resources.limits.cpu` value, and 2 GB of memory plus the `containers[].resources.limits.memory` value. Containers can only use CPU and memory up to their limits. The `containers[].resources.requests` values are ignored in this preview while we work to reduce the CPU and memory overhead.
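The preview sizing rule above reduces to simple arithmetic. The following sketch (the function name is illustrative, not an AKS API) shows the sandbox VM allocation for a few limit values:

```python
def sandbox_vm_size(cpu_limit=0, mem_limit_gb=0):
    """Sketch of the sandbox VM sizing rule: one core plus the CPU limit,
    2 GB plus the memory limit; defaults apply when no limits are set."""
    return 1 + cpu_limit, 2 + mem_limit_gb

print(sandbox_vm_size())      # no limits set -> (1, 2): 1 vCPU, 2 GB
print(sandbox_vm_size(2, 4))  # limits of 2 CPU / 4 GB -> (3, 6)
```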
+
+## Deploy new cluster
+
+Perform the following steps to deploy an AKS Mariner cluster using the Azure CLI.
+
+1. Create an AKS cluster using the [az aks create][az-aks-create] command and specifying the following parameters:
+
+    * **--workload-runtime**: Specify *KataMshvVmIsolation* to enable the Pod Sandboxing feature on the node pool. With this parameter, the other parameters must satisfy the following requirements. Otherwise, the command fails and reports an issue with the corresponding parameter(s).
+ * **--os-sku**: *mariner*. Only the Mariner os-sku supports this feature in this preview release.
+ * **--node-vm-size**: Any Azure VM size that is a generation 2 VM and supports nested virtualization works. For example, [Dsv3][dv3-series] VMs.
+
+ The following example creates a cluster named *myAKSCluster* with one node in the *myResourceGroup*:
+
+ ```azurecli
+    az aks create --name myAKSCluster --resource-group myResourceGroup --os-sku mariner --workload-runtime KataMshvVmIsolation --node-vm-size Standard_D4s_v3 --node-count 1
+    ```
+
+2. Run the following command to get access credentials for the Kubernetes cluster. Use the [az aks get-credentials][aks-get-credentials] command and replace the values for the cluster name and the resource group name.
+
+ ```azurecli
+ az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
+ ```
+
+3. List all Pods in all namespaces using the [kubectl get pods][kubectl-get-pods] command.
+
+ ```bash
+ kubectl get pods --all-namespaces
+ ```
+
+## Deploy to an existing cluster
+
+To use this feature with an existing AKS cluster, the following requirements must be met:
+
+* Follow the steps to [register the KataVMIsolationPreview][register-the-katavmisolationpreview-feature-flag] feature flag.
+* Verify the cluster is running Kubernetes version 1.24.0 and higher.
+
+Use the following steps to enable Pod Sandboxing (preview) by creating a node pool to host it.
+
+1. Add a node pool to your AKS cluster using the [az aks nodepool add][az-aks-nodepool-add] command. Specify the following parameters:
+
+ * **--resource-group**: Enter the name of an existing resource group to create the AKS cluster in.
+ * **--cluster-name**: Enter a unique name for the AKS cluster, such as *myAKSCluster*.
+ * **--name**: Enter a unique name for your clusters node pool, such as *nodepool2*.
+    * **--workload-runtime**: Specify *KataMshvVmIsolation* to enable the Pod Sandboxing feature on the node pool. Along with the `--workload-runtime` parameter, the other parameters must satisfy the following requirements. Otherwise, the command fails and reports an issue with the corresponding parameter(s).
+ * **--os-sku**: *mariner*. Only the Mariner os-sku supports this feature in the preview release.
+ * **--node-vm-size**: Any Azure VM size that is a generation 2 VM and supports nested virtualization works. For example, [Dsv3][dv3-series] VMs.
+
+ The following example adds a node pool to *myAKSCluster* with one node in *nodepool2* in the *myResourceGroup*:
+
+ ```azurecli
+ az aks nodepool add --cluster-name myAKSCluster --resource-group myResourceGroup --name nodepool2 --os-sku mariner --workload-runtime KataMshvVmIsolation --node-vm-size Standard_D4s_v3
+ ```
+
+2. Run the [az aks update][az-aks-update] command to enable pod sandboxing (preview) on the cluster.
+
+    ```azurecli
+ az aks update --name myAKSCluster --resource-group myResourceGroup
+ ```
+
+## Deploy a trusted application
+
+To demonstrate the deployment of a trusted application on the shared kernel in the AKS cluster, perform the following steps.
+
+1. Create a file named *trusted-app.yaml* to describe a trusted pod, and then paste the following manifest.
+
+ ```yml
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: trusted
+ spec:
+ containers:
+ - name: trusted
+ image: mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
+ command: ["/bin/sh", "-ec", "while :; do echo '.'; sleep 5 ; done"]
+ ```
+
+2. Deploy the Kubernetes pod by running the [kubectl apply][kubectl-apply] command and specify your *trusted-app.yaml* file:
+
+ ```bash
+ kubectl apply -f trusted-app.yaml
+ ```
+
+ The output of the command resembles the following example:
+
+ ```output
+ pod/trusted created
+ ```
+
+## Deploy an untrusted application
+
+To demonstrate the deployment of an untrusted application into the pod sandbox on the AKS cluster, perform the following steps.
+
+1. Create a file named *untrusted-app.yaml* to describe an untrusted pod, and then paste the following manifest.
+
+ ```yml
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: untrusted
+ spec:
+ runtimeClassName: kata-mshv-vm-isolation
+ containers:
+ - name: untrusted
+ image: mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
+ command: ["/bin/sh", "-ec", "while :; do echo '.'; sleep 5 ; done"]
+ ```
+
+    The value for the **runtimeClassName** spec is `kata-mshv-vm-isolation`.
+
+2. Deploy the Kubernetes pod by running the [kubectl apply][kubectl-apply] command and specify your *untrusted-app.yaml* file:
+
+ ```bash
+ kubectl apply -f untrusted-app.yaml
+ ```
+
+ The output of the command resembles the following example:
+
+ ```output
+ pod/untrusted created
+ ```
+
+## Verify Kernel Isolation configuration
+
+1. To access a container inside the AKS cluster, start a shell session by running the [kubectl exec][kubectl-exec] command. In this example, you're accessing the container inside the *untrusted* pod.
+
+ ```bash
+ kubectl exec -it untrusted -- /bin/bash
+ ```
+
+    Kubectl connects to your cluster, runs `/bin/bash` inside the first container within the *untrusted* pod, and forwards your terminal's input and output streams to the container's process. You can also start a shell session to the container hosting the *trusted* pod.
+
+2. After starting a shell session to the container of the *untrusted* pod, you can run commands to verify that the *untrusted* container is running in a pod sandbox. You'll notice that it has a different kernel version compared to the *trusted* container outside the sandbox.
+
+ To see the kernel version run the following command:
+
+ ```bash
+ uname -r
+ ```
+
+ The following example resembles output from the pod sandbox kernel:
+
+ ```output
+ root@untrusted:/# uname -r
+ 5.15.48.1-8.cm2
+ ```
+
+3. Start a shell session to the container of the *trusted* pod to verify the kernel output:
+
+ ```bash
+ kubectl exec -it trusted -- /bin/bash
+ ```
+
+ To see the kernel version run the following command:
+
+ ```bash
+ uname -r
+ ```
+
+ The following example resembles output from the VM that is running the *trusted* pod, which is a different kernel than the *untrusted* pod running within the pod sandbox:
+
+ ```output
+    5.15.80.mshv2-hvl1.m2
+    ```
+
+## Cleanup
+
+When you're finished evaluating this feature, clean up unnecessary resources to avoid Azure charges. If you deployed a new cluster as part of your evaluation or testing, you can delete the cluster using the [az aks delete][az-aks-delete] command.
+
+```azurecli
+az aks delete --resource-group myResourceGroup --name myAKSCluster
+```
+
+If you enabled Pod Sandboxing (preview) on an existing cluster, you can remove the pod(s) using the [kubectl delete pod][kubectl-delete-pod] command.
+
+```bash
+kubectl delete pod pod-name
+```
+
+## Next steps
+
+* Learn more about [Azure Dedicated hosts][azure-dedicated-hosts] for nodes with your AKS cluster to use hardware isolation and control over Azure platform maintenance events.
+
+<!-- EXTERNAL LINKS -->
+[kata-containers-overview]: https://katacontainers.io/
+[kubectl]: https://kubernetes.io/docs/user-guide/kubectl/
+[azurerm-mariner]: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster_node_pool#os_sku
+[kubectl-get-pods]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
+[kubectl-exec]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec
+[container-resource-manifest]: https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/
+[kubectl-delete-pod]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[kata-network-limitations]: https://github.com/kata-containers/kata-containers/blob/main/docs/Limitations.md#host-network
+[cloud-hypervisor]: https://www.cloudhypervisor.org
+[kata-container]: https://katacontainers.io
+
+<!-- INTERNAL LINKS -->
+[install-azure-cli]: /cli/azu
+[az-feature-register]: /cli/azure/feature#az_feature_register
+[az-provider-register]: /cli/azure/provider#az-provider-register
+[az-feature-show]: /cli/azure/feature#az-feature-show
+[aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials
+[az-aks-create]: /cli/azure/aks#az-aks-create
+[az-deployment-group-create]: /cli/azure/deployment/group#az-deployment-group-create
+[connect-to-aks-cluster-nodes]: node-access.md
+[dv3-series]: ../virtual-machines/dv3-dsv3-series.md#dsv3-series
+[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az-aks-nodepool-add
+[create-ssh-public-key-linux]: ../virtual-machines/linux/mac-create-ssh-keys.md
+[az-aks-delete]: /cli/azure/aks#az-aks-delete
+[cvm-on-aks]: use-cvm.md
+[azure-dedicated-hosts]: use-azure-dedicated-hosts.md
+[container-insights]: ../azure-monitor/containers/container-insights-overview.md
+[defender-for-containers]: ../defender-for-cloud/defender-for-containers-introduction.md
+[az-aks-install-cmd]: /cli/azure/aks#az-aks-install-cli
+[mariner-overview]: use-mariner.md
+[csi-storage-driver]: csi-storage-drivers.md
+[csi-secret-store driver]: csi-secrets-store-driver.md
+[az-aks-update]: /cli/azure/aks#az-aks-update
+[mariner-cluster-config]: cluster-configuration.md#mariner-os
+[register-the-katavmisolationpreview-feature-flag]: #register-the-katavmisolationpreview-feature-flag
azure-arc Create Data Controller Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-direct-cli.md
The following command creates the Arc data services extension.
##### [Linux](#tab/linux) ```azurecli
-az k8s-extension create --cluster-name ${clusterName} --resource-group ${resourceGroup} --name ${adsExtensionName} --cluster-type connectedClusters --extension-type microsoft.arcdataservices --auto-upgrade false --scope cluster --release-namespace ${namespace} --config Microsoft.CustomLocation.ServiceAccount=sa-arc-bootstrapper
+az k8s-extension create --cluster-name ${clusterName} --resource-group ${resourceGroup} --name ${adsExtensionName} --cluster-type connectedClusters --extension-type microsoft.arcdataservices --auto-upgrade false --auto-upgrade-minor-version false --scope cluster --release-namespace ${namespace} --config Microsoft.CustomLocation.ServiceAccount=sa-arc-bootstrapper
az k8s-extension show --resource-group ${resourceGroup} --cluster-name ${resourceName} --name ${adsExtensionName} --cluster-type connectedclusters ``` ##### [Windows (PowerShell)](#tab/windows) ```azurecli
-az k8s-extension create --cluster-name $ENV:clusterName --resource-group $ENV:resourceGroup --name $ENV:adsExtensionName --cluster-type connectedClusters --extension-type microsoft.arcdataservices --auto-upgrade false --scope cluster --release-namespace $ENV:namespace --config Microsoft.CustomLocation.ServiceAccount=sa-arc-bootstrapper
+az k8s-extension create --cluster-name $ENV:clusterName --resource-group $ENV:resourceGroup --name $ENV:adsExtensionName --cluster-type connectedClusters --extension-type microsoft.arcdataservices --auto-upgrade false --auto-upgrade-minor-version false --scope cluster --release-namespace $ENV:namespace --config Microsoft.CustomLocation.ServiceAccount=sa-arc-bootstrapper
az k8s-extension show --resource-group $ENV:resourceGroup --cluster-name $ENV:clusterName --name $ENV:adsExtensionName --cluster-type connectedclusters ```
az k8s-extension show --resource-group $ENV:resourceGroup --cluster-name $ENV:cl
Use the below command if you are deploying from your private repository: ```azurecli
-az k8s-extension create --cluster-name "<connected cluster name>" --resource-group "<resource group>" --name "<extension name>" --cluster-type connectedClusters --extension-type microsoft.arcdataservices --scope cluster --release-namespace "<namespace>" --config Microsoft.CustomLocation.ServiceAccount=sa-arc-bootstrapper --config imageCredentials.registry=<registry info> --config imageCredentials.username=<username> --config systemDefaultValues.image=<registry/repo/arc-bootstrapper:<imagetag>> --config-protected imageCredentials.password=$ENV:DOCKER_PASSWORD --debug
+az k8s-extension create --cluster-name "<connected cluster name>" --resource-group "<resource group>" --name "<extension name>" --cluster-type connectedClusters --auto-upgrade false --auto-upgrade-minor-version false --extension-type microsoft.arcdataservices --scope cluster --release-namespace "<namespace>" --config Microsoft.CustomLocation.ServiceAccount=sa-arc-bootstrapper --config imageCredentials.registry=<registry info> --config imageCredentials.username=<username> --config systemDefaultValues.image=<registry/repo/arc-bootstrapper:<imagetag>> --config-protected imageCredentials.password=$ENV:DOCKER_PASSWORD --debug
``` For example: ```azurecli
-az k8s-extension create --cluster-name "my-connected-cluster" --resource-group "my-resource-group" --name "arc-data-services" --cluster-type connectedClusters --extension-type microsoft.arcdataservices --scope cluster --release-namespace "arc" --config Microsoft.CustomLocation.ServiceAccount=sa-bootstrapper --config imageCredentials.registry=mcr.microsoft.com --config imageCredentials.username=arcuser --config systemDefaultValues.image=mcr.microsoft.com/arcdata/arc-bootstrapper:latest --config-protected imageCredentials.password=$ENV:DOCKER_PASSWORD --debug
+az k8s-extension create --cluster-name "my-connected-cluster" --resource-group "my-resource-group" --name "arc-data-services" --cluster-type connectedClusters --auto-upgrade false --auto-upgrade-minor-version false --extension-type microsoft.arcdataservices --scope cluster --release-namespace "arc" --config Microsoft.CustomLocation.ServiceAccount=sa-bootstrapper --config imageCredentials.registry=mcr.microsoft.com --config imageCredentials.username=arcuser --config systemDefaultValues.image=mcr.microsoft.com/arcdata/arc-bootstrapper:latest --config-protected imageCredentials.password=$ENV:DOCKER_PASSWORD --debug
```
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md
The following Microsoft-provided Kubernetes distributions and infrastructure pro
| Cluster API Provider on Azure | Release version: [0.4.12](https://github.com/kubernetes-sigs/cluster-api-provider-azure/releases/tag/v0.4.12); Kubernetes version: [1.18.2](https://github.com/kubernetes/kubernetes/releases/tag/v1.18.2) | | AKS on Azure Stack HCI | Release version: [December 2020 Update](https://github.com/Azure/aks-hci/releases/tag/AKS-HCI-2012); Kubernetes version: [1.18.8](https://github.com/kubernetes/kubernetes/releases/tag/v1.18.8) | | K8s on Azure Stack Edge | Release version: Azure Stack Edge 2207 (2.2.2037.5375); Kubernetes version: [1.22.6](https://github.com/kubernetes/kubernetes/releases/tag/v1.22.6) |
+| AKS Edge Essentials | Release version [1.0.406.0]( https://github.com/Azure/AKS-Edge/releases/tag/1.0.406.0); Kubernetes version [1.24.3](https://github.com/kubernetes/kubernetes/releases/tag/v1.24.3) |
The following providers and their corresponding Kubernetes distributions have successfully passed the conformance tests for Azure Arc-enabled Kubernetes:
azure-functions Durable Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md
Durable Functions is designed to work with all Azure Functions programming langu
| .NET / C# / F# | Functions 1.0+ | In-process <br/> Out-of-process| n/a | | JavaScript/TypeScript | Functions 2.0+ | Node 8+ | 2.x bundles | | Python | Functions 2.0+ | Python 3.7+ | 2.x bundles |
+| Python (V2 prog. model) | Functions 4.0+ | Python 3.7+ | 3.15+ bundles |
| PowerShell | Functions 3.0+ | PowerShell 7+ | 2.x bundles | | Java | Functions 4.0+ | Java 8+ | 4.x bundles |
-Like Azure Functions, there are templates to help you develop Durable Functions using [Visual Studio 2019](durable-functions-create-first-csharp.md), [Visual Studio Code](quickstart-js-vscode.md), and the [Azure portal](durable-functions-create-portal.md).
+> [!NOTE]
+> The new programming model for authoring Functions in Python (V2) is currently in preview. Compared to the current model, the new experience is designed to be more idiomatic and intuitive. To learn more, see the Azure Functions Python [developer guide](/azure/azure-functions/functions-reference-python.md?pivots=python-mode-decorators).
+>
+> In the following code snippets, Python (PM2) denotes programming model V2, the new experience.
+
+Like Azure Functions, there are templates to help you develop Durable Functions using [Visual Studio](durable-functions-create-first-csharp.md), [Visual Studio Code](quickstart-js-vscode.md), and the [Azure portal](durable-functions-create-portal.md).
## Application patterns
You can use the `context` object to invoke other functions by name, pass paramet
> [!NOTE] > The `context` object in Python represents the orchestration context. Access the main Azure Functions context using the `function_context` property on the orchestration context.
+# [Python (PM2)](#tab/python-v2)
+
+```python
+import azure.functions as func
+import azure.durable_functions as df
+
+myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
+
+@myApp.orchestration_trigger(context_name="context")
+def orchestrator_function(context: df.DurableOrchestrationContext):
+ x = yield context.call_activity("F1", None)
+ y = yield context.call_activity("F2", x)
+ z = yield context.call_activity("F3", y)
+ result = yield context.call_activity("F4", z)
+ return result
+
+```
+
+You can use the `context` object to invoke other functions by name, pass parameters, and return function output. Each time the code calls `yield`, the Durable Functions framework checkpoints the progress of the current function instance. If the process or virtual machine recycles midway through the execution, the function instance resumes from the preceding `yield` call. For more information, see the next section, Pattern #2: Fan out/fan in.
+
+> [!NOTE]
+> The `context` object in Python represents the orchestration context. Access the main Azure Functions context using the `function_context` property on the orchestration context.
+ # [PowerShell](#tab/powershell) ```PowerShell
The fan-out work is distributed to multiple instances of the `F2` function. The
The automatic checkpointing that happens at the `yield` call on `context.task_all` ensures that a potential midway crash or reboot doesn't require restarting an already completed task.
+# [Python (PM2)](#tab/python-v2)
+
+```python
+import azure.functions as func
+import azure.durable_functions as df
+
+myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
+
+@myApp.orchestration_trigger(context_name="context")
+def orchestrator_function(context: df.DurableOrchestrationContext):
+ # Get a list of N work items to process in parallel.
+ work_batch = yield context.call_activity("F1", None)
+
+ parallel_tasks = [ context.call_activity("F2", b) for b in work_batch ]
+
+ outputs = yield context.task_all(parallel_tasks)
+
+ # Aggregate all N outputs and send the result to F3.
+ total = sum(outputs)
+ yield context.call_activity("F3", total)
+```
+
+The fan-out work is distributed to multiple instances of the `F2` function. The work is tracked by using a dynamic list of tasks. `context.task_all` API is called to wait for all the called functions to finish. Then, the `F2` function outputs are aggregated from the dynamic task list and passed to the `F3` function.
+
+The automatic checkpointing that happens at the `yield` call on `context.task_all` ensures that a potential midway crash or reboot doesn't require restarting an already completed task.
+ # [PowerShell](#tab/powershell) ```PowerShell
def orchestrator_function(context: df.DurableOrchestrationContext):
main = df.Orchestrator.create(orchestrator_function) ```
+# [Python (PM2)](#tab/python-v2)
+
+```python
+import json
+from datetime import datetime, timedelta
+
+import azure.functions as func
+import azure.durable_functions as df
+
+myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
+
+@myApp.orchestration_trigger(context_name="context")
+def orchestrator_function(context: df.DurableOrchestrationContext):
+    job = json.loads(context.get_input())
+    job_id = job["jobId"]
+    polling_interval = job["pollingInterval"]
+    # Parse the expiry time so it can be compared with current_utc_datetime.
+    expiry_time = datetime.fromisoformat(job["expiryTime"])
+
+    while context.current_utc_datetime < expiry_time:
+ job_status = yield context.call_activity("GetJobStatus", job_id)
+ if job_status == "Completed":
+ # Perform an action when a condition is met.
+ yield context.call_activity("SendAlert", job_id)
+ break
+
+ # Orchestration sleeps until this time.
+ next_check = context.current_utc_datetime + timedelta(seconds=polling_interval)
+ yield context.create_timer(next_check)
+
+ # Perform more work here, or let the orchestration end.
+```
+ # [PowerShell](#tab/powershell) ```powershell
main = df.Orchestrator.create(orchestrator_function)
To create the durable timer, call `context.create_timer`. The notification is received by `context.wait_for_external_event`. Then, `context.task_any` is called to decide whether to escalate (timeout happens first) or process the approval (the approval is received before timeout).
+# [Python (PM2)](#tab/python-v2)
+
+```python
+import json
+from datetime import timedelta
+
+import azure.functions as func
+import azure.durable_functions as df
+
+myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
+
+@myApp.orchestration_trigger(context_name="context")
+def orchestrator_function(context: df.DurableOrchestrationContext):
+ yield context.call_activity("RequestApproval", None)
+
+ due_time = context.current_utc_datetime + timedelta(hours=72)
+ durable_timeout_task = context.create_timer(due_time)
+ approval_event_task = context.wait_for_external_event("ApprovalEvent")
+
+ winning_task = yield context.task_any([approval_event_task, durable_timeout_task])
+
+ if approval_event_task == winning_task:
+ durable_timeout_task.cancel()
+ yield context.call_activity("ProcessApproval", approval_event_task.result)
+ else:
+ yield context.call_activity("Escalate", None)
+```
+
+To create the durable timer, call `context.create_timer`. The notification is received by `context.wait_for_external_event`. Then, `context.task_any` is called to decide whether to escalate (timeout happens first) or process the approval (the approval is received before timeout).
+ # [PowerShell](#tab/powershell) ```powershell
module.exports = async function (context) {
# [Python](#tab/python) ```python
+import azure.functions as func
import azure.durable_functions as df
+myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
+
+# An HTTP-Triggered Function with a Durable Functions Client binding
+@myApp.route(route="orchestrators/{functionName}")
+@myApp.durable_client_input(client_name="client")
+async def main(req: func.HttpRequest, client):
+ is_approved = True
+ await client.raise_event(instance_id, "ApprovalEvent", is_approved)
+```
+
+# [Python (PM2)](#tab/python-v2)
+
+```python
+import azure.functions as func
+import azure.durable_functions as df
+
+myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
+
+@myApp.route(route="orchestrators/{functionName}")
+@myApp.durable_client_input(client_name="client")
async def main(client: str):
- durable_client = df.DurableOrchestrationClient(client)
is_approved = True
- await durable_client.raise_event(instance_id, "ApprovalEvent", is_approved)
+ await client.raise_event(instance_id, "ApprovalEvent", is_approved)
``` # [PowerShell](#tab/powershell)
def entity_function(context: df.DurableOrchestrationContext):
main = df.Entity.create(entity_function) ```
+# [Python (PM2)](#tab/python-v2)
+
+```python
+import azure.functions as func
+import azure.durable_functions as df
+
+myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
+
+@myApp.entity_trigger(context_name="context")
+def entity_function(context: df.DurableEntityContext):
+
+ current_value = context.get_state(lambda: 0)
+ operation = context.operation_name
+ if operation == "add":
+ amount = context.get_input()
+ current_value += amount
+ context.set_result(current_value)
+ elif operation == "reset":
+ current_value = 0
+ elif operation == "get":
+ context.set_result(current_value)
+
+ context.set_state(current_value)
+```
+ # [PowerShell](#tab/powershell) Durable entities are currently not supported in PowerShell.
module.exports = async function (context) {
import azure.functions as func import azure.durable_functions as df - async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse: client = df.DurableOrchestrationClient(starter) entity_id = df.EntityId("Counter", "myCounter")
async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
return func.HttpResponse("Entity signaled") ```
+# [Python (PM2)](#tab/python-v2)
+
+```python
+import azure.functions as func
+import azure.durable_functions as df
+
+myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
+
+@myApp.route(route="orchestrators/{functionName}")
+@myApp.durable_client_input(client_name="client")
+async def main(req: func.HttpRequest, client) -> func.HttpResponse:
+ entity_id = df.EntityId("Counter", "myCounter")
+    await client.signal_entity(entity_id, "add", 1)
+ return func.HttpResponse("Entity signaled")
+```
+ # [PowerShell](#tab/powershell) Durable entities are currently not supported in PowerShell.
azure-functions Quickstart Python Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-python-vscode.md
Last updated 06/15/2022
ms.devlang: python
+zone_pivot_groups: python-mode-functions
# Create your first durable function in Python
In this article, you learn how to use the Visual Studio Code Azure Functions ext
:::image type="content" source="./media/quickstart-python-vscode/functions-vs-code-complete.png" alt-text="Screenshot of the running durable function in Azure.":::
+> [!NOTE]
+> The new programming model for authoring Functions in Python (V2) is currently in preview. Compared to the current model, the new experience is designed to be more idiomatic and intuitive. To learn more, see the Azure Functions Python [developer guide](/azure/azure-functions/functions-reference-python.md?pivots=python-mode-decorators).
+ ## Prerequisites To complete this tutorial:
In this section, you use Visual Studio Code to create a local Azure Functions pr
1. Choose an empty folder location for your project and choose **Select**. + 1. Follow the prompts and provide the following information: | Prompt | Value | Description |
In this section, you use Visual Studio Code to create a local Azure Functions pr
| Python version | Python 3.7, 3.8, or 3.9 | Visual Studio Code will create a virtual environment with the version you select. | | Select a template for your project's first function | Skip for now | | | Select how you would like to open your project | Open in current window | Reopens Visual Studio Code in the folder you selected. |++
+1. Follow the prompts and provide the following information:
+
+ | Prompt | Value | Description |
+ | | -- | -- |
+ | Select a language | Python (Programming Model V2) | Create a local Python Functions project using the V2 programming model. |
+ | Select a version | Azure Functions v4 | You only see this option when the Core Tools aren't already installed. In this case, Core Tools are installed the first time you run the app. |
+ | Python version | Python 3.7, 3.8, or 3.9 | Visual Studio Code will create a virtual environment with the version you select. |
+ | Select how you would like to open your project | Open in current window | Reopens Visual Studio Code in the folder you selected. |
+ Visual Studio Code installs the Azure Functions Core Tools if needed. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-develop-local.md#local-settings-file) configuration files.
A basic Durable Functions app contains three functions:
* *Activity function*: It's called by the orchestrator function, performs work, and optionally returns a value. * *Client function*: It's a regular Azure Function that starts an orchestrator function. This example uses an HTTP triggered function. + ### Orchestrator function You use a template to create the durable function code in your project.
Finally, you'll add an HTTP triggered function that starts the orchestration.
You've added an HTTP triggered function that starts an orchestration. Open *DurableFunctionsHttpStart/\_\_init__.py* to see that it uses `client.start_new` to start a new orchestration. Then it uses `client.create_check_status_response` to return an HTTP response containing URLs that can be used to monitor and manage the new orchestration. You now have a Durable Functions app that can be run locally and deployed to Azure.++
+To create a basic Durable Functions app using these three function types, replace the contents of `function_app.py` with the following Python code.
+
+```Python
+import azure.functions as func
+import azure.durable_functions as df
+
+myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
+
+# An HTTP-Triggered Function with a Durable Functions Client binding
+@myApp.route(route="orchestrators/{functionName}")
+@myApp.durable_client_input(client_name="client")
+async def http_start(req: func.HttpRequest, client):
+ function_name = req.route_params.get('functionName')
+ instance_id = await client.start_new(function_name)
+ response = client.create_check_status_response(req, instance_id)
+ return response
+
+# Orchestrator
+@myApp.orchestration_trigger(context_name="context")
+def hello_orchestrator(context):
+ result1 = yield context.call_activity("hello", "Seattle")
+ result2 = yield context.call_activity("hello", "Tokyo")
+ result3 = yield context.call_activity("hello", "London")
+
+ return [result1, result2, result3]
+
+# Activity
+@myApp.activity_trigger(input_name="city")
+def hello(city: str):
+ return "Hello " + city
+```
+
+Review the table below for an explanation of each function and its purpose in the sample.
+
+| Method | Description |
+| -- | -- |
+| **`hello_orchestrator`** | The orchestrator function, which describes the workflow. In this case, the orchestration starts, invokes three functions in a sequence, and returns the ordered results of all 3 functions in a list. |
+| **`hello`** | The activity function, which performs the work being orchestrated. The function returns a simple greeting to the city passed as an argument. |
+| **`http_start`** | An [HTTP-triggered function](../functions-bindings-http-webhook.md) that starts an instance of the orchestration and returns a check status response. |
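Because `hello_orchestrator` is a plain Python generator, its control flow can be sanity-checked without the Azure runtime by driving it with stubbed activity results. The following is an illustrative sketch only: `FakeContext`, `run_orchestrator`, and `fake_hello` are hypothetical helpers that loosely mimic how the Durable framework resumes an orchestrator, not part of the SDK.

```python
# Drive an orchestrator generator with stubbed activity results,
# loosely mimicking how the Durable framework replays it.
class FakeContext:
    def call_activity(self, name, arg):
        # Return a sentinel describing the requested activity call.
        return (name, arg)

def run_orchestrator(orchestrator, context, activity):
    gen = orchestrator(context)
    try:
        request = next(gen)              # first yielded activity call
        while True:
            result = activity(*request)  # run the stubbed activity
            request = gen.send(result)   # resume with its result
    except StopIteration as done:
        return done.value                # orchestrator's return value

def hello_orchestrator(context):
    result1 = yield context.call_activity("hello", "Seattle")
    result2 = yield context.call_activity("hello", "Tokyo")
    result3 = yield context.call_activity("hello", "London")
    return [result1, result2, result3]

def fake_hello(name, city):
    return "Hello " + city

print(run_orchestrator(hello_orchestrator, FakeContext(), fake_hello))
```

Running the sketch prints the same ordered list the deployed orchestration returns in its `output` field.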
+ ## Test the function locally Azure Functions Core Tools lets you run an Azure Functions project on your local development computer. If you don't have it installed, you're prompted to install these tools the first time you start a function from Visual Studio Code. + 1. To test your function, set a breakpoint in the `Hello` activity function code (*Hello/\_\_init__.py*). Press <kbd>F5</kbd> or select `Debug: Start Debugging` from the command palette to start the function app project. Output from Core Tools is displayed in the **Terminal** panel.
- > [!NOTE]
- > For more information on debugging, see [Durable Functions Diagnostics](durable-functions-diagnostics.md#debugging).
++
+1. To test your function, set a breakpoint in the `hello` activity function code. Press <kbd>F5</kbd> or select `Debug: Start Debugging` from the command palette to start the function app project. Output from Core Tools is displayed in the **Terminal** panel.
+
-1. Durable Functions require an Azure storage account to run. When Visual Studio Code prompts you to select a storage account, select **Select storage account**.
+> [!NOTE]
+> For more information on debugging, see [Durable Functions Diagnostics](durable-functions-diagnostics.md#debugging).
+
+2. Durable Functions require an Azure storage account to run. When Visual Studio Code prompts you to select a storage account, select **Select storage account**.
:::image type="content" source="media/quickstart-python-vscode/functions-select-storage.png" alt-text="Screenshot of how to create a storage account.":::
-1. Follow the prompts and provide the following information to create a new storage account in Azure:
+3. Follow the prompts and provide the following information to create a new storage account in Azure:
| Prompt | Value | Description | | | -- | -- |
Azure Functions Core Tools lets you run an Azure Functions project on your local
| Select a resource group | *unique name* | Name of the resource group to create | | Select a location | *region* | Select a region close to you |
-1. In the **Terminal** panel, copy the URL endpoint of your HTTP-triggered function.
+4. In the **Terminal** panel, copy the URL endpoint of your HTTP-triggered function.
:::image type="content" source="media/quickstart-python-vscode/functions-f5.png" alt-text="Screenshot of Azure local output.":::
-1. Use your browser, or a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), send an HTTP request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`HelloOrchestrator`). The URL must be similar to `http://localhost:7071/api/orchestrators/HelloOrchestrator`.
+5. Use your browser, or a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), send an HTTP request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`HelloOrchestrator`). The URL must be similar to `http://localhost:7071/api/orchestrators/HelloOrchestrator`.
The response is the initial result from the HTTP function letting you know the durable orchestration has started successfully. It isn't yet the end result of the orchestration. The response includes a few useful URLs. For now, let's query the status of the orchestration.
-1. Copy the URL value for `statusQueryGetUri`, paste it in the browser's address bar, and execute the request. Alternatively, you can also continue to use Postman to issue the GET request.
+6. Copy the URL value for `statusQueryGetUri`, paste it in the browser's address bar, and execute the request. Alternatively, you can also continue to use Postman to issue the GET request.
The request will query the orchestration instance for the status. You must get an eventual response, which shows the instance has completed and includes the outputs or results of the durable function. It looks like:
- ```json
- {
- "name": "HelloOrchestrator",
- "instanceId": "9a528a9e926f4b46b7d3deaa134b7e8a",
- "runtimeStatus": "Completed",
- "input": null,
- "customStatus": null,
- "output": [
- "Hello Tokyo!",
- "Hello Seattle!",
- "Hello London!"
- ],
- "createdTime": "2020-03-18T21:54:49Z",
- "lastUpdatedTime": "2020-03-18T21:54:54Z"
- }
- ```
-
-1. To stop debugging, press <kbd>Shift+F5</kbd> in Visual Studio Code.
+
+```json
+{
+ "name": "HelloOrchestrator",
+ "instanceId": "9a528a9e926f4b46b7d3deaa134b7e8a",
+ "runtimeStatus": "Completed",
+ "input": null,
+ "customStatus": null,
+ "output": [
+ "Hello Tokyo!",
+ "Hello Seattle!",
+ "Hello London!"
+ ],
+ "createdTime": "2020-03-18T21:54:49Z",
+ "lastUpdatedTime": "2020-03-18T21:54:54Z"
+}
+```
+```json
+{
+ "name": "hello_orchestrator",
+ "instanceId": "9a528a9e926f4b46b7d3deaa134b7e8a",
+ "runtimeStatus": "Completed",
+ "input": null,
+ "customStatus": null,
+ "output": [
+ "Hello Tokyo!",
+ "Hello Seattle!",
+ "Hello London!"
+ ],
+ "createdTime": "2020-03-18T21:54:49Z",
+ "lastUpdatedTime": "2020-03-18T21:54:54Z"
+}
+```
++
+7. To stop debugging, press <kbd>Shift+F5</kbd> in Visual Studio Code.
After you've verified that the function runs correctly on your local computer, it's time to publish the project to Azure.
After you've verified that the function runs correctly on your local computer, i
## Test your function in Azure 1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function must be in this format: `http://<functionappname>.azurewebsites.net/api/orchestrators/HelloOrchestrator`
+1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function must be in this format: `http://<functionappname>.azurewebsites.net/api/orchestrators/hello_orchestrator`
+ 1. Paste this new URL for the HTTP request in your browser's address bar. You must get the same status response as before when using the published app.
azure-functions Functions Bindings Azure Sql Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-trigger.md
zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure SQL trigger for Functions (preview) + > [!NOTE]
-> The Azure SQL trigger is only supported on **Premium and Dedicated** plans. Consumption is not supported.
+> The Azure SQL trigger is only supported on **Premium and Dedicated** plans. Consumption is not currently supported.
The Azure SQL trigger uses [SQL change tracking](/sql/relational-databases/track-changes/about-change-tracking-sql-server) functionality to monitor a SQL table for changes and trigger a function when a row is created, updated, or deleted. For configuration details for change tracking for use with the Azure SQL trigger, see [Set up change tracking](#set-up-change-tracking-required). For information on setup details of the Azure SQL extension for Azure Functions, see the [SQL binding overview](./functions-bindings-azure-sql.md).
-## Example usage
-<a id="example"></a>
+## Functionality overview
+The Azure SQL trigger binding uses a polling loop to check for changes, triggering the user function when changes are detected. At a high level, the loop looks like this:
+
+```
+while (true) {
+    1. Get list of changes on table - up to a maximum number controlled by the Sql_Trigger_BatchSize setting
+ 2. Trigger function with list of changes
+ 3. Wait for delay controlled by Sql_Trigger_PollingIntervalMs setting
+}
+```
+
+Changes are always processed in the order they were made, with the oldest changes processed first. A couple of notes about this behavior:
+
+1. If changes are made to multiple rows at once, the exact order they're sent to the function is based on the order returned by the CHANGETABLE function.
+2. Changes are "batched" together per row. If multiple changes are made to a row between iterations of the loop, only a single change entry exists for that row, showing the difference between the last processed state and the current state.
+3. If changes are made to a set of rows, and then a second set of changes is made to half of those same rows, the rows that weren't changed a second time are processed first. This follows from the batching described in the previous note: the trigger only sees the last change made to a row and uses that change's position to order processing.
+
+See [Work with change tracking](/sql/relational-databases/track-changes/work-with-change-tracking-sql-server) for more information on change tracking and how it's used by applications such as Azure SQL triggers.
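The batching and ordering described in the notes above can be modeled in a few lines of Python. This is an illustrative sketch of the observable behavior only, not the extension's actual implementation:

```python
from collections import OrderedDict

# Pending changes between polls: at most one entry per row.
pending = OrderedDict()

def record_change(row_key, new_value):
    # A newer change to a row replaces (and reorders) its pending entry,
    # so the trigger only ever sees the row's latest state.
    pending.pop(row_key, None)
    pending[row_key] = new_value

def next_batch(max_batch_size):
    # Oldest pending change first, up to the configured batch size.
    batch = []
    while pending and len(batch) < max_batch_size:
        batch.append(pending.popitem(last=False))
    return batch

# Rows 1 and 2 change, then row 1 changes again: row 2 is processed
# first, and row 1 appears once with only its latest value.
record_change(1, "a")
record_change(2, "b")
record_change(1, "c")
```

In this model, `next_batch(100)` yields row 2 before row 1, matching note 3, and row 1 carries only its final value, matching note 2.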
+
+## Example usage
More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharp).
Isolated worker process isn't currently supported.
--> ---
-> [!NOTE]
-> In the current preview, Azure SQL triggers are only supported by [C# class library functions](functions-dotnet-class-library.md)
---
-## Attributes
+## Attributes
The [C# library](functions-dotnet-class-library.md) uses the [SqlTrigger](https://github.com/Azure/azure-functions-sql-extension/blob/main/src/TriggerBinding/SqlTriggerAttribute.cs) attribute to declare the SQL trigger on the function, which has the following properties: | Attribute property |Description| ||| | **TableName** | Required. The name of the table being monitored by the trigger. |
-| **ConnectionStringSetting** | Required. The name of an app setting that contains the connection string for the database which contains the table being monitored for changes. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.|
--
+| **ConnectionStringSetting** | Required. The name of an app setting that contains the connection string for the database which contains the table being monitored for changes. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.|
## Configuration <!-- ### for another day ###- The following table explains the binding configuration properties that you set in the function.json file. |function.json property | Description| -
+-->
In addition to the required ConnectionStringSetting [application setting](./functions-how-to-use-azure-function-app-settings.md#settings), the following optional settings can be configured for the SQL trigger:

| App Setting | Description |
|---|---|
-|**Sql_Trigger_BatchSize** |This controls the number of changes processed at once before being sent to the triggered function. The default value is 100.|
+|**Sql_Trigger_BatchSize** |This controls the maximum number of changes processed with each iteration of the trigger loop before being sent to the triggered function. The default value is 100.|
|**Sql_Trigger_PollingIntervalMs**|This controls the delay in milliseconds between processing each batch of changes. The default value is 1000 (1 second).|
|**Sql_Trigger_MaxChangesPerWorker**|This controls the upper limit on the number of pending changes in the user table that are allowed per application-worker. If the count of changes exceeds this limit, it may result in a scale out. The setting only applies for Azure Function Apps with [runtime driven scaling enabled](#enable-runtime-driven-scaling). The default value is 1000.|
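As a rough back-of-the-envelope sketch of how the batch size and polling interval settings interact, the time to drain a backlog of pending changes grows with the number of batches needed. This assumes one batch per polling interval and ignores function execution time and scale-out, so treat it as an estimate, not a guarantee:

```python
import math

# Rough estimate: time to drain a backlog of pending changes, assuming one
# batch is processed per polling interval (defaults mirror the table above).
def drain_time_ms(pending_changes, batch_size=100, polling_interval_ms=1000):
    batches = math.ceil(pending_changes / batch_size)
    return batches * polling_interval_ms

print(drain_time_ms(1000))  # 10 batches at the defaults -> ~10,000 ms
```

Lowering `Sql_Trigger_PollingIntervalMs` or raising `Sql_Trigger_BatchSize` shortens this estimate, at the cost of more frequent or larger queries against the change tracking tables.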
In addition to the required ConnectionStringSetting [application setting](./func
## Set up change tracking (required)
-Setting up change tracking for use with the Azure SQL trigger requires two steps. These steps can be completed from any SQL tool that supports running queries, including [VS Code](/sql/tools/visual-studio-code/mssql-extensions), [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio) or [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms).
+Setting up change tracking for use with the Azure SQL trigger requires two steps. These steps can be completed from any SQL tool that supports running queries, including [Visual Studio Code](/sql/tools/visual-studio-code/mssql-extensions), [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio) or [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms).
1. Enable change tracking on the SQL database, substituting `your database name` with the name of the database where the table to be monitored is located:
Setting up change tracking for use with the Azure SQL trigger requires two steps
## Enable runtime-driven scaling
-Optionally, your functions can scale automatically based on the amount of changes that are pending to be processed in the user table. To allow your functions to scale properly on the Premium plan when using SQL triggers, you need to enable runtime scale monitoring.
+Optionally, your functions can scale automatically based on the number of changes that are pending to be processed in the user table. To allow your functions to scale properly on the Premium plan when using SQL triggers, you need to enable runtime scale monitoring.
[!INCLUDE [functions-runtime-scaling](../../includes/functions-runtime-scaling.md)]
Optionally, your functions can scale automatically based on the amount of change
## Next steps

- [Read data from a database (Input binding)](./functions-bindings-azure-sql-input.md)
- [Save data to a database (Output binding)](./functions-bindings-azure-sql-output.md)
+++
+> [!NOTE]
+> In the current preview, Azure SQL triggers are only supported by [C# class library functions](functions-dotnet-class-library.md)
+
azure-functions Functions Bindings Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql.md
This set of articles explains how to work with [Azure SQL](/azure/azure-sql/inde
## Install extension
-The extension NuGet package you install depends on the C# mode you're using in your function app:
+The extension NuGet package you install depends on the C# mode you're using in your function app:
# [In-process](#tab/in-process)
You can install this version of the extension in your function app by registerin
::: zone pivot="programming-language-javascript, programming-language-powershell"
-## Install bundle
+## Install bundle
-The SQL bindings extension is part of a preview [extension bundle], which is specified in your host.json project file.
+The SQL bindings extension is part of a preview [extension bundle], which is specified in your host.json project file.
# [Preview Bundle v4.x](#tab/extensionv4)
Azure SQL bindings for Azure Functions aren't available for the v3 version of th
::: zone-end

## Functions runtime

> [!NOTE]
-> Python language support for the SQL bindings extension is available starting with v4.5.0 of the [functions runtime](./set-runtime-version.md#view-and-update-the-current-runtime-version). You may need to update your install of Azure Functions [Core Tools](functions-run-local.md) for local development. Learn more about determining the runtime in Azure regions from the [functions runtime](./set-runtime-version.md#view-and-update-the-current-runtime-version) documentation. Please see the tracking [GitHub issue](https://github.com/Azure/azure-functions-sql-extension/issues/250) for the latest update on availability.
+> Python language support for the SQL bindings extension is available starting with v4.5.0 of the [functions runtime](./set-runtime-version.md#view-and-update-the-current-runtime-version). You may need to update your install of Azure Functions [Core Tools](functions-run-local.md) for local development.
## Install bundle
-The SQL bindings extension is part of a preview [extension bundle], which is specified in your host.json project file.
+The SQL bindings extension is part of a preview [extension bundle], which is specified in your host.json project file.
# [Preview Bundle v4.x](#tab/extensionv4)
Azure SQL bindings for Azure Functions aren't available for the v3 version of th
-## Update packages
-
-Support for the SQL bindings extension is available in the 1.11.3b1 version of the [Azure Functions Python library](https://pypi.org/project/azure-functions/). Add this version of the library to your functions project with an update to the line for `azure-functions==` in the `requirements.txt` file in your Python Azure Functions project as seen in the following snippet:
-
-```
-azure-functions==1.11.3b1
-```
-
-Following setting the library version, update your application settings to [isolate the dependencies](./functions-app-settings.md#python_isolate_worker_dependencies) by adding `PYTHON_ISOLATE_WORKER_DEPENDENCIES` with the value `1` to your application settings. Locally, this is set in the `local.settings.json` file as seen below:
-
-```json
-"PYTHON_ISOLATE_WORKER_DEPENDENCIES": "1"
-```
-
-Support for Python durable functions with SQL bindings isn't yet available.
-- ::: zone-end ## Install bundle
-The SQL bindings extension is part of a preview [extension bundle], which is specified in your host.json project file.
+The SQL bindings extension is part of a preview [extension bundle], which is specified in your host.json project file.
# [Preview Bundle v4.x](#tab/extensionv4)
Add the Java library for SQL bindings to your functions project with an update t
## SQL connection string
-Azure SQL bindings for Azure Functions have a required property for connection string on both [input](./functions-bindings-azure-sql-input.md) and [output](./functions-bindings-azure-sql-output.md) bindings. SQL bindings passes the connection string to the Microsoft.Data.SqlClient library and supports the connection string as defined in the [SqlClient ConnectionString documentation](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.0&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString). Notable keywords include:
+Azure SQL bindings for Azure Functions have a required property for the connection string on all bindings and triggers. These bindings pass the connection string to the Microsoft.Data.SqlClient library and support the connection string as defined in the [SqlClient ConnectionString documentation](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.0&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString). Notable keywords include:
- `Authentication` allows a function to connect to Azure SQL with Azure Active Directory, including [Active Directory Managed Identity](./functions-identity-access-azure-sql-with-managed-identity.md)
- `Command Timeout` allows a function to wait for a specified amount of time in seconds before terminating a query (default 30 seconds)
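A connection string is a semicolon-separated list of `Keyword=Value` pairs. The sketch below composes one from such pairs; the keyword names come from the SqlClient documentation linked above, while the server and database values are placeholders, not real endpoints:

```python
# Minimal sketch: compose a SqlClient-style connection string from keyword
# pairs. Keyword names follow the SqlClient ConnectionString documentation;
# "myserver" and "mydb" are placeholder values.
def build_connection_string(pairs):
    # SqlClient connection strings are semicolon-separated Keyword=Value pairs.
    return ";".join(f"{k}={v}" for k, v in pairs.items())

conn = build_connection_string({
    "Server": "tcp:myserver.database.windows.net,1433",
    "Database": "mydb",
    "Authentication": "Active Directory Managed Identity",
    "Command Timeout": "30",
})
print(conn)
```

The resulting string would be stored in an application setting (for example, in `local.settings.json` during local development) and referenced by name via `ConnectionStringSetting`, rather than embedded in code.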
azure-monitor Data Collection Iis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-iis.md
The [data collection rule](../essentials/data-collection-rule-overview.md) defin
- How Azure Monitor transforms events during ingestion. - The destination Log Analytics workspace and table to which Azure Monitor sends the data.
-You can define a data collection rule to send data from multiple machines to multiple Log Analytics workspaces, including workspaces in a different region or tenant. Create the data collection rule in the *same region* as your VM / VMSS / Arc enabled server.
+You can define a data collection rule to send data from multiple machines to multiple Log Analytics workspaces, including workspaces in a different region or tenant. Create the data collection rule in the *same region* as your Log Analytics workspace.
> [!NOTE] > To send data across tenants, you must first enable [Azure Lighthouse](../../lighthouse/overview.md).
azure-monitor Container Insights Prometheus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus.md
Perform the following steps to configure your ConfigMap configuration file for y
Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
-The configuration change can take a few minutes to finish before taking effect. You must restart all Azure Monitor Agent pods manually. When the restarts are finished, a message appears that's similar to the following and includes the result `configmap "container-azm-ms-agentconfig" created`.
+The configuration change can take a few minutes to take effect. All ama-logs pods in the cluster will restart. When the restarts are finished, a message appears that's similar to the following and includes the result `configmap "container-azm-ms-agentconfig" created`.
### Verify configuration
azure-netapp-files Application Volume Group Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-introduction.md
na Previously updated : 10/13/2022 Last updated : 02/24/2023 # Understand Azure NetApp Files application volume group for SAP HANA
Application volume group for SAP HANA helps you simplify the deployment process
* Use of proximity placement group (PPG) instead of manual pinning.
  * You will anchor the SAP HANA VMs using a PPG to guarantee the lowest possible latency. This PPG is used to enforce that the data, log, and shared volumes are created in close proximity to the SAP HANA VMs. See [Best practices about Proximity Placement Groups](application-volume-group-considerations.md#best-practices-about-proximity-placement-groups) for details.
-* Different IP addresses for data and log volumes.
- * This setup will provide better performance and throughput for the SAP HANA database.
+* Creation of separate storage endpoints (with different IP addresses) for data and log volumes.
+ * This deployment method provides better performance and throughput for the SAP HANA database.
## Next steps
azure-netapp-files Azacsnap Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-release-notes.md
na Previously updated : 12/16/2022 Last updated : 02/24/2023
This page lists major changes made to AzAcSnap to provide new functionality or resolve defects.
+Download the [latest release](https://aka.ms/azacsnapinstaller) of the installer and review how to [get started](azacsnap-get-started.md).
+
+For specific information on Preview features, refer to the [AzAcSnap Preview](azacsnap-preview.md) page.
+
+## Feb-2023
+
+### AzAcSnap 7a (Build: 1AA8343)
+
+AzAcSnap 7a is being released with the following fixes:
+
+- Fixes for `-c restore` commands:
+ - Enable mounting volumes on HLI (BareMetal) where the volumes have been reverted to a prior state when using `-c restore --restore revertvolume`.
+ - Correctly set ThroughputMiBps on volume clones for Azure NetApp Files volumes in an Auto QoS Capacity Pool when using `-c restore --restore snaptovol`.
+ ## Dec-2022 ### AzAcSnap 7 (Build: 1A8FDFF)
Download the [latest release](https://aka.ms/azacsnapinstaller) of the installer
> [!IMPORTANT] > AzAcSnap 6 brings a new release model for AzAcSnap and includes fully supported GA features and Preview features in a single release.
-Since AzAcSnap v5.0 was released as GA in April 2021, there have been 8 releases of AzAcSnap across two branches. Our goal with the new release model is to align with how Azure components are released. This change allows moving features from Preview to GA (without having to move an entire branch), and introduce new Preview features (without having to create a new branch). From AzAcSnap 6 we'll have a single branch with fully supported GA features and Preview features (which are subject to Microsoft's Preview Ts&Cs). ItΓÇÖs important to note customers can't accidentally use Preview features, and must enable them with the `--preview` command line option. This means the next release will be AzAcSnap 7, which could include; patches (if necessary) for GA features, current Preview features moving to GA, or new Preview features.
+Since AzAcSnap v5.0 was released as GA in April 2021, there have been eight releases of AzAcSnap across two branches. Our goal with the new release model is to align with how Azure components are released. This change allows moving features from Preview to GA (without having to move an entire branch) and introducing new Preview features (without having to create a new branch). From AzAcSnap 6, we'll have a single branch with fully supported GA features and Preview features (which are subject to Microsoft's Preview Ts&Cs). It's important to note that customers can't accidentally use Preview features; they must enable them with the `--preview` command line option. This means the next release will be AzAcSnap 7, which could include patches (if necessary) for GA features, current Preview features moving to GA, or new Preview features.
AzAcSnap 6 is being released with the following fixes and improvements:
AzAcSnap 6 is being released with the following fixes and improvements:
- Azure Managed Disk as an alternate storage back-end. - ANF Client API Version updated to 2021-10-01. - Change to workflow for handling Backint to re-enable backint configuration should there be a failure when putting SAP HANA in a consistent state for snapshot.
-
-Download the [latest release](https://aka.ms/azacsnapinstaller) of the installer and review how to [get started](azacsnap-get-started.md). For specific information on Preview features refer to the [AzAcSnap Preview](azacsnap-preview.md) page.
## May-2022
Download the [latest release of the Preview installer](https://aka.ms/azacsnap-p
AzAcSnap v5.1 Preview (Build: 20220302.81795) has been released with the following new features:

- Azure Key Vault support for securely storing the Service Principal.
- A new option for `-c backup --volume`, which has the `all` parameter value.
## Feb-2022
AzAcSnap v5.0 (Build: 20210421.6349) has been made Generally Available and for t
## March-2021
-### AzAcSnap v5.0 Preview (Build:20210318.30771)
+### AzAcSnap v5.0 Preview (Build: 20210318.30771)
-AzAcSnap v5.0 Preview (Build:20210318.30771) has been released with the following fixes and improvements:
+AzAcSnap v5.0 Preview (Build: 20210318.30771) has been released with the following fixes and improvements:
- Removed the need to add the AZACSNAP user into the SAP HANA Tenant DBs, see the [Enable communication with database](azacsnap-installation.md#enable-communication-with-database) section. - Fix to allow a [restore](azacsnap-cmd-ref-restore.md) with volumes configured with Manual QOS.
AzAcSnap v5.0 Preview (Build:20210318.30771) has been released with the followin
## Next steps - [Get started with Azure Application Consistent Snapshot tool](azacsnap-get-started.md)
+- [Download the latest release of the installer](https://aka.ms/azacsnapinstaller)
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Azure NetApp Files is updated regularly. This article provides a summary about t
* [Volume user and group quotas](default-individual-user-group-quotas-introduction.md) (Preview)
- Azure NetApp Files volumes provide flexible, large and scalable storage shares for applications and users. Storage capacity and consumption by users is only limited by the size of the volume. In some scenarios, you may want to limit this storage consumption of users and groups within the volume. With Azure NetApp Files volume and group quotas, you can now do so. User and/or group quotas enable you to restrict the storage space that a user or group can use within a specific Azure NetApp Files volume. You can choose to set default (same for all users) or individual user quotas on all NFS, SMB, and dual protocol-enabled volumes. On all NFS-enabled volumes, you can set default (same for all users) or individual group quotas.
+ Azure NetApp Files volumes provide flexible, large and scalable storage shares for applications and users. Storage capacity and consumption by users is only limited by the size of the volume. In some scenarios, you may want to limit this storage consumption of users and groups within the volume. With Azure NetApp Files volume user and group quotas, you can now do so. User and/or group quotas enable you to restrict the storage space that a user or group can use within a specific Azure NetApp Files volume. You can choose to set default (same for all users) or individual user quotas on all NFS, SMB, and dual protocol-enabled volumes. On all NFS-enabled volumes, you can set default (same for all users) or individual group quotas.
* [Large volumes](large-volumes-requirements-considerations.md) (Preview)
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md
The identifier is returned in the following format:
You use this function to get the resource ID for resources that are [deployed to the management group](deploy-to-management-group.md) rather than a resource group. The returned ID differs from the value returned by the [resourceId](#resourceid) function by not including a subscription ID and a resource group value.
-### managementGrouopResourceID example
+### managementGroupResourceID example
The following template creates and assigns a policy definition. It uses the `managementGroupResourceId` function to get the resource ID for policy definition.
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
Title: Attach Azure NetApp Files datastores to Azure VMware Solution hosts
description: Learn how to create Azure NetApp Files-based NFS datastores for Azure VMware Solution hosts. Previously updated : 02/21/2023 Last updated : 02/24/2023
Before you begin the prerequisites, review the [Performance best practices](#per
>[!NOTE] >Azure NetApp Files datastores for Azure VMware Solution are generally available. To use it, you must register Azure NetApp Files datastores for Azure VMware Solution.
-Azure VMware Solution is currently supported in these [regions](https://azure.microsoft.com/global-infrastructure/services/?products=azure-vmware).
+## Supported regions
+
+Azure VMware Solution is currently supported in the following regions:
+**Asia**: East Asia, Japan East, Japan West, Southeast Asia.
+**Australia**: Australia East, Australia Southeast.
+**Brazil**: Brazil South.
+**Europe**: France Central, Germany West Central, North Europe, Sweden Central, Sweden North, Switzerland West, UK South, UK West, West Europe.
+**North America**: Canada Central, Canada East, Central US, East US, East US 2, North Central US, South Central US, West US, West US 2.
+ ## Performance best practices
batch Batch Custom Image Pools To Azure Compute Gallery Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-custom-image-pools-to-azure-compute-gallery-migration-guide.md
+
+ Title: Migrate Azure Batch Custom Image Pools to Azure Compute Gallery
+description: Learn how to migrate Azure Batch custom image pools to Azure compute gallery and plan for feature end of support.
++++ Last updated : 02/23/2023++
+# Migrate Azure Batch custom image pools to Azure Compute Gallery
+
+To improve reliability, scale, and align with modern Azure offerings, Azure Batch will retire custom image Batch pools specified from virtual hard disk (VHD) blobs in Azure Storage and Azure Managed Images on *March 31, 2024*. Learn how to migrate your Azure Batch custom image pools using Azure Compute Gallery.
++
+## Feature end of support
+
+When you create an Azure Batch pool using the Virtual Machine Configuration, you specify an image reference that provides the operating system for each compute node in the pool. You can create a pool of virtual machines either with a supported Azure Marketplace image or with a custom image. Custom images from VHD blobs and managed images are either legacy offerings or non-scalable solutions for Azure Batch. To ensure reliable infrastructure provisioning at scale, all custom image sources other than Azure Compute Gallery will be retired on *March 31, 2024*.
+
+## Alternative: Use Azure Compute Gallery references for Batch custom image pools
+
+When you use the Azure Compute Gallery (formerly known as Shared Image Gallery) for your custom image, you have control over the operating system type and configuration, as well as the type of data disks. Your shared image can include applications and reference data that become available on all the Batch pool nodes as soon as they're provisioned. You can also have multiple versions of an image as needed for your environment. When you use an image version to create a VM, the image version is used to create new disks for the VM.
+
+Using a shared image saves time in preparing your pool's compute nodes to run your Batch workload. It's possible to use an Azure Marketplace image and install software on each compute node after provisioning, but using a shared image can lead to more efficiencies, in faster compute node to ready state and reproducible workloads. Additionally, you can specify multiple replicas for the shared image so when you create pools with many compute nodes, provisioning latencies can be lower.
+
+## Migrate your eligible pools
+
+To migrate your Batch custom image pools from managed image to shared image, review the Azure Batch guide on using [Azure Compute Gallery to create a custom image pool](batch-sig-images.md).
+
+If you have either a VHD blob or a managed image, you can convert it directly to a Compute Gallery image that can be used with Azure Batch custom image pools. When you're creating a VM image definition for a Compute Gallery, on the **Version** tab, you can select the source from the image types that are being retired for Batch custom image pools:
+
+| Source | Other fields |
+|||
+| Managed image | Select the **Source image** from the drop-down. The managed image must be in the same region that you chose in **Instance details.** |
+| VHD in a storage account | Select **Browse** to choose the storage account for the VHD. |
+
+For more information about this process, see [creating an image definition and version for Compute Gallery](../virtual-machines/image-version.md#create-an-image).
+
+## FAQs
+
+- How can I create an Azure Compute Gallery?
+
+ See the [guide](../virtual-machines/create-gallery.md#create-a-private-gallery) for Compute Gallery creation.
+
+- How do I create a Pool with a Compute Gallery image?
+
+ See the [guide](batch-sig-images.md) for creating a Pool with a Compute Gallery image.
+
+- What considerations are there for Compute Gallery image based Pools?
+
+ See the [guide](batch-sig-images.md#considerations-for-large-pools) for more information.
+
+- Can I use Azure Compute Gallery images in different subscriptions or in different Azure AD tenants?
+
+ If the Shared Image is not in the same subscription as the Batch account, you must register the Microsoft.Batch resource provider for that subscription. The two subscriptions must be in the same Azure AD tenant. The image can be in a different region as long as it has replicas in the same region as your Batch account.
++
+## Next steps
+
+For more information, see [Azure Compute Gallery](../virtual-machines/azure-compute-gallery.md).
batch Batch Pools To Simplified Compute Node Communication Model Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pools-to-simplified-compute-node-communication-model-migration-guide.md
+
+ Title: Migrate Azure Batch pools to the Simplified compute node communication model
+description: Learn how to migrate Azure Batch pools to the simplified compute node communication model and plan for feature end of support.
++++ Last updated : 02/23/2023++
+# Migrate Azure Batch pools to the Simplified compute node communication model
+
+To improve security, simplify the user experience, and enable key future improvements, Azure Batch will retire the classic compute node communication model on *March 31, 2026*. Learn how to migrate your Batch pools to the simplified compute node communication model.
++
+## About the feature
+
+An Azure Batch pool contains one or more compute nodes, which execute user-specified workloads in the form of Batch tasks. To enable Batch functionality and Batch pool infrastructure management, compute nodes must communicate with the Azure Batch service. In the Classic compute node communication model, the Batch service initiates communication to the compute nodes and compute nodes must be able to communicate with Azure Storage for baseline operations. In the Simplified compute node communication model, Batch pools only require outbound access to the Batch service for baseline operations.
+
+## Feature end of support
+
+The simplified compute node communication model will replace the classic compute node communication model after *March 31, 2026*. The change is introduced in two phases. From now until *September 30, 2024*, the default node communication mode for newly created [Batch pools with virtual networks](./batch-virtual-network.md) remains classic. After *September 30, 2024*, the default node communication mode for newly created Batch pools with virtual networks switches to simplified. After *March 31, 2026*, the option to use the classic compute node communication mode will no longer be honored. Batch pools without user-specified virtual networks are unaffected by this change, and their default communication mode is controlled by the Batch service.
+
+## Alternative: Use Simplified Compute Node Communication Model
+
+The simplified compute node communication mode streamlines the way Batch pool infrastructure is managed on behalf of users. This communication mode reduces the complexity and scope of inbound and outbound networking connections required in the baseline operations.
+
+The simplified model also provides more fine-grained data exfiltration control, since outbound communication to *Storage.region* is no longer required. You can explicitly lock down outbound communication to Azure Storage if necessary for your workflow (such as AppPackage storage accounts, other storage accounts for resource files or output files, or other similar scenarios).
+
+## Migrate your eligible pools
+
+To migrate your Batch pools from the classic to the simplified compute node communication model, see [Potential impact between classic and simplified communication modes](simplified-compute-node-communication.md#potential-impact-between-classic-and-simplified-communication-modes) to either create new pools or update existing pools with simplified compute node communication.
+
+## FAQs
+
+- Will I still require a public IP address for my nodes?
+
+ The public IP address is still needed to initiate the outbound connection to Azure Batch. If you want to eliminate the need for public IP addresses entirely, see the guide to [create a simplified node communication pool without public IP addresses](./simplified-node-communication-pool-no-public-ip.md).
+
+- How can I connect to my nodes for diagnostic purposes?
+
+ RDP or SSH connectivity to the node is unaffected. Load balancers are still created and can route those requests through to the node when it's provisioned with a public IP address.
+
+- What differences will I see in billing?
+
+ There should be no cost or billing implications for the new model.
+
+- Are there any changes to agents on the compute node?
+
+ An extra agent, azbatch-cluster-agent, runs on compute nodes for both Windows and Linux.
+
+- Will there be any change to how my linked resources from Azure Storage in Batch pools and tasks are downloaded?
+
+ This behavior is unaffected. All user-specified resources that require Azure Storage, such as resource files, output files, or application packages, are still downloaded from the compute node directly to Azure Storage. You'll need to ensure your networking configuration allows these flows.
++
+## Next steps
+
+For more information, see [Simplified compute node communication](./simplified-compute-node-communication.md).
+
batch Virtual File Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/virtual-file-mount.md
net use S: \\<storage-account-name>.file.core.windows.net\<fileshare> /u:AZURE\<
The specified network password is not correct. ```
-1. Troubleshoot the problem using [Troubleshoot Azure Files problems in Windows Server Message Block (SMB)](../storage/files/storage-troubleshoot-windows-file-connection-problems.md).
+1. Troubleshoot the problem using the [Azure file shares troubleshooter](https://support.microsoft.com/help/4022301/troubleshooter-for-azure-files-shares).
# [Linux](#tab/linux)
net use S: \\<storage-account-name>.file.core.windows.net\<fileshare> /u:AZURE\<
1. Review the error messages. For example, `mount error(13): Permission denied`.
-1. Troubleshoot the problem using [Troubleshoot Azure Files problems in Linux (SMB)](../storage/files/storage-troubleshoot-linux-file-connection-problems.md).
+1. Troubleshoot the problem using [Troubleshoot Azure Files connectivity and access issues (SMB)](../storage/files/files-troubleshoot-smb-connectivity.md).
If you can't use RDP or SSH to check the log files on the node, check the Batch
The specified network password is not correct. ```
-1. Troubleshoot the problem using [Troubleshoot Azure Files problems in Windows (SMB)](../storage/files/storage-troubleshoot-windows-file-connection-problems.md) or [Troubleshoot Azure Files problems in Linux (SMB)](../storage/files/storage-troubleshoot-linux-file-connection-problems.md).
+1. Troubleshoot the problem using the [Azure file shares troubleshooter](https://support.microsoft.com/help/4022301/troubleshooter-for-azure-files-shares).
If you're still unable to find the cause of the failure, you can [mount the file share manually with PowerShell](#manually-mount-file-share-with-powershell) instead.
cdn Cdn Caching Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-caching-rules.md
Title: Control Azure CDN caching behavior with caching rules | Microsoft Docs description: You can use CDN caching rules to set or modify default cache expiration behavior both globally and with conditions, such as a URL path and file extensions. --+ - Previously updated : 03/19/2019 Last updated : 02/21/2023 # Control Azure CDN caching behavior with caching rules
+This article describes how you can use content delivery network (CDN) caching rules to set or modify default cache expiration behavior. These caching rules can either be global or with custom conditions, such as a URL path and file extension.
+ > [!NOTE] > Caching rules are available only for **Azure CDN Standard from Verizon** and **Azure CDN Standard from Akamai** profiles. For **Azure CDN from Microsoft** profiles, you must use the [Standard rules engine](cdn-standard-rules-engine-reference.md). For **Azure CDN Premium from Verizon** profiles, you must use the [Verizon Premium rules engine](./cdn-verizon-premium-rules-engine.md) in the **Manage** portal for similar functionality.
-Azure Content Delivery Network (CDN) offers two ways to control how your files are cached:
+Azure Content Delivery Network (CDN) offers two ways to control how your files get cached:
-- Caching rules: This article describes how you can use content delivery network (CDN) caching rules to set or modify default cache expiration behavior both globally and with custom conditions, such as a URL path and file extension. Azure CDN provides two types of caching rules:
+**Caching rules:** Azure CDN provides two types of caching rules: global and custom.
- - Global caching rules: You can set one global caching rule for each endpoint in your profile, which affects all requests to the endpoint. The global caching rule overrides any HTTP cache-directive headers, if set.
+- Global caching rules - You can set one global caching rule for each endpoint in your profile, which affects all requests to the endpoint. The global caching rule overrides any HTTP cache-directive headers, if set.
- - Custom caching rules: You can set one or more custom caching rules for each endpoint in your profile. Custom caching rules match specific paths and file extensions, are processed in order, and override the global caching rule, if set.
+- Custom caching rules - You can set one or more custom caching rules for each endpoint in your profile. Custom caching rules match specific paths and file extensions, get processed in order, and override the global caching rule, if set.
-- Query string caching: You can adjust how the Azure CDN treats caching for requests with query strings. For information, see [Control Azure CDN caching behavior with query strings](cdn-query-string.md). If the file is not cacheable, the query string caching setting has no effect, based on caching rules and CDN default behaviors.
+**Query string caching:** You can adjust how the Azure CDN treats caching for requests with query strings. For information, see [Control Azure CDN caching behavior with query strings](cdn-query-string.md). If, based on caching rules and CDN default behaviors, the file isn't cacheable, the query string caching setting has no effect.
For information about default caching behavior and caching directive headers, see [How caching works](cdn-how-caching-works.md). - ## Accessing Azure CDN caching rules 1. Open the Azure portal, select a CDN profile, then select an endpoint.
For information about default caching behavior and caching directive headers, se
## Caching behavior settings For global and custom caching rules, you can specify the following **Caching behavior** settings: -- **Bypass cache**: Do not cache and ignore origin-provided cache-directive headers.
+- **Bypass cache**: Don't cache and ignore origin-provided cache-directive headers.
-- **Override**: Ignore origin-provided cache duration; use the provided cache duration instead. This will not override cache-control: no-cache.
+- **Override**: Ignore origin-provided cache duration; use the provided cache duration instead. This setting doesn't override cache-control: no-cache.
> [!NOTE] > For **Azure CDN from Microsoft** profiles, cache expiration override is only applicable to status codes 200 and 206.
For global and custom caching rules, you can specify the cache expiration durati
- For the **Override** and **Set if missing** **Caching behavior** settings, valid cache durations range between 0 seconds and 366 days. For a value of 0 seconds, the CDN caches the content, but must revalidate each request with the origin server. -- For the **Bypass cache** setting, the cache duration is automatically set to 0 seconds and cannot be changed.
+- For the **Bypass cache** setting, the cache duration is automatically set to 0 seconds and can't be changed.
## Custom caching rules match conditions
For custom cache rules, two match conditions are available:
- **Extension**: This condition matches the file extension of the requested file. You can provide a list of comma-separated file extensions to match. For example, _.jpg_, _.mp3_, or _.png_. The maximum number of extensions is 50 and the maximum number of characters per extension is 16. ## Global and custom rule processing order
-Global and custom caching rules are processed in the following order:
+Global and custom caching rules get processed in the following order:
- Global caching rules take precedence over the default CDN caching behavior (HTTP cache-directive header settings). -- Custom caching rules take precedence over global caching rules, where they apply. Custom caching rules are processed in order from top to bottom. That is, if a request matches both conditions, rules at the bottom of the list take precedence over rules at the top of the list. Therefore, you should place more specific rules lower in the list.
+- Custom caching rules take precedence over global caching rules, where they apply. Custom caching rules get processed in order from top to bottom. That is, if a request matches both conditions, rules at the bottom of the list take precedence over rules at the top of the list. Therefore, you should place more specific rules lower in the list.
**Example**: - Global caching rule: - Caching behavior: **Override**
- - Cache expiration duration: 1 day
+ - Cache expiration duration: One day
- Custom caching rule #1: - Match condition: **Path** - Match value: _/home/*_ - Caching behavior: **Override**
- - Cache expiration duration: 2 days
+ - Cache expiration duration: Two days
- Custom caching rule #2: - Match condition: **Extension** - Match value: _.html_ - Caching behavior: **Set if missing**
- - Cache expiration duration: 3 days
+ - Cache expiration duration: Three days
-When these rules are set, a request for _&lt;endpoint hostname&gt;_.azureedge.net/home/https://docsupdatetracker.net/index.html triggers custom caching rule #2, which is set to: **Set if missing** and 3 days. Therefore, if the *https://docsupdatetracker.net/index.html* file has `Cache-Control` or `Expires` HTTP headers, they are honored; otherwise, if these headers are not set, the file is cached for 3 days.
+When you set these rules, a request for _&lt;endpoint hostname&gt;_.azureedge.net/home/index.html triggers custom caching rule #2, which is set to **Set if missing** with a duration of three days. Therefore, if the *index.html* file has `Cache-Control` or `Expires` HTTP headers, they're honored; otherwise, if these headers aren't set, the file gets cached for three days.
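The precedence in the example above can be sketched as a small illustrative routine; the function, field names, and rule shapes here are hypothetical, not an Azure CDN API:

```python
def effective_rule(request_path, global_rule, custom_rules):
    """Pick the caching rule for a request: custom rules are processed top to
    bottom, a later match overrides an earlier one, and the global rule is the
    fallback when no custom rule matches."""
    winner = global_rule
    for rule in custom_rules:
        if rule["match"] == "Path" and request_path.startswith(rule["value"].rstrip("*")):
            winner = rule
        elif rule["match"] == "Extension" and request_path.endswith(rule["value"]):
            winner = rule
    return winner

global_rule = {"name": "global", "behavior": "Override", "duration_days": 1}
custom_rules = [
    {"name": "#1", "match": "Path", "value": "/home/*", "behavior": "Override", "duration_days": 2},
    {"name": "#2", "match": "Extension", "value": ".html", "behavior": "Set if missing", "duration_days": 3},
]

# /home/index.html matches both custom rules; rule #2, lower in the list, wins.
print(effective_rule("/home/index.html", global_rule, custom_rules)["name"])  # #2
```

A request matching only the path rule (for example, `/home/banner.png`) would resolve to rule #1, and a request matching neither custom rule falls back to the global rule.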
> [!NOTE] > Files that are cached before a rule change maintain their origin cache duration setting. To reset their cache durations, you must [purge the file](cdn-purge-endpoint.md).
cdn Cdn How Caching Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-how-caching-works.md
Title: How caching works | Microsoft Docs
+ Title: How caching works in Azure CDN | Microsoft Docs
description: Caching is the process of storing data locally so that future requests for that data can be accessed more quickly. --+ na Previously updated : 10/19/2021 Last updated : 02/23/2023 - + # How caching works This article provides an overview of general caching concepts and how [Azure Content Delivery Network (CDN)](cdn-overview.md) uses caching to improve performance. If you'd like to learn about how to customize caching behavior on your CDN endpoint, see [Control Azure CDN caching behavior with caching rules](cdn-caching-rules.md) and [Control Azure CDN caching behavior with query strings](cdn-query-string.md).
This article provides an overview of general caching concepts and how [Azure Con
Caching is the process of storing data locally so that future requests for that data can be accessed more quickly. In the most common type of caching, web browser caching, a web browser stores copies of static data locally on a local hard drive. By using caching, the web browser can avoid making multiple round-trips to the server and instead access the same data locally, thus saving time and resources. Caching is well-suited for locally managing small, static data such as static images, CSS files, and JavaScript files.
-Similarly, caching is used by a content delivery network on edge servers close to the user to avoid requests traveling back to the origin and reducing end-user latency. Unlike a web browser cache, which is used only for a single user, the CDN has a shared cache. In a CDN shared cache, a file that is requested by one user can be accessed later by other users, which greatly decreases the number of requests to the origin server.
+Similarly, a content delivery network uses caching on edge servers close to the user to avoid requests traveling back to the origin, reducing end-user latency. Unlike a web browser cache, which serves only a single user, the CDN has a shared cache. In a CDN shared cache, a file requested by one user can be served to other users, which greatly decreases the number of requests to the origin server.
-Dynamic resources that change frequently or are unique to an individual user cannot be cached. Those types of resources, however, can take advantage of dynamic site acceleration (DSA) optimization on the Azure Content Delivery Network for performance improvements.
+Dynamic resources that change frequently or are unique to an individual user can't be cached. Those types of resources, however, can take advantage of dynamic site acceleration (DSA) optimization on the Azure Content Delivery Network for performance improvements.
Caching can occur at multiple levels between the origin server and the end user:
Each cache typically manages its own resource freshness and performs validation
### Resource freshness
-Because a cached resource can potentially be out-of-date, or stale (as compared to the corresponding resource on the origin server), it is important for any caching mechanism to control when content is refreshed. To save time and bandwidth consumption, a cached resource is not compared to the version on the origin server every time it is accessed. Instead, as long as a cached resource is considered to be fresh, it is assumed to be the most current version and is sent directly to the client. A cached resource is considered to be fresh when its age is less than the age or period defined by a cache setting. For example, when a browser reloads a web page, it verifies that each cached resource on your hard drive is fresh and loads it. If the resource is not fresh (stale), an up-to-date copy is loaded from the server.
+Since a cached resource can potentially be out-of-date, or stale (as compared to the corresponding resource on the origin server), it's important for any caching mechanism to control when content is refreshed. To save time and bandwidth consumption, a cached resource isn't compared to the version on the origin server every time it's accessed. Instead, as long as a cached resource is considered to be fresh, it's assumed to be the most current version and is sent directly to the client. A cached resource is considered to be fresh when its age is less than the age or period defined by a cache setting. For example, when a browser reloads a web page, it verifies that each cached resource on your hard drive is fresh and loads it. If the resource isn't fresh (stale), an up-to-date copy is loaded from the server.
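The freshness check described above can be sketched in a few lines; the function and parameter names are illustrative only, not part of any CDN or browser API:

```python
from datetime import datetime, timedelta

def is_fresh(cached_at, max_age_seconds, now):
    """A cached resource is fresh while its age stays below the configured
    max age; once stale, it must be validated against the origin server."""
    return (now - cached_at) < timedelta(seconds=max_age_seconds)

cached_at = datetime(2023, 2, 21, 12, 0, 0)
print(is_fresh(cached_at, 3600, cached_at + timedelta(minutes=30)))  # True  (serve from cache)
print(is_fresh(cached_at, 3600, cached_at + timedelta(hours=2)))     # False (stale; revalidate first)
```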
### Validation
-If a resource is considered to be stale, the origin server is asked to validate it, that is, determine whether the data in the cache still matches whatΓÇÖs on the origin server. If the file has been modified on the origin server, the cache updates its version of the resource. Otherwise, if the resource is fresh, the data is delivered directly from the cache without validating it first.
+If a resource is considered stale, the cache asks the origin server to validate it, that is, to determine whether the data in the cache still matches what's on the origin server. If the file has been modified on the origin server, the cache updates its version of the resource. Otherwise, if the resource is fresh, the data is delivered directly from the cache without validating it first.
### CDN caching
Caching is integral to the way a CDN operates to speed up delivery and reduce or
- By offloading work to a CDN, caching can reduce network traffic and the load on the origin server. Doing so reduces cost and resource requirements for the application, even when there are large numbers of users.
-Similar to how caching is implemented in a web browser, you can control how caching is performed in a CDN by sending cache-directive headers. Cache-directive headers are HTTP headers, which are typically added by the origin server. Although most of these headers were originally designed to address caching in client browsers, they are now also used by all intermediate caches, such as CDNs.
+Similar to how caching is implemented in a web browser, you can control how caching is performed in a CDN by sending cache-directive headers. Cache-directive headers are HTTP headers, which are typically added by the origin server. Although most of these headers were originally designed to address caching in client browsers, they're now also used by all intermediate caches, such as CDNs.
Two headers can be used to define cache freshness: `Cache-Control` and `Expires`. `Cache-Control` is more current and takes precedence over `Expires`, if both exist. There are also two types of headers used for validation (called validators): `ETag` and `Last-Modified`. `ETag` is more current and takes precedence over `Last-Modified`, if both are defined.
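The precedence rules in the preceding paragraph can be sketched as a toy illustration; this is not Azure CDN's actual implementation, just the stated ordering applied to a header dictionary:

```python
def freshness_header(headers):
    """Cache-Control takes precedence over Expires when both are present."""
    for name in ("Cache-Control", "Expires"):
        if name in headers:
            return name
    return None

def validator_header(headers):
    """ETag takes precedence over Last-Modified when both are defined."""
    for name in ("ETag", "Last-Modified"):
        if name in headers:
            return name
    return None

headers = {"Cache-Control": "max-age=3600", "Expires": "Thu, 19 Oct 2017 09:28:00 GMT",
           "Last-Modified": "Wed, 18 Oct 2017 09:28:00 GMT"}
print(freshness_header(headers))  # Cache-Control
print(validator_header(headers))  # Last-Modified (no ETag present)
```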
Azure CDN supports the following HTTP cache-directive headers, which define cach
**Cache-Control:** - Introduced in HTTP 1.1 to give web publishers more control over their content and to address the limitations of the `Expires` header. - Overrides the `Expires` header, if both it and `Cache-Control` are defined.-- When used in an HTTP request from the client to the CDN POP, `Cache-Control` is ignored by all Azure CDN profiles, by default.
+- When used in an HTTP request from the client to the CDN POP, `Cache-Control` gets ignored by all Azure CDN profiles, by default.
- When used in an HTTP response from the origin server to the CDN POP: - **Azure CDN Standard/Premium from Verizon** and **Azure CDN Standard from Microsoft** support all `Cache-Control` directives. - **Azure CDN Standard/Premium from Verizon** and **Azure CDN Standard from Microsoft** honors caching behaviors for Cache-Control directives in [RFC 7234 - Hypertext Transfer Protocol (HTTP/1.1): Caching (ietf.org)](https://tools.ietf.org/html/rfc7234#section-5.2.2.8).
Azure CDN supports the following HTTP cache-directive headers, which define cach
When the cache is stale, HTTP cache validators are used to compare the cached version of a file with the version on the origin server. **Azure CDN Standard/Premium from Verizon** supports both `ETag` and `Last-Modified` validators by default, while **Azure CDN Standard from Microsoft** and **Azure CDN Standard from Akamai** supports only `Last-Modified` by default. **ETag:**-- **Azure CDN Standard/Premium from Verizon** supports `ETag` by default, while **Azure CDN Standard from Microsoft** and **Azure CDN Standard from Akamai** do not.
+- **Azure CDN Standard/Premium from Verizon** supports `ETag` by default, while **Azure CDN Standard from Microsoft** and **Azure CDN Standard from Akamai** don't.
- `ETag` defines a string that is unique for every file and version of a file. For example, `ETag: "17f0ddd99ed5bbe4edffdd6496d7131f"`. - Introduced in HTTP 1.1 and is more current than `Last-Modified`. Useful when the last modified date is difficult to determine. - Supports both strong validation and weak validation; however, Azure CDN supports only strong validation. For strong validation, the two resource representations must be byte-for-byte identical. - A cache validates a file that uses `ETag` by sending an `If-None-Match` header with one or more `ETag` validators in the request. For example, `If-None-Match: "17f0ddd99ed5bbe4edffdd6496d7131f"`. If the server's version matches an `ETag` validator on the list, it sends status code 304 (Not Modified) in its response. If the version is different, the server responds with status code 200 (OK) and the updated resource. **Last-Modified:**-- For **Azure CDN Standard/Premium from Verizon** only, `Last-Modified` is used if `ETag` is not part of the HTTP response.
+- For **Azure CDN Standard/Premium from Verizon** only, `Last-Modified` is used if `ETag` isn't part of the HTTP response.
- Specifies the date and time that the origin server has determined the resource was last modified. For example, `Last-Modified: Thu, 19 Oct 2017 09:28:00 GMT`.-- A cache validates a file using `Last-Modified` by sending an `If-Modified-Since` header with a date and time in the request. The origin server compares that date with the `Last-Modified` header of the latest resource. If the resource has not been modified since the specified time, the server returns status code 304 (Not Modified) in its response. If the resource has been modified, the server returns status code 200 (OK) and the updated resource.
+- A cache validates a file using `Last-Modified` by sending an `If-Modified-Since` header with a date and time in the request. The origin server compares that date with the `Last-Modified` header of the latest resource. If the resource hasn't been modified since the specified time, the server returns status code 304 (Not Modified) in its response. If the resource has been modified, the server returns status code 200 (OK) and the updated resource.
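The `If-Modified-Since` exchange described above can be sketched with Python's standard library; this is illustrative origin-side logic only, and the function name is an assumption:

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

def conditional_get_status(last_modified, if_modified_since):
    """Origin-side check for If-Modified-Since: return 304 when the resource
    hasn't changed since the client's cached copy; otherwise 200, and the
    server would send the updated resource."""
    return 304 if last_modified <= parsedate_to_datetime(if_modified_since) else 200

last_modified = datetime(2017, 10, 19, 9, 28, tzinfo=timezone.utc)
header = format_datetime(last_modified, usegmt=True)  # 'Thu, 19 Oct 2017 09:28:00 GMT'
print(conditional_get_status(last_modified, header))  # 304 (Not Modified)

modified_later = datetime(2017, 10, 20, tzinfo=timezone.utc)
print(conditional_get_status(modified_later, header)) # 200 (resource changed)
```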
## Determining which files can be cached
-Not all resources can be cached. The following table shows what resources can be cached, based on the type of HTTP response. Resources delivered with HTTP responses that don't meet all of these conditions cannot be cached. For **Azure CDN Premium from Verizon** only, you can use the rules engine to customize some of these conditions.
+Not all resources can be cached. The following table shows what resources can be cached, based on the type of HTTP response. Resources delivered with HTTP responses that don't meet all of these conditions can't be cached. For **Azure CDN Premium from Verizon** only, you can use the rules engine to customize some of these conditions.
| | Azure CDN from Microsoft | Azure CDN from Verizon | Azure CDN from Akamai | |--|--|||
Not all resources can be cached. The following table shows what resources can be
| **HTTP methods** | GET, HEAD | GET | GET | | **File size limits** | 300 GB | 300 GB | - General web delivery optimization: 1.8 GB<br />- Media streaming optimizations: 1.8 GB<br />- Large file optimization: 150 GB |
-For **Azure CDN Standard from Microsoft** caching to work on a resource, the origin server must support any HEAD and GET HTTP requests and the content-length values must be the same for any HEAD and GET HTTP responses for the asset. For a HEAD request, the origin server must support the HEAD request, and must respond with the same headers as if it had received a GET request.
+For **Azure CDN Standard from Microsoft** caching to work on a resource, the origin server must support any HEAD and GET HTTP requests, and the content-length values must be the same for any HEAD and GET HTTP responses for the asset. For a HEAD request, the origin server must support the HEAD request and must respond with the same headers as if it had received a GET request.
## Default caching behavior
The following table describes the default caching behavior for the Azure CDN pro
| | Microsoft: General web delivery | Verizon: General web delivery | Verizon: DSA | Akamai: General web delivery | Akamai: DSA | Akamai: Large file download | Akamai: general or VOD media streaming | ||--|-||--||-|--| | **Honor origin** | Yes | Yes | No | Yes | No | Yes | Yes |
-| **CDN cache duration** | 2 days |7 days | None | 7 days | None | 1 day | 1 year |
+| **CDN cache duration** | Two days | Seven days | None | Seven days | None | One day | One year |
**Honor origin**: Specifies whether to honor the supported cache-directive headers if they exist in the HTTP response from the origin server.
cdn Cdn Pop Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-pop-locations.md
Title: Azure CDN POP locations by region | Microsoft Docs description: This article lists Azure CDN POP locations, sorted by region, for Azure CDN products. ---++ ms.assetid: 669ef140-a6dd-4b62-9b9d-3f375a14215e na Previously updated : 05/18/2021 Last updated : 02/21/2023
> * [Microsoft POP locations by abbreviation](microsoft-pop-abbreviations.md) > - This article lists current metros containing point-of-presence (POP) locations, sorted by region, for Azure Content Delivery Network (CDN) products. Each metro may contain more than one POP. For example, Azure CDN from Microsoft has 118 POPs across 100 metro cities. > [!IMPORTANT]
cdn Cdn Purge Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-purge-endpoint.md
description: Learn how to purge all cached content from an Azure Content Deliver
documentationcenter: '' --+ ms.assetid: 0b50230b-fe82-4740-90aa-95d4dde8bd4f na Previously updated : 06/30/2021 Last updated : 02/21/2023 + # Purge an Azure CDN endpoint
-## Overview
-Azure CDN edge nodes will cache assets until the asset's time-to-live (TTL) expires. After the asset's TTL expires, when a client requests the asset from the edge node, the edge node will retrieve a new updated copy of the asset to serve the client request and store refresh the cache.
-The best practice to make sure your users always obtain the latest copy of your assets is to version your assets for each update and publish them as new URLs. CDN will immediately retrieve the new assets for the next client requests. Sometimes you may wish to purge cached content from all edge nodes and force them all to retrieve new updated assets. This might be due to updates to your web application, or to quickly update assets that contain incorrect information.
+Azure CDN edge nodes cache content until the content's time-to-live (TTL) expires. After the TTL expires, when a client requests the content from the edge node, the edge node retrieves a new updated copy of the content to serve to the client and then stores the refreshed content in its cache.
+
+The best practice to make sure your users always obtain the latest copy of your assets is to version your assets for each update and publish them as new URLs. CDN will immediately retrieve the new assets for the next client requests. Sometimes you may wish to purge cached content from all edge nodes and force them all to retrieve new updated assets. The reason might be due to updates to your web application, or to quickly update assets that contain incorrect information.
> [!TIP] > Note that purging only clears the cached content on the CDN edge servers. Any downstream caches, such as proxy servers and local browser caches, may still hold a cached copy of the file. It's important to remember this when you set a file's time-to-live. You can force a downstream client to request the latest version of your file by giving it a unique name every time you update it, or by taking advantage of [query string caching](cdn-query-string.md).
->
->
+>
-This tutorial walks you through purging assets from all edge nodes of an endpoint.
+This guide walks you through purging assets from all edge nodes of an endpoint.
-## Walkthrough
-1. In the [Azure Portal](https://portal.azure.com), browse to the CDN profile containing the endpoint you wish to purge.
-2. From the CDN profile blade, click the purge button.
-
- ![CDN profile blade](./media/cdn-purge-endpoint/cdn-profile-blade.png)
-
- The Purge blade opens.
-
- ![CDN purge blade](./media/cdn-purge-endpoint/cdn-purge-blade.png)
-3. On the Purge blade, select the service address you wish to purge from the URL dropdown.
+## Purge contents from an Azure CDN endpoint
+
+1. In the [Azure portal](https://portal.azure.com), browse to the CDN profile containing the endpoint you wish to purge.
+
+1. From the CDN profile page, select the purge button.
+
+ :::image type="content" source="./media/cdn-purge-endpoint/cdn-profile-blade.png" alt-text="Screenshot of the overview page for an Azure CDN profile.":::
- ![Purge form](./media/cdn-purge-endpoint/cdn-purge-form.png)
+1. On the Purge page, select the service address you wish to purge from the URL dropdown.
+
+ :::image type="content" source="./media/cdn-purge-endpoint/cdn-purge-form.png" alt-text="Screenshot of the purge form for an Azure CDN endpoint.":::
> [!NOTE]
- > You can also get to the Purge blade by clicking the **Purge** button on the CDN endpoint blade. In that case, the **URL** field will be pre-populated with the service address of that specific endpoint.
- >
+ > You can also get to the purge page by clicking the **Purge** button on the CDN endpoint blade. In that case, the **URL** field will be pre-populated with the service address of that specific endpoint.
>
-4. Select what assets you wish to purge from the edge nodes. If you wish to clear all assets, click the **Purge all** checkbox. Otherwise, type the path of each asset you wish to purge in the **Path** textbox. Below formats are supported in the path.
- 1. **Single URL purge**: Purge individual asset by specifying the full URL, with or without the file extension, e.g.,`/pictures/strasbourg.png`; `/pictures/strasbourg`
- 2. **Wildcard purge**: Asterisk (\*) may be used as a wildcard. Purge all folders, sub-folders and files under an endpoint with `/*` in the path or purge all sub-folders and files under a specific folder by specifying the folder followed by `/*`, e.g.,`/pictures/*`. Note that wildcard purge is not supported by Azure CDN from Akamai currently.
+
+1. Select what assets you wish to purge from the edge nodes. If you wish to clear all assets, select the **Purge all** checkbox. Otherwise, type the path of each asset you wish to purge in the **Path** textbox. The following formats for paths are supported:
+
+ 1. **Single URL purge**: Purge an individual asset by specifying the full URL, with or without the file extension, for example, `/pictures/strasbourg.png` or `/pictures/strasbourg`.
+ 2. **Wildcard purge**: You can use an asterisk (\*) as a wildcard. Purge all folders, subfolders, and files under an endpoint with `/*` in the path, or purge all subfolders and files under a specific folder by specifying the folder followed by `/*`, for example, `/pictures/*`. Wildcard purge isn't supported by Azure CDN from Akamai currently.
3. **Root domain purge**: Purge the root of the endpoint with "/" in the path. > [!TIP]
This tutorial walks you through purging assets from all edge nodes of an endpoin
> > 1. In Azure CDN from Microsoft, query strings in the purge URL path are not considered. If the path to purge is provided as `/TestCDN?myname=max`, only `/TestCDN` is considered. The query string `myname=max` is omitted. Both `TestCDN?myname=max` and `TestCDN?myname=clark` will be purged.
-5. Click the **Purge** button.
+5. Select the **Purge** button.
![Purge button](./media/cdn-purge-endpoint/cdn-purge-button.png)
This tutorial walks you through purging assets from all edge nodes of an endpoin
> >
-## See also
+## Next steps
+ * [Pre-load assets on an Azure CDN endpoint](cdn-preload-endpoint.md) * [Azure CDN REST API reference - Purge or Pre-Load an Endpoint](/rest/api/cdn/endpoints)-
cdn Create Profile Endpoint Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/create-profile-endpoint-bicep.md
One Azure resource is defined in the Bicep file:
```azurecli az group create --name exampleRG --location eastus
- az deployment group create --resource-group exampleRG --template-file main.bicep --parameters profileName=<profile-name> endpointName=<endpoint-name> originURL=<origin-url>
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters profileName=<profile-name> endpointName=<endpoint-name> originUrl=<origin-url>
``` # [PowerShell](#tab/PowerShell) ```azurepowershell New-AzResourceGroup -Name exampleRG -Location eastus
- New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -profileName "<profile-name>" -endpointName "<endpoint-name>" -originURL "<origin-url>"
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -profileName "<profile-name>" -endpointName "<endpoint-name>" -originUrl "<origin-url>"
```
cognitive-services Pronunciation Assessment Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/pronunciation-assessment-tool.md
The complete transcription is shown in the `text` attribute. You can see accurac
### Assessment scores in streaming mode
-Pronunciation Assessment supports uninterrupted streaming mode. The demo on the Speech Studio supports up to 60 minutes of recording in streaming mode for evaluation. The Speech Studio demo allows for up to 60 minutes of recording in streaming mode for evaluation. As long as you do not press the stop recording button, the evaluation process does not finish and you can pause and resume evaluation conveniently.
+Pronunciation Assessment supports uninterrupted streaming mode. The Speech Studio demo allows for up to 60 minutes of recording in streaming mode for evaluation. As long as you don't press the stop recording button, the evaluation process doesn't finish and you can pause and resume evaluation conveniently.
Pronunciation Assessment evaluates three aspects of pronunciation: accuracy, fluency, and completeness. At the bottom of **Assessment result**, you can see **Pronunciation score**, an aggregated overall score that includes three sub-aspects: **Accuracy score**, **Fluency score**, and **Completeness score**. In streaming mode, because the accuracy, fluency, and completeness scores vary over time throughout the recording process, Speech Studio displays an approximate overall score incrementally before the end of the evaluation, weighted only by the **Accuracy score** and **Fluency score**. The **Completeness score** is calculated only at the end of the evaluation, after you press the stop button, so the final overall score is aggregated from the **Accuracy score**, **Fluency score**, and **Completeness score** with weights.
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
When using our embeddings models, keep in mind their limitations and risks.
### GPT-3 Models

| Model ID | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions |
| --- | --- | --- | --- | --- |
-| ada | Yes | No | N/A | East US, South Central US, West Europe |
-| text-ada-001 | Yes | No | East US, South Central US, West Europe | N/A |
-| babbage | Yes | No | N/A | East US, South Central US, West Europe |
-| text-babbage-001 | Yes | No | East US, South Central US, West Europe | N/A |
-| curie | Yes | No | N/A | East US, South Central US, West Europe |
-| text-curie-001 | Yes | No | East US, South Central US, West Europe | N/A |
-| davinci<sup>1</sup> | Yes | No | N/A | East US, South Central US, West Europe |
+| ada | Yes | No | N/A | East US<sup>2</sup>, South Central US, West Europe |
+| text-ada-001 | Yes | No | East US<sup>2</sup>, South Central US, West Europe | N/A |
+| babbage | Yes | No | N/A | East US<sup>2</sup>, South Central US, West Europe |
+| text-babbage-001 | Yes | No | East US<sup>2</sup>, South Central US, West Europe | N/A |
+| curie | Yes | No | N/A | East US<sup>2</sup>, South Central US, West Europe |
+| text-curie-001 | Yes | No | East US<sup>2</sup>, South Central US, West Europe | N/A |
+| davinci<sup>1</sup> | Yes | No | N/A | East US<sup>2</sup>, South Central US, West Europe |
| text-davinci-001 | Yes | No | South Central US, West Europe | N/A |
| text-davinci-002 | Yes | No | East US, South Central US, West Europe | N/A |
| text-davinci-003 | Yes | No | East US | N/A |
| text-davinci-fine-tune-002<sup>1</sup> | Yes | No | N/A | East US, West Europe |

<sup>1</sup> The model is available by request only. Currently we aren't accepting new requests to use the model.
+<br><sup>2</sup> East US is currently unavailable for new customers to fine-tune due to high demand. Please use the South Central US region for US-based training.
### Codex Models

| Model ID | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions |
| --- | --- | --- | --- | --- |
-| code-cushman-001<sup>2</sup> | Yes | No | South Central US, West Europe | East US, South Central US, West Europe |
+| code-cushman-001<sup>1</sup> | Yes | No | South Central US, West Europe | East US<sup>2</sup>, South Central US, West Europe |
| code-davinci-002 | Yes | No | East US, West Europe | N/A |
-| code-davinci-fine-tune-002<sup>2</sup> | Yes | No | N/A | East US, West Europe |
+| code-davinci-fine-tune-002<sup>1</sup> | Yes | No | N/A | East US<sup>2</sup>, West Europe |
-<sup>2</sup> The model is available for fine-tuning by request only. Currently we aren't accepting new requests to fine-tune the model.
+<sup>1</sup> The model is available for fine-tuning by request only. Currently we aren't accepting new requests to fine-tune the model.
+<br><sup>2</sup> East US is currently unavailable for new customers to fine-tune due to high demand. Please use the South Central US region for US-based training.
### Embeddings Models

| Model ID | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions |
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/overview.md
Azure OpenAI Service provides REST API access to OpenAI's powerful language mode
| Feature | Azure OpenAI |
| --- | --- |
| Models available | GPT-3 base series <br> Codex series <br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.|
-| Fine-tuning | Ada <br> Babbage <br> Curie <br> Cushman* <br> Davinci* <br> \* Currently unavailable.|
+| Fine-tuning | Ada <br> Babbage <br> Curie <br> Cushman* <br> Davinci* <br> \* Currently unavailable. \*\* East US fine-tuning is currently unavailable to new customers. Please use South Central US for US-based training.|
| Price | [Available here](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) |
-| Virtual network support | Yes |
+| Virtual network support & private link support | Yes |
| Managed Identity| Yes, via Azure Active Directory |
| UI experience | **Azure Portal** for account & resource management, <br> **Azure OpenAI Service Studio** for model exploration and fine tuning |
| Regional availability | East US <br> South Central US <br> West Europe |
communication-services Call Readiness Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/call-readiness/call-readiness-overview.md
+
+ Title: Creating a Call Readiness Experience using Azure Communication Services UI Library
+
+description: Learn how to use Azure Communication Services with the UI Library to create an experience that gets users ready to join a call.
+ Last updated : 11/17/2022
+# Getting started with Call Readiness and the UI Library
++
+![Flow of a user joining a call from an email link](../media/call-readiness/joining-call-from-email-link.png)
+
+When a user intends to join a web call, their primary focus is on the conversation they want to have with the other person(s) on the call. This persona could be a doctor, teacher, financial advisor, or friend. The conversation itself may pose enough stress, let alone navigating the process of making sure they and their device(s) are ready to be seen and/or heard. It's critical to ensure the device and client they're using are ready for the call.
+
+It may be impossible to predict every issue or combination of issues that may arise, but by applying this tutorial you can:
+
+- Reduce the likelihood of issues affecting a user during a call
+- Only expose an issue if it's going to negatively impact the experience
+- Avoid making a user hunt for a resolution; offer guided help to resolve the issue
+
+Related to this tutorial is the Azure Communication Services [Network Testing Diagnostic Tool](../../concepts/developer-tools/network-diagnostic.md). Users can use the Network Testing Diagnostics Tool for further troubleshooting in customer support scenarios.
+
+## Tutorial Structure
+
+In this tutorial, we use the Azure Communication Services UI Library to create an experience that gets the user ready to join a call. This tutorial is structured into three parts:
+
+- Part 1: [Getting your user onto a supported browser](./call-readiness-tutorial-part-1-browser-support.md)
+- Part 2: [Ensuring your App has access to the microphone and camera](./call-readiness-tutorial-part-2-requesting-device-access.md)
+- Part 3: [Having your user select their desired microphone and camera](./call-readiness-tutorial-part-3-camera-microphone-setup.md)
+
+## Prerequisites
+
+- [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
+- [Node.js](https://nodejs.org/), Active LTS and Maintenance LTS versions (10.14.1 recommended). Use the `node --version` command to check your version.
+
+## Download code
+
+Access the full code for this tutorial on [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/ui-library-call-readiness).
+
+## App Structure
+
+Users have several hurdles to cross when joining a call from browser support to selecting the correct camera. This tutorial uses [React](https://reactjs.org/) with Azure Communication Services [UI Library](https://aka.ms/acsstorybook) to create an app that performs call readiness checks. These checks guide the user through browser support, camera and microphone permissions and finally device setup.
+
+The user flow of the App is as follows:
+
+![flow diagram showing user flow through the call readiness sample](../media/call-readiness/call-readiness-flow-diagram.png)
+<!--
+This is the mermaid definition for the above graph. Use this to edit and regenerate the graph.
+Note: Arrows have been split with a / to prevent this comment block from breaking.
+```mermaid
+flowchart TD
+ Start -.-> BrowserCheck{Is Environment supported}
+ subgraph S1[Part 1: Check Browser Support]
+ BrowserCheck -/-> |supported| C1[Continue]
+ BrowserCheck -/-> |operating system unsupported|BrowserUnsupportedPrompt[Show 'Browser Unsupported' Prompt]
+ BrowserCheck -/-> |browser unsupported|BrowserUnsupportedPrompt[Show 'Browser Unsupported' Prompt]
+ BrowserCheck -/-> |browser version unsupported|BrowserUnsupportedPrompt[Show 'Browser Unsupported' Prompt]
+ end
+ subgraph S2[Part 2: Get Device Permissions]
+ C1 -.-> DeviceCheckStart{Check Device Permission State}
+ DeviceCheckStart -/-> |Device Permissions Unknown|DeviceCheckerGeneric[Show 'Checking for device permissions' Prompt]
+ DeviceCheckerGeneric -/->|Permissions updated| DeviceCheckStart
+ DeviceCheckStart -/-> |User needs prompted|DeviceCheckerPrompt[Show 'Please Accept Permissions' Prompt]
+ DeviceCheckerPrompt -/->|Permissions updated| DeviceCheckStart
+ DeviceCheckStart -/-> |Permissions Denied|DeviceCheckerDenied[Show 'Permissions Denied' Prompt]
+ DeviceCheckStart --/-> |Permissions Accepted|C2[Continue]
+ end
+ subgraph Part 3: Device Setup
+ C2 -.-> DeviceSetup[Camera and Microphone Setup]
+ DeviceSetup -/-> |User updates Audio and Video| DeviceSetup
+ end
+ DeviceSetup -.-> TestComplete[Call Readiness complete. User is ready to join their Call]
+```
+-->
+
+Your final app prompts the user to use a supported browser and grant access to the camera and microphone, then lets the user choose and preview their microphone and camera settings before joining the call:
+
+![Gif showing the end to end experience of the call readiness checks and device setup](../media/call-readiness/call-readiness-user-flow.gif)
+
+## Set up the Project
+
+To set up the [React](https://reactjs.org/) App, we use the create-react-app template for this quickstart. This `create-react-app` command creates an easy to run TypeScript App powered by React. The command installs the Azure Communication Services npm packages, and the [FluentUI](https://developer.microsoft.com/fluentui/) npm package for creating advanced UI. For more information on create-react-app, see: [Get Started with React](https://reactjs.org/docs/create-a-new-react-app.html).
+
+```bash
+# Create an Azure Communication Services App powered by React.
+npx create-react-app ui-library-call-readiness-app --template communication-react
+
+# Change to the directory of the newly created App.
+cd ui-library-call-readiness-app
+```
+
+At the end of this process, you should have a full application inside of the folder `ui-library-call-readiness-app`.
+For this quickstart, we modify the files inside of the `src` folder.
+
+### Install Packages
+
+As this feature is in public preview, you must use the beta versions of the Azure Communication Services npm packages. Use the `npm install` command to install these packages:
+
+```bash
+# Install Public Preview versions of the Azure Communication Services Libraries.
+npm install @azure/communication-react@1.5.1-beta.1 @azure/communication-calling@1.10.0-beta.1
+```
+
+> [!NOTE]
+> If you are installing the communication packages into an existing App, `@azure/communication-react` currently does not support React v18. To downgrade to React v17 or less follow [these instructions](https://azure.github.io/communication-ui-library/?path=/docs/setup-communication-react--page).
+
+### Initial App Setup
+
+To get us started, we replace the create-react-app default `App.tsx` content with a basic setup that:
+
+- Registers the necessary icons we use in this tutorial
+- Sets a theme provider that can be used to set a custom theme
+- Create a [`StatefulCallClient`](https://azure.github.io/communication-ui-library/?path=/docs/statefulclient-overview--page) with a provider that gives child components access to the call client
+
+`src/App.tsx`
+
+```ts
+import { CallClientProvider, createStatefulCallClient, FluentThemeProvider, useTheme } from '@azure/communication-react';
+import { initializeIcons, registerIcons, Stack, Text } from '@fluentui/react';
+import { DEFAULT_COMPONENT_ICONS } from '@azure/communication-react';
+import { CheckmarkCircle48Filled } from '@fluentui/react-icons';
+
+// Initializing and registering icons should only be done once per app.
+initializeIcons();
+registerIcons({ icons: DEFAULT_COMPONENT_ICONS });
+
+const USER_ID = 'user1'; // In your production app replace this with an Azure Communication Services User ID
+const callClient = createStatefulCallClient({ userId: { communicationUserId: USER_ID } });
+
+/**
+ * Entry point of a React app.
+ */
+const App = (): JSX.Element => {
+ return (
+ <FluentThemeProvider>
+ <CallClientProvider callClient={callClient}>
+ <TestComplete />
+ </CallClientProvider>
+ </FluentThemeProvider>
+ );
+}
+
+export default App;
+
+/**
+ * Final page to highlight the call readiness checks have completed.
+ * Replace this with your own App's next stage.
+ */
+export const TestComplete = (): JSX.Element => {
+ const theme = useTheme();
+ return (
+ <Stack verticalFill verticalAlign="center" horizontalAlign="center" tokens={{ childrenGap: "1rem" }}>
+ <CheckmarkCircle48Filled primaryFill={theme.palette.green} />
+ <Text variant="xLarge">Call Readiness Complete</Text>
+ <Text variant="medium">From here you can have the user join their call using their chosen settings.</Text>
+ </Stack>
+ );
+};
+```
+
+### Run Create React App
+
+Let's test our setup by running:
+
+```bash
+# Run the React App
+npm start
+```
+
+Once the App is running, visit `http://localhost:3000` in your browser to see your running App.
+You should see a green checkmark with a `Test Complete` message.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Part 1: Browser Support](./call-readiness-tutorial-part-1-browser-support.md)
communication-services Call Readiness Tutorial Part 1 Browser Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/call-readiness/call-readiness-tutorial-part-1-browser-support.md
+
+ Title: Ensuring a user is on a supported browser using Azure Communication Services UI Library
+
+description: Learn how to use Azure Communication Services with the UI Library to create an experience that gets users ready to join a call - Part 1.
+ Last updated : 11/17/2022
+# Creating a Call Readiness Experience using Azure Communication Services
++
+In this tutorial, we're using Azure Communication Services with the [UI Library](https://aka.ms/acsstorybook) to create an experience that gets users ready to join a call. The UI Library provides a set of rich components and UI controls that can be used to produce a Call Readiness experience, and a rich set of APIs to understand the user state.
+
+## Prerequisites
+
+- Follow the App setup process on the previous part of this tutorial: [Call Readiness - Overview](./call-readiness-overview.md)
+
+## Download code
+
+Access the full code for this tutorial on [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/ui-library-call-readiness).
+
+## Checking for Browser Support
+
+To ensure the user gets the best experience, we want to first make sure they're on a [supported browser](../../concepts/voice-video-calling/calling-sdk-features.md#javascript-calling-sdk-support-by-os-and-browser).
+In this section, we create a page that displays "Preparing your session" whilst we perform a quick support check in the background on the user's browser.
+
+![Gif showing browser check being performed](../media/call-readiness/checking-browser-support.gif)
+
+### Preparing Your Session Page
+
+Create a new file called `PreparingYourSession.tsx` where we create a spinner to show to the user while we perform asynchronous checks in the background:
+
+`src/PreparingYourSession.tsx`
+
+```ts
+import { useTheme } from '@azure/communication-react';
+import { ISpinnerStyles, IStackStyles, ITextStyles, ITheme, Spinner, Stack, Text } from '@fluentui/react';
+
+/** This page displays a spinner to the user. This is used to show the user that background checks are being performed. */
+export const PreparingYourSession = (): JSX.Element => {
+ const theme = useTheme();
+ return (
+ <Stack verticalFill verticalAlign="center" horizontalAlign="center" tokens={{ childrenGap: '3rem' }}>
+ <Stack styles={spinnerContainerStyles(theme)}>
+ <Spinner styles={spinnerStyles} />
+ </Stack>
+ <Stack horizontalAlign="center">
+ <Text styles={headingStyles} variant="large">Preparing your session</Text>
+ <Text variant="medium">Please be patient</Text>
+ </Stack>
+ </Stack>
+ );
+};
+
+const headingStyles: ITextStyles = {
+ root: {
+ fontWeight: '600',
+ lineHeight: '2rem'
+ }
+};
+
+const spinnerStyles: ISpinnerStyles = {
+ circle: {
+ height: '2.75rem',
+ width: '2.75rem',
+ borderWidth: '0.2rem'
+ }
+};
+
+const spinnerContainerStyles = (theme: ITheme): IStackStyles => ({
+ root: {
+ padding: '1.75rem',
+ borderRadius: '50%',
+ background: theme.palette?.themeLighterAlt
+ }
+});
+```
+
+We can then hook up this Preparing your session screen into our App.
+In `App.tsx`, add a variable `testState` to track the state of the app. While `testState` is in the `runningEnvironmentChecks` state, we show the Preparing Your Session screen.
+
+First, add the following imports to our `App.tsx` file that we created in the overview:
+
+```ts
+import { useState } from 'react';
+import { PreparingYourSession } from './PreparingYourSession';
+```
+
+After that's done, update our `App.tsx` file to include the new spinner.
+
+```ts
+type TestingState = 'runningEnvironmentChecks' | 'finished';
+
+const App = (): JSX.Element => {
+ const [testState, setTestState] = useState<TestingState>('runningEnvironmentChecks');
+
+ return (
+ <FluentThemeProvider>
+ <CallClientProvider callClient={callClient}>
+ {/* Show a Preparing your session screen while running the call readiness checks */}
+ {testState === 'runningEnvironmentChecks' && (
+ <>
+ <PreparingYourSession />
+ </>
+ )}
+
+ {/* After the device setup is complete, take the user to the call. For this sample we show a test complete page. */}
+ {testState === 'finished' && <TestComplete />}
+ </CallClientProvider>
+ </FluentThemeProvider>
+ );
+}
+```
+
+### Performing an Environment information check
+
+First, create a utility file called `environmentSupportUtils.ts`. Inside this file, we add a method `checkEnvironmentSupport`. This method uses the [Calling Stateful Client](https://azure.github.io/communication-ui-library/?path=/docs/statefulclient-overview--page) to request information about the environment that the Calling Stateful Client is running on.
+
+`src/environmentSupportUtils.ts`
+
+```ts
+import { Features, EnvironmentInfo } from "@azure/communication-calling";
+import { StatefulCallClient } from "@azure/communication-react";
+
+/** Use the CallClient's getEnvironmentInfo() method to check if the browser is supported. */
+export const checkEnvironmentSupport = async (callClient: StatefulCallClient): Promise<EnvironmentInfo> => {
+ const environmentInfo = await callClient.feature(Features.DebugInfo).getEnvironmentInfo();
+ console.info(environmentInfo); // view console logs in the browser to see what environment info is returned
+ return environmentInfo;
+}
+```
+
+The data returned from `checkEnvironmentSupport` contains the following information:
+
+- Browser support
+- Browser version support
+- Operating system (Platform) support
+- Detailed environment information
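As a sketch of how those fields map to the three failure cases covered in this tutorial, a small helper (not part of the sample app; the boolean field names are the ones `EnvironmentInfo` exposes and that the component later in this tutorial checks) might look like:

```typescript
// Sketch only: summarizes the support booleans returned by checkEnvironmentSupport
// in the same priority order the tutorial's EnvironmentChecksComponent checks them.
type SupportFlags = {
  isSupportedPlatform: boolean;
  isSupportedBrowser: boolean;
  isSupportedBrowserVersion: boolean;
};

const describeSupport = (info: SupportFlags): string => {
  if (!info.isSupportedPlatform) return 'operating system unsupported';
  if (!info.isSupportedBrowser) return 'browser unsupported';
  if (!info.isSupportedBrowserVersion) return 'browser version unsupported';
  return 'supported';
};
```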
+
+### Informing the user they are on an unsupported browser
+
+Next, we need to use this information provided from the Calling SDK to inform the user of the state of their environment if there's an issue. The UI library provides three different components to serve this purpose depending on what the issue is.
+
+- `UnsupportedOperatingSystem`
+- `UnsupportedBrowser`
+- `UnsupportedBrowserVersion`
+
+We start by hosting the UI Library's components inside a [FluentUI Modal](https://developer.microsoft.com/fluentui#/controls/web/modal):
+Create a new file called `UnsupportedEnvironmentPrompts.tsx` where we create the different prompts:
+
+`src/UnsupportedEnvironmentPrompts.tsx`
+
+```ts
+import { UnsupportedOperatingSystem, UnsupportedBrowser, UnsupportedBrowserVersion } from '@azure/communication-react';
+import { Modal } from '@fluentui/react';
+
+/**
+ * Modal dialog that shows a Browser Version Unsupported Prompt
+ * Use the `onTroubleShootingClick` argument to redirect the user to further troubleshooting.
+ * Use the `onContinueAnywayClick` argument to allow the user to continue to the next step even though they are on an unsupported browser version.
+ */
+export const BrowserVersionUnsupportedPrompt = (props: { isOpen: boolean, onContinueAnyway:() => void }): JSX.Element => (
+ <Modal isOpen={props.isOpen}>
+ <UnsupportedBrowserVersion
+ onTroubleshootingClick={() => alert('This callback should be used to take the user to further troubleshooting')}
+ onContinueAnywayClick={() => props.onContinueAnyway()}
+ />
+ </Modal>
+);
+
+/**
+ * Modal dialog that shows a Browser Unsupported Prompt
+ * Use the `onTroubleShootingClick` argument to redirect the user to further troubleshooting.
+ */
+export const BrowserUnsupportedPrompt = (props: { isOpen: boolean }): JSX.Element => (
+ <Modal isOpen={props.isOpen}>
+ <UnsupportedBrowser
+ onTroubleshootingClick={() => alert('This callback should be used to take the user to further troubleshooting')}
+ />
+ </Modal>
+);
+
+/**
+ * Modal dialog that shows an Operating System Unsupported Prompt
+ * Use the `onTroubleShootingClick` argument to redirect the user to further troubleshooting.
+ */
+export const OperatingSystemUnsupportedPrompt = (props: { isOpen: boolean }): JSX.Element => (
+ <Modal isOpen={props.isOpen}>
+ <UnsupportedOperatingSystem
+ onTroubleshootingClick={() => alert('This callback should be used to take the user to further troubleshooting')}
+ />
+ </Modal>
+);
+```
+
+We can then show these prompts in an Environment Check Component.
+Create a file called `EnvironmentChecksComponent.tsx` that contains the logic for showing this prompt:
+This component has a callback `onTestsSuccessful` that can take the user to the next page in the App.
+
+`src/EnvironmentChecksComponent.tsx`
+
+```ts
+import { useEffect, useState } from 'react';
+import { BrowserUnsupportedPrompt, BrowserVersionUnsupportedPrompt, OperatingSystemUnsupportedPrompt } from './UnsupportedEnvironmentPrompts';
+import { useCallClient } from '@azure/communication-react';
+import { checkEnvironmentSupport } from './environmentSupportUtils';
+
+export type EnvironmentChecksState = 'runningEnvironmentChecks' |
+ 'operatingSystemUnsupported' |
+ 'browserUnsupported' |
+ 'browserVersionUnsupported';
+
+/**
+ * This component is a demo of how to use the StatefulCallClient with CallReadiness Components to get a user
+ * ready to join a call.
+ * This component checks the browser support.
+ */
+export const EnvironmentChecksComponent = (props: {
+ /**
+ * Callback triggered when the tests are complete and successful
+ */
+ onTestsSuccessful: () => void
+}): JSX.Element => {
+ const [currentCheckState, setCurrentCheckState] = useState<EnvironmentChecksState>('runningEnvironmentChecks');
+
+
+ // Run call readiness checks when component mounts
+ const callClient = useCallClient();
+ useEffect(() => {
+ const runEnvironmentChecks = async (): Promise<void> => {
+
+ // First we get the environment information from the calling SDK.
+ const environmentInfo = await checkEnvironmentSupport(callClient);
+
+ if (!environmentInfo.isSupportedPlatform) {
+ setCurrentCheckState('operatingSystemUnsupported');
+ // If the platform or operating system is not supported we stop here and display a modal to the user.
+ return;
+ } else if (!environmentInfo.isSupportedBrowser) {
+ setCurrentCheckState('browserUnsupported');
+ // If browser support fails, we stop here and display a modal to the user.
+ return;
+ } else if (!environmentInfo.isSupportedBrowserVersion) {
+ setCurrentCheckState('browserVersionUnsupported');
+ /**
+ * If the browser version is unsupported, we stop here and show a modal that can allow the user
+ * to continue into the call.
+ */
+ return;
+ } else {
+ props.onTestsSuccessful();
+ }
+ };
+
+ runEnvironmentChecks();
+ // eslint-disable-next-line react-hooks/exhaustive-deps
+ }, []);
+
+ return (
+ <>
+ {/* We show this when the operating system is unsupported */}
+ <OperatingSystemUnsupportedPrompt isOpen={currentCheckState === 'operatingSystemUnsupported'} />
+
+ {/* We show this when the browser is unsupported */}
+ <BrowserUnsupportedPrompt isOpen={currentCheckState === 'browserUnsupported'} />
+
+ {/* We show this when the browser version is unsupported */}
+ <BrowserVersionUnsupportedPrompt isOpen={currentCheckState === 'browserVersionUnsupported'} onContinueAnyway={props.onTestsSuccessful} />
+ </>
+ );
+}
+```
+
+We can then add the `EnvironmentChecksComponent` to the `App.tsx`. The App then moves the user to the _Device Checks_ stage once the tests are successful, using the `onTestsSuccessful` callback:
+
+Now we import the new component into our app in `App.tsx`
+
+```ts
+import { EnvironmentChecksComponent } from './EnvironmentChecksComponent';
+```
+
+Then let's update the `App` component in `App.tsx`:
+
+```ts
+const App = (): JSX.Element => {
+ const [testState, setTestState] = useState<TestingState>('runningEnvironmentChecks');
+
+ return (
+ <FluentThemeProvider>
+ <CallClientProvider callClient={callClient}>
+ {/* Show a Preparing your session screen while running the call readiness checks */}
+ {testState === 'runningEnvironmentChecks' && (
+ <>
+ <PreparingYourSession />
+ <EnvironmentChecksComponent
+ onTestsSuccessful={() => setTestState('finished')}
+ />
+ </>
+ )}
+
+ {/* After the device setup is complete, take the user to the call. For this sample we show a test complete page. */}
+ {testState === 'finished' && <TestComplete />}
+ </CallClientProvider>
+ </FluentThemeProvider>
+ );
+}
+```
+
+You can now run the app. Try running on an [unsupported browser](../../concepts/voice-video-calling/calling-sdk-features.md#javascript-calling-sdk-support-by-os-and-browser) and you'll see the unsupported browser prompt:
+
+![Gif showing browser check failing](../media/call-readiness/browser-support-check-failed.gif)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Part 2: Request camera and microphone access](./call-readiness-tutorial-part-2-requesting-device-access.md)
communication-services Call Readiness Tutorial Part 2 Requesting Device Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/call-readiness/call-readiness-tutorial-part-2-requesting-device-access.md
+
+ Title: Request camera and microphone access using Azure Communication Services UI Library
+
+description: Learn how to use Azure Communication Services with the UI Library to create an experience that gets users ready to join a call - Part 2.
+ Last updated : 11/17/2022
+# Request camera and microphone access using Azure Communication Services UI Library
++
+This tutorial is a continuation of a three part series of Call Readiness tutorials and follows on from the previous: [Ensure user is on a supported browser](./call-readiness-tutorial-part-1-browser-support.md).
+
+## Download code
+
+Access the full code for this tutorial on [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/ui-library-call-readiness).
+
+## Requesting access to the camera and microphone
+
+For calling applications, it's often vital that a user has granted permission to use the microphone and camera.
+In this section, we create a series of components that encourages the user to grant access to the camera and microphone.
+We display prompts to the user to guide them through granting access.
+We inform the user with a prompt if access isn't granted.
+
+### Creating prompts for camera and microphone access
+
+We first create a series of device permissions prompts to get users into a state where they've accepted the microphone and camera permissions. These prompts use the `CameraAndMicrophoneSitePermissions` component
+from the UI Library. Like the Unsupported Browser prompt, we host these prompts inside a FluentUI `Modal`.
+
+`src/DevicePermissionPrompts.tsx`
+
+```ts
+import { CameraAndMicrophoneSitePermissions } from '@azure/communication-react';
+import { Modal } from '@fluentui/react';
+
+/** Modal dialog that prompts the user to accept the Browser's device permission request. */
+export const AcceptDevicePermissionRequestPrompt = (props: { isOpen: boolean }): JSX.Element => (
+ <PermissionsModal isOpen={props.isOpen} kind="request" />
+);
+
+/** Modal dialog that informs the user we are checking for device access. */
+export const CheckingDeviceAccessPrompt = (props: { isOpen: boolean }): JSX.Element => (
+ <PermissionsModal isOpen={props.isOpen} kind="check" />
+)
+
+/** Modal dialog that informs the user they denied permission to the camera or microphone with corrective steps. */
+export const PermissionsDeniedPrompt = (props: { isOpen: boolean }): JSX.Element => (
+ <PermissionsModal isOpen={props.isOpen} kind="denied" />
+);
+
+/** Base component utilized by the above prompts for better code separation. */
+const PermissionsModal = (props: { isOpen: boolean, kind: "denied" | "request" | "check" }): JSX.Element => (
+ <Modal isOpen={props.isOpen}>
+ <CameraAndMicrophoneSitePermissions
+ appName={'this site'}
+ kind={props.kind}
+ onTroubleshootingClick={() => alert('This callback should be used to take the user to further troubleshooting')}
+ />
+ </Modal>
+);
+```
+
+### Checking for camera and microphone access
+
+Here we add two new utility functions to check for and request camera and microphone access. Create a file called `devicePermissionUtils.ts` with two functions, `checkDevicePermissionsState` and `requestCameraAndMicrophonePermissions`.
+`checkDevicePermissionsState` uses the [Permissions API](https://developer.mozilla.org/docs/Web/API/Permissions_API). However, querying for camera and microphone permissions isn't supported on Firefox, so we ensure this method returns `unknown` in that case. Later, we handle the `unknown` case when prompting the user for permissions.
+
+`src/DevicePermissionUtils.ts`
+
+```ts
+import { DeviceAccess } from "@azure/communication-calling";
+import { StatefulCallClient } from "@azure/communication-react";
+
+/**
+ * Check if the user needs to be prompted for camera and microphone permissions.
+ *
+ * @remarks
+ * The Permissions API we are using is not supported in Firefox, Android WebView or Safari < 16.
+ * In those cases this returns 'unknown'.
+ */
+export const checkDevicePermissionsState = async (): Promise<{camera: PermissionState, microphone: PermissionState} | 'unknown'> => {
+ try {
+ const [micPermissions, cameraPermissions] = await Promise.all([
+ navigator.permissions.query({ name: "microphone" as PermissionName }),
+ navigator.permissions.query({ name: "camera" as PermissionName })
+ ]);
+    console.info('PermissionAPI results', [micPermissions, cameraPermissions]); // view console logs in the browser to see what Permissions API info is returned
+ return { camera: cameraPermissions.state, microphone: micPermissions.state };
+ } catch (e) {
+ console.warn("Permissions API unsupported", e);
+ return 'unknown';
+ }
+}
+
+/** Use the DeviceManager to request for permissions to access the camera and microphone. */
+export const requestCameraAndMicrophonePermissions = async (callClient: StatefulCallClient): Promise<DeviceAccess> => {
+ const response = await (await callClient.getDeviceManager()).askDevicePermission({ audio: true, video: true });
+ console.info('AskDevicePermission response', response); // view console logs in the browser to see what device access info is returned
+ return response
+}
+```
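To make the decision flow concrete, here's a sketch of how the result of `checkDevicePermissionsState` could be mapped to one of the prompts defined above. The mapping itself is an illustration, not part of the sample; the prompt kinds mirror the `check`, `request`, and `denied` modal kinds from `DevicePermissionPrompts.tsx`:

```typescript
// Sketch only: picks which prompt to show for a given permission check result.
// 'unknown' means the Permissions API was unsupported (e.g. Firefox), so we fall
// back to the generic "checking for device access" prompt while the browser prompt runs.
type PermState = 'granted' | 'denied' | 'prompt';
type CheckResult = { camera: PermState; microphone: PermState } | 'unknown';

const promptForResult = (result: CheckResult): 'check' | 'denied' | 'request' | 'none' => {
  if (result === 'unknown') return 'check';
  if (result.camera === 'denied' || result.microphone === 'denied') return 'denied';
  if (result.camera === 'prompt' || result.microphone === 'prompt') return 'request';
  return 'none'; // both granted: continue to device setup
};
```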
+
+### Prompting the user to grant access to the camera and microphone
+
+Now that we have the prompts and the check and request logic, we create a `DeviceAccessChecksComponent` to prompt the user regarding device permissions.
+In this component, we display different prompts to the user based on the device permission state:
+
+- If the device permission state is unknown, we display a prompt to the user informing them we're checking for device permissions.
+- If we're requesting permissions, we display a prompt to the user encouraging them to accept the permissions request.
+- If the permissions are denied, we display a prompt to the user informing them that they've denied permissions, and that they need to grant permissions to continue.
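The branching described in this list can be sketched as a pure function over the Permissions API query result. The names `promptForPermissionState` and `noPromptNeeded` are illustrative and aren't part of the sample:

```typescript
// Permission states as reported by a Permissions API query.
type PermState = 'granted' | 'denied' | 'prompt';

// Result shape of checkDevicePermissionsState from the previous section.
type QueryResult = { camera: PermState; microphone: PermState } | 'unknown';

type DevicePromptState =
  | 'checkingDeviceAccess'      // Permissions API unsupported: show a generic prompt
  | 'promptingForDeviceAccess'  // a device still needs a user decision
  | 'noPromptNeeded';           // both permissions already granted or denied

// Hypothetical helper mapping a query result to the prompt to display.
const promptForPermissionState = (result: QueryResult): DevicePromptState => {
  if (result === 'unknown') {
    return 'checkingDeviceAccess';
  }
  if (result.camera === 'prompt' || result.microphone === 'prompt') {
    return 'promptingForDeviceAccess';
  }
  return 'noPromptNeeded';
};
```

Keeping this decision pure makes it easy to unit test separately from the React component that renders the prompts.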
+
+`src/DeviceAccessChecksComponent.tsx`
+
+```ts
+import { useEffect, useState } from 'react';
+import { CheckingDeviceAccessPrompt, PermissionsDeniedPrompt, AcceptDevicePermissionRequestPrompt } from './DevicePermissionPrompts';
+import { useCallClient } from '@azure/communication-react';
+import { checkDevicePermissionsState, requestCameraAndMicrophonePermissions } from './DevicePermissionUtils';
+
+export type DevicesAccessChecksState = 'runningDeviceAccessChecks' |
+ 'checkingDeviceAccess' |
+ 'promptingForDeviceAccess' |
+ 'deniedDeviceAccess';
+
+/**
+ * This component is a demo of how to use the StatefulCallClient with CallReadiness Components to get a user
+ * ready to join a call.
+ * This component checks the browser support and if camera and microphone permissions have been granted.
+ */
+export const DeviceAccessChecksComponent = (props: {
+ /**
+ * Callback triggered when the tests are complete and successful
+ */
+ onTestsSuccessful: () => void
+}): JSX.Element => {
+ const [currentCheckState, setCurrentCheckState] = useState<DevicesAccessChecksState>('runningDeviceAccessChecks');
+
+
+ // Run call readiness checks when component mounts
+ const callClient = useCallClient();
+ useEffect(() => {
+ const runDeviceAccessChecks = async (): Promise<void> => {
+
+ // First we check if we need to prompt the user for camera and microphone permissions.
+ // The prompt check only works if the browser supports the PermissionAPI for querying camera and microphone.
+ // In the event that is not supported, we show a more generic prompt to the user.
+ const devicePermissionState = await checkDevicePermissionsState();
+ if (devicePermissionState === 'unknown') {
+ // We don't know if we need to request camera and microphone permissions, so we'll show a generic prompt.
+ setCurrentCheckState('checkingDeviceAccess');
+ } else if (devicePermissionState.camera === 'prompt' || devicePermissionState.microphone === 'prompt') {
+ // We know we need to request camera and microphone permissions, so we'll show the prompt.
+ setCurrentCheckState('promptingForDeviceAccess');
+ }
+
+ // Now the user has an appropriate prompt, we can request camera and microphone permissions.
+ const devicePermissionsState = await requestCameraAndMicrophonePermissions(callClient);
+
+ if (!devicePermissionsState.audio || !devicePermissionsState.video) {
+ // If the user denied camera and microphone permissions, we prompt the user to take corrective action.
+ setCurrentCheckState('deniedDeviceAccess');
+ } else {
+ // Test finished successfully, trigger callback to parent component to take user to the next stage of the app.
+ props.onTestsSuccessful();
+ }
+ };
+
+ runDeviceAccessChecks();
+ // eslint-disable-next-line react-hooks/exhaustive-deps
+ }, []);
+
+ return (
+ <>
+ {/* We show this when we are prompting the user to accept device permissions */}
+ <AcceptDevicePermissionRequestPrompt isOpen={currentCheckState === 'promptingForDeviceAccess'} />
+
+ {/* We show this when the PermissionsAPI is not supported and we are checking what permissions the user has granted or denied */}
+ <CheckingDeviceAccessPrompt isOpen={currentCheckState === 'checkingDeviceAccess'} />
+
+ {/* We show this when the user has failed to grant camera and microphone access */}
+ <PermissionsDeniedPrompt isOpen={currentCheckState === 'deniedDeviceAccess'} />
+ </>
+ );
+}
+
+```
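The final grant check in `runDeviceAccessChecks` can likewise be expressed as a pure predicate over the audio/video booleans that `askDevicePermission` returns. `hasFullDeviceAccess` is an illustrative name, not part of the sample:

```typescript
// Shape mirroring the audio/video booleans of the DeviceAccess response.
interface DeviceAccessLike {
  audio: boolean;
  video: boolean;
}

// Hypothetical helper: true only when both microphone and camera access were granted.
const hasFullDeviceAccess = (access: DeviceAccessLike): boolean =>
  access.audio && access.video;
```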
+
+After we have finished creating this component, we add it to the `App.tsx`. First, add the import:
+
+```ts
+import { DeviceAccessChecksComponent } from './DeviceAccessChecksComponent';
+```
+
+Then update the `TestingState` type to the following:
+
+```ts
+type TestingState = 'runningEnvironmentChecks' | 'runningDeviceAccessChecks' | 'finished';
+```
+
+Finally, update the `App` component:
+
+```ts
+/**
+ * Entry point of a React app.
+ *
+ * This shows a PreparingYourSession component while the CallReadinessChecks are running.
+ * Once the CallReadinessChecks are finished, the TestComplete component is shown.
+ */
+const App = (): JSX.Element => {
+ const [testState, setTestState] = useState<TestingState>('runningEnvironmentChecks');
+
+ return (
+ <FluentThemeProvider>
+ <CallClientProvider callClient={callClient}>
+ {/* Show a Preparing your session screen while running the environment checks */}
+ {testState === 'runningEnvironmentChecks' && (
+ <>
+ <PreparingYourSession />
+ <EnvironmentChecksComponent onTestsSuccessful={() => setTestState('runningDeviceAccessChecks')} />
+ </>
+ )}
+
+ {/* Show a Preparing your session screen while running the device access checks */}
+ {testState === 'runningDeviceAccessChecks' && (
+ <>
+ <PreparingYourSession />
+ <DeviceAccessChecksComponent onTestsSuccessful={() => setTestState('finished')} />
+ </>
+ )}
+
+ {/* After the device setup is complete, take the user to the call. For this sample we show a test complete page. */}
+ {testState === 'finished' && <TestComplete />}
+ </CallClientProvider>
+ </FluentThemeProvider>
+ );
+}
+```
+
+The app presents the user with prompts to guide them through device access:
+
+![Gif showing user being prompted for camera and microphone access](../media/call-readiness/prompt-device-permissions.gif)
+
+> [!NOTE]
+> For testing we recommend visiting your app in InPrivate/Incognito mode so that camera and microphone permissions have not been previously granted for `localhost:3000`.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Part 3: Selecting a microphone and camera for the call](./call-readiness-tutorial-part-3-camera-microphone-setup.md)
communication-services Call Readiness Tutorial Part 3 Camera Microphone Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/call-readiness/call-readiness-tutorial-part-3-camera-microphone-setup.md
+
+ Title: Microphone and camera setup before a call using Azure Communication Services UI Library
+
+description: Learn how to use Azure Communication Services with the UI Library to create an experience that gets users ready to join a call - Part 3.
+Last updated : 11/17/2022
+# Microphone and camera setup before a call using Azure Communication Services UI Library
+
+This tutorial is a continuation of a three-part series of Call Readiness tutorials and follows on from the previous two parts:
+
+- [Ensure user is on a supported browser](./call-readiness-tutorial-part-1-browser-support.md).
+- [Request camera and microphone access](./call-readiness-tutorial-part-2-requesting-device-access.md).
+
+## Download code
+
+Access the full code for this tutorial on [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/ui-library-call-readiness).
+
+## Letting the user choose their camera, microphone and speaker
+
+From the previous two parts of the tutorial, the user is on a supported browser, and they have given us permission to access their camera and microphone. We can now make sure the user can choose the correct microphone, camera and speaker they want to use for their call.
+We present the user with a rich interface to choose their camera, microphone and speaker. Our final device setup UI looks like this:
+
+![Image of the device setup page](../media/call-readiness/device-setup-page.png)
+
+### Creating a Configuration Screen
+
+First we create a new file called `DeviceSetup.tsx` and add some setup code, with a callback that returns the user's chosen devices back to the App:
+
+`src/DeviceSetup.tsx`
+
+```ts
+import { PrimaryButton, Stack } from '@fluentui/react';
+
+export const DeviceSetup = (props: {
+ /** Callback to let the parent component know what the chosen user device settings were */
+ onDeviceSetupComplete: (userChosenDeviceState: { cameraOn: boolean; microphoneOn: boolean }) => void
+}): JSX.Element => {
+ return (
+ <Stack tokens={{ childrenGap: '1rem' }} verticalAlign="center" verticalFill>
+ <PrimaryButton text="Continue" onClick={() => props.onDeviceSetupComplete({ cameraOn: false, microphoneOn: false })} />
+ </Stack>
+ );
+}
+```
+
+We can then add this DeviceSetup to our App.
+
+- When the PreCallChecksComponent completes, it forwards the user to the `deviceSetup` state.
+- When the user is in the `deviceSetup` state, we render the `DeviceSetup` component.
+- When the device setup is complete, the user is forwarded to the `finished` state. In a production app, this is typically the point where you would move the user to a call screen.
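This linear flow can be sketched as a small transition function. In the sample the transitions are driven through React state, so this helper is illustrative only:

```typescript
type TestingState = 'runningEnvironmentChecks' | 'runningDeviceAccessChecks' | 'deviceSetup' | 'finished';

// Hypothetical helper: advance the app one stage when the current stage succeeds.
const nextTestingState = (current: TestingState): TestingState => {
  switch (current) {
    case 'runningEnvironmentChecks':
      return 'runningDeviceAccessChecks';
    case 'runningDeviceAccessChecks':
      return 'deviceSetup';
    default:
      return 'finished';
  }
};
```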
+
+First import the DeviceSetup component we created:
+
+`src/App.tsx`
+
+```ts
+import { DeviceSetup } from './DeviceSetup';
+```
+
+Then update the App to have a new testing state `deviceSetup`:
+
+```ts
+type TestingState = 'runningEnvironmentChecks' | 'runningDeviceAccessChecks' | 'deviceSetup' | 'finished';
+```
+
+And finally update our `App` component to transition the App to the device setup once the device access checks complete:
+
+```ts
+/**
+ * Entry point of a React app.
+ *
+ * This shows a PreparingYourSession component while the CallReadinessChecks are running.
+ * Once the CallReadinessChecks are finished, the TestComplete component is shown.
+ */
+const App = (): JSX.Element => {
+ const [testState, setTestState] = useState<TestingState>('runningEnvironmentChecks');
+
+ return (
+ <FluentThemeProvider>
+ <CallClientProvider callClient={callClient}>
+ {/* Show a Preparing your session screen while running the environment checks */}
+ {testState === 'runningEnvironmentChecks' && (
+ <>
+ <PreparingYourSession />
+ <EnvironmentChecksComponent onTestsSuccessful={() => setTestState('runningDeviceAccessChecks')} />
+ </>
+ )}
+
+ {/* Show a Preparing your session screen while running the device access checks */}
+ {testState === 'runningDeviceAccessChecks' && (
+ <>
+ <PreparingYourSession />
+ <DeviceAccessChecksComponent onTestsSuccessful={() => setTestState('deviceSetup')} />
+ </>
+ )}
+
+ {/* After the initial call readiness checks are complete, take the user to a device setup page */}
+ {testState === 'deviceSetup' && (
+ <DeviceSetup
+ onDeviceSetupComplete={(userChosenDeviceState) => {
+ setTestState('finished');
+ }}
+ />
+ )}
+
+ {/* After the device setup is complete, take the user to the call. For this sample we show a test complete page. */}
+ {testState === 'finished' && <TestComplete />}
+ </CallClientProvider>
+ </FluentThemeProvider>
+ );
+}
+```
+
+#### Retrieving and updating microphone, camera and speaker lists from the stateful client
+
+To present a list of selectable cameras, microphones and speakers to the user we can use the stateful call client.
+Here we create a series of React hooks. These React hooks use the call client to query for available devices.
+The hooks ensure our application re-renders anytime the list changes, for example, if a new camera is plugged into the user's machine.
+For these hooks, we create a file called `deviceSetupHooks.ts` and we create three hooks: `useMicrophones`, `useSpeakers` and `useCameras`.
+Each of these hooks uses `useCallClientStateChange` to update their lists anytime the user plugs/unplugs a device:
+
+`src/deviceSetupHooks.ts`
+
+```ts
+import { AudioDeviceInfo, VideoDeviceInfo } from "@azure/communication-calling";
+import { CallClientState, StatefulDeviceManager, useCallClient, VideoStreamRendererViewState } from "@azure/communication-react";
+import { useCallback, useEffect, useRef, useState } from "react";
+
+/** A helper hook to get and update microphone device information */
+export const useMicrophones = (): {
+ microphones: AudioDeviceInfo[],
+ selectedMicrophone: AudioDeviceInfo | undefined,
+ setSelectedMicrophone: (microphone: AudioDeviceInfo) => Promise<void>
+} => {
+ const callClient = useCallClient();
+ useEffect(() => {
+ callClient.getDeviceManager().then(deviceManager => deviceManager.getMicrophones())
+ }, [callClient]);
+
+ const setSelectedMicrophone = async (microphone: AudioDeviceInfo) =>
+ (await callClient.getDeviceManager()).selectMicrophone(microphone);
+
+ const state = useCallClientStateChange();
+ return {
+ microphones: state.deviceManager.microphones,
+ selectedMicrophone: state.deviceManager.selectedMicrophone,
+ setSelectedMicrophone
+ };
+}
+
+/** A helper hook to get and update speaker device information */
+export const useSpeakers = (): {
+ speakers: AudioDeviceInfo[],
+ selectedSpeaker: AudioDeviceInfo | undefined,
+ setSelectedSpeaker: (speaker: AudioDeviceInfo) => Promise<void>
+} => {
+ const callClient = useCallClient();
+ useEffect(() => {
+ callClient.getDeviceManager().then(deviceManager => deviceManager.getSpeakers())
+ }, [callClient]);
+
+ const setSelectedSpeaker = async (speaker: AudioDeviceInfo) =>
+ (await callClient.getDeviceManager()).selectSpeaker(speaker);
+
+ const state = useCallClientStateChange();
+ return {
+ speakers: state.deviceManager.speakers,
+ selectedSpeaker: state.deviceManager.selectedSpeaker,
+ setSelectedSpeaker
+ };
+}
+
+/** A helper hook to get and update camera device information */
+export const useCameras = (): {
+ cameras: VideoDeviceInfo[],
+ selectedCamera: VideoDeviceInfo | undefined,
+ setSelectedCamera: (camera: VideoDeviceInfo) => Promise<void>
+} => {
+ const callClient = useCallClient();
+ useEffect(() => {
+ callClient.getDeviceManager().then(deviceManager => deviceManager.getCameras())
+ }, [callClient]);
+
+ const setSelectedCamera = async (camera: VideoDeviceInfo) =>
+ (await callClient.getDeviceManager() as StatefulDeviceManager).selectCamera(camera);
+
+ const state = useCallClientStateChange();
+ return {
+ cameras: state.deviceManager.cameras,
+ selectedCamera: state.deviceManager.selectedCamera,
+ setSelectedCamera
+ };
+}
+
+/** A helper hook to act when changes to the stateful client occur */
+const useCallClientStateChange = (): CallClientState => {
+ const callClient = useCallClient();
+ const [state, setState] = useState<CallClientState>(callClient.getState());
+ useEffect(() => {
+ const updateState = (newState: CallClientState) => {
+ setState(newState);
+ }
+ callClient.onStateChange(updateState);
+ return () => {
+ callClient.offStateChange(updateState);
+ };
+ }, [callClient]);
+ return state;
+}
+```
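The subscribe/unsubscribe contract that `useCallClientStateChange` relies on can be illustrated with a minimal emitter. This is a sketch of the pattern only, not the library's implementation:

```typescript
type Listener<T> = (state: T) => void;

// Minimal sketch of an onStateChange/offStateChange emitter like the stateful client's.
class MiniStatefulClient<T> {
  private listeners: Listener<T>[] = [];

  constructor(private state: T) {}

  getState(): T {
    return this.state;
  }

  onStateChange(listener: Listener<T>): void {
    this.listeners.push(listener);
  }

  offStateChange(listener: Listener<T>): void {
    this.listeners = this.listeners.filter((l) => l !== listener);
  }

  // Called when device lists change, e.g. a camera is plugged in or unplugged.
  setState(newState: T): void {
    this.state = newState;
    this.listeners.forEach((l) => l(newState));
  }
}
```

Because the hook unsubscribes in its effect cleanup, unmounted components stop receiving updates, which corresponds to the `offStateChange` path above.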
+
+#### Creating dropdowns to choose devices
+
+To allow the user to choose their camera, microphone and speaker, we use the `Dropdown` component from Fluent UI React.
+We create new components that use the hooks we created in `deviceSetupHooks.ts` to populate the dropdowns and update
+the chosen device when the user selects a different device from the dropdown.
+To house these new components, we create a file called `DeviceSelectionComponents.tsx` that exports three new components: `CameraSelectionDropdown`, `MicrophoneSelectionDropdown` and `SpeakerSelectionDropdown`.
+
+`src/DeviceSelectionComponents.tsx`
+
+```ts
+import { Dropdown } from '@fluentui/react';
+import { useCameras, useMicrophones, useSpeakers } from './deviceSetupHooks';
+
+/** Dropdown that allows the user to choose their desired camera */
+export const CameraSelectionDropdown = (): JSX.Element => {
+ const { cameras, selectedCamera, setSelectedCamera } = useCameras();
+ return (
+ <DeviceSelectionDropdown
+ placeholder={cameras.length === 0 ? 'No cameras found' : 'Select a camera'}
+ label={'Camera'}
+ devices={cameras}
+ selectedDevice={selectedCamera}
+ onSelectionChange={(selectedDeviceId) => {
+ const newlySelectedCamera = cameras.find((camera) => camera.id === selectedDeviceId);
+ if (newlySelectedCamera) {
+ setSelectedCamera(newlySelectedCamera);
+ }
+ }}
+ />
+ );
+};
+
+/** Dropdown that allows the user to choose their desired microphone */
+export const MicrophoneSelectionDropdown = (): JSX.Element => {
+ const { microphones, selectedMicrophone, setSelectedMicrophone } = useMicrophones();
+ return (
+ <DeviceSelectionDropdown
+ placeholder={microphones.length === 0 ? 'No microphones found' : 'Select a microphone'}
+ label={'Microphone'}
+ devices={microphones}
+ selectedDevice={selectedMicrophone}
+ onSelectionChange={(selectedDeviceId) => {
+ const newlySelectedMicrophone = microphones.find((microphone) => microphone.id === selectedDeviceId);
+ if (newlySelectedMicrophone) {
+ setSelectedMicrophone(newlySelectedMicrophone);
+ }
+ }}
+ />
+ );
+};
+
+/** Dropdown that allows the user to choose their desired speaker */
+export const SpeakerSelectionDropdown = (): JSX.Element => {
+ const { speakers, selectedSpeaker, setSelectedSpeaker } = useSpeakers();
+ return (
+ <DeviceSelectionDropdown
+ placeholder={speakers.length === 0 ? 'No speakers found' : 'Select a speaker'}
+ label={'Speaker'}
+ devices={speakers}
+ selectedDevice={selectedSpeaker}
+ onSelectionChange={(selectedDeviceId) => {
+ const newlySelectedSpeaker = speakers.find((speaker) => speaker.id === selectedDeviceId);
+ if (newlySelectedSpeaker) {
+ setSelectedSpeaker(newlySelectedSpeaker);
+ }
+ }}
+ />
+ );
+};
+
+const DeviceSelectionDropdown = (props: {
+ placeholder: string,
+ label: string,
+ devices: { id: string, name: string }[],
+ selectedDevice: { id: string, name: string } | undefined,
+ onSelectionChange: (deviceId: string | undefined) => void
+}): JSX.Element => {
+ return (
+ <Dropdown
+ placeholder={props.placeholder}
+ label={props.label}
+ options={props.devices.map((device) => ({ key: device.id, text: device.name }))}
+ selectedKey={props.selectedDevice?.id}
+ onChange={(_, option) => props.onSelectionChange?.(option?.key as string | undefined)}
+ />
+ );
+};
+```
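The two pieces of mapping logic shared by these dropdowns, device info to dropdown options and a selected key back to a device, can be factored as pure helpers. The helper names here are illustrative and aren't part of the sample:

```typescript
interface DeviceLike {
  id: string;
  name: string;
}

// Map device info objects to the option shape Fluent UI's Dropdown expects.
const toDropdownOptions = (devices: DeviceLike[]): { key: string; text: string }[] =>
  devices.map((device) => ({ key: device.id, text: device.name }));

// Resolve a dropdown selection key back to the matching device, if any.
const findDeviceById = <T extends DeviceLike>(devices: T[], id: string | undefined): T | undefined =>
  devices.find((device) => device.id === id);
```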
+
+##### Add dropdowns to the Device Setup
+
+The camera, microphone and speaker dropdowns can then be added to the Device Setup component.
+
+First, import the new Dropdowns:
+
+`src/DeviceSetup.tsx`
+
+```ts
+import { CameraSelectionDropdown, MicrophoneSelectionDropdown, SpeakerSelectionDropdown } from './DeviceSelectionComponents';
+```
+
+Then create a component called `DeviceSetup` that houses these dropdowns. This component holds the local video preview we create later.
+
+```ts
+export const DeviceSetup = (props: {
+ /** Callback to let the parent component know what the chosen user device settings were */
+ onDeviceSetupComplete: (userChosenDeviceState: { cameraOn: boolean; microphoneOn: boolean }) => void
+}): JSX.Element => {
+ return (
+ <Stack verticalFill verticalAlign="center" horizontalAlign="center" tokens={{ childrenGap: '1rem' }}>
+ <Stack horizontal tokens={{ childrenGap: '2rem' }}>
+ <Stack tokens={{ childrenGap: '1rem' }} verticalAlign="center" verticalFill>
+ <CameraSelectionDropdown />
+ <MicrophoneSelectionDropdown />
+ <SpeakerSelectionDropdown />
+ <Stack.Item styles={{ root: { paddingTop: '0.5rem' }}}>
+ <PrimaryButton text="Continue" onClick={() => props.onDeviceSetupComplete({ cameraOn: false, microphoneOn: false })} />
+ </Stack.Item>
+ </Stack>
+ </Stack>
+ </Stack>
+ );
+};
+```
+
+#### Creating a local video preview
+
+Alongside the dropdowns, we create a local video preview that lets the user see what their camera is capturing. It contains a small call controls bar with camera and microphone buttons to toggle the camera on/off and mute/unmute the microphone.
+
+First we add a new hook to our `deviceSetupHooks.ts` called `useLocalPreview`. This hook provides our React component with a local preview to render, and functions to start and stop the local preview:
+
+`src/deviceSetupHooks.ts`
+
+```ts
+/** A helper hook providing functionality to create a local video preview */
+export const useLocalPreview = (): {
+ localPreview: VideoStreamRendererViewState | undefined,
+ startLocalPreview: () => Promise<void>,
+ stopLocalPreview: () => void
+} => {
+ const callClient = useCallClient();
+ const state = useCallClientStateChange();
+ const localPreview = state.deviceManager.unparentedViews[0];
+
+ const startLocalPreview = useCallback(async () => {
+ const selectedCamera = state.deviceManager.selectedCamera;
+ if (!selectedCamera) {
+ console.warn('no camera selected to start preview with');
+ return;
+ }
+ callClient.createView(
+ undefined,
+ undefined,
+ {
+ source: selectedCamera,
+ mediaStreamType: 'Video'
+ },
+ {
+ scalingMode: 'Crop'
+ }
+ );
+ }, [callClient, state.deviceManager.selectedCamera]);
+
+ const stopLocalPreview = useCallback(() => {
+ if (!localPreview) {
+ console.warn('no local preview to dispose');
+ return;
+ }
+ callClient.disposeView(undefined, undefined, localPreview)
+ }, [callClient, localPreview]);
+
+ const selectedCameraRef = useRef(state.deviceManager.selectedCamera);
+ useEffect(() => {
+ if (selectedCameraRef.current !== state.deviceManager.selectedCamera) {
+ stopLocalPreview();
+ startLocalPreview();
+ selectedCameraRef.current = state.deviceManager.selectedCamera;
+ }
+ }, [startLocalPreview, state.deviceManager.selectedCamera, stopLocalPreview]);
+
+ return {
+ localPreview: localPreview?.view,
+ startLocalPreview,
+ stopLocalPreview
+ }
+}
+```
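The ref comparison at the end of the hook restarts the preview only when the selected camera actually changed. Stripped of React, the decision looks like this (an illustrative sketch with start/stop injected as callbacks; not part of the sample):

```typescript
// Hypothetical helper mirroring the effect: restart only on a real camera change,
// and return the id that should become the new ref value.
const restartPreviewIfCameraChanged = (
  previousCameraId: string | undefined,
  currentCameraId: string | undefined,
  actions: { stop: () => void; start: () => void }
): string | undefined => {
  if (previousCameraId !== currentCameraId) {
    actions.stop();
    actions.start();
  }
  return currentCameraId;
};
```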
+
+Then, in a new file called `LocalPreview.tsx`, we create a `LocalPreview` component that uses that hook to display the local video preview to the user:
+
+`src/LocalPreview.tsx`
+
+```ts
+import { StreamMedia, VideoTile, ControlBar, CameraButton, MicrophoneButton, useTheme } from '@azure/communication-react';
+import { Stack, mergeStyles, Text, ITheme } from '@fluentui/react';
+import { VideoOff20Filled } from '@fluentui/react-icons';
+import { useEffect } from 'react';
+import { useCameras, useLocalPreview } from './deviceSetupHooks';
+
+/** LocalPreview component has a camera and microphone toggle buttons, along with a video preview of the local camera. */
+export const LocalPreview = (props: {
+ cameraOn: boolean,
+ microphoneOn: boolean,
+ cameraToggled: (isCameraOn: boolean) => void,
+ microphoneToggled: (isMicrophoneOn: boolean) => void
+}): JSX.Element => {
+ const { cameraOn, microphoneOn, cameraToggled, microphoneToggled } = props;
+ const { localPreview, startLocalPreview, stopLocalPreview } = useLocalPreview();
+ const canTurnCameraOn = useCameras().cameras.length > 0;
+
+ // Start and stop the local video preview based on if the user has turned the camera on or off and if the camera is available.
+ useEffect(() => {
+ if (!localPreview && cameraOn && canTurnCameraOn) {
+ startLocalPreview();
+ } else if (!cameraOn) {
+ stopLocalPreview();
+ }
+ }, [canTurnCameraOn, cameraOn, localPreview, startLocalPreview, stopLocalPreview]);
+
+ const theme = useTheme();
+ const shouldShowLocalVideo = canTurnCameraOn && cameraOn && localPreview;
+ return (
+ <Stack verticalFill verticalAlign="center">
+ <Stack className={localPreviewContainerMergedStyles(theme)}>
+ <VideoTile
+ renderElement={shouldShowLocalVideo ? <StreamMedia videoStreamElement={localPreview.target} /> : undefined}
+ onRenderPlaceholder={() => <CameraOffPlaceholder />}
+ >
+ <ControlBar layout="floatingBottom">
+ <CameraButton
+ checked={cameraOn}
+ onClick={() => {
+ cameraToggled(!cameraOn)
+ }}
+ />
+ <MicrophoneButton
+ checked={microphoneOn}
+ onClick={() => {
+ microphoneToggled(!microphoneOn)
+ }}
+ />
+ </ControlBar>
+ </VideoTile>
+ </Stack>
+ </Stack>
+ );
+};
+
+/** Placeholder shown in the local preview window when the camera is off */
+const CameraOffPlaceholder = (): JSX.Element => {
+ const theme = useTheme();
+ return (
+ <Stack style={{ width: '100%', height: '100%' }} verticalAlign="center">
+ <Stack.Item align="center">
+ <VideoOff20Filled primaryFill="currentColor" />
+ </Stack.Item>
+ <Stack.Item align="center">
+ <Text variant='small' styles={{ root: { color: theme.palette.neutralTertiary }}}>Your camera is turned off</Text>
+ </Stack.Item>
+ </Stack>
+ );
+};
+
+/** Default styles for the local preview container */
+const localPreviewContainerMergedStyles = (theme: ITheme): string =>
+ mergeStyles({
+ minWidth: '25rem',
+ maxHeight: '18.75rem',
+ minHeight: '16.875rem',
+ margin: '0 auto',
+ background: theme.palette.neutralLighter,
+ color: theme.palette.neutralTertiary
+ });
+```
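The effect inside `LocalPreview` reduces to a three-way decision. As a pure sketch (the names `PreviewAction` and `previewAction` are illustrative, not part of the sample):

```typescript
type PreviewAction = 'start' | 'stop' | 'none';

// Hypothetical helper mirroring the branching of the effect in LocalPreview.
const previewAction = (opts: {
  hasPreview: boolean;
  cameraOn: boolean;
  canTurnCameraOn: boolean;
}): PreviewAction => {
  if (!opts.hasPreview && opts.cameraOn && opts.canTurnCameraOn) {
    return 'start'; // camera requested, available, and no preview running yet
  }
  if (!opts.cameraOn) {
    return 'stop'; // camera turned off: dispose any running preview
  }
  return 'none';
};
```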
+
+##### Add the local preview to the device setup
+
+The local preview component can then be added to the Device Setup:
+
+`src/DeviceSetup.tsx`
+
+```ts
+import { LocalPreview } from './LocalPreview';
+import { useState } from 'react';
+```
+
+```ts
+export const DeviceSetup = (props: {
+ /** Callback to let the parent component know what the chosen user device settings were */
+ onDeviceSetupComplete: (userChosenDeviceState: { cameraOn: boolean; microphoneOn: boolean }) => void
+}): JSX.Element => {
+ const [microphoneOn, setMicrophoneOn] = useState(false);
+ const [cameraOn, setCameraOn] = useState(false);
+
+ return (
+ <Stack verticalFill verticalAlign="center" horizontalAlign="center" tokens={{ childrenGap: '1rem' }}>
+ <Stack horizontal tokens={{ childrenGap: '2rem' }}>
+ <Stack.Item>
+ <LocalPreview
+ cameraOn={cameraOn}
+ microphoneOn={microphoneOn}
+ cameraToggled={setCameraOn}
+ microphoneToggled={setMicrophoneOn}
+ />
+ </Stack.Item>
+ <Stack tokens={{ childrenGap: '1rem' }} verticalAlign="center" verticalFill>
+ <CameraSelectionDropdown />
+ <MicrophoneSelectionDropdown />
+ <SpeakerSelectionDropdown />
+ <Stack.Item styles={{ root: { paddingTop: '0.5rem' }}}>
+ <PrimaryButton text="Continue" onClick={() => props.onDeviceSetupComplete({ cameraOn, microphoneOn })} />
+ </Stack.Item>
+ </Stack>
+ </Stack>
+ </Stack>
+ );
+};
+```
+
+### Running the experience
+
+Now that you've created the device configuration screen, you can run the app and see the experience:
+
+![Gif showing the end to end experience of the call readiness checks and device setup](../media/call-readiness/call-readiness-user-flow.gif)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Check the rest of the UI Library](https://azure.github.io/communication-ui-library/)
cosmos-db How To Move Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-move-regions.md
The following steps demonstrate how to migrate an Azure Cosmos DB account for th
To create a new database and container, see [Create an Azure Cosmos DB container](how-to-create-container.md).
-1. Migrate data by using the Azure Cosmos DB Live Data Migrator tool.
+1. Migrate data by using the Azure Cosmos DB Spark Connector live migration sample.
- To migrate data with near zero downtime, see [Azure Cosmos DB Live Data Migrator tool](https://github.com/Azure-Samples/azure-cosmosdb-live-data-migrator).
+ To migrate data with near zero downtime, see [Live Migrate Azure Cosmos DB SQL API Containers data with Spark Connector](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/DatabricksLiveContainerMigration).
1. Update the application connection string.
- With the Live Data Migrator tool still running, update the connection information in the new deployment of your application. You can retrieve the endpoints and keys for your application from the Azure portal.
+ With the Live Data Migration sample still running, update the connection information in the new deployment of your application. You can retrieve the endpoints and keys for your application from the Azure portal.
:::image type="content" source="./media/secure-access-to-data/nosql-database-security-master-key-portal.png" alt-text="Access control in the Azure portal, demonstrating NoSQL database security.":::
The following steps demonstrate how to migrate an Azure Cosmos DB account for th
1. Delete any resources that you no longer need.
- With requests now fully redirected to the new instance, you can delete the old Azure Cosmos DB account and the Live Data Migrator tool.
+ With requests now fully redirected to the new instance, you can delete the old Azure Cosmos DB account and stop the Live Data Migrator sample.
## Next steps
cosmos-db Tutorial Mongotools Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-mongotools-cosmos-db.md
After you migrate the data stored in MongoDB database to Azure Cosmos DB's API
## Next steps
-* Review migration guidance for additional scenarios in the Microsoft [Database Migration Guide](https://datamigration.microsoft.com/).
+* Review migration guidance for additional scenarios in the Microsoft [Database Migration Guide](/data-migration/).
cost-management-billing Understand Cost Mgt Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/understand-cost-mgt-data.md
Cost Management receives tags as part of each usage record submitted by the indi
- Resource tags are only included in usage data while the tag is applied – tags aren't applied to historical data.
- Resource tags are only available in Cost Management after the data is refreshed.
- Resource tags are only available in Cost Management when the resource is active/running and producing usage records. For example, when a VM is deallocated.
-- Managing tags requires contributor access to each resource.
+- Managing tags requires contributor access to each resource or the [tag contributor](../../role-based-access-control/built-in-roles.md#tag-contributor) RBAC role.
- Managing tag policies requires either owner or policy contributor access to a management group, subscription, or resource group. If you don't see a specific tag in Cost Management, consider the following questions:
cost-management-billing Pay Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/pay-bill.md
tags: billing, past due, pay now, bill, invoice, pay
Previously updated : 02/10/2023
Last updated : 02/24/2023

# Pay your Microsoft Customer Agreement Azure or Microsoft Online Subscription Program Azure bill
-This article applies to customers with a Microsoft Customer Agreement (MCA) and to customers who signed up for Azure through the Azure website, Azure.com (for a Microsoft Online Services Program account also called pay-as-you-go account).
+This article applies to customers with a Microsoft Customer Agreement (MCA) and to customers who signed up for Azure through the Azure website, Azure.com (for a Microsoft Online Services Program (MOSP) account also called pay-as-you-go account).
[Check your access to a Microsoft Customer Agreement](#check-access-to-a-microsoft-customer-agreement).
If you have Azure credits, they automatically apply to your invoice each billing
Here's a table summarizing payment methods for different agreement types
-|Agreement type| Credit card | Wire transfer¹ | Check² |
-| | - | | |
-| Microsoft Customer Agreement<br>purchased through a Microsoft representative | ✔ (with a $50,000.00 USD limit) | ✔ | ✔ |
-| Enterprise Agreement | ✘ | ✔ | ✔ |
-| Azure.com | ✔ | ✔ if approved to pay by invoice | ✘ |
+|Agreement type| Credit card | Wire transfer¹ |
+| | - | |
+| Microsoft Customer Agreement<br>purchased through a Microsoft representative | ✔ (with a $50,000.00 USD limit) | ✔ |
+| Enterprise Agreement | ✘ | ✔ |
+| MOSP | ✔ | ✔ if approved to pay by invoice |
¹ If supported by your bank, an ACH credit transaction can be made automatically.
-² As noted previously, on April 1, 2023, Microsoft will stop accepting checks as a payment method for subscriptions that are paid by invoice.
- ## Reserve Bank of India **The Reserve Bank of India has issued new directives.**
dms How To Migrate Ssis Packages Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/how-to-migrate-ssis-packages-managed-instance.md
After an instance of the service is created, locate it within the Azure portal,
## Next steps
-* Review the migration guidance in the Microsoft [Database Migration Guide](https://datamigration.microsoft.com/).
+* Review the migration guidance in the Microsoft [Database Migration Guide](/data-migration/).
dms How To Migrate Ssis Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/how-to-migrate-ssis-packages.md
If the deployment of your project succeeds without failure, you can select any p
## Next steps
-* Review the migration guidance in the Microsoft [Database Migration Guide](https://datamigration.microsoft.com/).
+* Review the migration guidance in the Microsoft [Database Migration Guide](/data-migration/).
dms How To Monitor Migration Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/how-to-monitor-migration-activity.md
The following table describes the fields shown in table level migration progress
> CDC values of Insert, Update and Delete and Total Applied may decrease when database is cutover or migration is restarted. ## Next steps-- Review the migration guidance in the Microsoft [Database Migration Guide](https://datamigration.microsoft.com/).
+- Review the migration guidance in the Microsoft [Database Migration Guide](/data-migration/).
dms Howto Sql Server To Azure Sql Managed Instance Powershell Offline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-managed-instance-powershell-offline.md
Remove-AzDms -ResourceGroupName myResourceGroup -ServiceName MyDMS
Find out more about Azure Database Migration Service in the article [What is the Azure Database Migration Service?](./dms-overview.md).
-For information about additional migrating scenarios (source/target pairs), see the Microsoft [Database Migration Guide](https://datamigration.microsoft.com/).
+For information about additional migrating scenarios (source/target pairs), see the Microsoft [Database Migration Guide](/data-migration/).
dms Howto Sql Server To Azure Sql Managed Instance Powershell Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-managed-instance-powershell-online.md
Remove-AzDms -ResourceGroupName myResourceGroup -ServiceName MyDMS
## Additional resources
-For information about additional migrating scenarios (source/target pairs), see the Microsoft [Database Migration Guide](https://datamigration.microsoft.com/).
+For information about additional migrating scenarios (source/target pairs), see the Microsoft [Database Migration Guide](/data-migration/).
## Next steps
dms Howto Sql Server To Azure Sql Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-powershell.md
Remove-AzDms -ResourceGroupName myResourceGroup -ServiceName MyDMS
## Next step
-* Review the migration guidance in the Microsoft [Database Migration Guide](https://datamigration.microsoft.com/).
+* Review the migration guidance in the Microsoft [Database Migration Guide](/data-migration/).
dms Resource Custom Roles Sql Database Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-database-ads.md
To assign a role to a user or an app ID:
## Next steps -- Review the [migration guidance for your scenario](https://datamigration.microsoft.com/).
+- Review the [migration guidance for your scenario](/data-migration/).
dms Resource Custom Roles Sql Db Managed Instance Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-db-managed-instance-ads.md
To assign a role to users/APP ID, open the Azure portal, perform the following s
## Next steps
-* Review the migration guidance for your scenario in the Microsoft [Database Migration Guide](https://datamigration.microsoft.com/).
+* Review the migration guidance for your scenario in the Microsoft [Database Migration Guide](/data-migration/).
dms Resource Custom Roles Sql Db Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-db-managed-instance.md
To assign a role to users/APP ID, open the Azure portal, perform the following s
## Next steps
-* Review the migration guidance for your scenario in the Microsoft [Database Migration Guide](https://datamigration.microsoft.com/).
+* Review the migration guidance for your scenario in the Microsoft [Database Migration Guide](/data-migration/).
dms Resource Custom Roles Sql Db Virtual Machine Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-db-virtual-machine-ads.md
To assign a role to users/APP ID, open the Azure portal, perform the following s
## Next steps
-* Review the migration guidance for your scenario in the Microsoft [Database Migration Guide](https://datamigration.microsoft.com/).
+* Review the migration guidance for your scenario in the Microsoft [Database Migration Guide](/data-migration/).
dms Tutorial Mongodb Cosmos Db Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mongodb-cosmos-db-online.md
After you migrate the data stored in MongoDB database to Azure Cosmos DB for Mon
* [Azure Cosmos DB service information](https://azure.microsoft.com/services/cosmos-db/) * Trying to do capacity planning for a migration to Azure Cosmos DB?
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../cosmos-db/convert-vcore-to-request-unit.md)
+ * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../cosmos-db/convert-vcore-to-request-unit.md)
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](../cosmos-db/mongodb/estimate-ru-capacity-planner.md) ## Next steps
-* Review migration guidance for additional scenarios in the Microsoft [Database Migration Guide](https://datamigration.microsoft.com/).
+* Review migration guidance for additional scenarios in the Microsoft [Database Migration Guide](/data-migration/).
dms Tutorial Mongodb Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mongodb-cosmos-db.md
After you migrate the data stored in MongoDB database to the Azure Cosmos DB for
## Additional resources * Trying to do capacity planning for a migration to Azure Cosmos DB?
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../cosmos-db/convert-vcore-to-request-unit.md)
+ * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../cosmos-db/convert-vcore-to-request-unit.md)
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](../cosmos-db/mongodb/estimate-ru-capacity-planner.md) ## Next steps
-Review migration guidance for additional scenarios in the [Azure Database Migration Guide](https://datamigration.microsoft.com/).
+Review migration guidance for additional scenarios in the [Azure Database Migration Guide](/data-migration/).
event-grid Subscribe To Graph Api Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-graph-api-events.md
Last updated 09/01/2022
This article describes steps to subscribe to events published by Microsoft Graph API. The following table lists the resources for which events are available through Graph API. For every resource, events for create, update and delete state changes are supported. > [!IMPORTANT]
-> Microsoft Graph API's ability to send events to Azure Event Grid is currently in **private preview**.
+> Microsoft Graph API's ability to send events to Azure Event Grid is currently in **private preview**. If you have questions or need support, email [ask-graph-and-grid@microsoft.com](mailto:ask-graph-and-grid@microsoft.com?subject=Support%20Request).
|Microsoft event source |Resource(s) | Available event types | |: | : | :-|
Besides the ability to subscribe to Microsoft Graph API events via Event Grid, y
## Enable Graph API events to flow to your partner topic
-> [!IMPORTANT]
-> In the following steps, you will follow instructions from [Node.js](https://github.com/microsoftgraph/nodejs-webhooks-sample), [Java](https://github.com/microsoftgraph/java-spring-webhooks-sample), and [.NET Core](https://github.com/microsoftgraph/aspnetcore-webhooks-sample) Webhook samples to enable flow of events from Microsoft Graph API. At some point in the sample, you will have an application registered with Azure AD. Email your application ID to <a href="mailto:ask-graph-and-grid@microsoft.com?subject=Please allow my application ID">mailto:ask-graph-and-grid@service.microsoft.com?subject=Please allow my Azure AD application with ID to send events through Graph API</a> so that the Microsoft Graph API team can add your application ID to the allow list to use this new capability.
- You request Microsoft Graph API to send events by creating a Graph API subscription. When you create a Graph API subscription, the http request should look like the following sample: ```json POST to https://graph.microsoft.com/beta/subscriptions
+x-ms-enable-features: EventGrid
+ Body: { "changeType": "Updated,Deleted,Created",
Body:
} ```
-Here are some of the key payload properties:
+Here are some of the key headers and payload properties:
+- `x-ms-enable-features`: Header used to indicate your desire to participate in the private preview capability to send events to Azure Event Grid. Its value must be "EventGrid". This header must be included with the request when creating a Microsoft Graph API subscription.
- `changeType`: the kind of resource changes for which you want to receive events. Valid values: `Updated`, `Deleted`, and `Created`. You can specify one or more of these values separated by commas.
- `notificationUrl`: a URI that conforms to the following pattern: `EventGrid:?azuresubscriptionid=<your-azure-subscription-id>&resourcegroup=<your-resource-group-name>&partnertopic=<the-name-for-your-partner-topic>&location=<the-Azure-region-where-you-want-the-topic-created>`.
- `resource`: the resource for which you need events announcing state changes.
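The `notificationUrl` pattern described above is a plain string with four query parameters. A minimal sketch of assembling it follows; the subscription ID, resource group, topic name, and region are hypothetical placeholder values, not real resources:

```python
# Sketch only: builds the Event Grid notificationUrl pattern shown above.
# All parameter values used below are hypothetical placeholders.
def build_notification_url(subscription_id: str, resource_group: str,
                           partner_topic: str, location: str) -> str:
    return (
        "EventGrid:?"
        f"azuresubscriptionid={subscription_id}"
        f"&resourcegroup={resource_group}"
        f"&partnertopic={partner_topic}"
        f"&location={location}"
    )

url = build_notification_url("00000000-0000-0000-0000-000000000000",
                             "my-resource-group", "my-partner-topic", "eastus")
```

The resulting string would be supplied as the `notificationUrl` property in the subscription request body.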
healthcare-apis Understand Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/understand-service.md
Previously updated : 02/22/2023 Last updated : 02/23/2023
This article provides an overview of the device message data processing stages w
The MedTech service device message data processing follows these steps and in this order:
-> [!div class="checklist"]
-> - Ingest
-> - Normalize - Device mappings applied.
-> - Group - (Optional)
-> - Transform - FHIR destination mappings applied.
-> - Persist
+* Ingest
+* Normalize - Device mappings applied.
+* Group - (Optional)
+* Transform - FHIR destination mappings applied.
+* Persist
:::image type="content" source="media/understand-service/understand-device-message-flow.png" alt-text="Screenshot of a device message as it's processed by the MedTech service." lightbox="media/understand-service/understand-device-message-flow.png":::
The normalization process not only simplifies data processing at later stages, b
## Group - (Optional)

Group is the next *optional* stage where the normalized messages available from the MedTech service normalization stage are grouped using three different parameters:
-> [!div class="checklist"]
-> - Device identity
-> - Measurement type
-> - Time period
+* Device identity
+* Measurement type
+* Time period
-`Device identity` and `measurement type` grouping is optional and enabled by the use of the [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData) measurement type. The SampledData measurement type provides a concise way to represent a time-based series of measurements from a device message into FHIR Observation resources. When you use the SampledData measurement type, measurements can be grouped into a single FHIR Observation resource that represents a 1-hour period or a 24-hour period.
+Device identity and measurement type grouping are optional and enabled by the use of the [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData) measurement type. The SampledData measurement type provides a concise way to represent a time-based series of measurements from a device message into FHIR Observation resources. When you use the SampledData measurement type, measurements can be grouped into a single FHIR Observation resource that represents a 1-hour period or a 24-hour period.
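The grouping described above can be illustrated with a small sketch. The measurement records below are hypothetical and not the service's actual data model; the sketch only shows bucketing normalized measurements by device identity, measurement type, and a 1-hour period:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical sketch of the grouping stage: normalized measurements are
# bucketed by (device identity, measurement type, 1-hour period), so each
# bucket could become a single FHIR Observation using SampledData.
measurements = [
    {"device": "dev-1", "type": "heartrate", "time": datetime(2023, 2, 23, 10, 5), "value": 62},
    {"device": "dev-1", "type": "heartrate", "time": datetime(2023, 2, 23, 10, 45), "value": 70},
    {"device": "dev-1", "type": "heartrate", "time": datetime(2023, 2, 23, 11, 10), "value": 66},
]

groups = defaultdict(list)
for m in measurements:
    # Truncate the timestamp to the top of the hour to form a 1-hour bucket.
    period = m["time"].replace(minute=0, second=0, microsecond=0)
    groups[(m["device"], m["type"], period)].append(m["value"])
```

With this input, the 10:00-11:00 bucket holds two values and the 11:00-12:00 bucket holds one, so three messages collapse into two groups.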
## Transform

Transform is the next stage where normalized messages are processed using user-selected/user-created conforming and valid [FHIR destination mappings](how-to-configure-fhir-mappings.md). Normalized messages get transformed into FHIR Observation resources if a matching FHIR destination mapping has been authored.
At this point, the [Device](https://www.hl7.org/fhir/device.html) resource, alon
> [!NOTE]
> All identity lookups are cached once resolved to decrease load on the FHIR service. If you plan on reusing devices with multiple patients, it's advised that you create a virtual device resource that is specific to the patient and send the virtual device identifier in the device message payload. The virtual device can be linked to the actual device resource as a parent.
-If no Device resource for a given device identifier exists in the FHIR service, the outcome depends upon the value of [Resolution Type](deploy-new-config.md#configure-the-destination-tab) set at the time of the MedTech service deployment. When set to `Lookup`, the specific message is ignored, and the pipeline continues to process other incoming device messages. If set to `Create`, the MedTech service creates minimal Device and Patient resources in the FHIR service.
+If no Device resource for a given device identifier exists in the FHIR service, the outcome depends upon the value of [**Resolution type**](deploy-new-config.md#configure-the-destination-tab) set at the time of the MedTech service deployment. When set to **Lookup**, the specific message is ignored, and the pipeline continues to process other incoming device messages. If set to **Create**, the MedTech service creates minimal Device and Patient resources in the FHIR service.
> [!NOTE]
-> The `Resolution Type` can also be adjusted post deployment of the MedTech service if a different `Resolution Type` is later required.
+> The **Resolution type** can also be adjusted post deployment of the MedTech service if a different **Resolution type** is later required.
The MedTech service provides near real-time processing and also attempts to reduce the number of requests made to the FHIR service by grouping requests into batches of 300 [normalized messages](#normalize). If there's a low volume of data and 300 normalized messages haven't been added to the group, the corresponding FHIR Observations in that group are persisted to the FHIR service after approximately five minutes. This means that when there are fewer than 300 normalized messages to be processed, there may be a delay of approximately five minutes before FHIR Observations are created or updated in the FHIR service.
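The batching behavior described above amounts to a size-or-age flush rule. The sketch below is illustrative only, not the service's actual implementation; the function name and thresholds simply mirror the text:

```python
from datetime import datetime, timedelta

# Illustrative sketch (not the service's actual code): a batch is persisted
# when it reaches 300 normalized messages, or when roughly five minutes have
# passed since the batch was opened.
BATCH_SIZE = 300
MAX_AGE = timedelta(minutes=5)

def should_flush(batch, opened_at, now):
    return len(batch) >= BATCH_SIZE or (now - opened_at) >= MAX_AGE

opened = datetime(2023, 2, 23, 12, 0)
full_batch = should_flush(list(range(300)), opened, opened)               # size threshold reached
aged_out = should_flush([1, 2], opened, opened + timedelta(minutes=6))    # ~five minutes elapsed
still_open = should_flush([1, 2], opened, opened + timedelta(minutes=1))  # small and recent
```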
hpc-cache Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/access-policies.md
Last updated 05/19/2022-+ # Control client access
hpc-cache Cache Usage Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/cache-usage-models.md
Last updated 06/29/2022-+ <!-- filename is referenced from GUI in aka.ms/hpc-cache-usagemodel -->
hpc-cache Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/configuration.md
Last updated 05/16/2022-+ # Configure additional Azure HPC Cache settings
hpc-cache Custom Flush Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/custom-flush-script.md
Last updated 07/07/2022-+ # Customize file write-back in Azure HPC Cache
hpc-cache Customer Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/customer-keys.md
Last updated 11/02/2022-+ # Use customer-managed encryption keys for Azure HPC Cache
hpc-cache Directory Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/directory-services.md
Last updated 07/27/2022-+ # Configure directory services
hpc-cache Hpc Cache Ingest Msrsync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-ingest-msrsync.md
Last updated 10/30/2019-+ # Azure HPC Cache data ingest - msrsync method
hpc-cache Hpc Cache Ingest Parallelcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-ingest-parallelcp.md
Last updated 10/30/2019-+ # Azure HPC Cache data ingest - parallel copy script method
hpc-cache Hpc Cache Ingest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-ingest.md
Last updated 05/02/2022-+ # Move data to Azure Blob storage
hpc-cache Hpc Cache Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-manage.md
Last updated 06/29/2022-+ # Manage your cache
hpc-cache Hpc Cache Namespace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-namespace.md
Last updated 05/02/2022-+ # Plan the aggregated namespace
hpc-cache Hpc Cache Support Ticket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-support-ticket.md
Last updated 07/21/2022-+ # Contact support for help with Azure HPC Cache
hpc-cache Increase Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/increase-quota.md
Last updated 07/25/2022-+ # Request an HPC Cache quota increase
hpc-cache Manage Storage Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/manage-storage-targets.md
Last updated 06/29/2022-+ # View and manage storage targets
hpc-cache Prime Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/prime-cache.md
Last updated 06/01/2022-+ # Pre-load files in Azure HPC Cache
hpc-cache Troubleshoot Nas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/troubleshoot-nas.md
Last updated 08/29/2022-+ # Troubleshoot NAS configuration and NFS storage target issues
internet-peering Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/faqs.md
Title: Internet peering - FAQs
-description: Internet peering - FAQs
+ Title: Internet peering - FAQ
+description: Internet peering frequently asked questions (FAQ)
- Previously updated : 11/27/2019+ Last updated : 02/24/2023 +
-# Internet peering - FAQs
+# Internet peering frequently asked questions (FAQ)
-You may review information below for general questions.
-**What is the difference between Internet peering and Peering Service?**
+## What is the difference between Internet peering and Peering Service?
-Peering Service is a service that intends to provide enterprise grade public IP connectivity to Microsoft for its enterprise customers. Enterprise grade Internet includes connectivity through ISPs that have high throughput connectivity to Microsoft and redundancy for a HA connectivity. Additionally, user traffic is optimized for latency to the nearest Microsoft Edge. Peering Service builds on peering connectivity with partner carrier. The peering connectivity with partner must be Direct peering as opposed to Exchange peering. Direct peering must have local and geo-redundancy.
+Peering Service is a service that provides enterprise grade public IP connectivity to Microsoft for its enterprise customers. Enterprise grade internet includes connectivity through ISPs that have high throughput connectivity to Microsoft and redundancy for HA connectivity. Additionally, user traffic is optimized for latency to the nearest Microsoft Edge. Peering Service builds on peering connectivity with a partner carrier. The peering connectivity with the partner must be Direct peering as opposed to Exchange peering. Direct peering must have local and geo-redundancy.
-**What is legacy peering?**
+## What is legacy peering?
-Peering connection set up using Azure PowerShell is managed as an Azure resource. Peering connections set up in the past are stored in our system as legacy peering which you may choose to convert to manage as an Azure resource.
+Peering connections set up in the past are stored in our system as legacy peering, which you can choose to convert to manage as an Azure resource. Any new peering connection set up using Azure PowerShell is managed as an Azure resource.
-**When New-AzPeeringDirectConnectionObject is called, what IP addresses are given to Microsoft and Peer devices?**
+## When New-AzPeeringDirectConnectionObject is called, what IP addresses are given to Microsoft and Peer devices?
-When calling New-AzPeeringDirectConnectionObject cmdlet, a `/31` address (`a.b.c.d/31`) or a `/30` address (`a.b.c.d/30`) is entered. The first IP address (`a.b.c.d+0`) is given to Peer's device and second IP address (`a.b.c.d+1`) is given to Microsoft device.
+When you use the `New-AzPeeringDirectConnectionObject` cmdlet, you enter a `/31` prefix (`a.b.c.d/31`) or a `/30` prefix (`a.b.c.d/30`). The first IP address (`a.b.c.d+0`) is given to the peer device and the second IP address (`a.b.c.d+1`) is given to the Microsoft device.
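The assignment described in the answer above can be sketched with Python's standard `ipaddress` module; the prefix used below is a documentation example, not a real allocation:

```python
import ipaddress

# Sketch of the assignment described above: the first address of the entered
# /31 or /30 prefix (a.b.c.d+0) goes to the peer device, and the second
# address (a.b.c.d+1) goes to the Microsoft device.
def assign_addresses(prefix: str) -> dict:
    net = ipaddress.ip_network(prefix, strict=True)
    first = net.network_address
    return {"peer": str(first), "microsoft": str(first + 1)}

assignment = assign_addresses("198.51.100.0/31")
# → {'peer': '198.51.100.0', 'microsoft': '198.51.100.1'}
```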
-**What is MaxPrefixesAdvertisedIPv4 and MaxPrefixesAdvertisedIPv6 parameters in New-AzPeeringDirectConnectionObject cmdlet?**
+## What are the MaxPrefixesAdvertisedIPv4 and MaxPrefixesAdvertisedIPv6 parameters in the New-AzPeeringDirectConnectionObject cmdlet?
-MaxPrefixesAdvertisedIPv4 and MaxPrefixesAdvertisedIPv6 parameters represent the maximum number of IPv4 and IPv6 prefixes a Peer wants Microsoft to accept. These parameters can be modified anytime.
+The `MaxPrefixesAdvertisedIPv4` and `MaxPrefixesAdvertisedIPv6` parameters represent the maximum number of IPv4 and IPv6 prefixes a peer wants Microsoft to accept. These parameters can be modified anytime.
internet-peering Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/policy.md
Title: Microsoft peering policy
-description: Microsoft peering policy.
+ Title: Peering policy
+
+description: Learn about Microsoft's peering policy.
Previously updated : 12/15/2020 Last updated : 02/23/2023 -+ # Peering policy
-Microsoft maintains a selective peering policy designed to ensure the best possible customer experience backed by industry standards and best practices, scaling for future demand and strategic placement of peering. As such, Microsoft reserves the right to make exceptions to the policy as deemed necessary. Microsoft's general requirements from your network are explained in the sections below. These are applicable to both Direct peering and Exchange peering requests.
+Microsoft maintains a selective peering policy designed to ensure the best possible customer experience backed by industry standards and best practices, scaling for future demand and strategic placement of peering. As such, Microsoft reserves the right to make exceptions to the policy as deemed necessary. Microsoft's general requirements from your network are explained in the following sections. These requirements are applicable to both Direct peering and Exchange peering requests.
## Technical requirements * A fully redundant network with sufficient capacity to exchange traffic without congestion.
-* Peer will have a publicly routable Autonomous System Number (ASN).
+* Peer has a publicly routable Autonomous System Number (ASN).
* Both IPv4 and IPv6 are supported and Microsoft expects to establish sessions of both types in each peering location.
-* MD5 is not supported.
+* MD5 isn't supported.
* **ASN details:** * Microsoft manages AS8075 along with the following ASNs: AS8068, AS8069, AS12076. For a complete list of ASNs with AS8075 peering, reference AS-SET MICROSOFT.
- * All parties peering with Microsoft agree not to accept routes from AS12076 (Express Route) under any circumstances and should filter out AS12076 on all peers.
+ * All parties peering with Microsoft agree not to accept routes from AS12076 (ExpressRoute) under any circumstances and should filter out AS12076 on all peers.
* **Routing policy:**
- * Peer will have at least one publicly routable /24.
- * Microsoft will overwrite received Multi-Exit Discriminators (MED).
- * Microsoft prefers to receive BGP community-tags from peers to indicate route origination.
+ * Peer has at least one publicly routable /24 prefix.
+ * Microsoft overwrites received Multi Exit Discriminators (MED).
+ * Microsoft prefers to receive BGP community tags from peers to indicate route origination.
* We recommend peers set a max-prefix of 2000 (IPv4) and 500 (IPv6) routes on peering sessions with Microsoft.
 * Unless specifically agreed upon beforehand, peers are expected to announce consistent routes in all locations where they peer with Microsoft.
 * In general, peering sessions with AS8075 will advertise all AS-MICROSOFT routes. Microsoft may announce some regional specifics.
 * Neither party will establish a static route, a route of last resort, or otherwise send traffic to the other party for a route not announced via BGP.
- * Peer are required to register their routes in a public Internet Routing Registry (IRR) database, for the purpose of filtering, and will keep this information up to date.
- * Peers will adhere to MANRS industry standards for route security. At its sole discretion, Microsoft may choose: 1.) not to establish peering with companies that do not have routes signed and registered; 2.) to remove invalid RPKI routes; 3.) not to accept routes from established peers that are not registered and signed.
+ * Peers are required to register their routes in a public Internet Routing Registry (IRR) database, for the purposes of filtering, and keep this information up to date.
+ * Peers adhere to MANRS industry standards for route security. At its sole discretion, Microsoft may choose:
+ 1. not to establish peering with companies that don't have routes signed and registered
+ 1. to remove invalid RPKI routes
+ 1. not to accept routes from established peers that aren't registered and signed
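The max-prefix recommendation above (2000 IPv4 and 500 IPv6 routes) can be expressed as a simple check. This sketch is illustrative only and not an official validation tool; the function and dictionary names are made up for the example:

```python
# Illustrative check of the recommended max-prefix limits from the routing
# policy above: 2000 IPv4 routes and 500 IPv6 routes per peering session.
MAX_PREFIXES = {"ipv4": 2000, "ipv6": 500}

def within_max_prefix(advertised: dict) -> bool:
    # advertised maps address family -> number of routes announced.
    return all(advertised.get(family, 0) <= limit
               for family, limit in MAX_PREFIXES.items())

ok = within_max_prefix({"ipv4": 1200, "ipv6": 150})        # within both limits
too_many = within_max_prefix({"ipv4": 2500, "ipv6": 150})  # exceeds the IPv4 limit
```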
## Operational requirements
-* A fully staffed 24x7 Network Operations Center (NOC), capable of assisting in the resolution of all technical and performance issues, security violations, denial of service attacks, or any other abuse originating within the peer or their customers.
-* Peers are expected to have a complete and up-to-date profile on [PeeringDB](https://www.peeringdb.com) including a 24x7 NOC email from corporate domain and phone number. We use this information to validate the peer's details such as NOC information, technical contact information, and their presence at the peering facilities etc. Personal Yahoo, Gmail, and hotmail accounts are not accepted.
+* A fully staffed 24x7 Network Operations Center (NOC) that's capable of assisting in the resolution of all technical and performance issues, security violations, denial of service attacks, or any other abuse originating within the peer or their customers.
+* Peers are expected to have a complete and up-to-date profile on [PeeringDB](https://www.peeringdb.com), including a 24x7 NOC email from a corporate domain and a phone number. We use this information to validate the peer's details, such as NOC information, technical contact information, and presence at the peering facilities. Personal Yahoo, Gmail, and Hotmail accounts aren't accepted.
## Physical connection requirements
* The locations where you can connect with Microsoft for Direct peering or Exchange peering are listed in [PeeringDB](https://www.peeringdb.com/net/694).
* **Exchange peering:**
- * Peers are expected to have at minimum a 10 Gb connection to the exchange.
+ * Peers are expected to have at minimum a 10 Gbps connection to the exchange.
* Peers are expected to upgrade their ports when peak utilization exceeds 50%. * Microsoft encourages peers to maintain diverse connectivity to exchange to support failover scenarios. * **Direct peering:**
- * Interconnection must be over single-mode fiber using 100 Gbps optics.
- * Microsoft will only establish Direct peering with ISP or Network Service providers.
+ * Interconnection must be over 100 Gbps single-mode fiber.
+ * Microsoft only establishes Direct peering with internet service providers (ISPs) or network service providers (NSPs).
* Peers are expected to upgrade their ports when peak utilization exceeds 50% and maintain diverse capacity in each metro, either within a single location or across several locations in a metro.
- * Each Direct peering consists of two connections to two Microsoft edge routers from the Peer's routers located in Peer's edge. Microsoft requires dual BGP sessions across these connections. The peer may choose not to deploy redundant devices at their end.
+ * Each Direct peering consists of two connections to two Microsoft edge routers from the peer edge routers. Microsoft requires dual BGP sessions across these connections. The peer may choose not to deploy redundant devices at their end.
## Traffic requirements
-* Peers over Exchange peering must have at minimum 500 Mb of traffic and less than 2 Gb. For traffic exceeding 2 Gb Direct peering should be established.
-* Microsoft requires at minimum 2 Gb for direct peering. Each mutually agreed to peering location must support failover that ensures peering remains localized during a failover scenario.
+* Peers over Exchange peering must have at minimum 500 Mbps of traffic and less than 2 Gbps. For traffic exceeding 2 Gbps, Direct peering should be established.
+* Microsoft requires at minimum 2 Gbps for Direct peering. Each mutually agreed peering location must support failover that ensures peering remains localized during a failover scenario.
## Next steps
-* To learn about steps to set up Direct peering with Microsoft, follow [Direct peering walkthrough](walkthrough-direct-all.md).
-* To learn about steps to set up Exchange peering with Microsoft, follow [Exchange peering walkthrough](walkthrough-exchange-all.md).
+* To learn how to set up Direct peering with Microsoft, see [Direct peering walkthrough](walkthrough-direct-all.md).
+* To learn how to set up Exchange peering with Microsoft, see [Exchange peering walkthrough](walkthrough-exchange-all.md).
internet-peering Walkthrough Direct All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-direct-all.md
Title: Direct peering walkthrough
-description: Direct peering walkthrough.
+
+description: Get started with Direct peering.
Previously updated : 12/15/2020 Last updated : 02/24/2023 -+ # Direct peering walkthrough
-This section explains the steps you need to follow to set up and manage a Direct peering.
+In this article, you learn how to set up and manage a Direct peering.
## Create a Direct peering
-> [!div class="mx-imgBorder"]
-> ![Direct peering workflow and connection states](./media/direct-peering.png)
-The following steps must be followed in order to provision a Direct peering:
-1. Review Microsoft [peering policy](https://peering.azurewebsites.net/peering) to understand requirements for Direct peering.
+
+The following steps must be followed to provision a Direct peering:
+
+1. Review Microsoft [peering policy](policy.md) to understand requirements for Direct peering.
1. Follow the instructions in [Create or modify a Direct peering](howto-direct-powershell.md) to submit a peering request.
1. After you submit a peering request, Microsoft will contact you using your registered email address to provide a Letter of Authorization (LOA) or to request other information.
-1. Once peering request is approved, connection state changes to ProvisioningStarted.
+1. Once peering request is approved, connection state changes to *ProvisioningStarted*.
1. You need to:
   1. complete wiring according to the LOA
   1. (optionally) perform link test using 169.254.0.0/16
- 1. configure BGP session and then notify us.
+ 1. configure BGP session and then notify Microsoft.
1. Microsoft provisions the BGP session with a DENY ALL policy and validates end-to-end.
-1. If successful, you will receive a notification that peering connection state is Active.
+1. If successful, you receive a notification that peering connection state is *Active*.
1. Traffic will then be allowed through the new peering.
-Note that connection states are not to be confused with standard [BGP](https://en.wikipedia.org/wiki/Border_Gateway_Protocol) session states.
+> [!NOTE]
+> Connection states are different from standard BGP session states.
## Convert a legacy Direct peering to Azure resource
-The following steps must be followed in order to convert a legacy Direct peering to Azure resource:
-1. Follow the instructions in [Convert a legacy Direct peering to Azure resource](howto-legacy-direct-powershell.md)
-1. After you submit the conversion request, Microsoft will review the request and contact you if required.
-1. Once approved, you will see your Direct peering with a connection state as Active.
+
+The following steps must be followed to convert a legacy Direct peering to Azure resource:
+1. Follow the instructions in [Convert a legacy Direct peering to Azure resource](howto-legacy-direct-portal.md).
+1. After you submit the conversion request, Microsoft will review the request and contact you if necessary.
+1. Once approved, you see your Direct peering with a connection state of *Active*.
## Deprovision Direct peering
-Contact [Microsoft peering](mailto:peering@microsoft.com) team to deprovision Direct peering.
-When a Direct peering is set for deprovision, you will see the connection state as **PendingRemove**
+Contact [Microsoft peering](mailto:peering@microsoft.com) team to deprovision a Direct peering.
+
+When a Direct peering is set for deprovision, you see the connection state as *PendingRemove*.
> [!NOTE]
-> If you run PowerShell cmdlet to delete the Direct peering when the ConnectionState is ProvisioningStarted or ProvisioningCompleted the operation will fail.
+> If you run the PowerShell cmdlet to delete the Direct peering when the ConnectionState is *ProvisioningStarted* or *ProvisioningCompleted*, the operation will fail.
## Next steps
-* Learn about [Prerequisites to set up peering with Microsoft](prerequisites.md).
+* Learn about the [Prerequisites to set up peering with Microsoft](prerequisites.md).
internet-peering Walkthrough Exchange All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-exchange-all.md
Title: Exchange peering walkthrough
-description: Exchange peering walkthrough.
+
+description: Get started with Exchange peering.
Previously updated : 12/15/2020
Last updated : 02/23/2023

# Exchange peering walkthrough
-This section explains the steps you need to follow to set up and manage an Exchange peering.
+In this article, you learn how to set up and manage an Exchange peering.
## Create an Exchange peering
-> [!div class="mx-imgBorder"]
-> ![Exchange peering workflow and connection states](./media/exchange-peering.png)
+ The following steps must be followed to provision an Exchange peering:
-1. Review Microsoft [peering policy](https://peering.azurewebsites.net/peering) to understand requirements for Exchange peering.
-1. Find Microsoft peering location and peering facility id in [PeeringDB](https://www.peeringdb.com/net/694)
-1. Request Exchange peering for a peering location using the instructions in [Create and modify an Exchange peering using PowerShell](howto-exchange-powershell.md) article for more details.
-1. After you submit a peering request, Microsoft will review the request and contact you if required.
-1. Once approved, connection state changes to Approved
-1. Configure BGP session at your end and notify Microsoft
-1. We will provision BGP session with DENY ALL policy and validate end-to-end.
-1. If successful, you will receive a notification that peering connection state is Active.
+1. Review Microsoft [peering policy](policy.md) to understand requirements for Exchange peering.
+1. Find the Microsoft peering location and peering facility ID in [PeeringDB](https://www.peeringdb.com/net/694).
+1. Request Exchange peering for a peering location using the instructions in [Create and modify an Exchange peering](howto-exchange-portal.md).
+1. After you submit a peering request, Microsoft will review the request and contact you if necessary.
+1. Once peering request is approved, connection state changes to *Approved*.
+1. Configure BGP session at your end and notify Microsoft.
+1. Microsoft provisions the BGP session with a DENY ALL policy and validates end-to-end.
+1. If successful, you receive a notification that peering connection state is *Active*.
1. Traffic will then be allowed through the new peering.
-Note that connection states are not to be confused with standard [BGP](https://en.wikipedia.org/wiki/Border_Gateway_Protocol) session states.
+> [!NOTE]
+> Connection states aren't to be confused with standard BGP session states.
## Convert a legacy Exchange peering to Azure resource

The following steps must be followed to convert a legacy Exchange peering to Azure resource:
-1. Follow the instructions in [Convert a legacy Exchange peering to Azure resource](howto-legacy-exchange-powershell.md)
-1. After you submit the conversion request, Microsoft will review the request and contact you if required.
-1. Once approved, you will see your Exchange peering with connection state as Active.
+1. Follow the instructions in [Convert a legacy Exchange peering to Azure resource](howto-legacy-exchange-portal.md)
+1. After you submit the conversion request, Microsoft will review the request and contact you if necessary.
+1. Once approved, you see your Exchange peering with a connection state of *Active*.
## Deprovision Exchange peering
+
Contact [Microsoft peering](mailto:peering@microsoft.com) to deprovision Exchange peering.
-When an Exchange peering is set for deprovision, you will see the connection state as **PendingRemove**
+When an Exchange peering is set for deprovision, you see the connection state as *PendingRemove*.
> [!NOTE]
-> If you run PowerShell cmdlet to delete the Exchange peering when the connection state is ProvisioningStarted or ProvisioningCompleted the operation will fail.
+> If you run the PowerShell cmdlet to delete the Exchange peering when the connection state is *ProvisioningStarted* or *ProvisioningCompleted*, the operation will fail.
## Next steps
-* Learn about [Prerequisites to set up peering with Microsoft](prerequisites.md).
+* Learn about the [Prerequisites to set up peering with Microsoft](prerequisites.md).
internet-peering Walkthrough Peering Service All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-peering-service-all.md
Title: Peering Service partner walkthrough
-description: Peering Service partner walkthrough.
+
+description: Get started with Peering Service partner.
Previously updated : 12/15/2020
Last updated : 02/23/2023

# Peering Service partner walkthrough
-This section explains the steps a provider needs to follow to enable a Direct peering for Peering Service.
+This article explains the steps a provider needs to follow to enable a Direct peering for Peering Service.
## Create Direct peering connection for Peering Service
-Service Providers can expand their geographical reach by creating new Direct peering that support Peering Service. To accomplish this,
-1. Become a Peering Service partner if not already
-1. Follow the instructions to [Create or modify a Direct peering using the portal](howto-direct-portal.md). Ensure it meets high-availability requirement.
-1. Then, follow steps to [Enable Peering Service on a Direct peering using the portal](howto-peering-service-portal.md).
+
+Service Providers can expand their geographical reach by creating a new Direct peering that supports Peering Service as follows:
+
+1. Become a Peering Service partner.
+1. Follow the instructions to [Create or modify a Direct peering](howto-direct-portal.md). Ensure it meets the high-availability requirement.
+1. Follow the steps to [Enable Peering Service on a Direct peering using the portal](howto-peering-service-portal.md).
## Use legacy Direct peering connection for Peering Service
-If you have legacy Direct peering that you want to use to support Peering Service,
-1. Become a Peering Service partner if not already.
-1. Follow the instructions to [Convert a legacy Direct peering to Azure resource using the portal](howto-legacy-direct-portal.md). If required, order additional circuits to meet high-availability requirement.
-1. Then, follow steps to [Enable Peering Service on a Direct peering using the portal](howto-peering-service-portal.md).
+
+If you have a legacy Direct peering that you want to use to support Peering Service:
+
+1. Become a Peering Service partner.
+1. Follow the instructions to [Convert a legacy Direct peering to Azure resource](howto-legacy-direct-portal.md). If necessary, order more circuits to meet the high-availability requirement.
+1. Follow the steps to [Enable Peering Service on a Direct peering](howto-peering-service-portal.md).
## Next steps
-* Learn about [peering policy](https://peering.azurewebsites.net/peering).
-* To learn about steps to set up Direct peering with Microsoft, follow [Direct peering walkthrough](walkthrough-direct-all.md).
-* To learn about steps to set up Exchange peering with Microsoft, follow [Exchange peering walkthrough](walkthrough-exchange-all.md).
+* Learn about Microsoft's [peering policy](policy.md).
+* To learn how to set up Direct peering with Microsoft, see [Direct peering walkthrough](walkthrough-direct-all.md).
+* To learn how to set up Exchange peering with Microsoft, see [Exchange peering walkthrough](walkthrough-exchange-all.md).
iot-dps Quick Enroll Device Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-enroll-device-tpm.md
AToAAQALAAMAsgAgg3GXZ0SEs/gakMyNRqXXJP1S124GUgtk8qHaGzMUaaoABgCAAEMAEAgAAAAAAAEA
:::zone-end
-To verify that the enrollment group has been created:
+To verify that the individual enrollment has been created:
1. In the Azure portal, select your Device Provisioning Service.
iot-edge Configure Connect Verify Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/configure-connect-verify-gpu.md
Let's add an [NVIDIA DIGITS](https://docs.nvidia.com/deeplearning/digits/index.h
1. Select the **Environment Variables** tab.
-1. Add the environment variable name `NVIDIA_VISIBLE_DEVICES` with the value `0`. The value represents a list of your modules on your device, with `0` being the beginning of the list. This value is how many devices you want assigned to a GPU. Since we only have one module here, we want the first one on our list to be GPU-enabled.
+1. Add the environment variable name `NVIDIA_VISIBLE_DEVICES` with the value `0`. This variable controls which GPUs are visible to the containerized application running on the edge device. It can be set to a comma-separated list of device IDs that correspond to the physical GPUs in the system. For example, if the system has two GPUs with device IDs 0 and 1, setting `NVIDIA_VISIBLE_DEVICES=0,1` makes both GPUs visible to the container. In this article, because the VM has only one GPU, we use the first (and only) one.
| Name | Type | Value |
| :--- | ---- | ----- |
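For reference, the same variable can also be supplied through the module's `env` section in an IoT Edge deployment manifest; setting it in the portal's Environment Variables tab produces an equivalent entry. The fragment below is a sketch only — the module name and image tag are hypothetical:

```json
"modules": {
  "digitsModule": {
    "type": "docker",
    "status": "running",
    "restartPolicy": "always",
    "env": {
      "NVIDIA_VISIBLE_DEVICES": { "value": "0" }
    },
    "settings": {
      "image": "nvcr.io/nvidia/digits:21.08"
    }
  }
}
```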
iot-hub-device-update Device Update Howto Proxy Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-howto-proxy-updates.md
This tutorial uses an Ubuntu Server 18.04 LTS virtual machine (VM) as an example
5. Restart the Device Update agent:

   ```sh
- sudo systemctl restart adu-agent
+ sudo systemctl restart deviceupdate-agent
   ```

### Set up mock components
iot-hub Iot Hub Devguide Quotas Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-quotas-throttling.md
IoT Hub enforces other operational limits:
| Automatic device and module configurations<sup>1</sup> | 100 configurations per basic or standard SKU hub. 10 configurations per free SKU hub. |
| IoT Edge automatic deployments<sup>1</sup> | 50 modules per deployment. 100 deployments (including layered deployments) per basic or standard SKU hub. 10 deployments per free SKU hub. |
| Twins<sup>1</sup> | Maximum size of desired properties and reported properties sections are 32 KB each. Maximum size of tags section is 8 KB. Maximum size of each individual property in every section is 4 KB. |
-| Shared access policies | Maximum number of shared access policies is 16. |
+| Shared access policies | Maximum number of shared access policies is 16. Within that limit, the maximum number of shared access policies that grant *service connect* access is 10. |
| Restrict outbound network access | Maximum number of allowed FQDNs is 20. |
| x509 CA certificates | Maximum number of x509 CA certificates that can be registered on IoT Hub is 25. |
load-balancer Cross Region Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cross-region-overview.md
Traffic started by the user will travel to the closest participating region thro
Cross-region load balancer routes the traffic to the appropriate regional load balancer.
+
### Participating regions
* East US
* West Europe
machine-learning Azure Machine Learning Ci Image Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-ci-image-release-notes.md
# Azure Machine Learning compute instance image release notes
-In this article, learn about Azure Machine Learning compute instance image releases. Azure Machine Learning maintains host operating system images for [Azure ML compute instance](./concept-compute-instance.md) and [Data Science Virtual Machines](./data-science-virtual-machine/release-notes.md). Due to the rapidly evolving needs and package updates, we target to release new images every month.
+In this article, learn about Azure Machine Learning compute instance image releases. Azure Machine Learning maintains host operating system images for [Azure Machine Learning compute instance](./concept-compute-instance.md) and [Data Science Virtual Machines](./data-science-virtual-machine/release-notes.md). Due to the rapidly evolving needs and package updates, we target to release new images every month.
Azure Machine Learning checks and validates any machine learning packages that may require an upgrade. Updates incorporate the latest OS-related patches from Canonical as the original Linux OS publisher. In addition to patches applied by the original publisher, Azure Machine Learning updates system packages when updates are available. For details on the patching process, see [Vulnerability Management](./concept-vulnerability-management.md).
Version: `23.01.19`
Main changes:
- Added new conda environment `jupyter-env`
-- Moved jupyter service to new `jupyter-env` conda environment
+- Moved Jupyter service to new `jupyter-env` conda environment
- `Azure Machine Learning SDK` to version `1.48.0`

Main environment specific updates:
machine-learning Azure Machine Learning Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-glossary.md
A compute is a designated compute resource where you run your job or host your e
* **Compute cluster** - a managed-compute infrastructure that allows you to easily create a cluster of CPU or GPU compute nodes in the cloud.
* **Compute instance** - a fully configured and managed development environment in the cloud. You can use the instance as a training or inference compute for development and testing. It's similar to a virtual machine on the cloud.
-* **Kubernetes cluster** - used to deploy trained machine learning models to Azure Kubernetes Service. You can create an Azure Kubernetes Service (AKS) cluster from your Azure ML workspace, or attach an existing AKS cluster.
+* **Kubernetes cluster** - used to deploy trained machine learning models to Azure Kubernetes Service. You can create an Azure Kubernetes Service (AKS) cluster from your Azure Machine Learning workspace, or attach an existing AKS cluster.
* **Attached compute** - You can attach your own compute resources to your workspace and use them for training and inference. ## Data
Azure Machine Learning environments are an encapsulation of the environment wher
### Types of environment
-Azure ML supports two types of environments: curated and custom.
+Azure Machine Learning supports two types of environments: curated and custom.
Curated environments are provided by Azure Machine Learning and are available in your workspace by default. Intended to be used as is, they contain collections of Python packages and settings to help you get started with various machine learning frameworks. These pre-created environments also allow for faster deployment time. For a full list, see the [curated environments article](resource-curated-environments.md).
-In custom environments, you're responsible for setting up your environment. Make sure to install the packages and any other dependencies that your training or scoring script needs on the compute. Azure ML allows you to create your own environment using
+In custom environments, you're responsible for setting up your environment. Make sure to install the packages and any other dependencies that your training or scoring script needs on the compute. Azure Machine Learning allows you to create your own environment using
* A docker image
* A base docker image with a conda YAML to customize further
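As a sketch of the second option, the conda YAML is a standard conda environment file layered on top of the base Docker image; the environment name and pinned versions here are purely illustrative:

```yaml
name: my-custom-env
channels:
  - conda-forge
dependencies:
  - python=3.10
  - pip
  - pip:
      - azureml-defaults
      - scikit-learn
```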
In custom environments, you're responsible for setting up your environment. Make
## Model
-Azure machine learning models consist of the binary file(s) that represent a machine learning model and any corresponding metadata. Models can be created from a local or remote file or directory. For remote locations `https`, `wasbs` and `azureml` locations are supported. The created model will be tracked in the workspace under the specified name and version. Azure ML supports three types of storage format for models:
+Azure Machine Learning models consist of the binary file(s) that represent a machine learning model and any corresponding metadata. Models can be created from a local or remote file or directory. For remote locations `https`, `wasbs` and `azureml` locations are supported. The created model will be tracked in the workspace under the specified name and version. Azure Machine Learning supports three types of storage format for models:
* `custom_model`
* `mlflow_model`
machine-learning Azure Machine Learning Release Notes Cli V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-release-notes-cli-v2.md
__RSS feed__: Get notified when this page is updated by copying and pasting the
- `az ml job` - For all job types, flattened the `code` section of the YAML schema. Instead of `code.local_path` to specify the path to the source code directory, it is now just `code`
- - For all job types, changed the schema for defining data inputs to the job in the job YAML. Instead of specifying the data path using either the `file` or `folder` fields, use the `path` field to specify either a local path, a URI to a cloud path containing the data, or a reference to an existing registered Azure ML data asset via `path: azureml:<data_name>:<data_version>`. Also specify the `type` field to clarify whether the data source is a single file (`uri_file`) or a folder (`uri_folder`). If `type` field is omitted, it defaults to `type: uri_folder`. For more information, see the section of any of the [job YAML references](reference-yaml-job-command.md) that discuss the schema for specifying input data.
+ - For all job types, changed the schema for defining data inputs to the job in the job YAML. Instead of specifying the data path using either the `file` or `folder` fields, use the `path` field to specify either a local path, a URI to a cloud path containing the data, or a reference to an existing registered Azure Machine Learning data asset via `path: azureml:<data_name>:<data_version>`. Also specify the `type` field to clarify whether the data source is a single file (`uri_file`) or a folder (`uri_folder`). If `type` field is omitted, it defaults to `type: uri_folder`. For more information, see the section of any of the [job YAML references](reference-yaml-job-command.md) that discuss the schema for specifying input data.
  - In the [sweep job YAML schema](reference-yaml-job-sweep.md), changed the `sampling_algorithm` field from a string to an object in order to support additional configurations for the random sampling algorithm type
  - Removed the component job YAML schema. With this release, if you want to run a command job inside a pipeline that uses a component, just specify the component to the `component` field of the command job YAML definition.
  - For all job types, added support for referencing the latest version of a nested asset in the job YAML configuration. When referencing a registered environment or data asset to use as input in a job, you can alias by latest version rather than having to explicitly specify the version. For example: `environment: azureml:AzureML-Minimal@latest`
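Put together, the flattened `code` field, the typed data input with its `path`, and a latest-version asset reference described above might look like this in a command job YAML. This is a sketch; the asset name, version, and script are hypothetical:

```yaml
code: ./src
command: python train.py --data ${{inputs.training_data}}
inputs:
  training_data:
    type: uri_folder
    path: azureml:my-training-data:1
environment: azureml:AzureML-Minimal@latest
```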
__RSS feed__: Get notified when this page is updated by copying and pasting the
  - Added support for running pipeline jobs ([pipeline job YAML schema](reference-yaml-job-pipeline.md))
  - Added support for job input literals and input data URIs for all job types
  - Added support for job outputs for all job types
- - Changed the expression syntax from `{ <expression> }` to `${{ <expression> }}`. For more information, see [Expression syntax for configuring Azure ML jobs](reference-yaml-core-syntax.md#expression-syntax-for-configuring-azure-ml-jobs-and-components)
+ - Changed the expression syntax from `{ <expression> }` to `${{ <expression> }}`. For more information, see [Expression syntax for configuring Azure Machine Learning jobs](reference-yaml-core-syntax.md#expression-syntax-for-configuring-azure-machine-learning-jobs-and-components)
- `az ml environment`
  - Updated [environment YAML schema](reference-yaml-environment.md)
  - Added support for creating environments from Docker build context
__RSS feed__: Get notified when this page is updated by copying and pasting the
  - Renamed `az ml data` subgroup to `az ml dataset`
  - Updated dataset YAML schema
- `az ml component`
- - Added the `az ml component` commands for managing Azure ML components
+ - Added the `az ml component` commands for managing Azure Machine Learning components
  - Added support for command components ([command component YAML schema](reference-yaml-component-command.md))
- `az ml online-endpoint`
  - `az ml endpoint` subgroup split into two separate groups: `az ml online-endpoint` and `az ml batch-endpoint`
machine-learning Component Reference V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/component-reference-v2.md
Azure Machine Learning designer components (Designer) allow users to create mach
This reference content provides background on each of the custom components (v2) available in Azure Machine Learning designer.
-You can navigate to Custom components in AzureML Studio as shown in the following image.
+You can navigate to Custom components in Azure Machine Learning Studio as shown in the following image.
:::image type="content" source="media/designer-new-pipeline.png" alt-text="Diagram showing the Designer UI for selecting a custom component.":::
machine-learning Convert To Csv https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/convert-to-csv.md
Even if you do most of your work in Azure Machine Learning, there are times when
+ Save the CSV file to cloud storage and connect to it from Power BI to create visualizations.
+ Use the CSV format to prepare data for use in R and Python.
-When you convert a dataset to CSV, the csv is saved in your Azure ML workspace. You can use an Azure storage utility to open and use the file directly. You can also access the CSV in the designer by selecting the **Convert to CSV** component, then select the histogram icon under the **Outputs** tab in the right panel to view the output. You can download the CSV from the Results folder to a local directory.
+When you convert a dataset to CSV, the CSV file is saved in your Azure Machine Learning workspace. You can use an Azure storage utility to open and use the file directly. You can also access the CSV in the designer by selecting the **Convert to CSV** component, then selecting the histogram icon under the **Outputs** tab in the right panel to view the output. You can download the CSV from the Results folder to a local directory.
## How to configure Convert to CSV
When you convert a dataset to CSV, the csv is saved in your Azure ML workspace.
Select the **Outputs** tab in the right panel of **Convert to CSV**, and select on one of these icons under the **Port outputs**.
-+ **Register dataset**: Select the icon and save the CSV file back to the Azure ML workspace as a separate dataset. You can find the dataset as a component in the component tree under the **My Datasets** section.
++ **Register dataset**: Select the icon and save the CSV file back to the Azure Machine Learning workspace as a separate dataset. You can find the dataset as a component in the component tree under the **My Datasets** section.
+ **View output**: Select the eye icon, and follow the instructions to browse the **Results_dataset** folder, and download the data.csv file.
machine-learning Create Python Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/create-python-model.md
This article shows how to use **Create Python Model** with a simple pipeline. He
```Python
- # The script MUST define a class named AzureMLModel.
+    # The script MUST define a class named AzureMLModel.
    # This class MUST at least define the following three methods:
    #     __init__: in which self.model must be assigned,
    #     train: which trains self.model, the two input arguments must be pandas DataFrame,
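To make the required interface concrete, here's a minimal, runnable skeleton of such a script. The majority-class "training" logic is purely illustrative and not part of the component's requirements:

```python
import pandas as pd

# The script must define a class named AzureMLModel with __init__,
# train, and predict methods. This sketch simply predicts the most
# frequent training label (illustrative only).
class AzureMLModel:
    def __init__(self):
        # self.model must be assigned here
        self.model = None

    def train(self, df_train, df_label):
        # Both inputs are pandas DataFrames; remember the majority class.
        self.model = df_label.iloc[:, 0].mode()[0]

    def predict(self, df):
        # Return predictions as a pandas DataFrame, one row per input row.
        return pd.DataFrame({"Scored Labels": [self.model] * len(df)})
```

As a quick local sanity check, training on labels `[0, 1, 1]` and scoring two rows yields two predictions of `1`.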
machine-learning Designer Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/designer-error-codes.md
See the following articles for help with Hive queries for machine learning:
**Resolution:** Revisit the component and inspect the SQL query for mistakes.
- Verify that the query works correctly outside of Azure ML by logging in to the database server directly and running the query.
+ Verify that the query works correctly outside of Azure Machine Learning by logging in to the database server directly and running the query.
If there is a SQL generated message reported by the component exception, take action based on the reported error. For example, the error messages sometimes include specific guidance on the likely error: + *No such column or missing database*, indicating that you might have typed a column name wrong. If you are sure the column name is correct, try using brackets or quotation marks to enclose the column identifier.
Resolution:
|Exception Messages|
|---|
|Datastore information is invalid.|
-|Datastore information is invalid. Failed to get AzureML datastore '{datastore_name}' in workspace '{workspace_name}'.|
+|Datastore information is invalid. Failed to get Azure Machine Learning datastore '{datastore_name}' in workspace '{workspace_name}'.|
## Error 0158
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automated-ml.md
Using **Azure Machine Learning**, you can design and run your automated ML train
1. **Identify the ML problem** to be solved: classification, forecasting, regression, computer vision or NLP.
-1. **Choose whether you want a code-first experience or a no-code studio web experience**: Users who prefer a code-first experience can use the [AzureML SDKv2](how-to-configure-auto-train.md) or the [AzureML CLIv2](how-to-train-cli.md). Get started with [Tutorial: Train an object detection model with AutoML and Python](tutorial-auto-train-image-models.md). Users who prefer a limited/no-code experience can use the [web interface](how-to-use-automated-ml-for-ml-models.md) in Azure Machine Learning studio at [https://ml.azure.com](https://ml.azure.com/). Get started with [Tutorial: Create a classification model with automated ML in Azure Machine Learning](tutorial-first-experiment-automated-ml.md).
+1. **Choose whether you want a code-first experience or a no-code studio web experience**: Users who prefer a code-first experience can use the [Azure Machine Learning SDKv2](how-to-configure-auto-train.md) or the [Azure Machine Learning CLIv2](how-to-train-cli.md). Get started with [Tutorial: Train an object detection model with AutoML and Python](tutorial-auto-train-image-models.md). Users who prefer a limited/no-code experience can use the [web interface](how-to-use-automated-ml-for-ml-models.md) in Azure Machine Learning studio at [https://ml.azure.com](https://ml.azure.com/). Get started with [Tutorial: Create a classification model with automated ML in Azure Machine Learning](tutorial-first-experiment-automated-ml.md).
-1. **Specify the source of the labeled training data**: You can bring your data to AzureML in [many different ways](concept-data.md).
+1. **Specify the source of the labeled training data**: You can bring your data to Azure Machine Learning in [many different ways](concept-data.md).
1. **Configure the automated machine learning parameters** that determine how many iterations over different models, hyperparameter settings, advanced preprocessing/featurization, and what metrics to look at when determining the best model. 1. **Submit the training job.**
See an example of classification and automated machine learning in this Python n
### Regression
-Similar to classification, regression tasks are also a common supervised learning task. AzureML offers featurization specific to regression problems. Learn more about [featurization options](how-to-configure-auto-train.md#data-featurization). You can also find the list of algorithms supported by AutoML [here](how-to-configure-auto-train.md#supported-algorithms).
+Similar to classification, regression tasks are also a common supervised learning task. Azure Machine Learning offers featurization specific to regression problems. Learn more about [featurization options](how-to-configure-auto-train.md#data-featurization). You can also find the list of algorithms supported by AutoML [here](how-to-configure-auto-train.md#supported-algorithms).
Different from classification where predicted output values are categorical, regression models predict numerical output values based on independent predictors. In regression, the objective is to help establish the relationship among those independent predictor variables by estimating how one variable impacts the others. For example, automobile price based on features like, gas mileage, safety rating, etc.
With this capability you can:
* Download or deploy the resulting model as a web service in Azure Machine Learning.
* Operationalize at scale, leveraging Azure Machine Learning [MLOps](concept-model-management-and-deployment.md) and [ML Pipelines](concept-ml-pipelines.md) capabilities.
-Authoring AutoML models for vision tasks is supported via the Azure ML Python SDK. The resulting experimentation jobs, models, and outputs can be accessed from the Azure Machine Learning studio UI.
+Authoring AutoML models for vision tasks is supported via the Azure Machine Learning Python SDK. The resulting experimentation jobs, models, and outputs can be accessed from the Azure Machine Learning studio UI.
Learn how to [set up AutoML training for computer vision models](how-to-auto-train-image-models.md).
machine-learning Concept Automl Forecasting Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-methods.md
Lags of feature columns | Optional
Rolling window aggregations (for example, rolling average) of target quantity | Optional
Seasonal decomposition ([STL](https://otexts.com/fpp3/stl.html)) | Optional
-You can configure featurization from the AutoML SDK via the [ForecastingJob](/python/api/azure-ai-ml/azure.ai.ml.automl.forecastingjob#azure-ai-ml-automl-forecastingjob-set-forecast-settings) class or from the [AzureML Studio web interface](how-to-use-automated-ml-for-ml-models.md#customize-featurization).
+You can configure featurization from the AutoML SDK via the [ForecastingJob](/python/api/azure-ai-ml/azure.ai.ml.automl.forecastingjob#azure-ai-ml-automl-forecastingjob-set-forecast-settings) class or from the [Azure Machine Learning Studio web interface](how-to-use-automated-ml-for-ml-models.md#customize-featurization).
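The optional lag and rolling-window features in the table above can be sketched in plain Python (an illustration of the transformations only, not AutoML's internal implementation):

```python
# Illustrative lag and rolling-average features for a univariate series.
# AutoML computes these internally during featurization.

def lag_feature(series, lag):
    """Shift the series by `lag` steps; early positions have no value (None)."""
    return [None] * lag + series[:-lag]

def rolling_mean(series, window):
    """Trailing mean over `window` observations; None until the window fills."""
    out = []
    for i in range(len(series)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(series[i + 1 - window : i + 1]) / window)
    return out

demand = [10, 12, 14, 16, 18]
print(lag_feature(demand, 1))   # [None, 10, 12, 14, 16]
print(rolling_mean(demand, 3))  # [None, None, 12.0, 14.0, 16.0]
```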
### Non-stationary time series detection and handling
machine-learning Concept Azure Machine Learning V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-azure-machine-learning-v2.md
Azure Machine Learning includes several resources and assets to enable you to pe
* [Workspace](#workspace)
* [Compute](#compute)
* [Datastore](#datastore)
-* **Assets**: created using Azure ML commands or as part of a training/scoring run. Assets are versioned and can be registered in the Azure ML workspace. They include:
+* **Assets**: created using Azure Machine Learning commands or as part of a training/scoring run. Assets are versioned and can be registered in the Azure Machine Learning workspace. They include:
* [Model](#model)
* [Environment](#environment)
* [Data](#data)
ws_basic = Workspace(
ml_client.workspaces.begin_create(ws_basic) # use MLClient to connect to the subscription and resource group and create workspace
```
-This [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/resources/workspace/workspace.ipynb) shows more ways to create an Azure ML workspace using SDK v2.
+This [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/resources/workspace/workspace.ipynb) shows more ways to create an Azure Machine Learning workspace using SDK v2.
A compute is a designated compute resource where you run your job or host your e
* **Compute cluster** - a managed-compute infrastructure that allows you to easily create a cluster of CPU or GPU compute nodes in the cloud.
* **Compute instance** - a fully configured and managed development environment in the cloud. You can use the instance as a training or inference compute for development and testing. It's similar to a virtual machine on the cloud.
-* **Inference cluster** - used to deploy trained machine learning models to Azure Kubernetes Service. You can create an Azure Kubernetes Service (AKS) cluster from your Azure ML workspace, or attach an existing AKS cluster.
+* **Inference cluster** - used to deploy trained machine learning models to Azure Kubernetes Service. You can create an Azure Kubernetes Service (AKS) cluster from your Azure Machine Learning workspace, or attach an existing AKS cluster.
* **Attached compute** - You can attach your own compute resources to your workspace and use them for training and inference.

### [Azure CLI](#tab/cli)
This [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/
## Model
-Azure machine learning models consist of the binary file(s) that represent a machine learning model and any corresponding metadata. Models can be created from a local or remote file or directory. For remote locations `https`, `wasbs` and `azureml` locations are supported. The created model will be tracked in the workspace under the specified name and version. Azure ML supports three types of storage format for models:
+Azure Machine Learning models consist of the binary file(s) that represent a machine learning model and any corresponding metadata. Models can be created from a local or remote file or directory. For remote locations `https`, `wasbs` and `azureml` locations are supported. The created model will be tracked in the workspace under the specified name and version. Azure Machine Learning supports three types of storage format for models:
* `custom_model`
* `mlflow_model`
Azure Machine Learning environments are an encapsulation of the environment wher
### Types of environment
-Azure ML supports two types of environments: curated and custom.
+Azure Machine Learning supports two types of environments: curated and custom.
Curated environments are provided by Azure Machine Learning and are available in your workspace by default. Intended to be used as is, they contain collections of Python packages and settings to help you get started with various machine learning frameworks. These pre-created environments also allow for faster deployment time. For a full list, see the [curated environments article](resource-curated-environments.md).
-In custom environments, you're responsible for setting up your environment and installing packages or any other dependencies that your training or scoring script needs on the compute. Azure ML allows you to create your own environment using
+In custom environments, you're responsible for setting up your environment and installing packages or any other dependencies that your training or scoring script needs on the compute. Azure Machine Learning allows you to create your own environment using
* A docker image
* A base docker image with a conda YAML to customize further
* A docker build context
-### Create an Azure ML custom environment
+### Create an Azure Machine Learning custom environment
### [Azure CLI](#tab/cli)
machine-learning Concept Component https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-component.md
To build components, the first thing is to define the machine learning pipeline.
Once the steps in the workflow are defined, the next thing is to specify how each step is connected in the pipeline. For example, to connect your data processing step and model training step, you may want to define a data processing component to output a folder that contains the processed data. A training component takes a folder as input and outputs a folder that contains the trained model. These inputs and outputs definition will become part of your component interface definition.
-Now, it's time to develop the code of executing a step. You can use your preferred languages (python, R, etc.). The code must be able to be executed by a shell command. During the development, you may want to add a few inputs to control how this step is going to be executed. For example, for a training step, you may like to add learning rate, number of epochs as the inputs to control the training. These additional inputs plus the inputs and outputs required to connect with other steps are the interface of the component. The argument of a shell command is used to pass inputs and outputs to the code. The environment to execute the command and the code needs to be specified. The environment could be a curated AzureML environment, a docker image or a conda environment.
+Now, it's time to develop the code that executes a step. You can use your preferred language (Python, R, etc.). The code must be able to be executed by a shell command. During development, you may want to add a few inputs to control how this step is executed. For example, for a training step, you may want to add learning rate and number of epochs as inputs to control the training. These additional inputs, plus the inputs and outputs required to connect with other steps, are the interface of the component. The arguments of a shell command are used to pass inputs and outputs to the code. The environment to execute the command and the code needs to be specified. The environment could be a curated Azure Machine Learning environment, a docker image or a conda environment.
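A component's code is ultimately just a script driven by shell arguments. A minimal hypothetical training-step script (the argument names like `--learning-rate` and the file layout are illustrative assumptions, not a required convention) might look like:

```python
# Hypothetical component script: inputs and outputs arrive as shell arguments.
import argparse
import json
import tempfile
from pathlib import Path

def run(data_dir: str, output_dir: str, learning_rate: float, epochs: int) -> dict:
    """Pretend-train: read the input folder, write a 'model' file to the output folder."""
    samples = sorted(p.name for p in Path(data_dir).glob("*.csv"))
    model = {"lr": learning_rate, "epochs": epochs, "trained_on": samples}
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    (out / "model.json").write_text(json.dumps(model))
    return model

parser = argparse.ArgumentParser()
parser.add_argument("--data", required=True)                # input folder from a previous step
parser.add_argument("--model-output", required=True)        # folder this step produces
parser.add_argument("--learning-rate", type=float, default=0.01)
parser.add_argument("--epochs", type=int, default=10)

# Simulate how a pipeline engine would invoke the step, with a temporary
# folder standing in for mounted storage.
src = tempfile.mkdtemp()
Path(src, "day1.csv").write_text("x\n1\n")
dst = tempfile.mkdtemp()
args = parser.parse_args(["--data", src, "--model-output", dst, "--learning-rate", "0.05"])
model = run(args.data, args.model_output, args.learning_rate, args.epochs)
print(model["trained_on"])  # ['day1.csv']
```

In a real pipeline, the runtime supplies the mounted input and output paths on the command line instead of the temporary folders used here.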
Finally, you can package everything including code, cmd, environment, input, outputs, metadata together into a component. Then connect these components together to build pipelines for your machine learning workflow. One component can be used in multiple pipelines. To learn more about how to build a component, see:
-- How to [build a component using Azure ML CLI v2](how-to-create-component-pipelines-cli.md).
-- How to [build a component using Azure ML SDK v2](how-to-create-component-pipeline-python.md).
+- How to [build a component using Azure Machine Learning CLI v2](how-to-create-component-pipelines-cli.md).
+- How to [build a component using Azure Machine Learning SDK v2](how-to-create-component-pipeline-python.md).
## Next steps
-- [Define component with the Azure ML CLI v2](./how-to-create-component-pipelines-cli.md).
-- [Define component with the Azure ML SDK v2](./how-to-create-component-pipeline-python.md).
+- [Define component with the Azure Machine Learning CLI v2](./how-to-create-component-pipelines-cli.md).
+- [Define component with the Azure Machine Learning SDK v2](./how-to-create-component-pipeline-python.md).
- [Define component with Designer](./how-to-create-component-pipelines-ui.md).
- [Component CLI v2 YAML reference](./reference-yaml-component-command.md).
- [What is Azure Machine Learning Pipeline?](concept-ml-pipelines.md).
machine-learning Concept Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data-encryption.md
Microsoft may collect non-user identifying information like resource names (for
Microsoft also recommends not storing sensitive information (such as account key secrets) in environment variables. Environment variables are logged, encrypted, and stored by us. Similarly when naming your jobs, avoid including sensitive information such as user names or secret project names. This information may appear in telemetry logs accessible to Microsoft Support engineers.
-You may opt out from diagnostic data being collected by setting the `hbi_workspace` parameter to `TRUE` while provisioning the workspace. This functionality is supported when using the AzureML Python SDK, the Azure CLI, REST APIs, or Azure Resource Manager templates.
+You may opt out from diagnostic data being collected by setting the `hbi_workspace` parameter to `TRUE` while provisioning the workspace. This functionality is supported when using the Azure Machine Learning Python SDK, the Azure CLI, REST APIs, or Azure Resource Manager templates.
## Using Azure Key Vault
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data.md
A Uniform Resource Identifier (URI) represents a storage location on your local
|Blob storage | `wasbs://<containername>@<accountname>.blob.core.windows.net/<folder>/`|
|Azure Data Lake (gen2) | `abfss://<file_system>@<account_name>.dfs.core.windows.net/<folder>/<file>.csv` |
| Azure Data Lake (gen1) | `adl://<accountname>.azuredatalakestore.net/<folder1>/<folder2>`
-|Azure ML [Datastore](#datastore) | `azureml://datastores/<data_store_name>/paths/<folder1>/<folder2>/<folder3>/<file>.parquet` |
+|Azure Machine Learning [Datastore](#datastore) | `azureml://datastores/<data_store_name>/paths/<folder1>/<folder2>/<folder3>/<file>.parquet` |
-An Azure ML job maps URIs to the compute target filesystem. This mapping means that in a command that consumes or produces a URI, that URI works like a file or a folder. A URI uses **identity-based authentication** to connect to storage services, with either your Azure Active Directory ID (default), or Managed Identity. Azure ML [Datastore](#datastore) URIs can apply either identity-based authentication, or **credential-based** (for example, Service Principal, SAS token, account key) without exposure of secrets.
+An Azure Machine Learning job maps URIs to the compute target filesystem. This mapping means that in a command that consumes or produces a URI, that URI works like a file or a folder. A URI uses **identity-based authentication** to connect to storage services, with either your Azure Active Directory ID (default), or Managed Identity. Azure Machine Learning [Datastore](#datastore) URIs can apply either identity-based authentication, or **credential-based** (for example, Service Principal, SAS token, account key) without exposure of secrets.
-A URI can serve as either *input* or an *output* to an Azure ML job, and it can map to the compute target filesystem with one of four different *mode* options:
+A URI can serve as either *input* or an *output* to an Azure Machine Learning job, and it can map to the compute target filesystem with one of four different *mode* options:
- **Read-*only* mount (`ro_mount`)**: The URI represents a storage location that is *mounted* to the compute target filesystem. The mounted data location supports read-only output exclusively.
- **Read-*write* mount (`rw_mount`)**: The URI represents a storage location that is *mounted* to the compute target filesystem. The mounted data location supports both read output from it *and* data writes to it.
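For illustration, a datastore URI is just a string in the fixed layout shown in the table above. This hypothetical helper (not an SDK function) assembles one:

```python
# Hypothetical helper (not part of the azure-ai-ml SDK) showing the
# Azure Machine Learning datastore URI layout:
#   azureml://datastores/<datastore>/paths/<path>

def datastore_uri(datastore_name: str, path: str) -> str:
    return f"azureml://datastores/{datastore_name}/paths/{path.lstrip('/')}"

uri = datastore_uri("workspaceblobstore", "/raw/2023/sales.parquet")
print(uri)  # azureml://datastores/workspaceblobstore/paths/raw/2023/sales.parquet
```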
A URI (storage location) can reference a file, a folder, or a data table. A mach
|---|---|---|---|---|
|**File**<br>Reference a single file | `uri_file` | `FileDataset` | Read/write a single file - the file can have any format. | A type new to V2 APIs. In V1 APIs, files always mapped to a folder on the compute target filesystem; this mapping required an `os.path.join`. In V2 APIs, the single file is mapped. This way, you can refer to that location in your code. |
|**Folder**<br> Reference a single folder | `uri_folder` | `FileDataset` | You must read/write a folder of parquet/CSV files into Pandas/Spark.<br><br>Deep-learning with images, text, audio, video files located in a folder. | In V1 APIs, `FileDataset` had an associated engine that could take a file sample from a folder. In V2 APIs, a Folder is a simple mapping to the compute target filesystem. |
-|**Table**<br> Reference a data table | `mltable` | `TabularDataset` | You have a complex schema subject to frequent changes, or you need a subset of large tabular data.<br><br>AutoML with Tables. | In V1 APIs, the Azure ML back-end stored the data materialization blueprint. As a result, `TabularDataset` only worked if you had an Azure ML workspace. `mltable` stores the data materialization blueprint in *your* storage. This storage location means you can use it *disconnected to Azure ML* - for example, local, on-premises. In V2 APIs, you'll find it easier to transition from local to remote jobs. Read [Working with tables in Azure Machine Learning](how-to-mltable.md) for more information. |
+|**Table**<br> Reference a data table | `mltable` | `TabularDataset` | You have a complex schema subject to frequent changes, or you need a subset of large tabular data.<br><br>AutoML with Tables. | In V1 APIs, the Azure Machine Learning back-end stored the data materialization blueprint. As a result, `TabularDataset` only worked if you had an Azure Machine Learning workspace. `mltable` stores the data materialization blueprint in *your* storage. This storage location means you can use it *disconnected from Azure Machine Learning* - for example, local, on-premises. In V2 APIs, you'll find it easier to transition from local to remote jobs. Read [Working with tables in Azure Machine Learning](how-to-mltable.md) for more information. |
## Data runtime capability
-Azure ML uses its own *data runtime* for mounts/uploads/downloads, to map storage URIs to the compute target filesystem, or to materialize tabular data into pandas/spark with Azure ML tables (`mltable`). The Azure ML data runtime is designed for machine learning task *high speed and high efficiency*. Its key benefits include:
+Azure Machine Learning uses its own *data runtime* for mounts/uploads/downloads, to map storage URIs to the compute target filesystem, or to materialize tabular data into pandas/spark with Azure Machine Learning tables (`mltable`). The Azure Machine Learning data runtime is designed for *high speed and high efficiency* in machine learning tasks. Its key benefits include:
> [!div class="checklist"]
> - [Rust](https://www.rust-lang.org/) language architecture. The Rust language is known for high speed and high memory efficiency.
-> - Light weight; the Azure ML data runtime has *no* dependencies on other technologies - JVM, for example - so the runtime installs quickly on compute targets.
+> - Light weight; the Azure Machine Learning data runtime has *no* dependencies on other technologies - JVM, for example - so the runtime installs quickly on compute targets.
> - Multi-process (parallel) data loading.
> - Data pre-fetches operate as background task on the CPU(s), to enhance utilization of the GPU(s) in deep-learning operations.
> - Seamless authentication to cloud storage.

## Datastore
-An Azure ML datastore serves as a *reference* to an *existing* Azure storage account. The benefits of Azure ML datastore creation and use include:
+An Azure Machine Learning datastore serves as a *reference* to an *existing* Azure storage account. The benefits of Azure Machine Learning datastore creation and use include:
1. A common, easy-to-use API that interacts with different storage types (Blob/Files/ADLS).
1. Easier discovery of useful datastores in team operations.
-1. For credential-based access (service principal/SAS/key), Azure ML datastore secures connection information. This way, you won't need to place that information in your scripts.
+1. For credential-based access (service principal/SAS/key), Azure Machine Learning datastore secures connection information. This way, you won't need to place that information in your scripts.
When you create a datastore with an existing Azure storage account, you can choose between two different authentication methods:
Read [Create datastores](how-to-datastore.md) for more information about datasto
## Data asset
-An Azure ML data asset resembles web browser bookmarks (favorites). Instead of remembering long storage paths (URIs) that point to your most frequently used data, you can create a data asset, and then access that asset with a friendly name.
+An Azure Machine Learning data asset resembles web browser bookmarks (favorites). Instead of remembering long storage paths (URIs) that point to your most frequently used data, you can create a data asset, and then access that asset with a friendly name.
-Data asset creation also creates a *reference* to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and you don't risk data source integrity. You can create Data assets from Azure ML datastores, Azure Storage, public URLs, or local files.
+Data asset creation also creates a *reference* to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and you don't risk data source integrity. You can create Data assets from Azure Machine Learning datastores, Azure Storage, public URLs, or local files.
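The bookmark analogy can be sketched in a few lines (purely illustrative; real data assets are versioned objects managed by the workspace, not a local dictionary):

```python
# Illustrative only: a data asset maps a friendly name (and version) to a URI,
# so consumers never have to handle the long storage path directly.

class AssetRegistry:
    def __init__(self):
        self._assets = {}

    def create(self, name: str, version: str, uri: str) -> None:
        self._assets[(name, version)] = uri  # reference only; data is not copied

    def resolve(self, name: str, version: str) -> str:
        return self._assets[(name, version)]

registry = AssetRegistry()
registry.create("sales-data", "1",
                "azureml://datastores/workspaceblobstore/paths/raw/2023/sales.parquet")
print(registry.resolve("sales-data", "1"))
```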
Read [Create data assets](how-to-create-data-assets.md) for more information about data assets.
Read [Create data assets](how-to-create-data-assets.md) for more information abo
- [Create datastores](how-to-datastore.md#create-datastores)
- [Create data assets](how-to-create-data-assets.md#create-data-assets)
- [Access data in a job](how-to-read-write-data-v2.md)
-- [Data administration](how-to-administrate-data-authentication.md#data-administration)
+- [Data administration](how-to-administrate-data-authentication.md#data-administration)
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md
Learn how to deploy online endpoints from the [CLI/SDK](how-to-deploy-online-end
### Test and deploy locally for faster debugging
-Deploy locally to test your endpoints without deploying to the cloud. Azure Machine Learning creates a local Docker image that mimics the Azure ML image. Azure Machine Learning will build and run deployments for you locally, and cache the image for rapid iterations.
+Deploy locally to test your endpoints without deploying to the cloud. Azure Machine Learning creates a local Docker image that mimics the Azure Machine Learning image. Azure Machine Learning will build and run deployments for you locally, and cache the image for rapid iterations.
### Native blue/green deployment
However [managed online endpoints](#managed-online-endpoints-vs-kubernetes-onlin
### Security
-- Authentication: Key and Azure ML Tokens
+- Authentication: Key and Azure Machine Learning Tokens
- Managed identity: User assigned and system assigned
- SSL by default for endpoint invocation
You can [override compute resource settings](batch-inference/how-to-use-batch-en
You can use the following options for input data when invoking a batch endpoint:
- Cloud data: Either a path on Azure Machine Learning registered datastore, a reference to Azure Machine Learning registered V2 data asset, or a public URI. For more information, see [Data in Azure Machine Learning](concept-data.md).
-- Data stored locally: The data will be automatically uploaded to the Azure ML registered datastore and passed to the batch endpoint.
+- Data stored locally: The data will be automatically uploaded to the Azure Machine Learning registered datastore and passed to the batch endpoint.
> [!NOTE]
> - If you're using existing V1 FileDatasets for batch endpoints, we recommend migrating them to V2 data assets. You can then refer to the V2 data assets directly when invoking batch endpoints. Currently, only data assets of type `uri_folder` or `uri_file` are supported. Batch endpoints created with GA CLIv2 (2.4.0 and newer) or GA REST API (2022-05-01 and newer) will not support V1 Datasets.
machine-learning Concept Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-environments.md
You use system-managed environments when you want [conda](https://conda.io/docs/
## Create and manage environments
-You can create environments from clients like the AzureML Python SDK, Azure Machine Learning CLI, Environments page in Azure Machine Learning studio, and [VS Code extension](how-to-manage-resources-vscode.md#create-environment). Every client allows you to customize the base image, Dockerfile, and Python layer if needed.
+You can create environments from clients like the Azure Machine Learning Python SDK, Azure Machine Learning CLI, Environments page in Azure Machine Learning studio, and [VS Code extension](how-to-manage-resources-vscode.md#create-environment). Every client allows you to customize the base image, Dockerfile, and Python layer if needed.
For specific code samples, see the "Create an environment" section of [How to use environments](how-to-manage-environments-v2.md#create-an-environment).
If you use the same environment definition for another job, Azure Machine Learni
To view the details of a cached image, check the Environments page in Azure Machine Learning studio or use [`MLClient.environments`](/python/api/azure-ai-ml/azure.ai.ml.mlclient#azure-ai-ml-mlclient-environments) to get and inspect the environment.
-To determine whether to reuse a cached image or build a new one, AzureML computes a [hash value](https://en.wikipedia.org/wiki/Hash_table) from the environment definition and compares it to the hashes of existing environments. The hash is based on the environment definition's:
+To determine whether to reuse a cached image or build a new one, Azure Machine Learning computes a [hash value](https://en.wikipedia.org/wiki/Hash_table) from the environment definition and compares it to the hashes of existing environments. The hash is based on the environment definition's:
* Base image
* Custom docker steps
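The caching decision can be pictured as hashing a canonical form of the environment definition (an illustrative sketch; Azure Machine Learning's actual hash computation is internal and may differ):

```python
# Illustrative sketch of environment-definition hashing for image-cache reuse.
# The point: any change to the base image, docker steps, or Python packages
# produces a different hash, which forces a new image build.
import hashlib
import json

def environment_hash(base_image: str, docker_steps: list, python_packages: list) -> str:
    canonical = json.dumps(
        {"base_image": base_image,
         "docker_steps": docker_steps,
         "python_packages": sorted(python_packages)},
        sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

h1 = environment_hash("mcr.microsoft.com/azureml/base:latest", [], ["numpy==1.24.0"])
h2 = environment_hash("mcr.microsoft.com/azureml/base:latest", [], ["numpy==1.25.0"])
print(h1 == h2)  # False: a changed package version means a new image build
```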
Actual cached images in your workspace ACR will have names like `azureml/azureml
### Image patching
-Microsoft is responsible for patching the base images for known security vulnerabilities. Updates for supported images are released every two weeks, with a commitment of no unpatched vulnerabilities older than 30 days in the the latest version of the image. Patched images are released with a new immutable tag and the `:latest` tag is updated to the latest version of the patched image.
+Microsoft is responsible for patching the base images for known security vulnerabilities. Updates for supported images are released every two weeks, with a commitment of no unpatched vulnerabilities older than 30 days in the latest version of the image. Patched images are released with a new immutable tag and the `:latest` tag is updated to the latest version of the patched image.
If you provide your own images, you are responsible for updating them. For more information on the base images, see the following links:
* [Azure Machine Learning base images](https://github.com/Azure/AzureML-Containers) GitHub repository.
-* [Deploy a TensorFlow model using a custom container](how-to-deploy-custom-container.md)
+* [Use a custom container to deploy a model to an online endpoint](how-to-deploy-custom-container.md)
## Next steps
machine-learning Concept Machine Learning Registries Mlops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-machine-learning-registries-mlops.md
In this article, you'll learn how to scale MLOps across development, testing and
* Subscriptions - Are your development environments in one subscription and production environments in a different subscription? Often separate subscriptions are used to account for billing, budgeting, and cost management purposes.
* Regions - Do you need to deploy to different Azure regions to support latency and redundancy requirements?
-In such scenarios, you may be using different AzureML workspaces for development, testing and production. This configuration presents the following challenges for model training and deployment:
+In such scenarios, you may be using different Azure Machine Learning workspaces for development, testing and production. This configuration presents the following challenges for model training and deployment:
* You need to train a model in a development workspace but deploy it to an endpoint in a production workspace, possibly in a different Azure subscription or region. In this case, you must be able to trace back the training job. For example, to analyze the metrics, logs, code, environment, and data used to train the model if you encounter accuracy or performance issues with the production deployment.
* You need to develop a training pipeline with test data or anonymized data in the development workspace but retrain the model with production data in the production workspace. In this case, you may need to compare training metrics on sample vs production data to ensure the training optimizations are performing well with actual data.
machine-learning Concept Ml Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-ml-pipelines.md
Once the teams get familiar with pipelines and want to do more machine learning
Once a team has built a collection of machine learning pipelines and reusable components, they can start to build new pipelines by cloning a previous pipeline or tying existing reusable components together. At this stage, the team's overall productivity will be improved significantly.
-Azure Machine Learning offers different methods to build a pipeline. For users who are familiar with DevOps practices, we recommend using [CLI](how-to-create-component-pipelines-cli.md). For data scientists who are familiar with python, we recommend writing pipeline using the [Azure ML SDK v1](v1/how-to-create-machine-learning-pipelines.md). For users who prefer to use UI, they could use the [designer to build pipeline by using registered components](how-to-create-component-pipelines-ui.md).
+Azure Machine Learning offers different methods to build a pipeline. For users who are familiar with DevOps practices, we recommend using the [CLI](how-to-create-component-pipelines-cli.md). For data scientists who are familiar with Python, we recommend writing pipelines using the [Azure Machine Learning SDK v1](v1/how-to-create-machine-learning-pipelines.md). For users who prefer a UI, the [designer can build pipelines by using registered components](how-to-create-component-pipelines-ui.md).
<a name="compare"></a>

## Which Azure pipeline technology should I use?
The Azure cloud provides several types of pipeline, each with a different purpos
Azure Machine Learning pipelines are a powerful facility that begins delivering value in the early development stages.
-+ [Define pipelines with the Azure ML CLI v2](./how-to-create-component-pipelines-cli.md)
-+ [Define pipelines with the Azure ML SDK v2](./how-to-create-component-pipeline-python.md)
++ [Define pipelines with the Azure Machine Learning CLI v2](./how-to-create-component-pipelines-cli.md)
++ [Define pipelines with the Azure Machine Learning SDK v2](./how-to-create-component-pipeline-python.md)
+ [Define pipelines with Designer](./how-to-create-component-pipelines-ui.md)
+ Try out [CLI v2 pipeline example](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components)
+ Try out [Python SDK v2 pipeline example](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/pipelines)
machine-learning Concept Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow-models.md
A model in MLflow is also an artifact. However, we make stronger assumptions abo
In Azure Machine Learning, logging models has the following advantages:

> [!div class="checklist"]
> * You can deploy them on real-time or batch endpoints without providing a scoring script or an environment.
-> * When deployed, Model's deployments have a Swagger generated automatically and the __Test__ feature can be used in Azure ML studio.
+> * When deployed, model deployments have a Swagger document generated automatically, and the __Test__ feature can be used in Azure Machine Learning studio.
> * Models can be used as pipeline inputs directly.
> * You can use the [Responsible AI dashboard (preview)](how-to-responsible-ai-dashboard.md).
signature:
```

> [!TIP]
-> Azure Machine Learning generates Swagger for model's deployment in MLflow format with a signature available. This makes easier to test deployed endpoints using the Azure ML studio.
+> Azure Machine Learning generates Swagger for a model's deployment in MLflow format when a signature is available. This makes it easier to test deployed endpoints using the Azure Machine Learning studio.
### Model's environment
name: mlflow-env
```

> [!NOTE]
-> MLflow environments and Azure Machine Learning environments are different concepts. While the former opperates at the level of the model, the latter operates at the level of the workspace (for registered environments) or jobs/deployments (for annonymous environments). When you deploy MLflow models in Azure Machine Learning, the model's environment is built and used for deployment. Alternatively, you can override this behaviour with the [Azure ML CLI v2](concept-v2.md) and deploy MLflow models using a specific Azure Machine Learning environments.
+> MLflow environments and Azure Machine Learning environments are different concepts. While the former operates at the level of the model, the latter operates at the level of the workspace (for registered environments) or jobs/deployments (for anonymous environments). When you deploy MLflow models in Azure Machine Learning, the model's environment is built and used for deployment. Alternatively, you can override this behavior with the [Azure Machine Learning CLI v2](concept-v2.md) and deploy MLflow models using a specific Azure Machine Learning environment.
### Model's predict function
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
Capabilities include:
* [Manage runs and experiments with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/runs-management/run_history.ipynb): Demonstrates how to query experiments, runs, metrics, parameters, and artifacts from Azure Machine Learning by using MLflow.
> [!IMPORTANT]
-> - MLflow in R support is limited to tracking experiment's metrics, parameters and models on Azure Machine Learning jobs. Interactive training on RStudio, Posit (formerly RStudio Workbench) or Jupyter Notebooks with R kernels is not supported. Model management and registration is not supported using the MLflow R SDK. As an alternative, use Azure ML CLI or [Azure ML studio](https://ml.azure.com) for model registration and management. View the following [R example about using the MLflow tracking client with Azure Machine Learning](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step/r).
+> - MLflow support in R is limited to tracking experiment metrics, parameters, and models on Azure Machine Learning jobs. Interactive training on RStudio, Posit (formerly RStudio Workbench), or Jupyter Notebooks with R kernels is not supported. Model management and registration is not supported using the MLflow R SDK. As an alternative, use the Azure Machine Learning CLI or [Azure Machine Learning studio](https://ml.azure.com) for model registration and management. View the following [R example about using the MLflow tracking client with Azure Machine Learning](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step/r).
> - MLflow support in Java is limited to tracking experiment metrics and parameters on Azure Machine Learning jobs. Artifacts and models can't be tracked using the MLflow Java SDK. As an alternative, use the `Outputs` folder in jobs along with the method `mlflow.save_model` to save models (or artifacts) you want to capture. View the following [Java example about using the MLflow tracking client with Azure Machine Learning](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step/java/iris).
## Model registries with MLflow
Learn more at [Guidelines for deploying MLflow models](how-to-deploy-mlflow-mode
* [Deploy MLflow to Online Endpoints](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/mlflow_sdk_online_endpoints.ipynb): Demonstrates how to deploy models in MLflow format to online endpoints using MLflow SDK.
* [Deploy MLflow to Online Endpoints with safe rollout](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/mlflow_sdk_online_endpoints_progresive.ipynb): Demonstrates how to deploy models in MLflow format to online endpoints using MLflow SDK with progressive rollout of models and the deployment of multiple model versions in the same endpoint.
* [Deploy MLflow to web services (V1)](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/mlflow_sdk_web_service.ipynb): Demonstrates how to deploy models in MLflow format to web services (ACI/AKS v1) using MLflow SDK.
-* [Deploying models trained in Azure Databricks to Azure Machine Learning with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/track_with_databricks_deploy_aml.ipynb): Demonstrates how to train models in Azure Databricks and deploy them in Azure ML. It also includes how to handle cases where you also want to track the experiments with the MLflow instance in Azure Databricks.
+* [Deploying models trained in Azure Databricks to Azure Machine Learning with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/track_with_databricks_deploy_aml.ipynb): Demonstrates how to train models in Azure Databricks and deploy them in Azure Machine Learning. It also covers how to handle cases where you want to track the experiments with the MLflow instance in Azure Databricks.
## Training MLflow projects (preview)
machine-learning Concept Secure Code Best Practice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-secure-code-best-practice.md
In Azure Machine Learning, you can upload files and content from any source into
## Potential threats
-Development with Azure Machine Learning often involves web-based development environments (Notebooks & Azure ML studio). When using web-based development environments, the potential threats are:
+Development with Azure Machine Learning often involves web-based development environments (Notebooks & Azure Machine Learning studio). When you use web-based development environments, the potential threats are:
* [Cross site scripting (XSS)](https://owasp.org/www-community/attacks/xss/)
 * __DOM injection__: This type of attack can modify the UI displayed in the browser. For example, by changing how the run button behaves in a Jupyter Notebook.
- * __Access token/cookies__: XSS attacks can also access local storage and browser cookies. Your Azure Active Directory (AAD) authentication token is stored in local storage. An XSS attack could use this token to make API calls on your behalf, and then send the data to an external system or API.
+ * __Access token/cookies__: XSS attacks can also access local storage and browser cookies. Your Azure Active Directory (Azure AD) authentication token is stored in local storage. An XSS attack could use this token to make API calls on your behalf, and then send the data to an external system or API.
* [Cross site request forgery (CSRF)](https://owasp.org/www-community/attacks/csrf): This attack may replace the URL of an image or link with the URL of a malicious script or API. When the image is loaded or the link is clicked, a call is made to the URL.
-## Azure ML studio notebooks
+## Azure Machine Learning studio notebooks
Azure Machine Learning studio provides a hosted notebook experience in your browser. Cells in a notebook can output HTML documents or fragments that contain malicious code. When the output is rendered, the code can be executed.
__Recommended actions__:
* Verify that you trust the contents of files before uploading to studio. When uploading, you must acknowledge that you're uploading trusted files.
* When selecting a link to open an external application, you'll be prompted to trust the application.
-## Azure ML compute instance
+## Azure Machine Learning compute instance
-Azure Machine Learning compute instance hosts __Jupyter__ and __Jupyter Lab__. When using either, cells in a notebook or code in can output HTML documents or fragments that contain malicious code. When the output is rendered, the code can be executed. The same threats also apply when using __RStudio__ and __Posit Workbench (formerly RStudio Workbench)__ hosted on a compute instance.
+Azure Machine Learning compute instance hosts __Jupyter__ and __Jupyter Lab__. When you use either, cells in a notebook, or code within it, can output HTML documents or fragments that contain malicious code. When the output is rendered, the code can be executed. The same threats also apply when you use __RStudio__ and __Posit Workbench (formerly RStudio Workbench)__ hosted on a compute instance.
__Possible threats__:
* Cross site scripting (XSS)
machine-learning Concept Secure Network Traffic Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-secure-network-traffic-flow.md
This article assumes the following configuration:
| [Use AutoML, designer, dataset, and datastore from studio](#scenario-use-automl-designer-dataset-and-datastore-from-studio) | NA | NA | <ul><li>Workspace service principal configuration</li><li>Allow access from trusted Azure services</li></ul>For more information, see [How to secure a workspace in a virtual network](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts). |
| [Use compute instance and compute cluster](#scenario-use-compute-instance-and-compute-cluster) | <ul><li>Azure Machine Learning service on port 44224</li><li>Azure Batch Management service on ports 29876-29877</li></ul> | <ul><li>Azure Active Directory</li><li>Azure Resource Manager</li><li>Azure Machine Learning service</li><li>Azure Storage Account</li><li>Azure Key Vault</li></ul> | If you use a firewall, create user-defined routes. For more information, see [Configure inbound and outbound traffic](how-to-access-azureml-behind-firewall.md). |
| [Use Azure Kubernetes Service](#scenario-use-azure-kubernetes-service) | NA | For information on the outbound configuration for AKS, see [How to secure Kubernetes inference](how-to-secure-kubernetes-inferencing-environment.md). | |
-| [Use Docker images managed by Azure Machine Learning](#scenario-use-docker-images-managed-by-azure-ml) | NA | <ul><li>Microsoft Container Registry</li><li>`viennaglobal.azurecr.io` global container registry</li></ul> | If the Azure Container Registry for your workspace is behind the VNet, configure the workspace to use a compute cluster to build images. For more information, see [How to secure a workspace in a virtual network](how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr). |
+| [Use Docker images managed by Azure Machine Learning](#scenario-use-docker-images-managed-by-azure-machine-learning) | NA | <ul><li>Microsoft Container Registry</li><li>`viennaglobal.azurecr.io` global container registry</li></ul> | If the Azure Container Registry for your workspace is behind the VNet, configure the workspace to use a compute cluster to build images. For more information, see [How to secure a workspace in a virtual network](how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr). |
> [!IMPORTANT]
> Azure Machine Learning uses multiple storage accounts. Each stores different data, and has a different purpose:
For information on the outbound configuration required for Azure Kubernetes Serv
If your model requires extra inbound or outbound connectivity, such as to an external data source, use a network security group or your firewall to allow the traffic.
-## Scenario: Use Docker images managed by Azure ML
+## Scenario: Use Docker images managed by Azure Machine Learning
Azure Machine Learning provides Docker images that can be used to train models or perform inference. If you don't specify your own images, the ones provided by Azure Machine Learning are used. These images are hosted on the Microsoft Container Registry (MCR). They're also hosted on a geo-replicated Azure Container Registry named `viennaglobal.azurecr.io`.
If you provide your own docker images, such as on an Azure Container Registry th
:::image type="content" source="./media/concept-secure-network-traffic-flow/azure-machine-learning-docker-images.png" alt-text="Diagram of traffic flow when using provided Docker images":::
## Next steps
-Now that you've learned how network traffic flows in a secured configuration, learn more about securing Azure ML in a virtual network by reading the [Virtual network isolation and privacy overview](how-to-network-security-overview.md) article.
+Now that you've learned how network traffic flows in a secured configuration, learn more about securing Azure Machine Learning in a virtual network by reading the [Virtual network isolation and privacy overview](how-to-network-security-overview.md) article.
For information on best practices, see the [Azure Machine Learning best practices for enterprise security](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-enterprise-security) article.
machine-learning Concept V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-v2.md
Azure Machine Learning CLI v2 and Azure Machine Learning Python SDK v2 introduce
## Azure Machine Learning CLI v2
-The Azure Machine Learning CLI v2 (CLI v2) is the latest extension for the [Azure CLI](/cli/azure/what-is-azure-cli). The CLI v2 provides commands in the format *az ml __\<noun\> \<verb\> \<options\>__* to create and maintain Azure ML assets and workflows. The assets or workflows themselves are defined using a YAML file. The YAML file defines the configuration of the asset or workflow ΓÇô what is it, where should it run, and so on.
+The Azure Machine Learning CLI v2 (CLI v2) is the latest extension for the [Azure CLI](/cli/azure/what-is-azure-cli). The CLI v2 provides commands in the format *az ml __\<noun\> \<verb\> \<options\>__* to create and maintain Azure Machine Learning assets and workflows. The assets or workflows themselves are defined using a YAML file. The YAML file defines the configuration of the asset or workflow – what it is, where it should run, and so on.
A few examples of CLI v2 commands:
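For instance, a command job is defined in a YAML file and submitted with `az ml job create`. The following is an illustrative sketch only – the file name, script path, environment, and compute name are hypothetical placeholders:

```yaml
# job.yml - hypothetical minimal CLI v2 command job.
# Submit with: az ml job create --file job.yml
$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
command: python train.py --epochs 10
code: ./src
environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
compute: azureml:cpu-cluster
```

The custom training logic stays in `train.py`; the YAML only describes what to run and where.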
The CLI v2 is useful in the following scenarios:
-* On board to Azure ML without the need to learn a specific programming language
+* Onboard to Azure Machine Learning without the need to learn a specific programming language
- The YAML file defines the configuration of the asset or workflow ΓÇô what is it, where should it run, and so on. Any custom logic/IP used, say data preparation, model training, model scoring can remain in script files, which are referred to in the YAML, but not part of the YAML itself. Azure ML supports script files in python, R, Java, Julia or C#. All you need to learn is YAML format and command lines to use Azure ML. You can stick with script files of your choice.
+ The YAML file defines the configuration of the asset or workflow – what it is, where it should run, and so on. Any custom logic or IP, such as data preparation, model training, or model scoring, can remain in script files, which are referenced in the YAML but aren't part of the YAML itself. Azure Machine Learning supports script files in Python, R, Java, Julia, or C#. All you need to learn is the YAML format and command lines to use Azure Machine Learning. You can stick with script files of your choice.
* Ease of deployment and automation
The CLI v2 is useful in the following scenarios:
* Managed inference deployments
- Azure ML offers [endpoints](concept-endpoints.md) to streamline model deployments for both real-time and batch inference deployments. This functionality is available only via CLI v2 and SDK v2.
+ Azure Machine Learning offers [endpoints](concept-endpoints.md) to streamline model deployments for both real-time and batch inference deployments. This functionality is available only via CLI v2 and SDK v2.
* Reusable components in pipelines
- Azure ML introduces [components](concept-component.md) for managing and reusing common logic across pipelines. This functionality is available only via CLI v2 and SDK v2.
+ Azure Machine Learning introduces [components](concept-component.md) for managing and reusing common logic across pipelines. This functionality is available only via CLI v2 and SDK v2.
## Azure Machine Learning Python SDK v2
-Azure ML Python SDK v2 is an updated Python SDK package, which allows users to:
+Azure Machine Learning Python SDK v2 is an updated Python SDK package, which allows users to:
* Submit training jobs
* Manage data, models, environments
* Perform managed inferencing (real time and batch)
-* Stitch together multiple tasks and production workflows using Azure ML pipelines
+* Stitch together multiple tasks and production workflows using Azure Machine Learning pipelines
The SDK v2 is on par with CLI v2 functionality and is consistent in how assets (nouns) and actions (verbs) are used between SDK and CLI. For example, to list an asset, the `list` action can be used in both CLI and SDK. The same `list` action can be used to list a compute, model, environment, and so on.
The SDK v2 is useful in the following scenarios:
* Reusable components in pipelines
- Azure ML introduces [components](concept-component.md) for managing and reusing common logic across pipelines. This functionality is available only via CLI v2 and SDK v2.
+ Azure Machine Learning introduces [components](concept-component.md) for managing and reusing common logic across pipelines. This functionality is available only via CLI v2 and SDK v2.
* Managed inferencing
- Azure ML offers [endpoints](concept-endpoints.md) to streamline model deployments for both real-time and batch inference deployments. This functionality is available only via CLI v2 and SDK v2.
+ Azure Machine Learning offers [endpoints](concept-endpoints.md) to streamline model deployments for both real-time and batch inference deployments. This functionality is available only via CLI v2 and SDK v2.
## Should I use v1 or v2?
The Azure Machine Learning CLI v1 has been deprecated. We recommend you to use C
* You were a CLI v1 user
* You want to use new features like reusable components and managed inferencing
* You don't want to use a Python SDK - CLI v2 allows you to use YAML with scripts in Python, R, Java, Julia, or C#
-* You were a user of R SDK previously - Azure ML won't support an SDK in `R`. However, the CLI v2 has support for `R` scripts.
+* You were a user of R SDK previously - Azure Machine Learning won't support an SDK in `R`. However, the CLI v2 has support for `R` scripts.
* You want to use command-line-based automation/deployments
* You don't need Spark Jobs. This feature is currently available in preview in CLI v2.
The Azure Machine Learning Python SDK v1 doesn't have a planned deprecation date
* Get started with SDK v2
 * [Install and set up SDK (v2)](https://aka.ms/sdk-v2-install)
- * [Train models with the Azure ML Python SDK v2](how-to-train-model.md)
+ * [Train models with the Azure Machine Learning Python SDK v2](how-to-train-model.md)
* [Tutorial: Create production ML pipelines with Python SDK v2 in a Jupyter notebook](tutorial-pipeline-python-sdk.md)
machine-learning Concept Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-vulnerability-management.md
In this article, we discuss these responsibilities and outline the vulnerability
## Microsoft-managed VM images
-Azure Machine Learning manages host OS VM images for Azure ML compute instance, Azure ML compute clusters, and Data Science Virtual Machines. The update frequency is monthly and includes the following:
+Azure Machine Learning manages host OS VM images for Azure Machine Learning compute instance, Azure Machine Learning compute clusters, and Data Science Virtual Machines. The update frequency is monthly and includes the following:
* For each new VM image version, the latest updates are sourced from the original publisher of the OS. Using the latest updates ensures that all applicable OS-related patches are picked up. For Azure Machine Learning, the publisher is Canonical for all the Ubuntu 18 images. These images are used for Azure Machine Learning compute instances, compute clusters, and Data Science Virtual Machines.
* VM images are updated monthly.
* In addition to patches applied by the original publisher, Azure Machine Learning updates system packages when updates are available.
* Azure Machine Learning checks and validates any machine learning packages that may require an upgrade. In most circumstances, new VM images contain the latest package versions.
* All VM images are built on secure subscriptions that run vulnerability scanning regularly. Any unaddressed vulnerabilities are flagged and are to be fixed within the next release.
-* The frequency is on a monthly interval for most images. For compute instance, the image release is aligned with the Azure ML SDK release cadence as it comes preinstalled in the environment.
+* The frequency is on a monthly interval for most images. For compute instance, the image release is aligned with the Azure Machine Learning SDK release cadence as it comes preinstalled in the environment.
-Next to the regular release cadence, hot fixes are applied in the case vulnerabilities are discovered. Hot fixes get rolled out within 72 hours for Azure ML compute and within a week for Compute Instance.
+In addition to the regular release cadence, hotfixes are applied when vulnerabilities are discovered. Hotfixes are rolled out within 72 hours for Azure Machine Learning compute and within a week for compute instance.
> [!NOTE]
> The host OS is not the OS version you might specify for an [environment](how-to-use-environments.md) when training or deploying a model. Environments run inside Docker. Docker runs on the host OS.
Reproducibility is a key aspect of software development and machine learning exp
While Azure Machine Learning patches base images with each release, whether you use the latest image may be a tradeoff between reproducibility and vulnerability management. So, it's your responsibility to choose the environment version used for your jobs or model deployments.
-By default, dependencies are layered on top of base images provided by Azure ML when building environments. You can also use your own base images when using environments in Azure Machine Learning. Once you install more dependencies on top of the Microsoft-provided images, or bring your own base images, vulnerability management becomes your responsibility.
+By default, dependencies are layered on top of base images provided by Azure Machine Learning when building environments. You can also use your own base images when using environments in Azure Machine Learning. Once you install more dependencies on top of the Microsoft-provided images, or bring your own base images, vulnerability management becomes your responsibility.
Associated with your Azure Machine Learning workspace is an Azure Container Registry instance that's used as a cache for container images. Any image materialized is pushed to the container registry and used if experimentation or deployment is triggered for the corresponding environment. Azure Machine Learning doesn't delete any image from your container registry, and it's your responsibility to evaluate the need for an image over time. To monitor and maintain environment hygiene, you can use [Microsoft Defender for Container Registry](../defender-for-cloud/defender-for-container-registries-usage.md) to help scan your images for vulnerabilities. To automate your processes based on triggers from Microsoft Defender, see [Automate responses to Microsoft Defender for Cloud triggers](../defender-for-cloud/workflow-automation.md).
Compute clusters automatically upgrade to the latest VM image. If the cluster is
[Kubernetes compute](how-to-attach-kubernetes-anywhere.md) lets you configure Kubernetes clusters to train, inference, and manage models in Azure Machine Learning.
* Because you manage the environment with Kubernetes, both OS VM vulnerability management and container image vulnerability management are your responsibility.
-* Azure Machine Learning frequently publishes new versions of AzureML extension container images into Microsoft Container Registry. It's MicrosoftΓÇÖs responsibility to ensure new image versions are free from vulnerabilities. Vulnerabilities are fixed with [each release](https://github.com/Azure/AML-Kubernetes/blob/master/docs/release-notes.md).
+* Azure Machine Learning frequently publishes new versions of Azure Machine Learning extension container images into Microsoft Container Registry. It's Microsoft's responsibility to ensure new image versions are free from vulnerabilities. Vulnerabilities are fixed with [each release](https://github.com/Azure/AML-Kubernetes/blob/master/docs/release-notes.md).
* When your clusters run jobs without interruption, running jobs may run outdated container image versions. Once you upgrade the amlarc extension to a running cluster, newly submitted jobs will start to use the latest image version. When upgrading the AMLArc extension to its latest version, clean up the old container image versions from the clusters as required.
* To observe whether your Azure Arc cluster is running the latest version of AMLArc, use the Azure portal. Under your Arc resource of the type 'Kubernetes - Azure Arc', see 'Extensions' to find the version of the AMLArc extension.
Compute clusters automatically upgrade to the latest VM image. If the cluster is
For code-based training experiences, you control which Azure Machine Learning environment is used. With AutoML and Designer, the environment is encapsulated as part of the service. These types of jobs can run on computes configured by you, allowing for extra controls such as network isolation.
-* Automated ML jobs run on environments that layer on top of Azure ML [base docker images](https://github.com/Azure/AzureML-Containers).
+* Automated ML jobs run on environments that layer on top of Azure Machine Learning [base Docker images](https://github.com/Azure/AzureML-Containers).
- * Designer jobs are compartmentalized into [Components](concept-designer.md#component). Each component has its own environment that layers on top of the Azure ML base docker images. For more information on components, see the [Component reference](./component-reference/component-reference.md).
+ * Designer jobs are compartmentalized into [Components](concept-designer.md#component). Each component has its own environment that layers on top of the Azure Machine Learning base Docker images. For more information on components, see the [Component reference](./component-reference/component-reference.md).
## Next steps
* [Azure Machine Learning Base Images Repository](https://github.com/Azure/AzureML-Containers)
* [Data Science Virtual Machine release notes](./data-science-virtual-machine/release-notes.md)
-* [AzureML Python SDK Release Notes](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ml/azure-ai-ml/CHANGELOG.md)
+* [Azure Machine Learning Python SDK Release Notes](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ml/azure-ai-ml/CHANGELOG.md)
* [Machine learning enterprise security](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-enterprise-security)
machine-learning Concept Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-workspace.md
There are multiple ways to create a workspace:
## Sub resources
-These sub resources are the main resources that are made in the AzureML workspace.
+These sub resources are the main resources that are made in the Azure Machine Learning workspace.
-* VMs: provide computing power for your AzureML workspace and are an integral part in deploying and training models.
+* VMs: provide computing power for your Azure Machine Learning workspace and are an integral part of deploying and training models.
* Load Balancer: a network load balancer is created for each compute instance and compute cluster to manage traffic even while the compute instance/cluster is stopped.
* Virtual Network: these help Azure resources communicate with one another, the internet, and other on-premises networks.
* Bandwidth: encapsulates all outbound data transfers across regions.
machine-learning How To Track Experiments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/how-to-track-experiments.md
The config contains information such as the workspace name, subscription, etc. a
## Track DSVM runs
-Add the following code to your notebook (or script) to set the AzureML workspace object.
+Add the following code to your notebook (or script) to set the Azure Machine Learning workspace object.
```Python
import mlflow
```
In this section, we outline how to deploy models trained on a DSVM to Azure Mach
### Step 1: Create Inference Compute
-On the left-hand menu in [AzureML Studio](https://ml.azure.com) click on __Compute__ and then the __Inference clusters__ tab. Next, click on __+ New__ as discussed below:
+On the left-hand menu in [Azure Machine Learning Studio](https://ml.azure.com) click on __Compute__ and then the __Inference clusters__ tab. Next, click on __+ New__ as discussed below:
![Create Inference Compute](./media/how-to-track-experiments/mlflow-experiments-6.png)
When the model has deployed successfully, you should see the following (to get t
You should see that the deployment state goes from __transitioning__ to __healthy__. In addition, this details section provides the REST endpoint and Swagger URLs that an application developer can use to integrate your ML model into their apps.
-You can test the endpoint using [Postman](https://www.postman.com/), or you can use the AzureML SDK:
+You can test the endpoint using [Postman](https://www.postman.com/), or you can use the Azure Machine Learning SDK:
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
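If you prefer not to use the SDK, a raw REST call can be sketched with Python's standard library. This is a minimal illustration only – the scoring URI, API key, and the `{"data": ...}` payload shape are hypothetical placeholders that depend on your deployment's details page and scoring script:

```python
import json
import urllib.request

# Hypothetical values: replace with the REST endpoint URL and key
# shown in the deployment's details section.
SCORING_URI = "http://example.com/score"
API_KEY = "<your-api-key>"

def build_scoring_request(uri, key, rows):
    """Package input rows as a JSON POST request for a scoring endpoint."""
    body = json.dumps({"data": rows}).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {key}",
    }
    return urllib.request.Request(uri, data=body, headers=headers)

req = build_scoring_request(SCORING_URI, API_KEY, [[1.0, 2.0, 3.0]])
# To actually call the endpoint:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

The Swagger URL from the details section describes the exact input schema your deployment expects, so adjust the payload accordingly.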
Delete the Inference Compute you created in Step 1 so that you don't incur ongoi
## Next Steps
-* Learn more about [deploying models in AzureML](../v1/how-to-deploy-and-where.md)
+* Learn more about [deploying models in Azure Machine Learning](../v1/how-to-deploy-and-where.md)
machine-learning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/overview.md
Additionally, we are excited to offer Azure DSVM for PyTorch (preview), which is
## Comparison with Azure Machine Learning
-The DSVM is a customized VM image for Data Science but [Azure Machine Learning](../overview-what-is-azure-machine-learning.md) (AzureML) is an end-to-end platform that encompasses:
+The DSVM is a customized VM image for Data Science but [Azure Machine Learning](../overview-what-is-azure-machine-learning.md) is an end-to-end platform that encompasses:
+ Fully Managed Compute
  + Compute Instances
The DSVM is a customized VM image for Data Science but [Azure Machine Learning](
+ Labeling
+ Pipelines (automate End-to-End Data science workflows)
-### Comparison with AzureML Compute Instances
+### Comparison with Azure Machine Learning Compute Instances
[Azure Machine Learning Compute Instances](../concept-compute-instance.md) are a fully configured and __managed__ VM image whereas the DSVM is an __unmanaged__ VM.
machine-learning Reference Ubuntu Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/reference-ubuntu-vm.md
The following Azure tools are installed on the VM:
* **Azure libraries**: The following are some of the pre-installed libraries.
 * **Python**: The Azure-related libraries in Python are *azure*, *azureml*, *pydocumentdb*, and *pyodbc*. With the first three libraries, you can access Azure storage services, Azure Machine Learning, and Azure Cosmos DB (a NoSQL database on Azure). The fourth library, pyodbc (along with the Microsoft ODBC driver for SQL Server), enables access to SQL Server, Azure SQL Database, and Azure Synapse Analytics from Python by using an ODBC interface. Enter **pip list** to see all the listed libraries. Be sure to run this command in both the Python 2.7 and 3.5 environments.
- * **R**: The Azure-related libraries in R are AzureML and RODBC.
+ * **R**: The Azure-related libraries in R are Azure Machine Learning and RODBC.
 * **Java**: The list of Azure Java libraries can be found in the directory /dsvm/sdk/AzureSDKJava on the VM. The key libraries are Azure storage and management APIs, Azure Cosmos DB, and JDBC drivers for SQL Server.
## Azure Machine Learning
machine-learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md
Version: `22.11.25`
Main changes:
-- `Azure ML SDK V2` samples included
+- `Azure Machine Learning SDK V2` samples included
- `Ray` to version `2.0.0`
- Added `clock`, `recipes` `R` packages
- `azureml-core` to version `1.47.0`
Version: `22.11.27`
Main changes:
-- `Azure ML SDK V2` samples included
+- `Azure Machine Learning SDK V2` samples included
- `RScript` environment path alignment
- `Ray` version `2.0.0` package added to `azureml_py38` and `azureml_py38_PT_TF` environments.
- `azureml-core` to version `1.47.0`
Main changes:
- Updated `Spark` to version `3.2.2`
- `MMLSpark` notebook features `v0.10.0`
- 4 additional R libraries: [janitor](https://cran.r-project.org/web/packages/janitor/index.html), [skimr](https://cran.r-project.org/web/packages/skimr/index.html), [palmerpenguins](https://cran.r-project.org/web/packages/palmerpenguins/index.html) and [doParallel](https://cran.r-project.org/web/packages/doParallel/index.html)
-- Added new AzureML Environment `azureml_310_sdkv2`
+- Added new Azure Machine Learning Environment `azureml_310_sdkv2`
[Data Science Virtual Machine - Windows 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview)
Main changes:
- `Plotly` and `summarytools` R studio extensions runtime import fix.
- `Cudatoolkit` and `CUDNN` upgraded to `13.1` and `2.8.1` respectively.
-- Fix `Python 3.8` - AzureML notebook run, pinned `matplotlib` to `3.2.1` and `cycler` to `0.11.0` packages in `Azureml_py38` environment.
+- Fix `Python 3.8` - Azure Machine Learning notebook run, pinned `matplotlib` to `3.2.1` and `cycler` to `0.11.0` packages in `Azureml_py38` environment.
## April 26, 2022

[Data Science Virtual Machine - Windows 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview)
Main changes:
- `Azure CLI` to version `2.33.1`
- Fixed `jupyterhub` access issue using public ip address
- Redesign of Conda environments - we're continuing with alignment and refining the Conda environments so we created:
- - `azureml_py38`: environment based on Python 3.8 with preinstalled [AzureML SDK](/python/api/overview/azure/ml/?view=azure-ml-py&preserve-view=true) containing also [AutoML](../concept-automated-ml.md) environment
+ - `azureml_py38`: environment based on Python 3.8 with preinstalled [Azure Machine Learning SDK](/python/api/overview/azure/ml/?view=azure-ml-py&preserve-view=true) containing also [AutoML](../concept-automated-ml.md) environment
  - `azureml_py38_PT_TF`: additional `azureml_py38` environment, preinstalled with latest `TensorFlow` and `PyTorch`
  - `py38_default`: default system environment based on `Python 3.8`
- We have removed `azureml_py36_tensorflow`, `azureml_py36_pytorch`, `py38_tensorflow` and `py38_pytorch` environments.
Main changes:
- Further `Log4j` vulnerability mitigation - although not used, we moved all `log4j` to version v2; we have removed old log4j 1.0 jars and moved to `log4j` version 2.0 jars.
- Azure CLI to version 2.33.1
- Redesign of Conda environments - we're continuing with alignment and refining the Conda environments so we created:
- - `azureml_py38`: environment based on Python 3.8 with preinstalled [AzureML SDK](/python/api/overview/azure/ml/?view=azure-ml-py&preserve-view=true) containing also [AutoML](../concept-automated-ml.md) environment
+ - `azureml_py38`: environment based on Python 3.8 with preinstalled [Azure Machine Learning SDK](/python/api/overview/azure/ml/?view=azure-ml-py&preserve-view=true) containing also [AutoML](../concept-automated-ml.md) environment
  - `azureml_py38_PT_TF`: complementary environment to `azureml_py38`, preinstalled with the latest TensorFlow and PyTorch
  - `py38_default`: default system environment based on Python 3.8
- we removed `azureml_py36_tensorflow`, `azureml_py36_pytorch`, `py38_tensorflow` and `py38_pytorch` environments.
Selected version updates are:
- R 4.1.0
- Julia 1.0.5
- NodeJS 16.2.0
-- Visual Studio Code 1.56.2 incl. Azure ML extension
+- Visual Studio Code 1.56.2 incl. Azure Machine Learning extension
- PyCharm Community Edition 2021.1.1
- Jupyter Lab 2.2.6
- RStudio 1.4.1106
machine-learning Tools Included https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/tools-included.md
The Data Science Virtual Machine comes with the most useful data-science tools p
| [NVidia System Management Interface (nvidia-smi)](https://developer.nvidia.com/nvidia-system-management-interface) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | [nvidia-smi on the DSVM](./dsvm-tools-deep-learning-frameworks.md#nvidia-system-management-interface-nvidia-smi) |
| [PyTorch](https://pytorch.org) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | [PyTorch on the DSVM](./dsvm-tools-deep-learning-frameworks.md#pytorch) |
| [TensorFlow](https://www.tensorflow.org) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span></br> | [TensorFlow on the DSVM](./dsvm-tools-deep-learning-frameworks.md#tensorflow) |
-| Integration with [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) (Python) | <span class='green-check'>&#9989;</span></br> (Python SDK, samples) | <span class='green-check'>&#9989;</span></br> (Python SDK,CLI, samples) | <span class='green-check'>&#9989;</span></br> (Python SDK,CLI, samples) | [Azure ML SDK](./dsvm-tools-data-science.md#azure-machine-learning-sdk-for-python) |
+| Integration with [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) (Python) | <span class='green-check'>&#9989;</span></br> (Python SDK, samples) | <span class='green-check'>&#9989;</span></br> (Python SDK,CLI, samples) | <span class='green-check'>&#9989;</span></br> (Python SDK,CLI, samples) | [Azure Machine Learning SDK](./dsvm-tools-data-science.md#azure-machine-learning-sdk-for-python) |
| [XGBoost](https://github.com/dmlc/xgboost) | <span class='green-check'>&#9989;</span></br> (CUDA support) | <span class='green-check'>&#9989;</span></br> (CUDA support) | <span class='green-check'>&#9989;</span></br> (CUDA support) | [XGBoost on the DSVM](./dsvm-tools-data-science.md#xgboost) |
| [Vowpal Wabbit](https://github.com/JohnLangford/vowpal_wabbit) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | [Vowpal Wabbit on the DSVM](./dsvm-tools-data-science.md#vowpal-wabbit) |
| [Weka](https://www.cs.waikato.ac.nz/ml/weka/) | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | |
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
__Outbound traffic__
| `AzureFrontDoor.FrontEnd`</br>* Not needed in Azure China. | 443 | Global entry point for [Azure Machine Learning studio](https://ml.azure.com). Store images and environments for AutoML. |
| `MicrosoftContainerRegistry.<region>` | 443 | Access docker images provided by Microsoft. |
| `Frontdoor.FirstParty` | 443 | Access docker images provided by Microsoft. |
-| `AzureMonitor` | 443 | Used to log monitoring and metrics to Azure Monitor. |
+| `AzureMonitor` | 443 | Used to log monitoring and metrics to Azure Monitor. Only needed if you haven't [secured Azure Monitor](how-to-secure-workspace-vnet.md#secure-azure-monitor-and-application-insights) for the workspace. </br>* This outbound is also used to log information for support incidents. |
> [!IMPORTANT]
> If a compute instance or compute cluster is configured for no public IP, they can't access the public internet by default. However, they do need to communicate with the resources listed above. To enable outbound communication, you have two possible options:
The hosts in this section are used to install Visual Studio Code packages to est
| `*.vscode.dev`<br>`*.vscode-unpkg.net`<br>`*.vscode-cdn.net`<br>`*.vscodeexperiments.azureedge.net`<br>`default.exp-tas.com` | Required to access vscode.dev (Visual Studio Code for the Web) |
| `code.visualstudio.com` | Required to download and install VS Code desktop. This host isn't required for VS Code Web. |
| `update.code.visualstudio.com`<br>`*.vo.msecnd.net` | Used to retrieve VS Code server bits that are installed on the compute instance through a setup script. |
-| `marketplace.visualstudio.com`<br>`vscode.blob.core.windows.net`<br>`*.gallerycdn.vsassets.io` | Required to download and install VS Code extensions. These hosts enable the remote connection to compute instances using the Azure ML extension for VS Code. For more information, see [Connect to an Azure Machine Learning compute instance in Visual Studio Code](./how-to-set-up-vs-code-remote.md) |
+| `marketplace.visualstudio.com`<br>`vscode.blob.core.windows.net`<br>`*.gallerycdn.vsassets.io` | Required to download and install VS Code extensions. These hosts enable the remote connection to compute instances using the Azure Machine Learning extension for VS Code. For more information, see [Connect to an Azure Machine Learning compute instance in Visual Studio Code](./how-to-set-up-vs-code-remote.md) |
| `raw.githubusercontent.com/microsoft/vscode-tools-for-ai/master/azureml_remote_websocket_server/*` | Used to retrieve websocket server bits that are installed on the compute instance. The websocket server is used to transmit requests from Visual Studio Code client (desktop application) to Visual Studio Code server running on the compute instance. |

## Scenario: Third party firewall
For information on restricting access to models deployed to AKS, see [Restrict e
__Monitoring, metrics, and diagnostics__
-To support logging of metrics and other monitoring information to Azure Monitor and Application Insights, allow outbound traffic to the following hosts:
+If you haven't [secured Azure Monitor](how-to-secure-workspace-vnet.md#secure-azure-monitor-and-application-insights) for the workspace, you must allow outbound traffic to the following hosts:
> [!NOTE]
> The information logged to these hosts is also used by Microsoft Support to be able to diagnose any problems you run into with your workspace.
machine-learning How To Access Data Batch Endpoints Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-batch-endpoints-jobs.md
Batch endpoints support reading files located in the following storage options:
* Azure Blob Storage

> [!TIP]
-> Local data folders/files can be used when executing batch endpoints from the Azure ML CLI or Azure ML SDK for Python. However, that operation will result in the local data to be uploaded to the default Azure Machine Learning Data Store of the workspace you are working on.
+> Local data folders/files can be used when executing batch endpoints from the Azure Machine Learning CLI or Azure Machine Learning SDK for Python. However, that operation will result in the local data to be uploaded to the default Azure Machine Learning Data Store of the workspace you are working on.
> [!IMPORTANT]
> __Deprecation notice__: Datasets of type `FileDataset` (V1) are deprecated and will be retired in the future. Existing batch endpoints relying on this functionality will continue to work, but batch endpoints created with GA CLIv2 (2.4.0 and newer) or GA REST API (2022-05-01 and newer) will not support V1 datasets.
Azure Machine Learning data assets (formerly known as datasets) are supported as
# [REST](#tab/rest)
- Use the Azure ML CLI, Azure ML SDK for Python, or Studio to get the location (region), workspace, and data asset name and version. You will need them later.
+ Use the Azure Machine Learning CLI, Azure Machine Learning SDK for Python, or Studio to get the location (region), workspace, and data asset name and version. You will need them later.
1. Create a data input:
Data from Azure Machine Learning registered data stores can be directly referenc
# [REST](#tab/rest)
- Use the Azure ML CLI, Azure ML SDK for Python, or Studio to get the data store information.
+ Use the Azure Machine Learning CLI, Azure Machine Learning SDK for Python, or Studio to get the data store information.
Batch endpoints ensure that only authorized users are able to invoke batch deplo
The managed identity of the compute cluster is used for mounting and configuring external data storage accounts. However, the identity of the job is still used to read the underlying data allowing you to achieve granular access control. That means that in order to successfully read data from external storage services, the managed identity of the compute cluster where the deployment is running must have at least [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) access to the storage account. Only storage account owners can [change your access level via the Azure portal](../storage/blobs/assign-azure-role-data-access.md).

> [!NOTE]
-> To assign an identity to the compute used by a batch deployment, follow the instructions at [Set up authentication between Azure ML and other services](how-to-identity-based-service-authentication.md#compute-cluster). Configure the identity on the compute cluster associated with the deployment. Notice that all the jobs running on such compute are affected by this change. However, different deployments (even under the same deployment) can be configured to run under different clusters so you can administer the permissions accordingly depending on your requirements.
+> To assign an identity to the compute used by a batch deployment, follow the instructions at [Set up authentication between Azure Machine Learning and other services](how-to-identity-based-service-authentication.md#compute-cluster). Configure the identity on the compute cluster associated with the deployment. Notice that all the jobs running on such compute are affected by this change. However, different deployments (even under the same deployment) can be configured to run under different clusters so you can administer the permissions accordingly depending on your requirements.
## Next steps
machine-learning How To Access Data Interactive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-interactive.md
Typically, the beginning of a machine learning project involves exploratory data analysis (EDA), data preprocessing (cleaning, feature engineering), and building prototypes of ML models to validate hypotheses. This *prototyping* phase of the project is highly interactive in nature and lends itself to developing in a Jupyter notebook or an IDE with a *Python interactive console*. In this article you'll learn how to:

> [!div class="checklist"]
-> * Access data from a Azure ML Datastores URI as if it were a file system.
+> * Access data from an Azure Machine Learning datastore URI as if it were a file system.
> * Materialize data into Pandas using `mltable` Python library.
-> * Materialize Azure ML data assets into Pandas using `mltable` Python library.
+> * Materialize Azure Machine Learning data assets into Pandas using `mltable` Python library.
> * Materialize data through an explicit download with the `azcopy` utility.

## Prerequisites
Typically the beginning of a machine learning project involves exploratory data
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
-An Azure ML datastore is a *reference* to an *existing* storage account on Azure. The benefits of creating and using a datastore include:
+An Azure Machine Learning datastore is a *reference* to an *existing* storage account on Azure. The benefits of creating and using a datastore include:
> [!div class="checklist"]
> * A common and easy-to-use API to interact with different storage types (Blob/Files/ADLS).
An Azure ML datastore is a *reference* to an *existing* storage account on Azure
A *Datastore URI* is a Uniform Resource Identifier, which is a *reference* to a storage *location* (path) on your Azure storage account. The format of the datastore URI is:

```python
-# AzureML workspace details:
+# Azure Machine Learning workspace details:
subscription = '<subscription_id>'
resource_group = '<resource_group>'
workspace = '<workspace>'
uri = f'azureml://subscriptions/{subscription}/resourcegroups/{resource_group}/w
These Datastore URIs are a known implementation of [Filesystem spec](https://filesystem-spec.readthedocs.io/en/latest/index.html) (`fsspec`): A unified pythonic interface to local, remote and embedded file systems and bytes storage.
-The Azure ML Datastore implementation of `fsspec` automatically handles credential/identity passthrough used by the Azure ML datastore. This means you don't need to expose account keys in your scripts or do additional sign-in procedures on a compute instance.
+The Azure Machine Learning Datastore implementation of `fsspec` automatically handles credential/identity passthrough used by the Azure Machine Learning datastore. This means you don't need to expose account keys in your scripts or do additional sign-in procedures on a compute instance.
For example, you can directly use Datastore URIs in Pandas - below is an example of reading a CSV file:
df.head()
> 1. Find the file/folder you want to read into pandas, select the ellipsis (**...**) next to it. Select from the menu **Copy URI**. You can select the **Datastore URI** to copy into your notebook/script.
> :::image type="content" source="media/how-to-access-data-ci/datastore_uri_copy.png" alt-text="Screenshot highlighting the copy of the datastore URI.":::
-You can also instantiate an Azure ML filesystem and do filesystem-like commands like `ls`, `glob`, `exists`, `open`, etc. The `open()` method will return a file-like object, which can be passed to any other library that expects to work with python files, or used by your own code as you would a normal python file object. These file-like objects respect the use of `with` contexts, for example:
+You can also instantiate an Azure Machine Learning filesystem and do filesystem-like commands like `ls`, `glob`, `exists`, `open`, etc. The `open()` method will return a file-like object, which can be passed to any other library that expects to work with python files, or used by your own code as you would a normal python file object. These file-like objects respect the use of `with` contexts, for example:
```python
from azureml.fsspec import AzureMachineLearningFileSystem
df = pd.read_csv("azureml://subscriptions/<subid>/resourcegroups/<rgname>/worksp
#### Read a folder of CSV files into pandas
-The Pandas `read_csv()` method doesn't support reading a folder of CSV files. You need to glob csv paths and concatenate them to a data frame using Pandas `concat()` method. The code below demonstrates how to achieve this concatenation with the Azure ML filesystem:
+The Pandas `read_csv()` method doesn't support reading a folder of CSV files. You need to glob csv paths and concatenate them to a data frame using Pandas `concat()` method. The code below demonstrates how to achieve this concatenation with the Azure Machine Learning filesystem:
```python
import pandas as pd
You'll notice the `mltable` library supports reading tabular data from different
|A path on your local computer | `./home/username/data/my_data` |
|A path on a public http(s) server | `https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv` |
|A path on Azure Storage | `wasbs://<container_name>@<account_name>.blob.core.windows.net/<path>` <br> `abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>` |
-|A long-form Azure ML datastore | `azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<wsname>/datastores/<name>/paths/<path>` |
+|A long-form Azure Machine Learning datastore | `azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<wsname>/datastores/<name>/paths/<path>` |
> [!NOTE]
-> `mltable` does user credential passthrough for paths on Azure Storage and Azure ML datastores. If you do not have permission to the data on the underlying storage then you will not be able to access the data.
+> `mltable` does user credential passthrough for paths on Azure Storage and Azure Machine Learning datastores. If you do not have permission to the data on the underlying storage then you will not be able to access the data.
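As an illustration of the long-form datastore URI shown in the table above, the URI can be assembled from its components with a plain f-string; all values below are placeholders, not real resources:

```python
# Placeholder workspace details (assumptions for illustration only).
subscription = "00000000-0000-0000-0000-000000000000"
resource_group = "my-rg"
workspace = "my-ws"
datastore = "workspaceblobstore"
path = "folder/data.csv"

# Long-form datastore URI, following the azureml:// pattern above.
uri = (
    f"azureml://subscriptions/{subscription}"
    f"/resourcegroups/{resource_group}"
    f"/workspaces/{workspace}"
    f"/datastores/{datastore}"
    f"/paths/{path}"
)
print(uri)
```

The resulting string can be passed anywhere a path is accepted in the table above, subject to the credential passthrough noted in the preceding note.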
### Files, folders and globs
df = tbl.to_pandas_dataframe()
df.head()
```
-##### [Azure ML Datastore](#tab/datastore)
+##### [Azure Machine Learning Datastore](#tab/datastore)
Update the placeholders (`<>`) in the code snippet with your details.
df = tbl.to_pandas_dataframe()
df.head()
```
-##### [Azure ML Datastore](#tab/datastore)
+##### [Azure Machine Learning Datastore](#tab/datastore)
Update the placeholders (`<>`) in the code snippet with your details.
df.head()
### Reading data assets
-In this section, you'll learn how to access your Azure ML data assets into pandas.
+In this section, you'll learn how to access your Azure Machine Learning data assets into pandas.
#### Table asset
-If you've previously created a Table asset in Azure ML (an `mltable`, or a V1 `TabularDataset`), you can load that into pandas using:
+If you've previously created a Table asset in Azure Machine Learning (an `mltable`, or a V1 `TabularDataset`), you can load that into pandas using:
```python
import mltable
df.head()
> [!TIP]
> Pandas is not designed to handle large datasets - you will only be able to process data that can fit into the memory of the compute instance.
>
-> For large datasets we recommend that you use AzureML managed Spark, which provides the [PySpark Pandas API](https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/https://docsupdatetracker.net/index.html).
+> For large datasets we recommend that you use Azure Machine Learning managed Spark, which provides the [PySpark Pandas API](https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/index.html).
You may wish to iterate quickly on a smaller subset of a large dataset before scaling up to a remote asynchronous job. `mltable` provides in-built functionality to get samples of large data using the [take_random_sample](/python/api/mltable/mltable.mltable.mltable#mltable-mltable-mltable-take-random-sample) method:
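The fail-fast idea behind `take_random_sample` can be sketched with plain pandas on a synthetic frame (the real method operates on an `mltable` before materialization; this is only an analogy):

```python
import pandas as pd

# Synthetic stand-in for a large dataset.
df = pd.DataFrame({"x": range(1000)})

# Reproducibly keep 5% of the rows for quick iteration before
# scaling up, analogous in spirit to mltable's take_random_sample.
sample = df.sample(frac=0.05, random_state=7)
print(len(sample))  # 50
```

Iterating on a sample like this keeps prototype runs fast; the full dataset is only read once the pipeline is submitted as a remote job.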
You can also take subsets of large data by using:
## Downloading data using the `azcopy` utility
-You may want to download the data to the local SSD of your host (local machine, cloud VM, Azure ML Compute Instance) and use the local filesystem. You can do this with the `azcopy` utility, which is pre-installed on an Azure ML compute instance. If you are **not** using an Azure ML compute instance or a Data Science Virtual Machine (DSVM), you may need to install `azcopy`. For more information please read [azcopy](../storage/common/storage-ref-azcopy.md).
+You may want to download the data to the local SSD of your host (local machine, cloud VM, Azure Machine Learning Compute Instance) and use the local filesystem. You can do this with the `azcopy` utility, which is pre-installed on an Azure Machine Learning compute instance. If you are **not** using an Azure Machine Learning compute instance or a Data Science Virtual Machine (DSVM), you may need to install `azcopy`. For more information please read [azcopy](../storage/common/storage-ref-azcopy.md).
> [!CAUTION]
> We do not recommend downloading data in the `/home/azureuser/cloudfiles/code` location on a compute instance. This is designed to store notebook and code artifacts, **not** data. Reading data from this location will incur significant performance overhead when training. Instead we recommend storing your data in `home/azureuser`, which is the local SSD of the compute node.
machine-learning How To Access Resources From Endpoints Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-resources-from-endpoints-managed-identities.md
This guide assumes you don't have a managed identity, a storage account or an on
* To use Azure Machine Learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-* Install and configure the Azure ML Python SDK (v2). For more information, see [Install and set up SDK (v2)](https://aka.ms/sdk-v2-install).
+* Install and configure the Azure Machine Learning Python SDK (v2). For more information, see [Install and set up SDK (v2)](https://aka.ms/sdk-v2-install).
* An Azure Resource group, in which you (or the service principal you use) need to have `User Access Administrator` and `Contributor` access. You'll have such a resource group if you configured your ML extension per the above article.
This guide assumes you don't have a managed identity, a storage account or an on
* Role creation permissions for your subscription or the Azure resources accessed by the User-assigned identity.
-* Install and configure the Azure ML Python SDK (v2). For more information, see [Install and set up SDK (v2)](https://aka.ms/sdk-v2-install).
+* Install and configure the Azure Machine Learning Python SDK (v2). For more information, see [Install and set up SDK (v2)](https://aka.ms/sdk-v2-install).
* An Azure Resource group, in which you (or the service principal you use) need to have `User Access Administrator` and `Contributor` access. You'll have such a resource group if you configured your ML extension per the above article.
machine-learning How To Administrate Data Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-administrate-data-authentication.md
In general, data access from studio involves the following checks:
* Who is accessing? - There are multiple different types of authentication depending on the storage type. For example, account key, token, service principal, managed identity, and user identity.
- - If authentication is made using a user identity, then it's important to know *which* user is trying to access storage. For more information on authenticating a _user_, see [authentication for Azure Machine Learning](how-to-setup-authentication.md). For more information on service-level authentication, see [authentication between AzureML and other services](how-to-identity-based-service-authentication.md).
+ - If authentication is made using a user identity, then it's important to know *which* user is trying to access storage. For more information on authenticating a _user_, see [authentication for Azure Machine Learning](how-to-setup-authentication.md). For more information on service-level authentication, see [authentication between Azure Machine Learning and other services](how-to-identity-based-service-authentication.md).
* Do they have permission? - Are the credentials correct? If so, does the service principal, managed identity, etc., have the necessary permissions on the storage? Permissions are granted using Azure role-based access controls (Azure RBAC).
  - [Reader](../role-based-access-control/built-in-roles.md#reader) of the storage account reads metadata of the storage.
The following table lists what identities should be used for specific scenarios:
Data access is complex and it's important to recognize that there are many pieces to it. For example, accessing data from Azure Machine Learning studio is different than using the SDK. When using the SDK on your local development environment, you're directly accessing data in the cloud. When using studio, you aren't always directly accessing the data store from your client. Studio relies on the workspace to access data on your behalf.

> [!TIP]
-> If you need to access data from outside Azure Machine Learning, such as using Azure Storage Explorer, *user* identity is probably what is used. Consult the documentation for the tool or service you are using for specific information. For more information on how Azure Machine Learning works with data, see [Setup authentication between AzureML and other services](how-to-identity-based-service-authentication.md).
+> If you need to access data from outside Azure Machine Learning, such as using Azure Storage Explorer, *user* identity is probably what is used. Consult the documentation for the tool or service you are using for specific information. For more information on how Azure Machine Learning works with data, see [Setup authentication between Azure Machine Learning and other services](how-to-identity-based-service-authentication.md).
## Azure Storage Account
machine-learning How To Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-assign-roles.md
In this article, you learn how to manage access (authorization) to an Azure Machine Learning workspace. [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) is used to manage access to Azure resources, such as the ability to create new resources or use existing ones. Users in your Azure Active Directory (Azure AD) are assigned specific roles, which grant access to resources. Azure provides both built-in roles and the ability to create custom roles. > [!TIP]
-> While this article focuses on Azure Machine Learning, individual services that Azure ML relies on provide their own RBAC settings. For example, using the information in this article, you can configure who can submit scoring requests to a model deployed as a web service on Azure Kubernetes Service. But Azure Kubernetes Service provides its own set of Azure roles. For service specific RBAC information that may be useful with Azure Machine Learning, see the following links:
+> While this article focuses on Azure Machine Learning, individual services that Azure Machine Learning relies on provide their own RBAC settings. For example, using the information in this article, you can configure who can submit scoring requests to a model deployed as a web service on Azure Kubernetes Service. But Azure Kubernetes Service provides its own set of Azure roles. For service specific RBAC information that may be useful with Azure Machine Learning, see the following links:
>
> * [Control access to Azure Kubernetes cluster resources](../aks/azure-ad-rbac.md)
> * [Use Azure RBAC for Kubernetes authorization](../aks/manage-azure-rbac.md)
In addition, [Azure Machine Learning registries](how-to-manage-registries.md) ha
| **AzureML Registry User** | Can get registries, and read, write and delete assets within them. Cannot create new registry resources or delete them. |
-You can combine the roles to grant different levels of access. For example, you can grant a workspace user both **AzureML Data Scientist** and **Azure ML Compute Operator** roles to permit the user to perform experiments while creating computes in a self-service manner.
+You can combine the roles to grant different levels of access. For example, you can grant a workspace user both **AzureML Data Scientist** and **AzureML Compute Operator** roles to permit the user to perform experiments while creating computes in a self-service manner.
> [!IMPORTANT]
> Role access can be scoped to multiple levels in Azure. For example, someone with owner access to a workspace may not have owner access to the resource group that contains the workspace. For more information, see [How Azure RBAC works](../role-based-access-control/overview.md#how-azure-rbac-works).
When using a customer-managed key (CMK), an Azure Key Vault is used to store the
Within the key vault, the user or service principal must have create, get, delete, and purge access to the key through a key vault access policy. For more information, see [Azure Key Vault security](../key-vault/general/security-features.md#controlling-access-to-key-vault-data).
-### User-assigned managed identity with Azure ML compute cluster
+### User-assigned managed identity with Azure Machine Learning compute cluster
To assign a user assigned identity to an Azure Machine Learning compute cluster, you need write permissions to create the compute and the [Managed Identity Operator Role](../role-based-access-control/built-in-roles.md#managed-identity-operator). For more information on Azure RBAC with Managed Identities, read [How to manage user assigned identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md)
A more restricted role definition without wildcards in the allowed actions. It c
### MLflow data scientist
-Allows a data scientist to perform all MLflow AzureML supported operations **except**:
+Allows a data scientist to perform all MLflow Azure Machine Learning supported operations **except**:
* Creation of compute
* Deploying models to a production AKS cluster
machine-learning How To Attach Kubernetes Anywhere https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-attach-kubernetes-anywhere.md
Title: Introduction to Kubernetes compute target in AzureML
+ Title: Introduction to Kubernetes compute target in Azure Machine Learning
-description: Learn how Azure Machine Learning Kubernetes compute enable AzureML across different infrastructures in cloud and on-premises
+description: Learn how Azure Machine Learning Kubernetes compute enables Azure Machine Learning across different infrastructures in the cloud and on-premises
Last updated 08/31/2022
#Customer intent: As part of ML professionals focusing on ML infrastructure setup using self-managed compute, I want to understand what a Kubernetes compute target is and why I need it.
-# Introduction to Kubernetes compute target in AzureML
+# Introduction to Kubernetes compute target in Azure Machine Learning
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-With AzureML CLI/Python SDK v2, AzureML introduced a new compute target - Kubernetes compute target. You can easily enable an existing **Azure Kubernetes Service** (AKS) cluster or **Azure Arc-enabled Kubernetes** (Arc Kubernetes) cluster to become a Kubernetes compute target in AzureML, and use it to train or deploy models.
+With Azure Machine Learning CLI/Python SDK v2, Azure Machine Learning introduced a new compute target - Kubernetes compute target. You can easily enable an existing **Azure Kubernetes Service** (AKS) cluster or **Azure Arc-enabled Kubernetes** (Arc Kubernetes) cluster to become a Kubernetes compute target in Azure Machine Learning, and use it to train or deploy models.
In this article, you learn about:

> [!div class="checklist"]
In this article, you learn about:
## How it works
-AzureML Kubernetes compute supports two kinds of Kubernetes cluster:
+Azure Machine Learning Kubernetes compute supports two kinds of Kubernetes cluster:
* **[AKS cluster](https://azure.microsoft.com/services/kubernetes-service/)** in Azure. With your self-managed AKS cluster in Azure, you can gain the security and controls to meet compliance requirements and the flexibility to manage your teams' ML workload.
* **[Arc Kubernetes cluster](../azure-arc/kubernetes/overview.md)** outside of Azure. With an Arc Kubernetes cluster, you can train or deploy models in any infrastructure on-premises, across multicloud, or at the edge.
-With a simple cluster extension deployment on AKS or Arc Kubernetes cluster, Kubernetes cluster is seamlessly supported in AzureML to run training or inference workload. It's easy to enable and use an existing Kubernetes cluster for AzureML workload with the following simple steps:
+With a simple cluster extension deployment on an AKS or Arc Kubernetes cluster, the Kubernetes cluster is seamlessly supported in Azure Machine Learning to run training or inference workloads. It's easy to enable and use an existing Kubernetes cluster for Azure Machine Learning workloads with the following simple steps:
1. Prepare an [Azure Kubernetes Service cluster](../aks/learn/quick-kubernetes-deploy-cli.md) or [Arc Kubernetes cluster](../azure-arc/kubernetes/quickstart-connect-cluster.md).
-1. [Deploy the AzureML extension](how-to-deploy-kubernetes-extension.md).
-1. [Attach Kubernetes cluster to your Azure ML workspace](how-to-attach-kubernetes-to-workspace.md).
+1. [Deploy the Azure Machine Learning extension](how-to-deploy-kubernetes-extension.md).
+1. [Attach Kubernetes cluster to your Azure Machine Learning workspace](how-to-attach-kubernetes-to-workspace.md).
1. Use the Kubernetes compute target from CLI v2, SDK v2, and the Studio UI.
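As a sketch under stated assumptions (the extension name `azureml`, the AKS `managedClusters` cluster type, and all `<placeholder>` resource names here are illustrative, not prescriptive), steps 2 and 3 above look roughly like this in the Azure CLI:

```azurecli
# Step 2: deploy the cluster extension to an AKS cluster (names are placeholders)
az k8s-extension create --name azureml --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=LoadBalancer --cluster-type managedClusters --cluster-name <cluster-name> --resource-group <resource-group> --scope cluster

# Step 3: attach the cluster to the workspace as a Kubernetes compute target
az ml compute attach --resource-group <resource-group> --workspace-name <workspace-name> --type Kubernetes --name k8s-compute --resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ContainerService/managedclusters/<cluster-name>" --identity-type SystemAssigned --no-wait
```

Replace the placeholders with your own resource names; the linked deployment and attach articles cover the full set of configuration options.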
-**IT-operation team**. The IT-operation team is responsible for the first 3 steps above: prepare an AKS or Arc Kubernetes cluster, deploy Azure ML cluster extension, and attach Kubernetes cluster to Azure ML workspace. In addition to these essential compute setup steps, IT-operation team also uses familiar tools such as Azure CLI or kubectl to take care of the following tasks for the data-science team:
+**IT-operation team**. The IT-operation team is responsible for the first 3 steps above: prepare an AKS or Arc Kubernetes cluster, deploy Azure Machine Learning cluster extension, and attach Kubernetes cluster to Azure Machine Learning workspace. In addition to these essential compute setup steps, IT-operation team also uses familiar tools such as Azure CLI or kubectl to take care of the following tasks for the data-science team:
- Network and security configurations, such as outbound proxy server connection or Azure firewall configuration, inference router (azureml-fe) setup, SSL/TLS termination, and VNET configuration.
- Create and manage instance types for different ML workload scenarios and gain efficient compute resource utilization.
- Troubleshooting workload issues related to the Kubernetes cluster.
-**Data-science team**. Once the IT-operations team finishes compute setup and compute target(s) creation, the data-science team can discover a list of available compute targets and instance types in AzureML workspace. These compute resources can be used for training or inference workload. Data science specifies compute target name and instance type name using their preferred tools or APIs such as AzureML CLI v2, Python SDK v2, or Studio UI.
+**Data-science team**. Once the IT-operations team finishes compute setup and compute target(s) creation, the data-science team can discover a list of available compute targets and instance types in the Azure Machine Learning workspace. These compute resources can be used for training or inference workloads. Data scientists specify the compute target name and instance type name using their preferred tools or APIs, such as the Azure Machine Learning CLI v2, Python SDK v2, or the Studio UI.
## Kubernetes usage scenarios
-With Arc Kubernetes cluster, you can build, train, and deploy models in any infrastructure on-premises and across multicloud using Kubernetes. This opens some new use patterns previously not possible in cloud setting environment. The following table provides a summary of the new use patterns enabled by AzureML Kubernetes compute:
+With Arc Kubernetes cluster, you can build, train, and deploy models in any infrastructure on-premises and across multicloud using Kubernetes. This opens up new use patterns that weren't previously possible in a cloud-only setting. The following table provides a summary of the new use patterns enabled by Azure Machine Learning Kubernetes compute:
-| Usage pattern | Location of data | Motivation | Infra setup & Azure ML implementation |
+| Usage pattern | Location of data | Motivation | Infra setup & Azure Machine Learning implementation |
| -- | -- | -- | -- |
| Train model in cloud, deploy model on-premises | Cloud | Make use of cloud compute. Either because of elastic compute needs or special hardware such as a GPU.<br/>Model must be deployed on-premises because of security, compliance, or latency requirements | 1. Azure managed compute in cloud.<br/>2. Customer managed Kubernetes on-premises.<br/>3. Fully automated MLOps in hybrid mode, including training and model deployment steps transitioning seamlessly from cloud to on-premises and vice versa.<br/>4. Repeatable, with all assets tracked properly. Model retrained when necessary, and model deployment updated automatically after retraining. |
| Train model on-premises, deploy model in cloud | On-premises | Data must remain on-premises due to data-residency requirements.<br/>Deploy model in the cloud for global service access or for compute elasticity for scale and throughput. | 1. Azure managed compute in cloud.<br/>2. Customer managed Kubernetes on-premises.<br/>3. Fully automated MLOps in hybrid mode, including training and model deployment steps transitioning seamlessly from cloud to on-premises and vice versa.<br/>4. Repeatable, with all assets tracked properly. Model retrained when necessary, and model deployment updated automatically after retraining. |
-| Bring your own AKS in Azure | Cloud | More security and controls.<br/>All private IP machine learning to prevent data exfiltration. | 1. AKS cluster behind an Azure VNet.<br/>2. Create private endpoints in the same VNet for AzureML workspace and its associated resources.<br/>3. Fully automated MLOps. |
+| Bring your own AKS in Azure | Cloud | More security and controls.<br/>All private IP machine learning to prevent data exfiltration. | 1. AKS cluster behind an Azure VNet.<br/>2. Create private endpoints in the same VNet for Azure Machine Learning workspace and its associated resources.<br/>3. Fully automated MLOps. |
| Full ML lifecycle on-premises | On-premises | Secure sensitive data or proprietary IP, such as ML models and code/scripts. | 1. Outbound proxy server connection on-premises.<br/>2. Azure ExpressRoute and Azure Arc private link to Azure resources.<br/>3. Customer managed Kubernetes on-premises.<br/>4. Fully automated MLOps. |

## Recommended best practices

**Separation of responsibilities between the IT-operations team and data-science team**. As mentioned above, managing your own compute and infrastructure for ML workloads is a complicated task, and it's best done by the IT-operations team so the data-science team can focus on ML models for organizational efficiency.
-**Create and manage instance types for different ML workload scenarios**. Each ML workload uses different amounts of compute resources such as CPU/GPU and memory. AzureML implements instance type as Kubernetes custom resource definition (CRD) with properties of nodeSelector and resource request/limit. With a carefully curated list of instance types, IT-operations can target ML workload on specific node(s) and manage compute resource utilization efficiently.
+**Create and manage instance types for different ML workload scenarios**. Each ML workload uses different amounts of compute resources such as CPU/GPU and memory. Azure Machine Learning implements instance type as Kubernetes custom resource definition (CRD) with properties of nodeSelector and resource request/limit. With a carefully curated list of instance types, IT-operations can target ML workload on specific node(s) and manage compute resource utilization efficiently.
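As a sketch of what such an instance type definition can look like (the metadata name, node label, and resource values here are illustrative assumptions, not prescriptive values):

```yaml
apiVersion: amlarc.azureml.com/v1alpha1
kind: InstanceType
metadata:
  name: small-cpu-instance      # hypothetical instance type name
spec:
  nodeSelector:
    agentpool: cpupool          # hypothetical node label to target specific nodes
  resources:
    requests:
      cpu: "700m"
      memory: "1500Mi"
    limits:
      cpu: "1"
      memory: "2Gi"
```

Workloads that reference this instance type are scheduled onto matching nodes with the given resource requests and limits.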
-**Multiple AzureML workspaces share the same Kubernetes cluster**. You can attach Kubernetes cluster multiple times to the same AzureML workspace or different AzureML workspaces, creating multiple compute targets in one workspace or multiple workspaces. Since many customers organize data science projects around AzureML workspace, multiple data science projects can now share the same Kubernetes cluster. This significantly reduces ML infrastructure management overheads and IT cost saving.
+**Multiple Azure Machine Learning workspaces share the same Kubernetes cluster**. You can attach a Kubernetes cluster multiple times to the same Azure Machine Learning workspace or to different Azure Machine Learning workspaces, creating multiple compute targets in one workspace or multiple workspaces. Since many customers organize data science projects around an Azure Machine Learning workspace, multiple data science projects can now share the same Kubernetes cluster. This significantly reduces ML infrastructure management overhead and saves IT cost.
-**Team/project workload isolation using Kubernetes namespace**. When you attach Kubernetes cluster to AzureML workspace, you can specify a Kubernetes namespace for the compute target. All workloads run by the compute target will be placed under the specified namespace.
+**Team/project workload isolation using Kubernetes namespace**. When you attach Kubernetes cluster to Azure Machine Learning workspace, you can specify a Kubernetes namespace for the compute target. All workloads run by the compute target will be placed under the specified namespace.
## KubernetesCompute and legacy AksCompute
-With AzureML CLI/Python SDK v1, you can deploy models on AKS using AksCompute target. Both KubernetesCompute target and AksCompute target support AKS integration, however they support it differently. The following table shows their key differences:
+With Azure Machine Learning CLI/Python SDK v1, you can deploy models on AKS using AksCompute target. Both KubernetesCompute target and AksCompute target support AKS integration, however they support it differently. The following table shows their key differences:
|Capabilities |AKS integration with AksCompute (legacy) |AKS integration with KubernetesCompute|
|--|--|--|
With AzureML CLI/Python SDK v1, you can deploy models on AKS using AksCompute ta
|Batch inference | No | Yes |
|Real-time inference new features | No new features development | Active roadmap |
-With these key differences and overall AzureML evolution to use SDK/CLI v2, AzureML recommends you to use Kubernetes compute target to deploy models if you decide to use AKS for model deployment.
+With these key differences and the overall Azure Machine Learning evolution to SDK/CLI v2, Azure Machine Learning recommends that you use the Kubernetes compute target to deploy models if you decide to use AKS for model deployment.
## Next steps

-- [Step 1: Deploy AzureML extension](how-to-deploy-kubernetes-extension.md)
+- [Step 1: Deploy Azure Machine Learning extension](how-to-deploy-kubernetes-extension.md)
- [Step 2: Attach Kubernetes cluster to workspace](how-to-attach-kubernetes-to-workspace.md)
- [Create and manage instance types](how-to-manage-kubernetes-instance-types.md)

### Other resources

- [Kubernetes version and region availability](./reference-kubernetes.md#supported-kubernetes-version-and-region)
-- [Work with custom data storage](./reference-kubernetes.md#azureml-jobs-connect-with-custom-data-storage)
+- [Work with custom data storage](./reference-kubernetes.md#azure-machine-learning-jobs-connect-with-custom-data-storage)
### Examples
-All AzureML examples can be found in [https://github.com/Azure/azureml-examples.git](https://github.com/Azure/azureml-examples).
+All Azure Machine Learning examples can be found in [https://github.com/Azure/azureml-examples.git](https://github.com/Azure/azureml-examples).
-For any AzureML example, you only need to update the compute target name to your Kubernetes compute target, then you're all done.
+For any Azure Machine Learning example, you only need to update the compute target name to your Kubernetes compute target, then you're all done.
* Explore training job samples with CLI v2 - [https://github.com/Azure/azureml-examples/tree/main/cli/jobs](https://github.com/Azure/azureml-examples/tree/main/cli/jobs)
* Explore model deployment with online endpoint samples with CLI v2 - [https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/kubernetes](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/kubernetes)
* Explore batch endpoint samples with CLI v2 - [https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/batch](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/batch)
machine-learning How To Attach Kubernetes To Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-attach-kubernetes-to-workspace.md
Title: Attach a Kubernetes cluster to AzureML workspace
+ Title: Attach a Kubernetes cluster to Azure Machine Learning workspace
description: Learn about how to attach a Kubernetes cluster
-# Attach a Kubernetes cluster to AzureML workspace
+# Attach a Kubernetes cluster to Azure Machine Learning workspace
-Once AzureML extension is deployed on AKS or Arc Kubernetes cluster, you can attach the Kubernetes cluster to AzureML workspace and create compute targets for ML professionals to use.
+Once the Azure Machine Learning extension is deployed on an AKS or Arc Kubernetes cluster, you can attach the Kubernetes cluster to an Azure Machine Learning workspace and create compute targets for ML professionals to use.
## Prerequisites
-Attaching a Kubernetes cluster to AzureML workspace can flexibly support many different scenarios, such as the shared scenarios with multiple attachments, model training scripts accessing Azure resources, and the authentication configuration of the workspace. But you need to pay attention to the following prerequisites.
+Attaching a Kubernetes cluster to Azure Machine Learning workspace can flexibly support many different scenarios, such as the shared scenarios with multiple attachments, model training scripts accessing Azure resources, and the authentication configuration of the workspace. But you need to pay attention to the following prerequisites.
#### Multi-attach and workload isolation
If you plan to have different compute targets for different projects/teams, you
> [!IMPORTANT]
>
-> The namespace you plan to specify when attaching the cluster to AzureML workspace should be previously created in your cluster.
+> The namespace you plan to specify when attaching the cluster to Azure Machine Learning workspace should be previously created in your cluster.
#### Securely access Azure resource from training script
If you plan to have different compute targets for different projects/teams, you
#### Attach to workspace with user-assigned managed identity
-Azure Machine Learning workspace defaults to having a system-assigned managed identity to access Azure ML resources. The steps are completed if the system assigned default setting is on.
+Azure Machine Learning workspace defaults to having a system-assigned managed identity to access Azure Machine Learning resources. The steps are completed if the system assigned default setting is on.
Otherwise, if a [user-assigned managed identity is specified in Azure Machine Learning workspace creation](../machine-learning/how-to-identity-based-service-authentication.md#user-assigned-managed-identity), the following role assignments need to be granted to the managed identity manually before attaching the compute.
Otherwise, if a [user-assigned managed identity is specified in Azure Machine Le
> * If the "Kubernetes Extension Contributor" role permission is not available, the cluster attachment fails with "extension not installed" error.
> * If the "Azure Kubernetes Service Cluster Admin" role permission is not available, the cluster attachment fails with "internal server" error.
-## How to attach a Kubernetes cluster to AzureML workspace
+## How to attach a Kubernetes cluster to Azure Machine Learning workspace
-We support two ways to attach a Kubernetes cluster to AzureML workspace, using Azure CLI or studio UI.
+We support two ways to attach a Kubernetes cluster to an Azure Machine Learning workspace: using the Azure CLI or the studio UI.
### [Azure CLI](#tab/cli)
The following commands show how to attach an AKS and Azure Arc-enabled Kubernete
**AKS cluster**

```azurecli
-az ml compute attach --resource-group <resource-group-name> --workspace-name <workspace-name> --type Kubernetes --name k8s-compute --resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ContainerService/managedclusters/<cluster-name>" --identity-type SystemAssigned --namespace <Kubernetes namespace to run AzureML workloads> --no-wait
+az ml compute attach --resource-group <resource-group-name> --workspace-name <workspace-name> --type Kubernetes --name k8s-compute --resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ContainerService/managedclusters/<cluster-name>" --identity-type SystemAssigned --namespace <Kubernetes namespace to run Azure Machine Learning workloads> --no-wait
```

**Arc Kubernetes cluster**
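For an Arc Kubernetes cluster, the attach command is analogous; a sketch with hypothetical placeholder names (the key difference is the `Microsoft.Kubernetes/connectedClusters` resource type in the resource ID, which is the Arc-enabled cluster type):

```azurecli
az ml compute attach --resource-group <resource-group-name> --workspace-name <workspace-name> --type Kubernetes --name k8s-compute --resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Kubernetes/connectedClusters/<cluster-name>" --identity-type SystemAssigned --no-wait
```

Replace the placeholders with your own resource names before running the command.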
Set the `--type` argument to `Kubernetes`. Use the `identity_type` argument to e
>
> `--user-assigned-identities` is only required for `UserAssigned` managed identities. Although you can provide a list of comma-separated user managed identities, only the first one is used when you attach your cluster.
>
-> Compute attach won't create the Kubernetes namespace automatically or validate whether the kubernetes namespace existed. You need to verify that the specified namespace exists in your cluster, otherwise, any AzureML workloads submitted to this compute will fail.
+> Compute attach won't create the Kubernetes namespace automatically or validate whether the Kubernetes namespace exists. You need to verify that the specified namespace exists in your cluster; otherwise, any Azure Machine Learning workloads submitted to this compute will fail.
### [Studio](#tab/studio)
Attaching a Kubernetes cluster makes it available to your workspace for training
1. Enter a compute name and select your Kubernetes cluster from the dropdown.
- * **(Optional)** Enter Kubernetes namespace, which defaults to `default`. All machine learning workloads will be sent to the specified Kubernetes namespace in the cluster. Compute attach won't create the Kubernetes namespace automatically or validate whether the kubernetes namespace exists. You need to verify that the specified namespace exists in your cluster, otherwise, any AzureML workloads submitted to this compute will fail.
+ * **(Optional)** Enter Kubernetes namespace, which defaults to `default`. All machine learning workloads will be sent to the specified Kubernetes namespace in the cluster. Compute attach won't create the Kubernetes namespace automatically or validate whether the kubernetes namespace exists. You need to verify that the specified namespace exists in your cluster, otherwise, any Azure Machine Learning workloads submitted to this compute will fail.
* **(Optional)** Assign system-assigned or user-assigned managed identity. Managed identities eliminate the need for developers to manage credentials. For more information, see the [Assign managed identity](#assign-managed-identity-to-the-compute-target) section of this article.
You can use a managed identity to access Azure Blob:
## Next steps

- [Create and manage instance types](./how-to-manage-kubernetes-instance-types.md)
-- [AzureML inference router and connectivity requirements](./how-to-kubernetes-inference-routing-azureml-fe.md)
+- [Azure Machine Learning inference router and connectivity requirements](./how-to-kubernetes-inference-routing-azureml-fe.md)
- [Secure AKS inferencing environment](./how-to-secure-kubernetes-inferencing-environment.md)
machine-learning How To Authenticate Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-authenticate-batch-endpoint.md
You can either use one of the [built-in security roles](../role-based-access-con
The following examples show different ways to start batch deployment jobs using different types of credentials:

> [!IMPORTANT]
-> When working on a private link-enabled workspaces, batch endpoints can't be invoked from the UI in Azure ML studio. Please use the Azure ML CLI v2 instead for job creation.
+> When working on private link-enabled workspaces, batch endpoints can't be invoked from the UI in Azure Machine Learning studio. Please use the Azure Machine Learning CLI v2 instead for job creation.
### Running jobs using user's credentials

In this case, we want to execute a batch endpoint using the identity of the user currently logged in. Follow these steps:

> [!NOTE]
-> When working on Azure ML studio, batch endpoints/deployments are always executed using the identity of the current user logged in.
+> When working on Azure Machine Learning studio, batch endpoints/deployments are always executed using the identity of the current user logged in.
# [Azure CLI](#tab/cli)
In this case, we want to execute a batch endpoint using the identity of the user
# [Python](#tab/sdk)
-1. Use the Azure ML SDK for Python to log in using either interactive or device authentication:
+1. Use the Azure Machine Learning SDK for Python to log in using either interactive or device authentication:
```python from azure.ai.ml import MLClient
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-forecast.md
transformations:
encoding: ascii
```
-You can now define an input data object, which is required to start a training job, using the AzureML Python SDK as follows:
+You can now define an input data object, which is required to start a training job, using the Azure Machine Learning Python SDK as follows:
```python from azure.ai.ml.constants import AssetTypes
You can specify [validation data](concept-automated-ml.md#training-validation-an
Learn more about how AutoML applies cross validation to [prevent over fitting](concept-manage-ml-pitfalls.md#prevent-overfitting).

## Compute to run experiment
-AutoML uses AzureML Compute, which is a fully managed compute resource, to run the training job. In the following example, a compute cluster named `cpu-compute` is created:
+AutoML uses Azure Machine Learning Compute, which is a fully managed compute resource, to run the training job. In the following example, a compute cluster named `cpu-compute` is created:
[!notebook-python[] (~/azureml-examples-main/sdk/python/jobs/configuration.ipynb?name=create-cpu-compute)]
Repeat the necessary steps to load this future data to a data frame and then run
[!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]

> [!IMPORTANT]
-> Many models and hierarchical time series are currently only supported in AzureML v1. Support for AzureML v2 is forthcoming.
+> Many models and hierarchical time series are currently only supported in Azure Machine Learning v1. Support for Azure Machine Learning v2 is forthcoming.
There are scenarios where a single machine learning model is insufficient and multiple machine learning models are needed. For instance, predicting sales for each individual store for a brand, or tailoring an experience to individual users. Building a model for each instance can lead to improved results on many machine learning problems.
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md
Automated ML supports model training for computer vision tasks like image classi
To install the SDK you can either:

* Create a compute instance, which automatically installs the SDK and is pre-configured for ML workflows. For more information, see [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md).
- * Use the following commands to install Azure ML Python SDK v2:
+ * Use the following commands to install Azure Machine Learning Python SDK v2:
* Uninstall previous preview version: ```python pip uninstall azure-ai-ml ```
- * Install the Azure ML Python SDK v2:
+ * Install the Azure Machine Learning Python SDK v2:
```python pip install azure-ai-ml ```
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-nlp-models.md
In this article, you learn how to train natural language processing (NLP) models
Automated ML supports NLP which allows ML professionals and data scientists to bring their own text data and build custom models for tasks such as, multi-class text classification, multi-label text classification, and named entity recognition (NER).
-You can seamlessly integrate with the [Azure Machine Learning data labeling](how-to-create-text-labeling-projects.md) capability to label your text data or bring your existing labeled data. Automated ML provides the option to use distributed training on multi-GPU compute clusters for faster model training. The resulting model can be operationalized at scale by leveraging Azure MLΓÇÖs MLOps capabilities.
+You can seamlessly integrate with the [Azure Machine Learning data labeling](how-to-create-text-labeling-projects.md) capability to label your text data or bring your existing labeled data. Automated ML provides the option to use distributed training on multi-GPU compute clusters for faster model training. The resulting model can be operationalized at scale by leveraging Azure Machine Learning's MLOps capabilities.
## Prerequisites
text_classification_job.set_featurization(dataset_language='eng')
## Distributed training
-You can also run your NLP experiments with distributed training on an Azure ML compute cluster.
+You can also run your NLP experiments with distributed training on an Azure Machine Learning compute cluster.
# [Azure CLI](#tab/cli)
machine-learning How To Automl Forecasting Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-automl-forecasting-faq.md
There are four basic configurations supported by AutoML forecasting:
|Configuration|Scenario|Pros|Cons|
|--|--|--|--|
-|**Default AutoML**|Recommended if the dataset has a small number of time series that have roughly similar historic behavior.|- Simple to configure from code/SDK or AzureML Studio <br><br> - AutoML has the chance to cross-learn across different time series since the regression models pool all series together in training. See the [model grouping](./concept-automl-forecasting-methods.md#model-grouping) section for more information.|- Regression models may be less accurate if the time series in the training data have divergent behavior <br> <br> - Time series models may take a long time to train if there are a large number of series in the training data. See the ["why is AutoML slow on my data"](#why-is-automl-slow-on-my-data) answer for more information.|
-|**AutoML with deep learning**|Recommended for datasets with more than 1000 observations and, potentially, numerous time series exhibiting complex patterns. When enabled, AutoML will sweep over temporal convolutional neural network (TCN) models during training. See the [enable deep learning](./how-to-auto-train-forecast.md#enable-deep-learning) section for more information.|- Simple to configure from code/SDK or AzureML Studio <br> <br> - Cross-learning opportunities since the TCN pools data over all series <br> <br> - Potentially higher accuracy due to the large capacity of DNN models. See the [forecasting models in AutoML](./concept-automl-forecasting-methods.md#forecasting-models-in-automl) section for more information.|- Training can take much longer due to the complexity of DNN models <br> <br> - Series with small amounts of history are unlikely to benefit from these models.|
-|**Many Models**|Recommended if you need to train and manage a large number of forecasting models in a scalable way. See the [forecasting at scale](./how-to-auto-train-forecast.md#forecasting-at-scale) section for more information.|- Scalable <br> <br> - Potentially higher accuracy when time series have divergent behavior from one another.|- No cross-learning across time series <br> <br> - You can't configure or launch Many Models jobs from AzureML Studio, only the code/SDK experience is currently available.|
+|**Default AutoML**|Recommended if the dataset has a small number of time series that have roughly similar historic behavior.|- Simple to configure from code/SDK or Azure Machine Learning Studio <br><br> - AutoML has the chance to cross-learn across different time series since the regression models pool all series together in training. See the [model grouping](./concept-automl-forecasting-methods.md#model-grouping) section for more information.|- Regression models may be less accurate if the time series in the training data have divergent behavior <br> <br> - Time series models may take a long time to train if there are a large number of series in the training data. See the ["why is AutoML slow on my data"](#why-is-automl-slow-on-my-data) answer for more information.|
+|**AutoML with deep learning**|Recommended for datasets with more than 1000 observations and, potentially, numerous time series exhibiting complex patterns. When enabled, AutoML will sweep over temporal convolutional neural network (TCN) models during training. See the [enable deep learning](./how-to-auto-train-forecast.md#enable-deep-learning) section for more information.|- Simple to configure from code/SDK or Azure Machine Learning Studio <br> <br> - Cross-learning opportunities since the TCN pools data over all series <br> <br> - Potentially higher accuracy due to the large capacity of DNN models. See the [forecasting models in AutoML](./concept-automl-forecasting-methods.md#forecasting-models-in-automl) section for more information.|- Training can take much longer due to the complexity of DNN models <br> <br> - Series with small amounts of history are unlikely to benefit from these models.|
+|**Many Models**|Recommended if you need to train and manage a large number of forecasting models in a scalable way. See the [forecasting at scale](./how-to-auto-train-forecast.md#forecasting-at-scale) section for more information.|- Scalable <br> <br> - Potentially higher accuracy when time series have divergent behavior from one another.|- No cross-learning across time series <br> <br> - You can't configure or launch Many Models jobs from Azure Machine Learning Studio, only the code/SDK experience is currently available.|
|**Hierarchical Time Series**|HTS is recommended if the series in your data have nested, hierarchical structure and you need to train or make forecasts at aggregated levels of the hierarchy. See the [hierarchical time series forecasting](how-to-auto-train-forecast.md#hierarchical-time-series-forecasting) section for more information.|- Training at aggregated levels can reduce noise in the leaf node time series and potentially lead to higher accuracy models. <br> <br> - Forecasts can be retrieved for any level of the hierarchy by aggregating or dis-aggregating forecasts from the training level.|- You need to provide the aggregation level for training. AutoML doesn't currently have an algorithm to find an optimal level.|

> [!NOTE]
If your AutoML forecasting job fails, you'll see an error message in the studio
## What is a workspace / environment / experiment / compute instance / compute target?
-If you aren't familiar with Azure Machine Learning concepts, start with the ["What is AzureML"](overview-what-is-azure-machine-learning.md) article and the [workspaces](./concept-workspace.md) article.
+If you aren't familiar with Azure Machine Learning concepts, start with the ["What is Azure Machine Learning"](overview-what-is-azure-machine-learning.md) article and the [workspaces](./concept-workspace.md) article.
## Next steps

* Learn more about [how to set up AutoML to train a time-series forecasting model](./how-to-auto-train-forecast.md).
machine-learning How To Autoscale Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-autoscale-endpoints.md
Azure Monitor autoscaling supports a rich set of rules. You can configure metric-based scaling, schedule-based scaling, or a combination of the two.
:::image type="content" source="media/how-to-autoscale-endpoints/concept-autoscale.png" alt-text="Diagram for autoscale adding/removing instance as needed":::
-Today, you can manage autoscaling using either the Azure CLI, REST, ARM, or the browser-based Azure portal. Other Azure ML SDKs, such as the Python SDK, will add support over time.
+Today, you can manage autoscaling using either the Azure CLI, REST, ARM, or the browser-based Azure portal. Other Azure Machine Learning SDKs, such as the Python SDK, will add support over time.
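As an illustration of what such a rule expresses, the sketch below builds an ARM-style autoscale profile as a plain dictionary. The metric name, thresholds, and capacity bounds are assumptions for the example, not a prescribed configuration:

```python
# Illustrative sketch of an Azure Monitor autoscale profile for a deployment,
# expressed as a plain ARM-style dictionary. Metric name, thresholds, and
# capacity bounds are example assumptions, not recommended values.
def make_autoscale_profile(min_instances, max_instances, default_instances):
    return {
        "name": "default-profile",
        "capacity": {
            "minimum": str(min_instances),
            "maximum": str(max_instances),
            "default": str(default_instances),
        },
        "rules": [
            {
                # Scale out by one instance when average CPU exceeds 70%
                # over a 5-minute window.
                "metricTrigger": {
                    "metricName": "CpuUtilizationPercentage",
                    "operator": "GreaterThan",
                    "threshold": 70,
                    "timeAggregation": "Average",
                    "timeWindow": "PT5M",
                },
                "scaleAction": {
                    "direction": "Increase",
                    "type": "ChangeCount",
                    "value": "1",
                    "cooldown": "PT5M",
                },
            }
        ],
    }

profile = make_autoscale_profile(2, 5, 2)
```

A profile of this shape is roughly what the Azure CLI, REST, or portal submit to Azure Monitor on your behalf when you configure autoscaling.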
## Prerequisites
endpoint_name = "<YOUR-ENDPOINT-NAME>"
deployment_name = "blue"
```
-Get Azure ML and Azure Monitor clients:
+Get Azure Machine Learning and Azure Monitor clients:
```python
credential = DefaultAzureCredential()
machine-learning How To Batch Scoring Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-batch-scoring-script.md
[!INCLUDE [cli v2](../../includes/machine-learning-dev-v2.md)]
-Batch endpoints allow you to deploy models to perform inference at scale. Because how inference should be executed varies from model's format, model's type and use case, batch endpoints require a scoring script (also known as batch driver script) to indicate the deployment how to use the model over the provided data. In this article you will learn how to use scoring scripts in different scenarios and their best practices.
+Batch endpoints allow you to deploy models to perform long-running inference at scale. To indicate how batch endpoints should use your model over the input data to create predictions, you need to create and specify a scoring script (also known as a batch driver script). In this article, you will learn how to use scoring scripts in different scenarios, and about their best practices.
> [!TIP]
-> MLflow models don't require a scoring script as it is autogenerated for you. For more details about how batch endpoints work with MLflow models, see the dedicated tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md). Notice that this feature doesn't prevent you from writing an specific scoring script for MLflow models as explained at [Using MLflow models with a scoring script](how-to-mlflow-batch.md#customizing-mlflow-models-deployments-with-a-scoring-script).
+> MLflow models don't require a scoring script because one is autogenerated for you. For more details about how batch endpoints work with MLflow models, see the dedicated tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md). If you want to change the default inference routine, write a scoring script for your MLflow models as explained at [Using MLflow models with a scoring script](how-to-mlflow-batch.md#customizing-mlflow-models-deployments-with-a-scoring-script).
> [!WARNING]
> If you are deploying an Automated ML model under a batch endpoint, notice that the scoring script that Automated ML provides only works for Online Endpoints and it is not designed for batch execution. Please follow this guideline to learn how to create one depending on what your model does.
The scoring script must contain two methods:
#### The `init` method
-Use the `init()` method for any costly or common preparation. For example, use it to load the model into a global object. This function will be called once at the beginning of the process. You model's files will be available in an environment variable called `AZUREML_MODEL_DIR`. Use this variable to locate the files associated with the model. Notice that some models may be contained in a folder (in the following example, the model has several files in a folder named `model`). See [how you can find out what's the folder used by your model](#using-models-that-are-folders).
+Use the `init()` method for any costly or common preparation. For example, use it to load the model into memory. This function is called once at the beginning of the entire batch job. Your model's files are available in a path determined by the environment variable `AZUREML_MODEL_DIR`. Notice that depending on how your model was registered, its files may be contained in a folder (in the following example, the model has several files in a folder named `model`). See [how you can find out what's the folder used by your model](#using-models-that-are-folders).
```python
import os

def init():
    global model

    # AZUREML_MODEL_DIR points to the folder that contains the model's files;
    # in this example, the files live in a subfolder named "model".
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")

    # Load the model with your framework's loader (illustrative placeholder):
    model = load_model(model_path)
```

Notice that in this example we are placing the model in a global variable `model`.
#### The `run` method
-Use the `run(mini_batch: List[str]) -> Union[List[Any], pandas.DataFrame]` method to perform the scoring of each mini-batch generated by the batch deployment. Such method will be called once per each `mini_batch` generated for your input data. Batch deployments read data in batches accordingly to how the deployment is configured.
+Use the `run(mini_batch: List[str]) -> Union[List[Any], pandas.DataFrame]` method to perform the scoring of each mini-batch generated by the batch deployment. This method is called once for each `mini_batch` generated from your input data. Batch deployments read data in batches according to how the deployment is configured.
```python
-def run(mini_batch: List[str]) -> Union[List[Any], pandas.DataFrame]:
+import pandas as pd
+from typing import List, Any, Union
+
+def run(mini_batch: List[str]) -> Union[List[Any], pd.DataFrame]:
    results = []
    for file in mini_batch:
        # (read each file and append its predictions to results)
        ...
    return pd.DataFrame(results)
```
-The method receives a list of file paths as a parameter (`mini_batch`). You can use this list to either iterate over each file and process it one by one, or to read the entire batch and process it at once. The best option will depend on your compute memory and the throughput you need to achieve. For an example of how to read entire batches of data at once see [High throughput deployments](how-to-image-processing-batch.md#high-throughput-deployments).
+The method receives a list of file paths as a parameter (`mini_batch`). You can use this list to either iterate over each file and process it one by one, or to read the entire batch and process it at once. The best option depends on your compute memory and the throughput you need to achieve. For an example of how to read entire batches of data at once see [High throughput deployments](how-to-image-processing-batch.md#high-throughput-deployments).
> [!NOTE]
> __How is work distributed?__
>
> Batch deployments distribute work at the file level, which means that a folder containing 100 files with mini-batches of 10 files will generate 10 batches of 10 files each. Notice that this happens regardless of the size of the files involved. If your files are too big to be processed in large mini-batches, we suggest either splitting the files into smaller ones to achieve a higher level of parallelism, or decreasing the number of files per mini-batch. At this moment, batch deployment can't account for skews in the file size distribution.
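The file-count partitioning described in the note can be sketched with a small helper. This is hypothetical — the platform performs this split for you — and only illustrates the arithmetic:

```python
def split_into_mini_batches(files, mini_batch_size):
    """Group input files into mini-batches the way batch deployments do:
    purely by file count, regardless of each file's size."""
    return [
        files[i : i + mini_batch_size]
        for i in range(0, len(files), mini_batch_size)
    ]

# 100 files with a mini-batch size of 10 yield 10 mini-batches of 10 files.
files = [f"file_{i}.csv" for i in range(100)]
batches = split_into_mini_batches(files, 10)
```

Because the split ignores file sizes, a mini-batch of ten small files and a mini-batch of ten large files count the same — which is why skewed file sizes can leave some workers far busier than others.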
-The `run()` method should return a Pandas `DataFrame` or an array/list. Each returned output element indicates one successful run of an input element in the input `mini_batch`. For file datasets, each row/element will represent a single file processed. For a tabular dataset, each row/element will represent a row in a processed file.
+The `run()` method should return a Pandas `DataFrame` or an array/list. Each returned output element indicates one successful run of an input element in the input `mini_batch`. For file datasets, each row/element represents a single file processed. For a tabular dataset, each row/element represents a row in a processed file.
> [!IMPORTANT]
> __How to write predictions?__
> [!WARNING]
> Do not output complex data types (or lists of complex data types) in the `run` function. Those outputs will be transformed to string and become hard to read.
-The resulting DataFrame or array is appended to the output file indicated. There's no requirement on the cardinality of the results (1 file can generate 1 or many rows/elements in the output). All elements in the result DataFrame or array will be written to the output file as-is (considering the `output_action` isn't `summary_only`).
+The resulting DataFrame or array is appended to the output file indicated. There's no requirement on the cardinality of the results (1 file can generate 1 or many rows/elements in the output). All elements in the result DataFrame or array are written to the output file as-is (as long as the `output_action` isn't `summary_only`).
#### Python packages for scoring
-Any library that your scoring script requires to run needs to be indicated in the environment where your batch deployment runs. As for scoring scripts, environments are indicated per deployment. Usually, you will indicate your requirements using a `conda.yml` dependencies file which may look as follows:
+Any library that your scoring script requires to run needs to be indicated in the environment where your batch deployment runs. As with scoring scripts, environments are indicated per deployment. Usually, you indicate your requirements using a `conda.yml` dependencies file, which may look as follows:
__mnist/environment/conda.yml__
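A minimal `conda.yml` for such a deployment might look like the following sketch (the package list is illustrative, assuming a Torch-based model; pin the versions your model actually needs):

```yaml
name: mnist-env
channels:
  - conda-forge
dependencies:
  - python=3.8
  - pip
  - pip:
      - torch
      - torchvision
      - pandas
```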
Refer to [Create a batch deployment](how-to-use-batch-endpoint.md#create-a-batch
## Writing predictions in a different way
-By default, the batch deployment will write the model's predictions in a single file as indicated in the deployment. However, there are some cases where you need to write the predictions in multiple files. For instance, if the input data is partitioned, you typically would want to generate your output partitioned too. On those cases you can [Customize outputs in batch deployments](how-to-deploy-model-custom-output.md) to indicate:
+By default, the batch deployment writes the model's predictions in a single file as indicated in the deployment. However, in some cases you need to write the predictions in multiple files. For instance, if the input data is partitioned, you typically want your output partitioned too. In those cases, you can [Customize outputs in batch deployments](how-to-deploy-model-custom-output.md) to indicate:
> [!div class="checklist"]
> * The file format used (CSV, parquet, json, etc.).
When writing scoring scripts that work with big amounts of data, you need to take into account several factors, including:

* The memory footprint of the model when running over the input data.
* The available memory in your compute.
-Batch deployments distribute work at the file level, which means that a folder containing 100 files with mini-batches of 10 files will generate 10 batches of 10 files each. Notice that this will happen regardless of the size of the files involved. If your files are too big to be processed in large mini-batches we suggest to either split the files in smaller files to achieve a higher level of parallelism or to decrease the number of files per mini-batch. At this moment, batch deployment can't account for skews in the file's size distribution.
+Batch deployments distribute work at the file level, which means that a folder containing 100 files with mini-batches of 10 files will generate 10 batches of 10 files each (regardless of the size of the files involved). If your files are too big to be processed in large mini-batches, we suggest either splitting the files into smaller ones to achieve a higher level of parallelism, or decreasing the number of files per mini-batch. At this moment, batch deployment can't account for skews in the file size distribution.
### Relationship between the degree of parallelism and the scoring script

Your deployment configuration controls the size of each mini-batch and the number of workers on each node. Take them into account when deciding whether to read the entire mini-batch to perform inference, to run inference file by file, or to run inference row by row (for tabular data). See [Running inference at the mini-batch, file or the row level](#running-inference-at-the-mini-batch-file-or-the-row-level) for the different approaches.
-When running multiple workers on the same instance, take into account that memory will be shared across all the workers. Usually, increasing the number of workers per node should be accompanied by a decrease in the mini-batch size or by a change in the scoring strategy (if data size and compute SKU remains the same).
+When running multiple workers on the same instance, take into account that memory is shared across all the workers. Usually, increasing the number of workers per node should be accompanied by a decrease in the mini-batch size or by a change in the scoring strategy (if data size and compute SKU remain the same).
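As a rough illustration of this trade-off, the hypothetical helper below bounds the worker count by the memory each worker's scoring peaks at. All figures are assumptions you must measure for your own model and compute SKU:

```python
def max_workers_for_memory(instance_memory_gb, per_worker_peak_gb, reserve_gb=2.0):
    """Rough sizing heuristic: workers on a node share its memory, so the
    worker count is bounded by usable memory divided by each worker's peak.
    The 2 GB reserve for the OS and agent processes is an assumption."""
    usable = instance_memory_gb - reserve_gb
    return max(1, int(usable // per_worker_peak_gb))

# A 16 GB instance where each worker peaks at 3 GB supports about 4 workers.
workers = max_workers_for_memory(16, 3)
```

If the heuristic yields fewer workers than you want, the alternatives the text describes apply: shrink the mini-batch size or switch to a scoring strategy with a smaller footprint (for example, file-by-file instead of whole-batch).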
### Running inference at the mini-batch, file or the row level
You will typically want to run inference over the batch all at once when you want to achieve high throughput.
> [!WARNING]
> Running inference at the batch level may require having high control over the input data size to be able to correctly account for the memory requirements and avoid out-of-memory exceptions. Whether or not you are able to load the entire mini-batch in memory depends on the size of the mini-batch, the size of the instances in the cluster, and the number of workers on each node.
-For an example about how to achieve it see [High throughput deployments](how-to-image-processing-batch.md#high-throughput-deployments). This example processes an entire batch of files at a time.
+For an example about how to achieve it, see [High throughput deployments](how-to-image-processing-batch.md#high-throughput-deployments). This example processes an entire batch of files at a time.
#### File level
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-train.md
-# Set up AutoML training with the Azure ML Python SDK v2
+# Set up AutoML training with the Azure Machine Learning Python SDK v2
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]

> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning Python you are using:"]
For this article you need:
* The Azure Machine Learning Python SDK v2 installed. To install the SDK you can either,
- * Create a compute instance, which already has installed the latest AzureML Python SDK and is pre-configured for ML workflows. See [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md) for more information.
+ * Create a compute instance, which already has the latest Azure Machine Learning Python SDK installed and is pre-configured for ML workflows. See [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md) for more information.
- * Use the followings commands to install Azure ML Python SDK v2:
+ * Use the following commands to install the Azure Machine Learning Python SDK v2:
  * Uninstall the previous preview version:
    ```Python
    pip uninstall azure-ai-ml
    ```
- * Install the Azure ML Python SDK v2:
+ * Install the Azure Machine Learning Python SDK v2:
    ```Python
    pip install azure-ai-ml
    ```
try:
    ml_client = MLClient.from_config(credential)
except Exception as ex:
    print(ex)
- # Enter details of your AzureML workspace
+ # Enter details of your Azure Machine Learning workspace
    subscription_id = "<SUBSCRIPTION_ID>"
    resource_group = "<RESOURCE_GROUP>"
    workspace = "<AZUREML_WORKSPACE_NAME>"
If you don't explicitly specify a `validation_data` or `n_cross_validation` para
## Compute to run experiment
-Automated ML jobs with the Python SDK v2 (or CLI v2) are currently only supported on Azure ML remote compute (cluster or compute instance).
+Automated ML jobs with the Python SDK v2 (or CLI v2) are currently only supported on Azure Machine Learning remote compute (cluster or compute instance).
[Learn more about creating compute with the Python SDK v2 (or CLI v2)](./how-to-train-model.md).
After you test a model and confirm you want to use it in production, you can reg
## AutoML in pipelines
-To leverage AutoML in your MLOps workflows, you can add AutoML Job steps to your [AzureML Pipelines](./how-to-create-component-pipeline-python.md). This allows you to automate your entire workflow by hooking up your data prep scripts to AutoML and then registering and validating the resulting best model.
+To leverage AutoML in your MLOps workflows, you can add AutoML Job steps to your [Azure Machine Learning Pipelines](./how-to-create-component-pipeline-python.md). This allows you to automate your entire workflow by hooking up your data prep scripts to AutoML and then registering and validating the resulting best model.
Below is a [sample pipeline](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/pipelines/1h_automl_in_pipeline/automl-classification-bankmarketing-in-pipeline) with an AutoML classification component and a command component that shows the resulting AutoML output. Note how the inputs (training & validation data) and the outputs (best model) are referenced in different steps.
machine-learning How To Configure Databricks Automl Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-databricks-automl-environment.md
Title: Develop with AutoML & Azure Databricks
-description: Learn to set up a development environment in Azure Machine Learning and Azure Databricks. Use the Azure ML SDKs for Databricks and Databricks with AutoML.
+description: Learn to set up a development environment in Azure Machine Learning and Azure Databricks. Use the Azure Machine Learning SDKs for Databricks and Databricks with AutoML.
Azure Databricks integrates with Azure Machine Learning and its AutoML capabilities.
You can use Azure Databricks:
+ To train a model using Spark MLlib and deploy the model to ACI/AKS.
-+ With [automated machine learning](concept-automated-ml.md) capabilities using an Azure ML SDK.
++ With [automated machine learning](concept-automated-ml.md) capabilities using an Azure Machine Learning SDK.
+ As a compute target from an [Azure Machine Learning pipeline](concept-ml-pipelines.md).
## Set up a Databricks cluster
Use these settings:
Wait until the cluster is running before proceeding further.
-## Add the Azure ML SDK to Databricks
+## Add the Azure Machine Learning SDK to Databricks
Once the cluster is running, [create a library](https://docs.databricks.com/user-guide/libraries.html#create-a-library) to attach the appropriate Azure Machine Learning SDK package to your cluster.
-To use automated ML, skip to [Add the Azure ML SDK with AutoML](#add-the-azure-ml-sdk-with-automl-to-databricks).
+To use automated ML, skip to [Add the Azure Machine Learning SDK with AutoML](#add-the-azure-machine-learning-sdk-with-automl-to-databricks).
1. Right-click the current Workspace folder where you want to store the library. Select **Create** > **Library**.
![Azure Machine Learning SDK for Databricks](./media/how-to-configure-environment/amlsdk-withoutautoml.jpg)
-## Add the Azure ML SDK with AutoML to Databricks
-If the cluster was created with Databricks Runtime 7.3 LTS (*not* ML), run the following command in the first cell of your notebook to install the AzureML SDK.
+## Add the Azure Machine Learning SDK with AutoML to Databricks
+If the cluster was created with Databricks Runtime 7.3 LTS (*not* ML), run the following command in the first cell of your notebook to install the Azure Machine Learning SDK.
```
%pip install --upgrade --force-reinstall -r https://aka.ms/automl_linux_requirements.txt
```
machine-learning How To Configure Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-environment.md
Create a workspace configuration file in one of the following methods:
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential
- #Enter details of your AzureML workspace
+ #Enter details of your Azure Machine Learning workspace
subscription_id = '<SUBSCRIPTION_ID>'
resource_group = '<RESOURCE_GROUP>'
workspace = '<AZUREML_WORKSPACE_NAME>'
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-cluster.md
Compute clusters can run jobs securely in a [virtual network environment](how-to
* Azure Machine Learning Compute has default limits, such as the number of cores that can be allocated. For more information, see [Manage and request quotas for Azure resources](how-to-manage-quotas.md).
-* Azure allows you to place _locks_ on resources, so that they can't be deleted or are read only. __Do not apply resource locks to the resource group that contains your workspace__. Applying a lock to the resource group that contains your workspace will prevent scaling operations for Azure ML compute clusters. For more information on locking resources, see [Lock resources to prevent unexpected changes](../azure-resource-manager/management/lock-resources.md).
+* Azure allows you to place _locks_ on resources, so that they can't be deleted or are read only. __Do not apply resource locks to the resource group that contains your workspace__. Applying a lock to the resource group that contains your workspace will prevent scaling operations for Azure Machine Learning compute clusters. For more information on locking resources, see [Lock resources to prevent unexpected changes](../azure-resource-manager/management/lock-resources.md).
## Create
machine-learning How To Create Component Pipeline Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipeline-python.md
If you don't have an Azure subscription, create a free account before you begin.
## Start an interactive Python session
-This article uses the Python SDK for Azure ML to create and control an Azure Machine Learning pipeline. The article assumes that you'll be running the code snippets interactively in either a Python REPL environment or a Jupyter notebook.
+This article uses the Python SDK for Azure Machine Learning to create and control an Azure Machine Learning pipeline. The article assumes that you'll be running the code snippets interactively in either a Python REPL environment or a Jupyter notebook.
-This article is based on the [image_classification_keras_minist_convnet.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb) notebook found in the `sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet` directory of the [AzureML Examples](https://github.com/azure/azureml-examples) repository.
+This article is based on the [image_classification_keras_minist_convnet.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb) notebook found in the `sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet` directory of the [Azure Machine Learning Examples](https://github.com/azure/azureml-examples) repository.
## Import required libraries
The next section will show how to create components in two different ways: the first two components are defined using a Python function and the third one using a YAML specification.
The first component in this pipeline will convert the compressed data files of `fashion_ds` into two csv files, one for training and the other for scoring. You'll use a Python function to define this component.
-If you're following along with the example in the [AzureML Examples repo](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet), the source files are already available in `prep/` folder. This folder contains two files to construct the component: `prep_component.py`, which defines the component and `conda.yaml`, which defines the run-time environment of the component.
+If you're following along with the example in the [Azure Machine Learning Examples repo](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet), the source files are already available in `prep/` folder. This folder contains two files to construct the component: `prep_component.py`, which defines the component and `conda.yaml`, which defines the run-time environment of the component.
#### Define component using Python function
In this section, you'll create a component for training the image classification model.
The difference is that since the training logic is more complicated, you can put the original training code in a separate Python file.
-The source files of this component are under `train/` folder in the [AzureML Examples repo](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet). This folder contains three files to construct the component:
+The source files of this component are under `train/` folder in the [Azure Machine Learning Examples repo](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet). This folder contains three files to construct the component:
- `train.py`: contains the actual logic to train the model.
- `train_component.py`: defines the interface of the component and imports the function in `train.py`.
The `train.py` file contains a normal Python function, which performs the training logic.
#### Define component using Python function
-After defining the training function successfully, you can use @command_component in Azure Machine Learning SDK v2 to wrap your function as a component, which can be used in AzureML pipelines.
+After defining the training function successfully, you can use @command_component in Azure Machine Learning SDK v2 to wrap your function as a component, which can be used in Azure Machine Learning pipelines.
:::code language="python" source="~/azureml-examples-main/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/train/train_component.py":::
Now, you've prepared all source files for the `Train Image Classification Keras`
In this section, other than the previous components, you'll create a component to score the trained model via Yaml specification and script.
-If you're following along with the example in the [AzureML Examples repo](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet), the source files are already available in `score/` folder. This folder contains three files to construct the component:
+If you're following along with the example in the [Azure Machine Learning Examples repo](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet), the source files are already available in `score/` folder. This folder contains three files to construct the component:
- `score.py`: contains the source code of the component.
- `score.yaml`: defines the interface and other details of the component.
You can open the `Link to Azure Machine Learning studio`, which is the job detail page of your pipeline job.
:::image type="content" source="./media/how-to-create-component-pipeline-python/pipeline-ui.png" alt-text="Screenshot of the pipeline job detail page." lightbox ="./media/how-to-create-component-pipeline-python/pipeline-ui.png":::
-You can check the logs and outputs of each component by right clicking the component, or select the component to open its detail pane. To learn more about how to debug your pipeline in UI, see [How to use studio UI to build and debug Azure ML pipelines](how-to-use-pipeline-ui.md).
+You can check the logs and outputs of each component by right clicking the component, or select the component to open its detail pane. To learn more about how to debug your pipeline in UI, see [How to use studio UI to build and debug Azure Machine Learning pipelines](how-to-use-pipeline-ui.md).
## (Optional) Register components to workspace
machine-learning How To Create Component Pipelines Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipelines-cli.md
ms.devlang: azurecli, cliv2
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-In this article, you learn how to create and run [machine learning pipelines](concept-ml-pipelines.md) by using the Azure CLI and components (for more, see [What is an Azure Machine Learning component?](concept-component.md)). You can create pipelines without using components, but components offer the greatest amount of flexibility and reuse. AzureML Pipelines may be defined in YAML and run from the CLI, authored in Python, or composed in AzureML Studio Designer with a drag-and-drop UI. This document focuses on the CLI.
+In this article, you learn how to create and run [machine learning pipelines](concept-ml-pipelines.md) by using the Azure CLI and components (for more, see [What is an Azure Machine Learning component?](concept-component.md)). You can create pipelines without using components, but components offer the greatest amount of flexibility and reuse. Azure Machine Learning Pipelines may be defined in YAML and run from the CLI, authored in Python, or composed in Azure Machine Learning Studio Designer with a drag-and-drop UI. This document focuses on the CLI.
## Prerequisites
In this article, you learn how to create and run [machine learning pipelines](co
## Create your first pipeline with component
-Let's create your first pipeline with component using an example. This section aims to give you an initial impression of what pipeline and component look like in AzureML with a concrete example.
+Let's create your first pipeline with components using an example. This section aims to give you an initial impression of what pipelines and components look like in Azure Machine Learning with a concrete example.
From the `cli/jobs/pipelines-with-components/basics` directory of the [`azureml-examples` repository](https://github.com/Azure/azureml-examples), navigate to the `3b_pipeline_with_data` subdirectory. There are three types of files in this directory. Those are the files you'll need to create when building your own pipeline.
From the `cli/jobs/pipelines-with-components/basics` directory of the [`azureml-
- **component.yml**: This YAML file defines the component. It packages the following information:
  - Metadata: name, display name, version, description, type etc. The metadata helps to describe and manage the component.
  - Interface: inputs and outputs. For example, a model training component will take training data and number of epochs as input, and generate a trained model file as output. Once the interface is defined, different teams can develop and test the component independently.
- - Command, code & environment: the command, code and environment to run the component. Command is the shell command to execute the component. Code usually refers to a source code directory. Environment could be an AzureML environment(curated or customer created), docker image or conda environment.
+ - Command, code & environment: the command, code and environment to run the component. Command is the shell command to execute the component. Code usually refers to a source code directory. Environment could be an Azure Machine Learning environment (curated or customer created), docker image or conda environment.
- **component_src**: This is the source code directory for a specific component. It contains the source code that will be executed in the component. You can use your preferred language(Python, R...). The code must be executed by a shell command. The source code can take a few inputs from shell command line to control how this step is going to be executed. For example, a training step may take training data, learning rate, number of epochs to control the training process. The argument of a shell command is used to pass inputs and outputs to the code.
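As a sketch of how those three pieces fit together (the names, paths, and environment reference here are illustrative, not taken from the example repo), a minimal component YAML might look like:

```yaml
# Illustrative component definition -- field values are placeholders.
$schema: https://azuremlschemas.azureedge.net/latest/commandComponent.schema.json
name: my_sample_component            # must be unique within the workspace
display_name: My sample component
version: 1
type: command
inputs:
  input_data:
    type: uri_folder
outputs:
  output_data:
    type: uri_folder
code: ./my_component_src             # source code directory to upload
environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
command: >-
  python hello.py
  --input_data ${{inputs.input_data}}
  --output_data ${{outputs.output_data}}
```

The `${{inputs.*}}`/`${{outputs.*}}` expressions are how the interface values are spliced into the shell command at runtime.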
In the *3b_pipeline_with_data* example, we've created a three-step pipeline.
### Read and write data in pipeline
-One common scenario is to read and write data in your pipeline. In AzureML, we use the same schema to [read and write data](how-to-read-write-data-v2.md) for all type of jobs (pipeline job, command job, and sweep job). Below are pipeline job examples of using data for common scenarios.
+One common scenario is to read and write data in your pipeline. In Azure Machine Learning, we use the same schema to [read and write data](how-to-read-write-data-v2.md) for all types of jobs (pipeline job, command job, and sweep job). Below are pipeline job examples of using data for common scenarios.
- [local data](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components/basics/4a_local_data_input)
- [web file with public URL](https://github.com/Azure/azureml-examples/blob/sdk-preview/cli/jobs/pipelines-with-components/basics/4c_web_url_input/pipeline.yml)
-- [AzureML datastore and path](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components/basics/4b_datastore_datapath_uri)
-- [AzureML data asset](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components/basics/4d_data_input)
+- [Azure Machine Learning datastore and path](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components/basics/4b_datastore_datapath_uri)
+- [Azure Machine Learning data asset](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components/basics/4d_data_input)
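As an illustrative sketch (the datastore and asset names below are hypothetical), a pipeline job's `inputs` section can reference a datastore path versus a registered data asset like so:

```yaml
# Illustrative pipeline job inputs -- names and paths are placeholders.
inputs:
  datastore_folder:
    type: uri_folder
    path: azureml://datastores/workspaceblobstore/paths/example-data/
  registered_asset:
    type: uri_file
    path: azureml:example-data-asset:1   # azureml:<name>:<version>
```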
## Understand the component definition YAML
The most common used schema of the component YAML is described in below table. S
|key|description|
|||
-|name|**Required**. Name of the component. Must be unique across the AzureML workspace. Must start with lowercase letter. Allow lowercase letters, numbers and underscore(_). Maximum length is 255 characters.|
+|name|**Required**. Name of the component. Must be unique across the Azure Machine Learning workspace. Must start with a lowercase letter. Allows lowercase letters, numbers and underscore (_). Maximum length is 255 characters.|
|display_name|Display name of the component in the studio UI. Can be non-unique within the workspace.|
|command|**Required**. The command to execute.|
|code|Local path to the source code directory to be uploaded and used for the component.|
The most common used schema of the component YAML is described in below table. S
|outputs|Dictionary of component outputs. The key is a name for the output within the context of the component and the value is the component output definition. Outputs can be referenced in the command using the ${{ outputs.<output_name> }} expression.|
|is_deterministic|Whether to reuse the previous job's result if the component inputs did not change. Default value is `true`, also known as reuse by default. The common scenario when set as `false` is to force reload data from a cloud storage or URL.|
-For the example in *3b_pipeline_with_data/componentA.yml*, componentA has one data input and one data output, which can be connected to other steps in the parent pipeline. All the files under `code` section in component YAML will be uploaded to AzureML when submitting the pipeline job. In this example, files under `./componentA_src` will be uploaded (line 16 in *componentA.yml*). You can see the uploaded source code in Studio UI: double select the ComponentA step and navigate to Snapshot tab, as shown in below screenshot. We can see it's a hello-world script just doing some simple printing, and write current datetime to the `componentA_output` path. The component takes input and output through command line argument, and it's handled in the *hello.py* using `argparse`.
+For the example in *3b_pipeline_with_data/componentA.yml*, componentA has one data input and one data output, which can be connected to other steps in the parent pipeline. All the files under the `code` section in the component YAML will be uploaded to Azure Machine Learning when submitting the pipeline job. In this example, files under `./componentA_src` will be uploaded (line 16 in *componentA.yml*). You can see the uploaded source code in the Studio UI: double-select the ComponentA step and navigate to the Snapshot tab, as shown in the below screenshot. We can see it's a hello-world script that just does some simple printing and writes the current datetime to the `componentA_output` path. The component takes input and output through command-line arguments, which are handled in *hello.py* using `argparse`.
:::image type="content" source="./media/how-to-create-component-pipelines-cli/component-snapshot.png" alt-text="Screenshot of pipeline with data example above showing componentA." lightbox="./media/how-to-create-component-pipelines-cli/component-snapshot.png":::
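The `argparse` plumbing described above can be sketched as follows; the argument names are illustrative, not necessarily those used in *hello.py*:

```python
import argparse
import tempfile
from datetime import datetime
from pathlib import Path

# Parse the data paths that the component's command is assumed to pass in.
# (Argument names here are placeholders, not taken from the example repo.)
parser = argparse.ArgumentParser("componentA")
parser.add_argument("--componentA_input", type=str, default=".")
parser.add_argument("--componentA_output", type=str, default=tempfile.mkdtemp())

# A real component would call parser.parse_args() on the actual command line;
# we pass [] so the sketch runs standalone using the defaults above.
args = parser.parse_args([])

print(f"input path: {args.componentA_input}")

# Write the current datetime under the output path, mirroring the
# hello-world behavior described above.
out_file = Path(args.componentA_output) / "date.txt"
out_file.write_text(datetime.now().isoformat())
print(f"wrote {out_file}")
```

When the pipeline runs, Azure Machine Learning substitutes the resolved input/output paths into the shell command, so the script only ever sees plain filesystem paths.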
If you want to add an input to a component, remember to edit three places: 1)`i
### Environment
-Environment defines the environment to execute the component. It could be an AzureML environment(curated or custom registered), docker image or conda environment. See examples below.
+Environment defines the environment to execute the component. It could be an Azure Machine Learning environment (curated or custom registered), docker image or conda environment. See examples below.
-- [AzureML registered environment asset](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components/basics/5b_env_registered). It's referenced in component following `azureml:<environment-name>:<environment-version>` syntax.
+- [Azure Machine Learning registered environment asset](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components/basics/5b_env_registered). It's referenced in component following `azureml:<environment-name>:<environment-version>` syntax.
- [public docker image](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components/basics/5a_env_public_docker_image)
- [conda file](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components/basics/5c_env_conda_file). The conda file needs to be used together with a base image.
machine-learning How To Create Component Pipelines Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipelines-ui.md
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-In this article, you'll learn how to create and run [machine learning pipelines](concept-ml-pipelines.md) by using the Azure Machine Learning studio and [Components](concept-component.md). You can create pipelines without using components, but components offer better amount of flexibility and reuse. Azure ML Pipelines may be defined in YAML and [run from the CLI](how-to-create-component-pipelines-cli.md), [authored in Python](how-to-create-component-pipeline-python.md), or composed in Azure ML Studio Designer with a drag-and-drop UI. This document focuses on the AzureML studio designer UI.
+In this article, you'll learn how to create and run [machine learning pipelines](concept-ml-pipelines.md) by using the Azure Machine Learning studio and [Components](concept-component.md). You can create pipelines without using components, but components offer a greater amount of flexibility and reuse. Azure Machine Learning Pipelines may be defined in YAML and [run from the CLI](how-to-create-component-pipelines-cli.md), [authored in Python](how-to-create-component-pipeline-python.md), or composed in Azure Machine Learning Studio Designer with a drag-and-drop UI. This document focuses on the Azure Machine Learning studio designer UI.
## Prerequisites
The example below uses the CLI. If you want to learn more about
1. From the `cli/jobs/pipelines-with-components/basics` directory of the [`azureml-examples` repository](https://github.com/Azure/azureml-examples), navigate to the `1b_e2e_registered_components` subdirectory.
-1. Register the components to AzureML workspace using following commands. Learn more about [ML components](concept-component.md).
+1. Register the components to the Azure Machine Learning workspace using the following commands. Learn more about [ML components](concept-component.md).
```CLI
az ml component create --file train.yml
```
machine-learning How To Create Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-data-assets.md
Last updated 01/23/2023
> * [v1](./v1/how-to-create-register-datasets.md)
> * [v2 (current version)](how-to-create-data-assets.md)
-In this article, you'll learn how to create a data asset in Azure ML. An Azure ML data asset is similar to web browser bookmarks (favorites). Instead of remembering long storage paths (URIs) that point to your most frequently used data, you can create a data asset, and then access that asset with a friendly name.
+In this article, you'll learn how to create a data asset in Azure Machine Learning. An Azure Machine Learning data asset is similar to web browser bookmarks (favorites). Instead of remembering long storage paths (URIs) that point to your most frequently used data, you can create a data asset, and then access that asset with a friendly name.
-Data asset creation also creates a *reference* to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and don't risk data source integrity. You can create Data assets from Azure ML datastores, Azure Storage, public URLs, and local files.
+Data asset creation also creates a *reference* to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and don't risk data source integrity. You can create data assets from Azure Machine Learning datastores, Azure Storage, public URLs, and local files.
## Prerequisites
You can create three data asset types:
||||||
|**File**<br>Reference a single file | `uri_file` | `FileDataset` | A type new to V2 APIs. In V1 APIs, files always mapped to a folder on the compute target filesystem; this mapping required an `os.path.join`. In V2 APIs, the single file is mapped. This way, you can refer to that location in your code. | Read/write a single file - the file can have any format. |
|**Folder**<br> Reference a single folder | `uri_folder` | `FileDataset` | In V1 APIs, `FileDataset` had an associated engine that could take a file sample from a folder. In V2 APIs, a Folder is a simple mapping to the compute target filesystem. | You must read/write a folder of parquet/CSV files into Pandas/Spark.<br><br>Deep-learning with images, text, audio, video files located in a folder. |
-|**Table**<br> Reference a data table | `mltable` | `TabularDataset` | In V1 APIs, the Azure ML back-end stored the data materialization blueprint. This storage location meant that `TabularDataset` only worked if you had an Azure ML workspace. `mltable` stores the data materialization blueprint in *your* storage. This storage location means you can use it *disconnected to Azure ML* - for example, local, on-premises. In V2 APIs, you'll find it easier to transition from local to remote jobs. Read [Working with tables in Azure Machine Learning](how-to-mltable.md) for more information. | You have a complex schema subject to frequent changes, or you need a subset of large tabular data.<br><br>AutoML with Tables. |
+|**Table**<br> Reference a data table | `mltable` | `TabularDataset` | In V1 APIs, the Azure Machine Learning back-end stored the data materialization blueprint. This storage location meant that `TabularDataset` only worked if you had an Azure Machine Learning workspace. `mltable` stores the data materialization blueprint in *your* storage. This storage location means you can use it *disconnected from Azure Machine Learning* - for example, local, on-premises. In V2 APIs, you'll find it easier to transition from local to remote jobs. Read [Working with tables in Azure Machine Learning](how-to-mltable.md) for more information. | You have a complex schema subject to frequent changes, or you need a subset of large tabular data.<br><br>AutoML with Tables. |
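As a hedged sketch of the CLI v2 flow (the name and path below are illustrative, not from the article), a `uri_folder` data asset definition might look like:

```yaml
# data.yml -- illustrative uri_folder data asset definition
$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
name: example-data
version: 1
type: uri_folder
description: Friendly-name bookmark for a storage location
path: azureml://datastores/workspaceblobstore/paths/example-data/
```

Such a file would then be registered with something like `az ml data create --file data.yml`; afterwards jobs can reference `azureml:example-data:1` instead of the full storage URI.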
## Supported paths
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
To avoid getting charged for a compute instance that is switched on but inactive
A compute instance is considered inactive if the below conditions are met:
* No active Jupyter Kernel sessions (which translates to no Notebooks usage via Jupyter, JupyterLab or Interactive notebooks)
* No active Jupyter terminal sessions
-* No active AzureML runs or experiments
+* No active Azure Machine Learning runs or experiments
* No SSH connections
* No VS Code connections; you must close your VS Code connection for your compute instance to be considered inactive. Sessions are auto-terminated if VS Code detects no activity for 3 hours.
from azure.ai.ml.entities import ComputeInstance, AmlCompute, ComputeSchedules,
from azure.identity import DefaultAzureCredential
from dateutil import tz
import datetime
-# Enter details of your AML workspace
+# Enter details of your Azure Machine Learning workspace
subscription_id = "<guid>"
resource_group = "sample-rg"
workspace = "sample-ws"
Following is a sample policy to default a shutdown schedule at 10 PM PST.
You can assign a system- or user-assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to a compute instance, to authenticate against other Azure resources such as storage. Using managed identities for authentication helps improve workspace security and management. For example, you can allow users to access training data only when logged in to a compute instance. Or use a common user-assigned managed identity to permit access to a specific storage account.
-You can create compute instance with managed identity from Azure ML Studio:
+You can create a compute instance with managed identity from Azure Machine Learning Studio:
1. Fill out the form to [create a new compute instance](?tabs=azure-studio#create).
1. Select **Next: Advanced Settings**.
az ml compute create --name myinstance --identity-type SystemAssigned --type Com
You can also use the V2 CLI with a yaml file, for example to create a compute instance with user-assigned managed identity:
```azurecli
-azure ml compute create --file compute.yaml --resource-group my-resource-group --workspace-name my-workspace
+az ml compute create --file compute.yaml --resource-group my-resource-group --workspace-name my-workspace
```
The identity definition is contained in the compute.yaml file:
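A sketch of what such a compute.yaml could contain, assuming a user-assigned identity (the name, size, and resource ID are placeholders, not values from the article):

```yaml
# compute.yaml -- illustrative compute instance with a user-assigned managed identity
$schema: https://azuremlschemas.azureedge.net/latest/computeInstance.schema.json
name: myinstance
type: computeinstance
size: STANDARD_DS3_v2
identity:
  type: user_assigned
  user_assigned_identities:
    - resource_id: /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>
```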
machine-learning How To Debug Managed Online Endpoints Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-managed-online-endpoints-visual-studio-code.md
This guide assumes you have the following items installed locally on your PC.
- [VS Code](https://code.visualstudio.com/#alt-downloads)
- [Azure CLI](/cli/azure/install-azure-cli)
- [Azure CLI `ml` extension (v2)](how-to-configure-cli.md)
-- [Azure ML Python SDK (v2)](https://aka.ms/sdk-v2-install)
+- [Azure Machine Learning Python SDK (v2)](https://aka.ms/sdk-v2-install)
For more information, see the guide on [how to prepare your system to deploy online endpoints](how-to-deploy-online-endpoints.md#prepare-your-system).
Once your environment is set up, use the VS Code debugger to test and debug your
- To debug scoring behavior, place your breakpoint(s) inside the `run` function.
1. Select the VS Code Job view.
-1. In the Run and Debug dropdown, select **Azure ML: Debug Local Endpoint** to start debugging your endpoint locally.
+1. In the Run and Debug dropdown, select **AzureML: Debug Local Endpoint** to start debugging your endpoint locally.
In the **Breakpoints** section of the Run view, check that:
- **Raised Exceptions** is **unchecked**
- **Uncaught Exceptions** is **checked**
- :::image type="content" source="media/how-to-debug-managed-online-endpoints-visual-studio-code/configure-debug-profile.png" alt-text="Configure Azure ML Debug Local Environment debug profile":::
+ :::image type="content" source="media/how-to-debug-managed-online-endpoints-visual-studio-code/configure-debug-profile.png" alt-text="Configure Azure Machine Learning Debug Local Environment debug profile":::
1. Select the play icon next to the Run and Debug dropdown to start your debugging session.
machine-learning How To Debug Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-visual-studio-code.md
Use the Azure Machine Learning extension to validate, run, and debug your machin
1. Provide the name of the script you want to run. The path is relative to the directory opened in VS Code.
1. Choose whether you want to use an Azure Machine Learning dataset or not. You can create [Azure Machine Learning datasets](how-to-manage-resources-vscode.md#create-dataset) using the extension.
1. Debugpy is required in order to attach the debugger to the container running your experiment. To add debugpy as a dependency, select **Add Debugpy**. Otherwise, select **Skip**. Not adding debugpy as a dependency runs your experiment without attaching to the debugger.
- 1. A configuration file containing your run configuration settings opens in the editor. If you're satisfied with the settings, select **Submit experiment**. Alternatively, you open the command palette (**View > Command Palette**) from the menu bar and enter the `Azure ML: Submit experiment` command into the text box.
+ 1. A configuration file containing your run configuration settings opens in the editor. If you're satisfied with the settings, select **Submit experiment**. Alternatively, you open the command palette (**View > Command Palette**) from the menu bar and enter the `AzureML: Submit experiment` command into the text box.
1. Once your experiment is submitted, a Docker image containing your script and the configurations specified in your run configuration is created. When the Docker image build process begins, the contents of the `60_control_log.txt` file stream to the output console in VS Code.
To enable debugging, make the following changes to the Python script(s) used by
parser.add_argument('--remote_debug', action='store_true')
parser.add_argument('--remote_debug_connection_timeout', type=int, default=300,
- help=f'Defines how much time the AzureML compute target '
+ help=f'Defines how much time the Azure Machine Learning compute target '
                    f'will await a connection from a debugger client (VSCODE).')
parser.add_argument('--remote_debug_client_ip', type=str, help=f'Defines IP Address of VS Code client')
parser.add_argument("--output_train", type=str, help="output_train directory")
parser.add_argument('--remote_debug', action='store_true')
parser.add_argument('--remote_debug_connection_timeout', type=int, default=300,
- help=f'Defines how much time the AzureML compute target '
+ help=f'Defines how much time the Azure Machine Learning compute target '
                    f'will await a connection from a debugger client (VSCODE).')
parser.add_argument('--remote_debug_client_ip', type=str, help=f'Defines IP Address of VS Code client')
ip_address: 10.3.0.5
Save the `ip_address` value. It's used in the next section.

> [!TIP]
-> You can also find the IP address from the run logs for the child run for this pipeline step. For more information on viewing this information, see [Monitor Azure ML experiment runs and metrics](how-to-log-view-metrics.md).
+> You can also find the IP address from the run logs for the child run for this pipeline step. For more information on viewing this information, see [Monitor Azure Machine Learning experiment runs and metrics](how-to-log-view-metrics.md).
### Configure development environment
machine-learning How To Deploy Automl Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-automl-endpoint.md
Create a directory called `src/` and place the scoring file you downloaded into
1. Configure workspace details and get a handle to the workspace: ```python
- # enter details of your AzureML workspace
+ # enter details of your Azure Machine Learning workspace
    subscription_id = "<SUBSCRIPTION_ID>"
    resource_group = "<RESOURCE_GROUP>"
    workspace = "<AZUREML_WORKSPACE_NAME>"
machine-learning How To Deploy Batch With Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-batch-with-rest.md
In this article, you learn how to use the new REST APIs to:
> [!IMPORTANT]
> The code snippets in this article assume that you are using the Bash shell.
>
-> The code snippets are pulled from the `/cli/batch-score-rest.sh` file in the [AzureML Example repository](https://github.com/Azure/azureml-examples).
+> The code snippets are pulled from the `/cli/batch-score-rest.sh` file in the [Azure Machine Learning Example repository](https://github.com/Azure/azureml-examples).
## Set endpoint name
In this article, you learn how to use the new REST APIs to:
[Batch endpoints](concept-endpoints.md#what-are-batch-endpoints) simplify the process of hosting your models for batch scoring, so you can focus on machine learning, not infrastructure. In this article, you'll create a batch endpoint and deployment, and invoke it to start a batch scoring job. But first you'll have to register the assets needed for deployment, including model, code, and environment.
-There are many ways to create an Azure Machine Learning batch endpoint, including the Azure CLI, Azure ML SDK for Python, and visually with the studio. The following example creates a batch endpoint and a batch deployment with the REST API.
+There are many ways to create an Azure Machine Learning batch endpoint, including the Azure CLI, Azure Machine Learning SDK for Python, and visually with the studio. The following example creates a batch endpoint and a batch deployment with the REST API.
## Create machine learning assets
Now, let's look at other options for invoking the batch endpoint. When it comes
- An `InputData` property has `JobInputType` and `Uri` keys. When you are specifying a single file, use `"JobInputType": "UriFile"`, and when you are specifying a folder, use `"JobInputType": "UriFolder"`.
-- When the file or folder is on Azure ML registered datastore, the syntax for the `Uri` is `azureml://datastores/<datastore-name>/paths/<path-on-datastore>` for folder, and `azureml://datastores/<datastore-name>/paths/<path-on-datastore>/<file-name>` for a specific file. You can also use the longer form to represent the same path, such as `azureml://subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/workspaces/<workspace-name>/datastores/<datastore-name>/paths/<path-on-datastore>/`.
+- When the file or folder is on Azure Machine Learning registered datastore, the syntax for the `Uri` is `azureml://datastores/<datastore-name>/paths/<path-on-datastore>` for folder, and `azureml://datastores/<datastore-name>/paths/<path-on-datastore>/<file-name>` for a specific file. You can also use the longer form to represent the same path, such as `azureml://subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/workspaces/<workspace-name>/datastores/<datastore-name>/paths/<path-on-datastore>/`.
- When the file or folder is registered as V2 data asset as `uri_folder` or `uri_file`, the syntax for the `Uri` is `\"azureml://locations/<location-name>/workspaces/<workspace-name>/data/<data-name>/versions/<data-version>"` (Asset ID form) or `\"/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.MachineLearningServices/workspaces/<workspace-name>/data/<data-name>/versions/<data-version>\"` (ARM ID form).
- When the file or folder is a publicly accessible path, the syntax for the URI is `https://<public-path>` for folder, `https://<public-path>/<file-name>` for a specific file.

> [!NOTE]
-> For more information about data URI, see [Azure Machine Learning data reference URI](reference-yaml-core-syntax.md#azure-ml-data-reference-uri).
+> For more information about data URI, see [Azure Machine Learning data reference URI](reference-yaml-core-syntax.md#azure-machine-learning-data-reference-uri).
Below are some examples using different types of input data.

-- If your data is a folder on the Azure ML registered datastore, you can either:
+- If your data is a folder on the Azure Machine Learning registered datastore, you can either:
- Use the short form to represent the URI:
Below are some examples using different types of input data.
JOB_ID_SUFFIX=$(echo ${JOB_ID##/*/})
```

-- If you want to manage your data as Azure ML registered V2 data asset as `uri_folder`, you can follow the two steps below:
+- If you want to manage your data as Azure Machine Learning registered V2 data asset as `uri_folder`, you can follow the two steps below:
1. Create the V2 data asset:
machine-learning How To Deploy Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-custom-container.md
Title: Deploy a custom container as an online endpoint
+ Title: Deploy a model in a custom container to an online endpoint
-description: Learn how to use a custom container to use open-source servers in Azure Machine Learning.
+description: Learn how to use a custom container with an open-source server to deploy a model in Azure Machine Learning.
ms.devlang: azurecli
-# Deploy a TensorFlow model served with TensorFlow Serving using a custom container in an online endpoint
+# Use a custom container to deploy a model to an online endpoint
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-Learn how to deploy a custom container as an online endpoint in Azure Machine Learning.
+Learn how to use a custom container for deploying a model to an online endpoint in Azure Machine Learning.
Custom container deployments can use web servers other than the default Python Flask server used by Azure Machine Learning. Users of these deployments can still take advantage of Azure Machine Learning's built-in monitoring, scaling, alerting, and authentication.
-You can find [various examples](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/custom-container) for TensorFlow Serving, TorchServe, Triton Inference Server, Plumber R package, and AzureML Inference Minimal image as below:
+The following table lists various [deployment examples](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/custom-container) that use custom containers such as TensorFlow Serving, TorchServe, Triton Inference Server, Plumber R package, and the Azure Machine Learning Inference Minimal image.
|Example|Script (CLI)|Description|
|-|||
-|[minimal/multimodel](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/minimal/multimodel)|[deploy-custom-container-minimal-multimodel](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-custom-container-minimal-multimodel.sh)|Deploy multiple models to a single deployment by extending the AzureML Inference Minimal image.|
-|[minimal/single-model](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/minimal/single-model)|[deploy-custom-container-minimal-single-model](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-custom-container-minimal-single-model.sh)|Deploy a single model by extending the AzureML Inference Minimal image.|
-|[mlflow/multideployment-scikit](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/mlflow/multideployment-scikit)|[deploy-custom-container-mlflow-multideployment-scikit](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-custom-container-mlflow-multideployment-scikit.sh)|Deploy two MLFlow models with different Python requirements to two separate deployments behind a single endpoint using the AzureML Inference Minimal Image.|
+|[minimal/multimodel](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/minimal/multimodel)|[deploy-custom-container-minimal-multimodel](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-custom-container-minimal-multimodel.sh)|Deploy multiple models to a single deployment by extending the Azure Machine Learning Inference Minimal image.|
+|[minimal/single-model](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/minimal/single-model)|[deploy-custom-container-minimal-single-model](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-custom-container-minimal-single-model.sh)|Deploy a single model by extending the Azure Machine Learning Inference Minimal image.|
+|[mlflow/multideployment-scikit](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/mlflow/multideployment-scikit)|[deploy-custom-container-mlflow-multideployment-scikit](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-custom-container-mlflow-multideployment-scikit.sh)|Deploy two MLFlow models with different Python requirements to two separate deployments behind a single endpoint using the Azure Machine Learning Inference Minimal Image.|
|[r/multimodel-plumber](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/r/multimodel-plumber)|[deploy-custom-container-r-multimodel-plumber](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-custom-container-r-multimodel-plumber.sh)|Deploy three regression models to one endpoint using the Plumber R package|
|[tfserving/half-plus-two](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/tfserving/half-plus-two)|[deploy-custom-container-tfserving-half-plus-two](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-custom-container-tfserving-half-plus-two.sh)|Deploy a simple Half Plus Two model using a TensorFlow Serving custom container using the standard model registration process.|
|[tfserving/half-plus-two-integrated](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/tfserving/half-plus-two-integrated)|[deploy-custom-container-tfserving-half-plus-two-integrated](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-custom-container-tfserving-half-plus-two-integrated.sh)|Deploy a simple Half Plus Two model using a TensorFlow Serving custom container with the model integrated into the image.|
You can find [various examples](https://github.com/Azure/azureml-examples/tree/m
|[torchserve/huggingface-textgen](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/torchserve/huggingface-textgen)|[deploy-custom-container-torchserve-huggingface-textgen](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-custom-container-torchserve-huggingface-textgen.sh)|Deploy Hugging Face models to an online endpoint and follow along with the Hugging Face Transformers TorchServe example.|
|[triton/single-model](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/triton/single-model)|[deploy-custom-container-triton-single-model](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-custom-container-triton-single-model.sh)|Deploy a Triton model using a custom container|
-This article focuses on serving a TensorFlow model with TensorFlow (TF) Serving.
+This article focuses on serving a TensorFlow model with TensorFlow (TF) Serving.
> [!WARNING] > Microsoft may not be able to help troubleshoot problems caused by a custom image. If you encounter problems, you may be asked to use the default image or one of the images Microsoft provides to see if the problem is specific to your image.
from azure.identity import DefaultAzureCredential
2. Configure workspace details and get a handle to the workspace:

   ```python
-# enter details of your AzureML workspace
+# enter details of your Azure Machine Learning workspace
subscription_id = "<SUBSCRIPTION_ID>"
resource_group = "<RESOURCE_GROUP>"
workspace = "<AZUREML_WORKSPACE_NAME>"
machine-learning How To Deploy Kubernetes Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-kubernetes-extension.md
Title: Deploy AzureML extension on Kubernetes cluster
-description: Learn about the AzureML extension, available configuration settings and different deployment scenarios, and verify and managed AzureML extension
+ Title: Deploy Azure Machine Learning extension on Kubernetes cluster
+description: Learn about the Azure Machine Learning extension, available configuration settings, different deployment scenarios, and how to verify and manage the Azure Machine Learning extension
-# Deploy AzureML extension on AKS or Arc Kubernetes cluster
+# Deploy Azure Machine Learning extension on AKS or Arc Kubernetes cluster
-To enable your AKS or Arc Kubernetes cluster to run training jobs or inference workloads, you must first deploy the AzureML extension on an AKS or Arc Kubernetes cluster. The AzureML extension is built on the [cluster extension for AKS](../aks/cluster-extensions.md) and [cluster extension or Arc Kubernetes](../azure-arc/kubernetes/conceptual-extensions.md), and its lifecycle can be managed easily with Azure CLI [k8s-extension](/cli/azure/k8s-extension).
+To enable your AKS or Arc Kubernetes cluster to run training jobs or inference workloads, you must first deploy the Azure Machine Learning extension on an AKS or Arc Kubernetes cluster. The Azure Machine Learning extension is built on the [cluster extension for AKS](../aks/cluster-extensions.md) and [cluster extension for Arc Kubernetes](../azure-arc/kubernetes/conceptual-extensions.md), and its lifecycle can be managed easily with Azure CLI [k8s-extension](/cli/azure/k8s-extension).
In this article, you can learn:
> [!div class="checklist"]
> * Prerequisites
> * Limitations
-> * Review AzureML extension config settings
-> * AzureML extension deployment scenarios
-> * Verify AzureML extension deployment
-> * Review AzureML extension components
-> * Manage AzureML extension
+> * Review Azure Machine Learning extension config settings
+> * Azure Machine Learning extension deployment scenarios
+> * Verify Azure Machine Learning extension deployment
+> * Review Azure Machine Learning extension components
+> * Manage Azure Machine Learning extension
## Prerequisites
In this article, you can learn:
- [Using a service principal with AKS](../aks/kubernetes-service-principal.md) is **not supported** by Azure Machine Learning. The AKS cluster must use a **managed identity** instead. Both **system-assigned managed identity** and **user-assigned managed identity** are supported. For more information, see [Use a managed identity in Azure Kubernetes Service](../aks/use-managed-identity.md). - When your AKS cluster used service principal is converted to use Managed Identity, before installing the extension, all node pools need to be deleted and recreated, rather than updated directly. - [Disabling local accounts](../aks/managed-aad.md#disable-local-accounts) for AKS is **not supported** by Azure Machine Learning. When the AKS Cluster is deployed, local accounts are enabled by default.-- If your AKS cluster has an [Authorized IP range enabled to access the API server](../aks/api-server-authorized-ip-ranges.md), enable the AzureML control plane IP ranges for the AKS cluster. The AzureML control plane is deployed across paired regions. Without access to the API server, the machine learning pods can't be deployed. Use the [IP ranges](https://www.microsoft.com/download/confirmation.aspx?id=56519) for both the [paired regions](../availability-zones/cross-region-replication-azure.md) when enabling the IP ranges in an AKS cluster.
+- If your AKS cluster has an [Authorized IP range enabled to access the API server](../aks/api-server-authorized-ip-ranges.md), enable the Azure Machine Learning control plane IP ranges for the AKS cluster. The Azure Machine Learning control plane is deployed across paired regions. Without access to the API server, the machine learning pods can't be deployed. Use the [IP ranges](https://www.microsoft.com/download/confirmation.aspx?id=56519) for both the [paired regions](../availability-zones/cross-region-replication-azure.md) when enabling the IP ranges in an AKS cluster.
- Azure Machine Learning does not support attaching an AKS cluster cross subscription. If you have an AKS cluster in a different subscription, you must first [connect it to Azure-Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md) and specify in the same subscription as your Azure Machine Learning workspace. - Azure Machine Learning does not guarantee support for all preview stage features in AKS. For example, [Azure AD pod identity](../aks/use-azure-ad-pod-identity.md) is not supported.-- If you've previously followed the steps from [AzureML AKS v1 document](./v1/how-to-create-attach-kubernetes.md) to create or attach your AKS as inference cluster, use the following link to [clean up the legacy azureml-fe related resources](./v1/how-to-create-attach-kubernetes.md#delete-azureml-fe-related-resources) before you continue the next step.
+- If you've previously followed the steps from [Azure Machine Learning AKS v1 document](./v1/how-to-create-attach-kubernetes.md) to create or attach your AKS as inference cluster, use the following link to [clean up the legacy azureml-fe related resources](./v1/how-to-create-attach-kubernetes.md#delete-azureml-fe-related-resources) before you continue the next step.
-## Review AzureML extension configuration settings
+## Review Azure Machine Learning extension configuration settings
-You can use AzureML CLI command `k8s-extension create` to deploy AzureML extension. CLI `k8s-extension create` allows you to specify a set of configuration settings in `key=value` format using `--config` or `--config-protected` parameter. Following is the list of available configuration settings to be specified during AzureML extension deployment.
+You can use the Azure CLI command `k8s-extension create` to deploy the Azure Machine Learning extension. The `k8s-extension create` command allows you to specify a set of configuration settings in `key=value` format by using the `--config` or `--config-protected` parameter. The following is the list of configuration settings available during Azure Machine Learning extension deployment.
|Configuration Setting Key Name |Description |Training |Inference |Training and Inference |
|--|--|--|--|--|
- |`enableTraining` |`True` or `False`, default `False`. **Must** be set to `True` for AzureML extension deployment with Machine Learning model training and batch scoring support. | **&check;**| N/A | **&check;** |
- | `enableInference` |`True` or `False`, default `False`. **Must** be set to `True` for AzureML extension deployment with Machine Learning inference support. |N/A| **&check;** | **&check;** |
+ |`enableTraining` |`True` or `False`, default `False`. **Must** be set to `True` for Azure Machine Learning extension deployment with Machine Learning model training and batch scoring support. | **&check;**| N/A | **&check;** |
+ | `enableInference` |`True` or `False`, default `False`. **Must** be set to `True` for Azure Machine Learning extension deployment with Machine Learning inference support. |N/A| **&check;** | **&check;** |
| `allowInsecureConnections` |`True` or `False`, default `False`. **Can** be set to `True` to use inference HTTP endpoints for development or test purposes. |N/A| Optional | Optional |
| `inferenceRouterServiceType` |`loadBalancer`, `nodePort` or `clusterIP`. **Required** if `enableInference=True`. | N/A| **&check;** | **&check;** |
| `internalLoadBalancerProvider` | This config is only applicable for Azure Kubernetes Service (AKS) cluster now. Set to `azure` to allow the inference router to use an internal load balancer. | N/A| Optional | Optional |
|`sslSecret`| The name of the Kubernetes secret in the `azureml` namespace. This config is used to store `cert.pem` (PEM-encoded TLS/SSL cert) and `key.pem` (PEM-encoded TLS/SSL key), which are required for inference HTTPS endpoint support when ``allowInsecureConnections`` is set to `False`. For a sample YAML definition of `sslSecret`, see [Configure sslSecret](./how-to-secure-kubernetes-online-endpoint.md#configure-sslsecret). Use this config or a combination of `sslCertPemFile` and `sslKeyPemFile` protected config settings. |N/A| Optional | Optional |
|`sslCname` |A TLS/SSL CNAME used by the inference HTTPS endpoint. **Required** if `allowInsecureConnections=False` | N/A | Optional | Optional|
- | `inferenceRouterHA` |`True` or `False`, default `True`. By default, AzureML extension will deploy three inference router replicas for high availability, which requires at least three worker nodes in a cluster. Set to `False` if your cluster has fewer than three worker nodes, in this case only one inference router service is deployed. | N/A| Optional | Optional |
+ | `inferenceRouterHA` |`True` or `False`, default `True`. By default, Azure Machine Learning extension will deploy three inference router replicas for high availability, which requires at least three worker nodes in a cluster. Set to `False` if your cluster has fewer than three worker nodes, in this case only one inference router service is deployed. | N/A| Optional | Optional |
|`nodeSelector` | By default, the deployed kubernetes resources and your machine learning workloads are randomly deployed to one or more nodes of the cluster, and DaemonSet resources are deployed to ALL nodes. If you want to restrict the extension deployment and your training/inference workloads to specific nodes with label `key1=value1` and `key2=value2`, use `nodeSelector.key1=value1`, `nodeSelector.key2=value2` correspondingly. | Optional| Optional | Optional |
- |`installNvidiaDevicePlugin` | `True` or `False`, default `False`. [NVIDIA Device Plugin](https://github.com/NVIDIA/k8s-device-plugin#nvidia-device-plugin-for-kubernetes) is required for ML workloads on NVIDIA GPU hardware. By default, AzureML extension deployment won't install NVIDIA Device Plugin regardless Kubernetes cluster has GPU hardware or not. User can specify this setting to `True`, to install it, but make sure to fulfill [Prerequisites](https://github.com/NVIDIA/k8s-device-plugin#prerequisites). | Optional |Optional |Optional |
- |`installPromOp`|`True` or `False`, default `True`. AzureML extension needs prometheus operator to manage prometheus. Set to `False` to reuse the existing prometheus operator. For more information about reusing the existing prometheus operator, refer to [reusing the prometheus operator](./how-to-troubleshoot-kubernetes-extension.md#prometheus-operator)| Optional| Optional | Optional |
- |`installVolcano`| `True` or `False`, default `True`. AzureML extension needs volcano scheduler to schedule the job. Set to `False` to reuse existing volcano scheduler. For more information about reusing the existing volcano scheduler, refer to [reusing volcano scheduler](./how-to-troubleshoot-kubernetes-extension.md#volcano-scheduler) | Optional| N/A | Optional |
- |`installDcgmExporter` |`True` or `False`, default `False`. Dcgm-exporter can expose GPU metrics for AzureML workloads, which can be monitored in Azure portal. Set `installDcgmExporter` to `True` to install dcgm-exporter. But if you want to utilize your own dcgm-exporter, refer to [DCGM exporter](./how-to-troubleshoot-kubernetes-extension.md#dcgm-exporter) |Optional |Optional |Optional |
+ |`installNvidiaDevicePlugin` | `True` or `False`, default `False`. [NVIDIA Device Plugin](https://github.com/NVIDIA/k8s-device-plugin#nvidia-device-plugin-for-kubernetes) is required for ML workloads on NVIDIA GPU hardware. By default, Azure Machine Learning extension deployment doesn't install the NVIDIA Device Plugin, regardless of whether the Kubernetes cluster has GPU hardware. You can set this setting to `True` to install it, but make sure to fulfill the [Prerequisites](https://github.com/NVIDIA/k8s-device-plugin#prerequisites). | Optional |Optional |Optional |
+ |`installPromOp`|`True` or `False`, default `True`. Azure Machine Learning extension needs prometheus operator to manage prometheus. Set to `False` to reuse the existing prometheus operator. For more information about reusing the existing prometheus operator, refer to [reusing the prometheus operator](./how-to-troubleshoot-kubernetes-extension.md#prometheus-operator)| Optional| Optional | Optional |
+ |`installVolcano`| `True` or `False`, default `True`. Azure Machine Learning extension needs volcano scheduler to schedule the job. Set to `False` to reuse existing volcano scheduler. For more information about reusing the existing volcano scheduler, refer to [reusing volcano scheduler](./how-to-troubleshoot-kubernetes-extension.md#volcano-scheduler) | Optional| N/A | Optional |
+ |`installDcgmExporter` |`True` or `False`, default `False`. Dcgm-exporter can expose GPU metrics for Azure Machine Learning workloads, which can be monitored in Azure portal. Set `installDcgmExporter` to `True` to install dcgm-exporter. But if you want to utilize your own dcgm-exporter, refer to [DCGM exporter](./how-to-troubleshoot-kubernetes-extension.md#dcgm-exporter) |Optional |Optional |Optional |
|Configuration Protected Setting Key Name |Description |Training |Inference |Training and Inference |
|--|--|--|--|--|
- | `sslCertPemFile`, `sslKeyPemFile` |Path to TLS/SSL certificate and key file (PEM-encoded), required for AzureML extension deployment with inference HTTPS endpoint support, when ``allowInsecureConnections`` is set to False. **Note** PEM file with pass phrase protected isn't supported | N/A| Optional | Optional |
+ | `sslCertPemFile`, `sslKeyPemFile` |Path to TLS/SSL certificate and key file (PEM-encoded), required for Azure Machine Learning extension deployment with inference HTTPS endpoint support, when ``allowInsecureConnections`` is set to False. **Note** Passphrase-protected PEM files aren't supported | N/A| Optional | Optional |
-As you can see from above configuration settings table, the combinations of different configuration settings allow you to deploy AzureML extension for different ML workload scenarios:
+As the configuration settings table above shows, combinations of different configuration settings allow you to deploy the Azure Machine Learning extension for different ML workload scenarios:
* For training job and batch inference workload, specify `enableTraining=True`
* For inference workload only, specify `enableInference=True`
* For all kinds of ML workload, specify both `enableTraining=True` and `enableInference=True`
-If you plan to deploy AzureML extension for real-time inference workload and want to specify `enableInference=True`, pay attention to following configuration settings related to real-time inference workload:
+If you plan to deploy the Azure Machine Learning extension for real-time inference workloads and want to specify `enableInference=True`, pay attention to the following configuration settings related to real-time inference workloads:
* `azureml-fe` router service is required for real-time inference support and you need to specify `inferenceRouterServiceType` config setting for `azureml-fe`. `azureml-fe` can be deployed with one of following `inferenceRouterServiceType`:
  * Type `LoadBalancer`. Exposes `azureml-fe` externally using a cloud provider's load balancer. To specify this value, ensure that your cluster supports load balancer provisioning. Note most on-premises Kubernetes clusters might not support external load balancer.
  * Type `NodePort`. Exposes `azureml-fe` on each Node's IP at a static port. You'll be able to contact `azureml-fe`, from outside of cluster, by requesting `<NodeIP>:<NodePort>`. Using `NodePort` also allows you to set up your own load balancing solution and TLS/SSL termination for `azureml-fe`.
  * Type `ClusterIP`. Exposes `azureml-fe` on a cluster-internal IP, and it makes `azureml-fe` only reachable from within the cluster. For `azureml-fe` to serve inference requests coming outside of cluster, it requires you to set up your own load balancing solution and TLS/SSL termination for `azureml-fe`.
- * To ensure high availability of `azureml-fe` routing service, AzureML extension deployment by default creates three replicas of `azureml-fe` for clusters having three nodes or more. If your cluster has **less than 3 nodes**, set `inferenceRouterHA=False`.
+ * To ensure high availability of `azureml-fe` routing service, Azure Machine Learning extension deployment by default creates three replicas of `azureml-fe` for clusters having three nodes or more. If your cluster has **fewer than three nodes**, set `inferenceRouterHA=False`.
* You also want to consider using **HTTPS** to restrict access to model endpoints and secure the data that clients submit. For this purpose, you would need to specify either `sslSecret` config setting or combination of `sslKeyPemFile` and `sslCertPemFile` config-protected settings.
- * By default, AzureML extension deployment expects config settings for **HTTPS** support. For development or testing purposes, **HTTP** support is conveniently provided through config setting `allowInsecureConnections=True`.
+ * By default, Azure Machine Learning extension deployment expects config settings for **HTTPS** support. For development or testing purposes, **HTTP** support is conveniently provided through config setting `allowInsecureConnections=True`.
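One way to satisfy the `sslSecret` option described above is a generic Kubernetes secret in the `azureml` namespace that holds the PEM-encoded certificate and key. The sketch below assumes `kubectl` access to the cluster; the secret name and file paths are placeholders:

```bash
# Create the azureml namespace if it doesn't already exist
kubectl create namespace azureml

# Store the PEM-encoded certificate and key under the keys cert.pem and key.pem
# (the secret name and file paths below are placeholders)
kubectl create secret generic <ssl-secret-name> \
  --namespace azureml \
  --from-file=cert.pem=<path-to-cert-PEM-file> \
  --from-file=key.pem=<path-to-key-PEM-file>
```

You can then pass `sslSecret=<ssl-secret-name>` through `--config` during extension deployment; see [Configure sslSecret](./how-to-secure-kubernetes-online-endpoint.md#configure-sslsecret) for the full YAML definition.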
-## AzureML extension deployment - CLI examples and Azure portal
+## Azure Machine Learning extension deployment - CLI examples and Azure portal
### [Azure CLI](#tab/deploy-extension-with-cli)
-To deploy AzureML extension with CLI, use `az k8s-extension create` command passing in values for the mandatory parameters.
+To deploy the Azure Machine Learning extension with the CLI, use the `az k8s-extension create` command, passing in values for the mandatory parameters.
-We list four typical extension deployment scenarios for reference. To deploy extension for your production usage, carefully read the complete list of [configuration settings](#review-azureml-extension-configuration-settings).
+We list four typical extension deployment scenarios for reference. To deploy the extension for production usage, carefully read the complete list of [configuration settings](#review-azure-machine-learning-extension-configuration-settings).
- **Use AKS cluster in Azure for a quick proof of concept to run all kinds of ML workload, i.e., to run training jobs or to deploy models as online/batch endpoints**
- For AzureML extension deployment on AKS cluster, make sure to specify `managedClusters` value for `--cluster-type` parameter. Run the following Azure CLI command to deploy AzureML extension:
+ For Azure Machine Learning extension deployment on AKS cluster, make sure to specify `managedClusters` value for `--cluster-type` parameter. Run the following Azure CLI command to deploy Azure Machine Learning extension:
   ```azurecli
   az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=LoadBalancer allowInsecureConnections=True inferenceLoadBalancerHA=False --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
   ```

- **Use Arc Kubernetes cluster outside of Azure for a quick proof of concept, to run training jobs only**
- For AzureML extension deployment on [Arc Kubernetes](../azure-arc/kubernetes/overview.md) cluster, you would need to specify `connectedClusters` value for `--cluster-type` parameter. Run the following Azure CLI command to deploy AzureML extension:
+ For Azure Machine Learning extension deployment on [Arc Kubernetes](../azure-arc/kubernetes/overview.md) cluster, you would need to specify `connectedClusters` value for `--cluster-type` parameter. Run the following Azure CLI command to deploy Azure Machine Learning extension:
   ```azurecli
   az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <your-RG-name> --scope cluster
   ```

- **Enable an AKS cluster in Azure for production training and inference workload**
- For AzureML extension deployment on AKS, make sure to specify `managedClusters` value for `--cluster-type` parameter. Assuming your cluster has more than three nodes, and you'll use an Azure public load balancer and HTTPS for inference workload support. Run the following Azure CLI command to deploy AzureML extension:
+ For Azure Machine Learning extension deployment on AKS, make sure to specify `managedClusters` value for `--cluster-type` parameter. Assuming that your cluster has more than three nodes and that you'll use an Azure public load balancer and HTTPS for inference workload support, run the following Azure CLI command to deploy the Azure Machine Learning extension:
   ```azurecli
   az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=LoadBalancer sslCname=<ssl cname> --config-protected sslCertPemFile=<file-path-to-cert-PEM> sslKeyPemFile=<file-path-to-cert-KEY> --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
   ```

- **Enable an [Arc Kubernetes](../azure-arc/kubernetes/overview.md) cluster anywhere for production training and inference workload using NVIDIA GPUs**
- For AzureML extension deployment on [Arc Kubernetes](../azure-arc/kubernetes/overview.md) cluster, make sure to specify `connectedClusters` value for `--cluster-type` parameter. Assuming your cluster has more than three nodes, you'll use a NodePort service type and HTTPS for inference workload support, run following Azure CLI command to deploy AzureML extension:
+ For Azure Machine Learning extension deployment on [Arc Kubernetes](../azure-arc/kubernetes/overview.md) cluster, make sure to specify `connectedClusters` value for `--cluster-type` parameter. Assuming that your cluster has more than three nodes and that you'll use a NodePort service type and HTTPS for inference workload support, run the following Azure CLI command to deploy the Azure Machine Learning extension:
   ```azurecli
   az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=NodePort sslCname=<ssl cname> installNvidiaDevicePlugin=True installDcgmExporter=True --config-protected sslCertPemFile=<file-path-to-cert-PEM> sslKeyPemFile=<file-path-to-cert-KEY> --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <your-RG-name> --scope cluster
   ```

### [Azure portal](#tab/portal)
-The UI experience to deploy extension is only available for **[Arc Kubernetes](../azure-arc/kubernetes/overview.md)**. If you have an AKS cluster without Azure Arc connection, you need to use CLI to deploy AzureML extension.
+The UI experience to deploy extension is only available for **[Arc Kubernetes](../azure-arc/kubernetes/overview.md)**. If you have an AKS cluster without Azure Arc connection, you need to use CLI to deploy Azure Machine Learning extension.
1. In the [Azure portal](https://portal.azure.com/#home), navigate to **Kubernetes - Azure Arc** and select your cluster.
1. Select **Extensions** (under **Settings**), and then select **+ Add**.
The UI experience to deploy extension is only available for **[Arc Kubernetes](.
1. From the list of available extensions, select **Azure Machine Learning extension** to deploy the latest version of the extension.
- :::image type="content" source="media/how-to-attach-kubernetes-to-workspace/deploy-extension-from-ui-extension-list.png" alt-text="Screenshot of selecting AzureML extension from Azure portal.":::
+ :::image type="content" source="media/how-to-attach-kubernetes-to-workspace/deploy-extension-from-ui-extension-list.png" alt-text="Screenshot of selecting Azure Machine Learning extension from Azure portal.":::
-1. Follow the prompts to deploy the extension. You can customize the installation by configuring the installation in the tab of **Basics**, **Configurations** and **Advanced**. For a detailed list of AzureML extension configuration settings, see [AzureML extension configuration settings](#review-azureml-extension-configuration-settings).
+1. Follow the prompts to deploy the extension. You can customize the installation by configuring settings on the **Basics**, **Configurations**, and **Advanced** tabs. For a detailed list of Azure Machine Learning extension configuration settings, see [Azure Machine Learning extension configuration settings](#review-azure-machine-learning-extension-configuration-settings).
- :::image type="content" source="media/how-to-attach-kubernetes-to-workspace/deploy-extension-from-ui-settings.png" alt-text="Screenshot of configuring AzureML extension settings from Azure portal.":::
+ :::image type="content" source="media/how-to-attach-kubernetes-to-workspace/deploy-extension-from-ui-settings.png" alt-text="Screenshot of configuring Azure Machine Learning extension settings from Azure portal.":::
1. On the **Review + create** tab, select **Create**.

   :::image type="content" source="media/how-to-attach-kubernetes-to-workspace/deploy-extension-from-ui-create.png" alt-text="Screenshot of deploying new extension to the Arc-enabled Kubernetes cluster from Azure portal.":::
-1. After the deployment completes, you're able to see the AzureML extension in **Extension** page. If the extension installation succeeds, you can see **Installed** for the **Install status**.
+1. After the deployment completes, you can see the Azure Machine Learning extension on the **Extensions** page. If the extension installation succeeds, the **Install status** shows **Installed**.
- :::image type="content" source="media/how-to-attach-kubernetes-to-workspace/deploy-extension-from-ui-extension-detail.png" alt-text="Screenshot of installed AzureML extensions listing in Azure portal.":::
+ :::image type="content" source="media/how-to-attach-kubernetes-to-workspace/deploy-extension-from-ui-extension-detail.png" alt-text="Screenshot of installed Azure Machine Learning extensions listing in Azure portal.":::
-### Verify AzureML extension deployment
+### Verify Azure Machine Learning extension deployment
-1. Run the following CLI command to check AzureML extension details:
+1. Run the following CLI command to check Azure Machine Learning extension details:
   ```azurecli
   az k8s-extension show --name <extension-name> --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <resource-group>
The UI experience to deploy extension is only available for **[Arc Kubernetes](.
   kubectl get pods -n azureml
   ```
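To check just the provisioning state instead of the full JSON output, you can add a JMESPath query (a sketch; it assumes the standard `az k8s-extension show` response shape):

```azurecli
az k8s-extension show --name <extension-name> --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <resource-group> --query provisioningState --output tsv
```

A value of `Succeeded` indicates the extension finished installing.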
-## Review AzureML extension component
+## Review Azure Machine Learning extension components
-Upon AzureML extension deployment completes, you can use `kubectl get deployments -n azureml` to see list of resources created in the cluster. It usually consists a subset of following resources per configuration settings specified.
+After the Azure Machine Learning extension deployment completes, you can use `kubectl get deployments -n azureml` to see the list of resources created in the cluster. It usually consists of a subset of the following resources, depending on the configuration settings specified.
|Resource name |Resource type |Training |Inference |Training and Inference| Description | Communication with cloud|
|--|--|--|--|--|--|--|
Upon AzureML extension deployment completes, you can use `kubectl get deployment
> [!IMPORTANT]
> * Azure Relay resource is under the same resource group as the Arc cluster resource. It's used to communicate with the Kubernetes cluster, and modifying it will break attached compute targets.
- > * By default, the kubernetes deployment resources are randomly deployed to 1 or more nodes of the cluster, and daemonset resources are deployed to ALL nodes. If you want to restrict the extension deployment to specific nodes, use `nodeSelector` configuration setting described in [configuration settings table](#review-azureml-extension-configuration-settings).
+ > * By default, the kubernetes deployment resources are randomly deployed to 1 or more nodes of the cluster, and daemonset resources are deployed to ALL nodes. If you want to restrict the extension deployment to specific nodes, use `nodeSelector` configuration setting described in [configuration settings table](#review-azure-machine-learning-extension-configuration-settings).
> [!NOTE] > * **{EXTENSION-NAME}:** is the extension name specified with `az k8s-extension create --name` CLI command.
-### Manage AzureML extension
+### Manage Azure Machine Learning extension
-Update, list, show and delete an AzureML extension.
+Update, list, show and delete an Azure Machine Learning extension.
- For AKS cluster without Azure Arc connected, refer to [Usage of AKS extensions](../aks/cluster-extensions.md#usage-of-cluster-extensions).
- For Azure Arc-enabled Kubernetes, refer to [Usage of cluster extensions](../azure-arc/kubernetes/extensions.md#usage-of-cluster-extensions).
Update, list, show and delete an AzureML extension.
- [Step 2: Attach Kubernetes cluster to workspace](how-to-attach-kubernetes-to-workspace.md)
- [Create and manage instance types](./how-to-manage-kubernetes-instance-types.md)
-- [AzureML inference router and connectivity requirements](./how-to-kubernetes-inference-routing-azureml-fe.md)
+- [Azure Machine Learning inference router and connectivity requirements](./how-to-kubernetes-inference-routing-azureml-fe.md)
- [Secure AKS inferencing environment](./how-to-secure-kubernetes-inferencing-environment.md)
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
Additionally, you need to:
- Install the Azure CLI and the ml extension to the Azure CLI. For more information, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
-# [Python (Azure ML SDK)](#tab/sdk)
+# [Python (Azure Machine Learning SDK)](#tab/sdk)
- Install the Azure Machine Learning SDK for Python
az account set --subscription <subscription>
az configure --defaults workspace=<workspace> group=<resource-group> location=<location>
```
-# [Python (Azure ML SDK)](#tab/sdk)
+# [Python (Azure Machine Learning SDK)](#tab/sdk)
The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we connect to the workspace in which you perform deployment tasks.
MODEL_NAME='sklearn-diabetes'
az ml model create --name $MODEL_NAME --type "mlflow_model" --path "sklearn-diabetes/model"
```
-# [Python (Azure ML SDK)](#tab/sdk)
+# [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
model_name = 'sklearn-diabetes'
Alternatively, if your model was logged inside of a run, you can register it dir
# [Azure CLI](#tab/cli)
-Use the Azure ML CLI v2 to create a model from a training job output. In the following example, a model named `$MODEL_NAME` is registered using the artifacts of a job with ID `$RUN_ID`. The path where the model is stored is `$MODEL_PATH`.
+Use the Azure Machine Learning CLI v2 to create a model from a training job output. In the following example, a model named `$MODEL_NAME` is registered using the artifacts of a job with ID `$RUN_ID`. The path where the model is stored is `$MODEL_PATH`.
```bash
az ml model create --name $MODEL_NAME --path azureml://jobs/$RUN_ID/outputs/artifacts/$MODEL_PATH
az ml model create --name $MODEL_NAME --path azureml://jobs/$RUN_ID/outputs/arti
> [!NOTE]
> The path `$MODEL_PATH` is the location where the model has been stored in the run.
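To make the pieces of that path concrete, here is a small standalone sketch that assembles the same `azureml://` job-output URI from its parts. The run ID and model path values are illustrative placeholders, not real identifiers.

```python
# Assemble the azureml:// path that `az ml model create --path` expects when
# registering a model from a job's output. Values are placeholders.
run_id = "example-run-id"
model_path = "model"

uri = f"azureml://jobs/{run_id}/outputs/artifacts/{model_path}"
print(uri)
```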
-# [Python (Azure ML SDK)](#tab/sdk)
+# [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
model_name = 'sklearn-diabetes'
version = registered_model.version
:::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/ncd/create-endpoint.yaml":::
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
endpoint_name = "sklearn-diabetes-" + datetime.datetime.now().strftime("%m%d%H%M%f")
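The SDK tab above derives a unique-ish endpoint name from a timestamp. A standalone sketch of that pattern, with a sanity check (an assumption for illustration) that the result stays within the lowercase letters, digits, and hyphens that endpoint names use:

```python
import datetime
import re

# Build an endpoint name from a prefix plus a timestamp suffix, mirroring the
# article's SDK tab. The prefix is the example model name used there.
endpoint_name = "sklearn-diabetes-" + datetime.datetime.now().strftime("%m%d%H%M%f")

# Sanity-check the shape: lowercase letters, digits, and hyphens only.
assert re.fullmatch(r"[a-z0-9][a-z0-9-]*", endpoint_name)
print(endpoint_name)
```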
version = registered_model.version
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-ncd.sh" ID="create_endpoint":::
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
ml_client.begin_create_or_update(endpoint)
version = registered_model.version
:::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/ncd/sklearn-deployment.yaml":::
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
blue_deployment = ManagedOnlineDeployment(
version = registered_model.version
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-ncd.sh" ID="create_sklearn_deployment":::
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
ml_client.online_deployments.begin_create_or_update(blue_deployment)
version = registered_model.version
*This step is not required in the Azure CLI since we used the `--all-traffic` flag during creation. If you need to change traffic, you can use the command `az ml online-endpoint update --traffic` as explained at [Progressively update traffic](how-to-deploy-mlflow-models-online-progressive.md#progressively-update-the-traffic).*
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
endpoint.traffic = { blue_deployment_name: 100 }
version = registered_model.version
*This step is not required in the Azure CLI since we used the `--all-traffic` flag during creation. If you need to change traffic, you can use the command `az ml online-endpoint update --traffic` as explained at [Progressively update traffic](how-to-deploy-mlflow-models-online-progressive.md#progressively-update-the-traffic).*
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
ml_client.begin_create_or_update(endpoint).result()
To submit a request to the endpoint, you can do as follows:
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-ncd.sh" ID="test_sklearn_deployment":::
-# [Python (Azure ML SDK)](#tab/sdk)
+# [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
ml_client.online_endpoints.invoke(
You will typically select this workflow when:
> If you choose to specify a scoring script for an MLflow model deployment, you will also have to specify the environment where the deployment will run.

> [!WARNING]
-> Customizing the scoring script for MLflow deployments is only available from the Azure CLI or SDK for Python. If you are creating a deployment using [Azure ML studio](https://ml.azure.com), please switch to the CLI or the SDK.
+> Customizing the scoring script for MLflow deployments is only available from the Azure CLI or SDK for Python. If you are creating a deployment using [Azure Machine Learning studio](https://ml.azure.com), please switch to the CLI or the SDK.
### Steps
Use the following steps to deploy an MLflow model with a custom scoring script.
*The environment will be created inline in the deployment configuration.*
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
environment = Environment(
Use the following steps to deploy an MLflow model with a custom scoring script.
# [Studio](#tab/studio)
- On [Azure ML studio portal](https://ml.azure.com), follow these steps:
+ On [Azure Machine Learning studio portal](https://ml.azure.com), follow these steps:
1. Navigate to the __Environments__ tab on the side menu.
1. Select the tab __Custom environments__ > __Create__.
Use the following steps to deploy an MLflow model with a custom scoring script.
az ml online-deployment create -f deployment.yml
```
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
blue_deployment = ManagedOnlineDeployment(
Use the following steps to deploy an MLflow model with a custom scoring script.
# [Studio](#tab/studio)

> [!IMPORTANT]
- > You can't create custom MLflow deployments in Online Endpoints using the Azure Machine Learning portal. Switch to [Azure ML CLI](?tabs=azure-cli) or the [Azure ML SDK for Python](?tabs=python).
+ > You can't create custom MLflow deployments in Online Endpoints using the Azure Machine Learning portal. Switch to [Azure Machine Learning CLI](?tabs=azure-cli) or the [Azure Machine Learning SDK for Python](?tabs=python).
Use the following steps to deploy an MLflow model with a custom scoring script.
az ml online-endpoint invoke --name $ENDPOINT_NAME --request-file endpoints/online/mlflow/sample-request-sklearn-custom.json
```
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
ml_client.online_endpoints.invoke(
Once you're done with the endpoint, you can delete the associated resources:
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-ncd.sh" ID="delete_endpoint":::
-# [Python (Azure ML SDK)](#tab/sdk)
+# [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
ml_client.online_endpoints.begin_delete(endpoint_name)
machine-learning How To Deploy Mlflow Models Online Progressive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-progressive.md
Additionally, you will need to:
- Install the Azure CLI and the ml extension to the Azure CLI. For more information, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
-# [Python (Azure ML SDK)](#tab/sdk)
+# [Python (Azure Machine Learning SDK)](#tab/sdk)
- Install the Azure Machine Learning SDK for Python
az account set --subscription <subscription>
az configure --defaults workspace=<workspace> group=<resource-group> location=<location>
```
-# [Python (Azure ML SDK)](#tab/sdk)
+# [Python (Azure Machine Learning SDK)](#tab/sdk)
The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which you'll perform deployment tasks.
MODEL_NAME='heart-classifier'
az ml model create --name $MODEL_NAME --type "mlflow_model" --path "model"
```
-# [Python (Azure ML SDK)](#tab/sdk)
+# [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
model_name = 'heart-classifier'
We are going to exploit this functionality by deploying multiple versions of the
ENDPOINT_NAME="heart-classifier-$ENDPOINT_SUFIX"
```
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
import random
We are going to exploit this functionality by deploying multiple versions of the
auth_mode: key
```
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
endpoint = ManagedOnlineEndpoint(
We are going to exploit this functionality by deploying multiple versions of the
az ml online-endpoint create -n $ENDPOINT_NAME -f endpoint.yml
```
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
We are going to exploit this functionality by deploying multiple versions of the
ENDPOINT_SECRET_KEY=$(az ml online-endpoint get-credentials -n $ENDPOINT_NAME | jq -r ".accessToken")
```
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
endpoint_secret_key = ml_client.online_endpoints.list_keys(
We are going to exploit this functionality by deploying multiple versions of the
# [Python (MLflow SDK)](#tab/mlflow)
- This functionality is not available in the MLflow SDK. Go to [Azure ML studio](https://ml.azure.com), navigate to the endpoint and retrieve the secret key from there.
+ This functionality is not available in the MLflow SDK. Go to [Azure Machine Learning studio](https://ml.azure.com), navigate to the endpoint and retrieve the secret key from there.
### Create a blue deployment
So far, the endpoint is empty. There are no deployments on it. Let's create the
instance_count: 1
```
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
blue_deployment_name = "default"
So far, the endpoint is empty. There are no deployments on it. Let's create the
> [!TIP]
> We set the flag `--all-traffic` in the create command, which will assign all the traffic to the new deployment.
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
ml_client.online_deployments.begin_create_or_update(blue_deployment).result()
So far, the endpoint is empty. There are no deployments on it. Let's create the
*This step is not required in the Azure CLI since we used the `--all-traffic` flag during creation.*
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
endpoint.traffic = { blue_deployment_name: 100 }
So far, the endpoint is empty. There are no deployments on it. Let's create the
*This step is not required in the Azure CLI since we used the `--all-traffic` flag during creation.*
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
ml_client.begin_create_or_update(endpoint).result()
So far, the endpoint is empty. There are no deployments on it. Let's create the
}
```
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
The following code samples 5 observations from the training dataset, removes the `target` column (as the model will predict it), and creates a request in the file `sample.json` that can be used with the model deployment.
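A standard-library sketch of the step just described, using a small in-memory table as a stand-in for the training data. The column names and values are illustrative assumptions, and the `input_data` envelope mirrors the payload shape these endpoints accept.

```python
import json
import random

# Sample 5 rows from a stand-in for the training data, drop the "target"
# column (the model predicts it), and write sample.json in split orientation.
columns = ["age", "sex", "chol", "target"]
dataset = [
    [63, 1, 233, 1],
    [37, 1, 250, 1],
    [41, 0, 204, 1],
    [56, 1, 236, 1],
    [57, 0, 354, 1],
    [63, 0, 187, 0],
]

random.seed(0)
sample = random.sample(dataset, 5)

keep = [i for i, c in enumerate(columns) if c != "target"]
payload = {
    "input_data": {
        "columns": [columns[i] for i in keep],
        "data": [[row[i] for i in keep] for row in sample],
    }
}

with open("sample.json", "w") as f:
    json.dump(payload, f, indent=2)
```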
So far, the endpoint is empty. There are no deployments on it. Let's create the
az ml online-endpoint invoke --name $ENDPOINT_NAME --request-file sample.json
```
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
ml_client.online_endpoints.invoke(
Let's imagine that there is a new version of the model created by the developmen
VERSION=$(az ml model show -n heart-classifier --label latest | jq -r ".version")
```
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
model_name = 'heart-classifier'
Let's imagine that there is a new version of the model created by the developmen
GREEN_DEPLOYMENT_NAME="xgboost-model-$VERSION"
```
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
green_deployment_name = f"xgboost-model-{version}"
Let's imagine that there is a new version of the model created by the developmen
az ml online-deployment create -n $GREEN_DEPLOYMENT_NAME --endpoint-name $ENDPOINT_NAME -f green-deployment.yml
```
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
ml_client.online_deployments.begin_create_or_update(green_deployment).result()
Let's imagine that there is a new version of the model created by the developmen
az ml online-endpoint invoke --name $ENDPOINT_NAME --deployment-name $GREEN_DEPLOYMENT_NAME --request-file sample.json
```
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
ml_client.online_endpoints.invoke(
Once we are confident with the new deployment, we can update the traffic to route
*This step is not required in the Azure CLI.*
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
endpoint.traffic = {blue_deployment_name: 90, green_deployment_name: 10}
Once we are confident with the new deployment, we can update the traffic to route
az ml online-endpoint update --name $ENDPOINT_NAME --traffic "default=90 $GREEN_DEPLOYMENT_NAME=10"
```
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
ml_client.begin_create_or_update(endpoint).result()
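The blue/green splits used here (90/10, then 0/100) must always total 100. A small hedged helper, with hypothetical deployment names, that validates a split before assigning it to `endpoint.traffic` or passing it to `az ml online-endpoint update --traffic`:

```python
# Validate a traffic split before applying it. The deployment names used below
# ("default", "xgboost-model-2") are illustrative placeholders.
def make_traffic_split(allocations: dict) -> dict:
    total = sum(allocations.values())
    if total != 100:
        raise ValueError(f"traffic must sum to 100, got {total}")
    if any(v < 0 for v in allocations.values()):
        raise ValueError("traffic percentages cannot be negative")
    return dict(allocations)

traffic = make_traffic_split({"default": 90, "xgboost-model-2": 10})
print(traffic)
```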
Once we are confident with the new deployment, we can update the traffic to route
*This step is not required in the Azure CLI.*
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
endpoint.traffic = {blue_deployment_name: 0, green_deployment_name: 100}
Once we are confident with the new deployment, we can update the traffic to route
az ml online-endpoint update --name $ENDPOINT_NAME --traffic "default=0 $GREEN_DEPLOYMENT_NAME=100"
```
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
ml_client.begin_create_or_update(endpoint).result()
Once we are confident with the new deployment, we can update the traffic to route
az ml online-deployment delete --endpoint-name $ENDPOINT_NAME --name default
```
- # [Python (Azure ML SDK)](#tab/sdk)
+ # [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
ml_client.online_deployments.begin_delete(
Once we are confident with the new deployment, we can update the traffic to route
az ml online-endpoint delete --name $ENDPOINT_NAME --yes
```
-# [Python (Azure ML SDK)](#tab/sdk)
+# [Python (Azure Machine Learning SDK)](#tab/sdk)
```python
ml_client.online_endpoints.begin_delete(name=endpoint_name)
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models.md
Azure Machine Learning offers many ways to deploy MLflow models into Online and
> [!div class="checklist"]
> - MLflow SDK
-> - Azure ML CLI and Azure ML SDK for Python
+> - Azure Machine Learning CLI and Azure Machine Learning SDK for Python
> - Azure Machine Learning studio

Each workflow has different capabilities, particularly around which type of compute it can target. The following table shows them.
-| Scenario | MLflow SDK | Azure ML CLI/SDK | Azure ML studio |
+| Scenario | MLflow SDK | Azure Machine Learning CLI/SDK | Azure Machine Learning studio |
| :- | :-: | :-: | :-: |
| Deploy to managed online endpoints | [See example](how-to-deploy-mlflow-models-online-progressive.md)<sup>1</sup> | [See example](how-to-deploy-mlflow-models-online-endpoints.md)<sup>1</sup> | [See example](how-to-deploy-mlflow-models-online-endpoints.md?tabs=studio)<sup>1</sup> |
| Deploy to managed online endpoints (with a scoring script) | | [See example](how-to-deploy-mlflow-models-online-endpoints.md#customizing-mlflow-model-deployments) | |
Each workflow has different capabilities, particularly around which type of comp
### Which option to use?
-If you are familiar with MLflow or your platform support MLflow natively (like Azure Databricks) and you wish to continue using the same set of methods, use the MLflow SDK. On the other hand, if you are more familiar with the [Azure ML CLI v2](concept-v2.md), you want to automate deployments using automation pipelines, or you want to keep deployments configuration in a git repository; we recommend you to use the [Azure ML CLI v2](concept-v2.md). If you want to quickly deploy and test models trained with MLflow, you can use [Azure Machine Learning studio](https://ml.azure.com) UI deployment.
+If you are familiar with MLflow, or your platform supports MLflow natively (like Azure Databricks), and you wish to continue using the same set of methods, use the MLflow SDK. On the other hand, if you are more familiar with the [Azure Machine Learning CLI v2](concept-v2.md), you want to automate deployments using automation pipelines, or you want to keep deployment configuration in a git repository, we recommend using the [Azure Machine Learning CLI v2](concept-v2.md). If you want to quickly deploy and test models trained with MLflow, you can use the [Azure Machine Learning studio](https://ml.azure.com) UI deployment.
## Differences between models deployed in Azure Machine Learning and MLflow built-in server
The rest of this section mostly applies to online endpoints but you can learn mo
### Input formats
-| Input type | MLflow built-in server | Azure ML Online Endpoints |
+| Input type | MLflow built-in server | Azure Machine Learning Online Endpoints |
| :- | :-: | :-: |
| JSON-serialized pandas DataFrames in the split orientation | **&check;** | **&check;** |
| JSON-serialized pandas DataFrames in the records orientation | Deprecated | |
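To make the two DataFrame orientations in the table concrete, here is a hedged standard-library sketch of the same two rows encoded both ways (column names and values are illustrative):

```python
# The "split" orientation separates column names from row data; the "records"
# orientation repeats the column names in every row object.
columns = ["age", "chol"]
rows = [[63, 233], [37, 250]]

split_payload = {"columns": columns, "index": [0, 1], "data": rows}
records_payload = [dict(zip(columns, row)) for row in rows]

print(split_payload)
print(records_payload)
```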
machine-learning How To Deploy Model Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-model-cognitive-search.md
from azureml.core.webservice import AksWebservice, Webservice
# If deploying to a cluster configured for dev/test, ensure that it was created with enough
# cores and memory to handle this deployment configuration. Note that memory is also used by
-# things such as dependencies and AzureML components.
+# things such as dependencies and Azure Machine Learning components.
aks_config = AksWebservice.deploy_configuration(autoscale_enabled=True, autoscale_min_replicas=1,
machine-learning How To Deploy Model Custom Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-model-custom-output.md
Follow the next steps to create a deployment using the previous scoring script:
# [Azure CLI](#tab/cli)
- No extra step is required for the Azure ML CLI. The environment definition will be included in the deployment file.
+ No extra step is required for the Azure Machine Learning CLI. The environment definition will be included in the deployment file.
# [Python](#tab/sdk)
machine-learning How To Deploy Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-online-endpoints.md
The [workspace](concept-workspace.md) is the top-level resource for Azure Machin
To connect to a workspace, we need identifier parameters - a subscription, resource group and workspace name. We'll use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. This example uses the [default Azure authentication](/python/api/azure-identity/azure.identity.defaultazurecredential).

```python
- # enter details of your AzureML workspace
+ # enter details of your Azure Machine Learning workspace
subscription_id = "<SUBSCRIPTION_ID>"
resource_group = "<RESOURCE_GROUP>"
workspace = "<AZUREML_WORKSPACE_NAME>"
machine-learning How To Devops Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-devops-machine-learning.md
You'll need an Azure Resource Manager connection to authenticate with Azure port
## Step 5: Create variables
-You should already have a resource group in Azure with [Azure Machine Learning](overview-what-is-azure-machine-learning.md). To deploy your DevOps pipeline to AzureML, you'll need to create variables for your subscription ID, resource group, and machine learning workspace.
+You should already have a resource group in Azure with [Azure Machine Learning](overview-what-is-azure-machine-learning.md). To deploy your DevOps pipeline to Azure Machine Learning, you'll need to create variables for your subscription ID, resource group, and machine learning workspace.
1. Select the Variables tab on your pipeline edit page.
You should already have a resource group in Azure with [Azure Machine Learning](
1. Create a new variable, `Subscription_ID`, and select the checkbox **Keep this value secret**. Set the value to your [Azure portal subscription ID](../azure-portal/get-subscription-tenant-id.md).
1. Create a new variable for `Resource_Group` with the name of the resource group for Azure Machine Learning (example: `machinelearning`).
-1. Create a new variable for `AzureML_Workspace_Name` with the name of your Azure ML workspace (example: `docs-ws`).
+1. Create a new variable for `AzureML_Workspace_Name` with the name of your Azure Machine Learning workspace (example: `docs-ws`).
1. Select **Save** to save your variables.

## Step 6: Build your YAML pipeline
Delete the starter pipeline and replace it with the following YAML code. In this
* Use the Python version task to set up Python 3.8 and install the SDK requirements.
* Use the Bash task to run bash scripts for the Azure Machine Learning SDK and CLI.
-* Use the Azure CLI task to pass the values of your three variables and use papermill to run your Jupyter notebook and push output to AzureML.
+* Use the Azure CLI task to pass the values of your three variables and use papermill to run your Jupyter notebook and push output to Azure Machine Learning.
```yaml
trigger:
steps:
1. Open your completed pipeline run and view the AzureCLI task. Check the task view to verify that the output task finished running.
- :::image type="content" source="media/how-to-devops-machine-learning/machine-learning-azurecli-output.png" alt-text="Screenshot of machine learning output to AzureML.":::
+ :::image type="content" source="media/how-to-devops-machine-learning/machine-learning-azurecli-output.png" alt-text="Screenshot of machine learning output to Azure Machine Learning.":::
1. Open Azure Machine Learning studio and navigate to the completed `sklearn-diabetes-example` job. On the **Metrics** tab, you should see the training results.
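As a hedged sketch of the Azure CLI step that consumes the three variables created earlier, an `AzureCLI@2` task in `azure-pipelines.yml` might look like the following. The service connection name, script body, and file name are assumptions for illustration, not the article's exact pipeline.

```yaml
# Illustrative only: pass the pipeline variables to an Azure CLI task.
- task: AzureCLI@2
  inputs:
    azureSubscription: 'your-service-connection'   # assumption: your ARM connection name
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az ml job create --file job.yml \
        --workspace-name $(AzureML_Workspace_Name) \
        --resource-group $(Resource_Group) \
        --subscription $(Subscription_ID)
```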
machine-learning How To Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-export-delete-data.md
Job history documents, which may contain personal user information, are stored i
Azure Machine Learning studio provides a unified view of your machine learning resources - for example, notebooks, data assets, models, and jobs. Azure Machine Learning studio emphasizes preservation of a record of your data and experiments. You can delete computational resources such as pipelines and compute resources with the browser. For these resources, navigate to the resource in question and choose **Delete**.
-You can unregister data assets and archive jobs, but these operations don't delete the data. To entirely remove the data, data assets and job data require deletion at the storage level. Storage level deletion happens in the portal, as described earlier. Azure ML Studio can handle individual deletion. Job deletion deletes the data of that job.
+You can unregister data assets and archive jobs, but these operations don't delete the data. To entirely remove the data, data assets and job data require deletion at the storage level. Storage level deletion happens in the portal, as described earlier. Azure Machine Learning studio can handle individual deletion. Job deletion deletes the data of that job.
-Azure ML Studio can handle training artifact downloads from experimental jobs. Choose the relevant **Job**. Choose **Output + logs**, and navigate to the specific artifacts you wish to download. Choose **...** and **Download**, or select **Download all**.
+Azure Machine Learning studio can handle training artifact downloads from experimental jobs. Choose the relevant **Job**. Choose **Output + logs**, and navigate to the specific artifacts you wish to download. Choose **...** and **Download**, or select **Download all**.
To download a registered model, navigate to the **Model** and choose **Download**.
machine-learning How To Generate Automl Training Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-generate-automl-training-code.md
The following diagram illustrates that you can generate the code for automated M
* This article assumes some familiarity with setting up an automated machine learning experiment. Follow the [tutorial](tutorial-auto-train-image-models.md) or [how-to](how-to-configure-auto-train.md) to see the main automated machine learning experiment design patterns.
-* Automated ML code generation is only available for experiments run on remote Azure ML compute targets. Code generation isn't supported for local runs.
+* Automated ML code generation is only available for experiments run on remote Azure Machine Learning compute targets. Code generation isn't supported for local runs.
-* All automated ML runs triggered through AzureML Studio, SDKv2 or CLIv2 will have code generation enabled.
+All automated ML runs triggered through Azure Machine Learning studio, SDKv2, or CLIv2 will have code generation enabled.
## Get generated code and model artifacts
-By default, each automated ML trained model generates its training code after training completes. Automated ML saves this code in the experiment's `outputs/generated_code` for that specific model. You can view them in the Azure ML studio UI on the **Outputs + logs** tab of the selected model.
+By default, each automated ML trained model generates its training code after training completes. Automated ML saves this code in the experiment's `outputs/generated_code` for that specific model. You can view them in the Azure Machine Learning studio UI on the **Outputs + logs** tab of the selected model.
* **script.py** This is the model's training code that you likely want to analyze with the featurization steps, specific algorithm used, and hyperparameters.
-* **script_run_notebook.ipynb** Notebook with boiler-plate code to run the model's training code (script.py) in AzureML compute through Azure ML SDKv2.
+* **script_run_notebook.ipynb** Notebook with boiler-plate code to run the model's training code (script.py) in Azure Machine Learning compute through Azure Machine Learning SDKv2.
After the automated ML training run completes, you can access the `script.py` and `script_run_notebook.ipynb` files via the Azure Machine Learning studio UI.
If you're using the Python SDKv2, you can also download the "script.py" and the
## script.py
-The `script.py` file contains the core logic needed to train a model with the previously used hyperparameters. While intended to be executed in the context of an Azure ML script run, with some modifications, the model's training code can also be run standalone in your own on-premises environment.
+The `script.py` file contains the core logic needed to train a model with the previously used hyperparameters. While intended to be executed in the context of an Azure Machine Learning script run, with some modifications, the model's training code can also be run standalone in your own on-premises environment.
The script can roughly be broken down into the following parts: data loading, data preparation, data featurization, preprocessor/algorithm specification, and training.

### Data loading
-The function `get_training_dataset()` loads the previously used dataset. It assumes that the script is run in an AzureML script run under the same workspace as the original experiment.
+The function `get_training_dataset()` loads the previously used dataset. It assumes that the script is run in an Azure Machine Learning script run under the same workspace as the original experiment.
```python
def get_training_dataset(dataset_id):
The main code that runs all the previous functions is the following:
def main(training_dataset_id=None):
    from azureml.core.run import Run
- # The following code is for when running this code as part of an AzureML script run.
+ # The following code is for when running this code as part of an Azure Machine Learning script run.
run = Run.get_context()
setup_instrumentation(run)
Finally, the model is serialized and saved as a `.pkl` file named "model.pkl":
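That serialization step can be sketched with the standard library, using a placeholder object in place of the fitted pipeline (the generated script serializes the actual trained model object):

```python
import pickle

# Stand-in for the trained model; any picklable object serializes the same way.
model = {"algorithm": "placeholder", "params": {"max_depth": 3}}

with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# Reloading restores an equal object.
with open("model.pkl", "rb") as f:
    restored = pickle.load(f)

print(restored == model)
```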
## script_run_notebook.ipynb
-The `script_run_notebook.ipynb` notebook serves as an easy way to execute `script.py` on an Azure ML compute.
+The `script_run_notebook.ipynb` notebook serves as an easy way to execute `script.py` on an Azure Machine Learning compute.
This notebook is similar to the existing automated ML sample notebooks; however, there are a couple of key differences, as explained in the following sections.

### Environment
command_job = command(
)
returned_job = ml_client.create_or_update(command_job)
-print(returned_job.studio_url) # link to naviagate to submitted run in AzureML Studio
+print(returned_job.studio_url) # link to navigate to submitted run in Azure Machine Learning studio
```

## Next steps
machine-learning How To Github Actions Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-github-actions-machine-learning.md
You'll need to update the CLI setup file variables to match your workspace.
||| |GROUP | Name of resource group | |LOCATION | Location of your workspace (example: `eastus2`) |
- |WORKSPACE | Name of Azure ML workspace |
+ |WORKSPACE | Name of Azure Machine Learning workspace |
## Step 4. Update `pipeline.yml` with your compute cluster name
-You'll use a `pipeline.yml` file to deploy your Azure ML pipeline. This is a machine learning pipeline and not a DevOps pipeline. You only need to make this update if you're using a name other than `cpu-cluster` for your computer cluster name.
+You'll use a `pipeline.yml` file to deploy your Azure Machine Learning pipeline. This is a machine learning pipeline and not a DevOps pipeline. You only need to make this update if you're using a name other than `cpu-cluster` for your compute cluster name.
1. In your cloned repository, go to `azureml-examples/cli/jobs/pipelines/nyc-taxi/pipeline.yml`. 1. Each time you see `compute: azureml:cpu-cluster`, update the value of `cpu-cluster` with your compute cluster name. For example, if your cluster is named `my-cluster`, your new value would be `azureml:my-cluster`. There are five updates.
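The five replacements can also be scripted; a minimal sketch, assuming the YAML file has been read into a string (the helper name `point_pipeline_at_cluster` is ours, not part of any SDK):

```python
def point_pipeline_at_cluster(yaml_text: str, cluster_name: str) -> str:
    # Swap the sample's default compute reference for your own cluster.
    return yaml_text.replace(
        "compute: azureml:cpu-cluster", f"compute: azureml:{cluster_name}"
    )

pipeline_text = "jobs:\n  prep_job:\n    compute: azureml:cpu-cluster\n"
updated = point_pipeline_at_cluster(pipeline_text, "my-cluster")
```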
machine-learning How To High Availability Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-high-availability-machine-learning.md
The following artifacts can be exported and imported between workspaces by using
| -- | -- | -- | | Models | [az ml model download --model-id {ID} --target-dir {PATH}](/cli/azure/ml/model#az-ml-model-download) | [az ml model register --name {NAME} --path {PATH}](/cli/azure/ml/model) | | Environments | [az ml environment download -n {NAME} -d {PATH}](/cli/azure/ml/environment#ml-az-ml-environment-download) | [az ml environment register -d {PATH}](/cli/azure/ml/environment#ml-az-ml-environment-register) |
-| Azure ML pipelines (code-generated) | [az ml pipeline get --path {PATH}](/cli/azure/ml(v1)/pipeline#az-ml(v1)-pipeline-get) | [az ml pipeline create --name {NAME} -y {PATH}](/cli/azure/ml(v1)/pipeline#az-ml(v1)-pipeline-create)
+| Azure Machine Learning pipelines (code-generated) | [az ml pipeline get --path {PATH}](/cli/azure/ml(v1)/pipeline#az-ml(v1)-pipeline-get) | [az ml pipeline create --name {NAME} -y {PATH}](/cli/azure/ml(v1)/pipeline#az-ml(v1)-pipeline-create)
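For example, moving a registered model between workspaces pairs the download and register commands from the table. A small sketch that only builds the CLI strings (the model ID, name, and path are placeholders):

```python
def model_move_commands(model_id: str, name: str, path: str) -> list:
    # Pair of CLI calls: download from the source workspace,
    # then register the downloaded files in the target workspace.
    return [
        f"az ml model download --model-id {model_id} --target-dir {path}",
        f"az ml model register --name {name} --path {path}",
    ]

commands = model_move_commands("my-model:1", "my-model", "./export/my-model")
```

Each command would be run against its respective workspace (for example, via `az configure --defaults workspace=...` before each call).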
> [!TIP]
-> * __Registered datasets__ cannot be downloaded or moved. This includes datasets generated by Azure ML, such as intermediate pipeline datasets. However datasets that refer to a shared file location that both workspaces can access, or where the underlying data storage is replicated, can be registered on both workspaces. Use the [az ml dataset register](/cli/azure/ml(v1)/dataset#ml-az-ml-dataset-register) to register a dataset.
+> * __Registered datasets__ cannot be downloaded or moved. This includes datasets generated by Azure Machine Learning, such as intermediate pipeline datasets. However datasets that refer to a shared file location that both workspaces can access, or where the underlying data storage is replicated, can be registered on both workspaces. Use the [az ml dataset register](/cli/azure/ml(v1)/dataset#ml-az-ml-dataset-register) to register a dataset.
> * __Job outputs__ are stored in the default storage account associated with a workspace. While job outputs might become inaccessible from the studio UI in the case of a service outage, you can directly access the data through the storage account. For more information on working with data stored in blobs, see [Create, download, and list blobs with Azure CLI](../storage/blobs/storage-quickstart-blobs-cli.md). ## Recovery options
machine-learning How To Identity Based Service Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-identity-based-service-authentication.md
Title: Set up service authentication
-description: Learn how to set up and configure authentication between Azure ML and other Azure services.
+description: Learn how to set up and configure authentication between Azure Machine Learning and other Azure services.
-# Set up authentication between Azure ML and other services
+# Set up authentication between Azure Machine Learning and other services
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
Azure Machine Learning is composed of multiple Azure services. There are multipl
* The Azure Machine Learning workspace uses a __managed identity__ to communicate with other services. By default, this is a system-assigned managed identity. You can also use a user-assigned managed identity instead.
-* Azure Machine Learning uses Azure Container Registry (ACR) to store Docker images used to train and deploy models. If you allow Azure ML to automatically create ACR, it will enable the __admin account__.
-* The Azure ML compute cluster uses a __managed identity__ to retrieve connection information for datastores from Azure Key Vault and to pull Docker images from ACR. You can also configure identity-based access to datastores, which will instead use the managed identity of the compute cluster.
+* Azure Machine Learning uses Azure Container Registry (ACR) to store Docker images used to train and deploy models. If you allow Azure Machine Learning to automatically create ACR, it will enable the __admin account__.
+* The Azure Machine Learning compute cluster uses a __managed identity__ to retrieve connection information for datastores from Azure Key Vault and to pull Docker images from ACR. You can also configure identity-based access to datastores, which will instead use the managed identity of the compute cluster.
* Data access can happen along multiple paths depending on the data storage service and your configuration. For example, authentication to the datastore may use an account key, token, security principal, managed identity, or user identity. * Managed online endpoints can use a managed identity to access Azure resources when performing inference. For more information, see [Access Azure resources from an online endpoint](how-to-access-resources-from-endpoints-managed-identities.md).
The same behavior applies when you work with data interactively via a Jupyter No
To help ensure that you securely connect to your storage service on Azure, Azure Machine Learning requires that you have permission to access the corresponding data storage. > [!WARNING]
-> Cross tenant access to storage accounts is not supported. If cross tenant access is needed for your scenario, please reach out to the AzureML Data Support team alias at amldatasupport@microsoft.com for assistance with a custom code solution.
+> Cross tenant access to storage accounts is not supported. If cross tenant access is needed for your scenario, please reach out to the Azure Machine Learning Data Support team alias at amldatasupport@microsoft.com for assistance with a custom code solution.
Identity-based data access supports connections to **only** the following storage services.
If your storage account has virtual network settings, that dictates what identit
## Scenario: Azure Container Registry without admin user
-When you disable the admin user for ACR, Azure ML uses a managed identity to build and pull Docker images. There are two workflows when configuring Azure ML to use an ACR with the admin user disabled:
+When you disable the admin user for ACR, Azure Machine Learning uses a managed identity to build and pull Docker images. There are two workflows when configuring Azure Machine Learning to use an ACR with the admin user disabled:
-* Allow Azure ML to create the ACR instance and then disable the admin user afterwards.
+* Allow Azure Machine Learning to create the ACR instance and then disable the admin user afterwards.
* Bring an existing ACR with the admin user already disabled.
-### Azure ML with auto-created ACR instance
+### Azure Machine Learning with auto-created ACR instance
1. Create a new Azure Machine Learning workspace. 1. Perform an action that requires Azure Container Registry. For example, the [Tutorial: Train your first model](tutorial-1st-experiment-sdk-train.md).
When you disable the admin user for ACR, Azure ML uses a managed identity to bui
If the ACR admin user is disallowed by subscription policy, you should first create the ACR without the admin user, and then associate it with the workspace. Also, if you have an existing ACR with the admin user disabled, you can attach it to the workspace.
-[Create ACR from Azure CLI](../container-registry/container-registry-get-started-azure-cli.md) without setting ```--admin-enabled``` argument, or from Azure portal without enabling admin user. Then, when creating Azure Machine Learning workspace, specify the Azure resource ID of the ACR. The following example demonstrates creating a new Azure ML workspace that uses an existing ACR:
+[Create ACR from Azure CLI](../container-registry/container-registry-get-started-azure-cli.md) without setting the ```--admin-enabled``` argument, or from the Azure portal without enabling the admin user. Then, when creating the Azure Machine Learning workspace, specify the Azure resource ID of the ACR. The following example demonstrates creating a new Azure Machine Learning workspace that uses an existing ACR:
> [!TIP] > To get the value for the `--container-registry` parameter, use the [az acr show](/cli/azure/acr#az-acr-show) command to show information for your ACR. The `id` field contains the resource ID for your ACR.
machine-learning How To Image Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-image-processing-batch.md
Once the scoring script is created, it's time to create a batch deployment for it
# [Azure CLI](#tab/cli)
- No extra step is required for the Azure ML CLI. The environment definition will be included in the deployment file.
+ No extra step is required for the Azure Machine Learning CLI. The environment definition will be included in the deployment file.
# [Python](#tab/sdk)
Once the scoring script is created, it's time to create a batch deployment for it
1. Although you can invoke a specific deployment inside an endpoint, you will usually want to invoke the endpoint itself and let the endpoint decide which deployment to use. Such a deployment is named the "default" deployment. This lets you change the default deployment (and hence the model serving the deployment) without changing the contract with the user invoking the endpoint. Use the following instruction to update the default deployment:
- # [Azure ML CLI](#tab/cli)
+ # [Azure Machine Learning CLI](#tab/cli)
```bash az ml batch-endpoint update --name $ENDPOINT_NAME --set defaults.deployment_name=$DEPLOYMENT_NAME ```
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Azure Machine Learning SDK for Python](#tab/sdk)
```python endpoint.defaults.deployment_name = deployment.name
machine-learning How To Inference Onnx Automl Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-inference-onnx-automl-image-models.md
try:
ml_client = MLClient.from_config(credential) except Exception as ex: print(ex)
- # Enter details of your AML workspace
+ # Enter details of your Azure Machine Learning workspace
subscription_id = '' resource_group = '' workspace_name = ''
machine-learning How To Inference Server Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-inference-server-http.md
Now you can modify the scoring script (`score.py`) and test your changes by runn
There are two ways to use Visual Studio Code (VS Code) and [Python Extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) to debug with [azureml-inference-server-http](https://pypi.org/project/azureml-inference-server-http/) package ([Launch and Attach modes](https://code.visualstudio.com/docs/editor/debugging#_launch-versus-attach-configurations)). -- **Launch mode**: set up the `launch.json` in VS Code and start the AzureML inference HTTP server within VS Code.
+- **Launch mode**: set up the `launch.json` in VS Code and start the Azure Machine Learning inference HTTP server within VS Code.
1. Start VS Code and open the folder containing the script (`score.py`). 1. Add the following configuration to `launch.json` for that workspace in VS Code:
There are two ways to use Visual Studio Code (VS Code) and [Python Extension](ht
1. Start debugging session in VS Code. Select "Run" -> "Start Debugging" (or `F5`). -- **Attach mode**: start the AzureML inference HTTP server in a command line and use VS Code + Python Extension to attach to the process.
+- **Attach mode**: start the Azure Machine Learning inference HTTP server in a command line and use VS Code + Python Extension to attach to the process.
> [!NOTE] > If you're using a Linux environment, first install the `gdb` package by running `sudo apt-get install -y gdb`. 1. Add the following configuration to `launch.json` for that workspace in VS Code:
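For attach mode, the `launch.json` entry might look like the following minimal sketch, using the stock VS Code Python attach configuration with the process picker (set `justMyCode` to `false` if you need to step into library code):

```json
{
    "name": "Attach to inference server",
    "type": "python",
    "request": "attach",
    "processId": "${command:pickProcess}",
    "justMyCode": true
}
```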
The following steps explain how the Azure Machine Learning inference HTTP server
## Understanding logs
-Here we describe logs of the AzureML inference HTTP server. You can get the log when you run the `azureml-inference-server-http` locally, or [get container logs](how-to-troubleshoot-online-endpoints.md#get-container-logs) if you're using online endpoints.
+Here we describe logs of the Azure Machine Learning inference HTTP server. You can get the log when you run the `azureml-inference-server-http` locally, or [get container logs](how-to-troubleshoot-online-endpoints.md#get-container-logs) if you're using online endpoints.
> [!NOTE] > The logging format has changed since version 0.8.0. If you find your log in a different style, update the `azureml-inference-server-http` package to the latest version. > [!TIP]
-> If you are using online endpoints, the log from the inference server starts with `Azure ML Inferencing HTTP server <version>`.
+> If you are using online endpoints, the log from the inference server starts with `Azure Machine Learning Inferencing HTTP server <version>`.
### Startup logs When the server is started, the server settings are first displayed by the logs as follows: ```
-Azure ML Inferencing HTTP server <version>
+Azure Machine Learning Inferencing HTTP server <version>
Server Settings
Score: POST 127.0.0.1:<port>/score
For example, when you launch the server followed the [end-to-end example](#end-to-end-example): ```
-Azure ML Inferencing HTTP server v0.8.0
+Azure Machine Learning Inferencing HTTP server v0.8.0
Server Settings
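Once the startup logs show the score route, you can exercise the server with a plain HTTP POST. A minimal stdlib sketch follows; the port and payload shape are assumptions, since the body must match whatever your `score.py` `run()` function expects:

```python
import json
import urllib.request

def score(port: int, payload: dict) -> bytes:
    # POST the payload to the local inference server's /score route.
    req = urllib.request.Request(
        f"http://127.0.0.1:{port}/score",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Example request body; the structure depends entirely on your scoring script.
body = json.dumps({"data": [[1, 2, 3, 4]]})
```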
machine-learning How To Interactive Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-interactive-jobs.md
Title: Interact with your jobs (debug and monitor)
-description: Debug or monitor your Machine Learning job as it runs on AzureML compute with your training application of choice.
+description: Debug or monitor your Machine Learning job as it runs on Azure Machine Learning compute with your training application of choice.
machine-learning How To Kubernetes Inference Routing Azureml Fe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-kubernetes-inference-routing-azureml-fe.md
Title: Inference router and connectivity requirements
-description: Learn about what is AzureML inference router, how autoscaling works, and how to configure and meet inference requests performance (# of requests per second and latency)
+description: Learn what the Azure Machine Learning inference router is, how autoscaling works, and how to configure and meet inference request performance (number of requests per second and latency)
-# AzureML inference router and connectivity requirements
+# Azure Machine Learning inference router and connectivity requirements
-AzureML inference router is a critical component for real-time inference with Kubernetes cluster. In this article, you can learn about:
+The Azure Machine Learning inference router is a critical component for real-time inference with Kubernetes clusters. In this article, you can learn about:
- * What is AzureML inference router
+ * What is Azure Machine Learning inference router
* How autoscaling works * How to configure and meet inference request performance (# of requests per second and latency) * Connectivity requirements for AKS inferencing cluster
-## What is AzureML inference router
+## What is Azure Machine Learning inference router
-AzureML inference router is the front-end component (`azureml-fe`) which is deployed on AKS or Arc Kubernetes cluster at AzureML extension deployment time. It has following functions:
+The Azure Machine Learning inference router is the front-end component (`azureml-fe`), which is deployed on an AKS or Arc Kubernetes cluster at Azure Machine Learning extension deployment time. It has the following functions:
* Routes incoming inference requests from the cluster load balancer or ingress controller to the corresponding model pods. * Load-balances all incoming inference requests with smart coordinated routing.
The following diagram illustrates this flow:
:::image type="content" source="./media/how-to-attach-kubernetes-to-workspace/request-handling-architecture.png" alt-text="Diagram illustrating the flow of requests between components.":::
-As you can see from above diagram, by default 3 `azureml-fe` instances are created during AzureML extension deployment, one instance acts as coordinating role, and the other instances serve incoming inference requests. The coordinating instance has all information about model pods and makes decision about which model pod to serve incoming request, while the serving `azureml-fe` instances are responsible for routing the request to selected model pod and propagate the response back to the original user.
+As you can see from the above diagram, by default three `azureml-fe` instances are created during Azure Machine Learning extension deployment; one instance acts in a coordinating role, and the other instances serve incoming inference requests. The coordinating instance has all information about model pods and decides which model pod serves an incoming request, while the serving `azureml-fe` instances are responsible for routing the request to the selected model pod and propagating the response back to the original user.
## Autoscaling
-AzureML inference router handles autoscaling for all model deployments on the Kubernetes cluster. Since all inference requests go through it, it has the necessary data to automatically scale the deployed model(s).
+Azure Machine Learning inference router handles autoscaling for all model deployments on the Kubernetes cluster. Since all inference requests go through it, it has the necessary data to automatically scale the deployed model(s).
> [!IMPORTANT]
-> * **Do not enable Kubernetes Horizontal Pod Autoscaler (HPA) for model deployments**. Doing so would cause the two auto-scaling components to compete with each other. Azureml-fe is designed to auto-scale models deployed by AzureML, where HPA would have to guess or approximate model utilization from a generic metric like CPU usage or a custom metric configuration.
+> * **Do not enable Kubernetes Horizontal Pod Autoscaler (HPA) for model deployments**. Doing so would cause the two auto-scaling components to compete with each other. Azureml-fe is designed to auto-scale models deployed by Azure Machine Learning, where HPA would have to guess or approximate model utilization from a generic metric like CPU usage or a custom metric configuration.
> > * **Azureml-fe does not scale the number of nodes in an AKS cluster**, because this could lead to unexpected cost increases. Instead, **it scales the number of replicas for the model** within the physical cluster boundaries. If you need to scale the number of nodes within the cluster, you can manually scale the cluster or [configure the AKS cluster autoscaler](../aks/cluster-autoscaler.md).
The `azureml-fe` can reach 5K requests per second (QPS) with good latency, havin
>If you have RPS requirements higher than 10K, consider the following options: > >* Increase resource requests/limits for `azureml-fe` pods; by default it has 2 vCPU and 1.2G memory resource limit.
->* Increase the number of instances for `azureml-fe`. By default, AzureML creates 3 or 1 `azureml-fe` instances per cluster.
-> * This instance count depends on your configuration of `inferenceRouterHA` of the [AzureML entension](how-to-deploy-kubernetes-extension.md#review-azureml-extension-configuration-settings).
+>* Increase the number of instances for `azureml-fe`. By default, Azure Machine Learning creates 3 or 1 `azureml-fe` instances per cluster.
+> * This instance count depends on your configuration of `inferenceRouterHA` of the [Azure Machine Learning extension](how-to-deploy-kubernetes-extension.md#review-azure-machine-learning-extension-configuration-settings).
> * The increased instance count cannot be persisted, since it will be overwritten with your configured value once the extension is upgraded. >* Reach out to Microsoft experts for help.
The following diagram shows the connectivity requirements for AKS inferencing. B
For general AKS connectivity requirements, see [Control egress traffic for cluster nodes in Azure Kubernetes Service](../aks/limit-egress-traffic.md).
-For accessing Azure ML services behind a firewall, see [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md).
+For accessing Azure Machine Learning services behind a firewall, see [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md).
### Overall DNS resolution requirements
machine-learning How To Log Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-mlflow-models.md
Logging models has the following advantages:
> * Models can be loaded directly for inference using `mlflow.<flavor>.load_model` and their `predict` function. > * Models can be used directly as pipeline inputs. > * Models can be deployed without indicating a scoring script or an environment.
-> * Swagger is enabled in deployed endpoints automatically and the __Test__ feature can be used in Azure ML studio.
+> * Swagger is enabled in deployed endpoints automatically and the __Test__ feature can be used in Azure Machine Learning studio.
> * You can use the Responsible AI dashboard. There are different ways to start using the model's concept in Azure Machine Learning with MLflow, as explained in the following sections:
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-view-metrics.md
mlflow.log_metric('anothermetric',1)
``` > [!TIP]
-> When submitting jobs using Azure ML CLI v2, you can set the experiment name using the property `experiment_name` in the YAML definition of the job. You don't have to configure it on your training script. See [YAML: display name, experiment name, description, and tags](reference-yaml-job-command.md#yaml-display-name-experiment-name-description-and-tags) for details.
+> When submitting jobs using Azure Machine Learning CLI v2, you can set the experiment name using the property `experiment_name` in the YAML definition of the job. You don't have to configure it on your training script. See [YAML: display name, experiment name, description, and tags](reference-yaml-job-command.md#yaml-display-name-experiment-name-description-and-tags) for details.
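For illustration, a command job YAML carrying the experiment name might look like this sketch (the command, environment, and compute values are placeholders):

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
command: python train.py
environment: azureml:my-env@latest
compute: azureml:cpu-cluster
experiment_name: my-experiment
```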
mlflow.log_params(params)
``` > [!NOTE]
-> Azure ML SDK v1 logging can't log parameters. We recommend the use of MLflow for tracking experiments as it offers a superior set of features.
+> Azure Machine Learning SDK v1 logging can't log parameters. We recommend the use of MLflow for tracking experiments as it offers a superior set of features.
## Logging metrics
client = mlflow.tracking.MlflowClient()
client.list_artifacts("<RUN_ID>") ```
-The method above will list all the artifacts logged in the run, but they will remain stored in the artifacts store (Azure ML storage). To download any of them, use the method `download_artifact`:
+The method above will list all the artifacts logged in the run, but they will remain stored in the artifacts store (Azure Machine Learning storage). To download any of them, use the method `download_artifact`:
```python file_path = client.download_artifacts("<RUN_ID>", path="feature_importance_weight.png")
Select the logged metrics to render charts on the right side. You can customize
### View and download diagnostic logs
-Log files are an essential resource for debugging the Azure ML workloads. After submitting a training job, drill down to a specific run to view its logs and outputs:
+Log files are an essential resource for debugging Azure Machine Learning workloads. After submitting a training job, drill down to a specific run to view its logs and outputs:
1. Navigate to the **Jobs** tab. 1. Select the runID for a specific run.
machine-learning How To Machine Learning Interpretability Aml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-machine-learning-interpretability-aml.md
Follow one of these paths to access the explanations dashboard in Azure Machine
1. Select a particular experiment to view all the runs in that experiment. 1. Select a run, and then the **Explanations** tab to the explanation visualization dashboard.
- [![Visualization Dashboard with Aggregate Feature Importance in AzureML studio in experiments](./media/how-to-machine-learning-interpretability-aml/model-explanation-dashboard-aml-studio.png)](./media/how-to-machine-learning-interpretability-aml/model-explanation-dashboard-aml-studio.png#lightbox)
+ [![Visualization Dashboard with Aggregate Feature Importance in Azure Machine Learning studio in experiments](./media/how-to-machine-learning-interpretability-aml/model-explanation-dashboard-aml-studio.png)](./media/how-to-machine-learning-interpretability-aml/model-explanation-dashboard-aml-studio.png#lightbox)
* **Models** pane
Dataset explorer | Supported (not forecasting) | Not supported. Since sparse
* **Forecasting models not supported with model explanations**: Interpretability, best model explanation, isn't available for AutoML forecasting experiments that recommend the following algorithms as the best model: TCNForecaster, AutoArima, Prophet, ExponentialSmoothing, Average, Naive, Seasonal Average, and Seasonal Naive. AutoML Forecasting regression models support explanations. However, in the explanation dashboard, the "Individual feature importance" tab isn't supported for forecasting because of complexity in their data pipelines.
-* **Local explanation for data index**: The explanation dashboard doesnΓÇÖt support relating local importance values to a row identifier from the original validation dataset if that dataset is greater than 5000 datapoints as the dashboard randomly downsamples the data. However, the dashboard shows raw dataset feature values for each datapoint passed into the dashboard under the Individual feature importance tab. Users can map local importances back to the original dataset through matching the raw dataset feature values. If the validation dataset size is less than 5000 samples, the `index` feature in AzureML studio will correspond to the index in the validation dataset.
+* **Local explanation for data index**: The explanation dashboard doesn't support relating local importance values to a row identifier from the original validation dataset if that dataset is greater than 5000 datapoints as the dashboard randomly downsamples the data. However, the dashboard shows raw dataset feature values for each datapoint passed into the dashboard under the Individual feature importance tab. Users can map local importances back to the original dataset through matching the raw dataset feature values. If the validation dataset size is less than 5000 samples, the `index` feature in Azure Machine Learning studio will correspond to the index in the validation dataset.
* **What-if/ICE plots not supported in studio**: What-If and Individual Conditional Expectation (ICE) plots aren't supported in Azure Machine Learning studio under the Explanations tab since the uploaded explanation needs an active compute to recalculate predictions and probabilities of perturbed features. It's currently supported in Jupyter notebooks when run as a widget using the SDK. ## Next steps
-[Techniques for model interpretability in Azure ML](how-to-machine-learning-interpretability.md)
+[Techniques for model interpretability in Azure Machine Learning](how-to-machine-learning-interpretability.md)
[Check out Azure Machine Learning interpretability sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/explain-model)
machine-learning How To Manage Environments V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-environments-v2.md
Title: 'Manage Azure Machine Learning environments with the CLI & SDK (v2)'
-description: Learn how to manage Azure ML environments using Python SDK and Azure CLI extension for Machine Learning.
+description: Learn how to manage Azure Machine Learning environments using Python SDK and Azure CLI extension for Machine Learning.
-Azure Machine Learning environments define the execution environments for your jobs or deployments and encapsulate the dependencies for your code. Azure ML uses the environment specification to create the Docker container that your training or scoring code runs in on the specified compute target. You can define an environment from a conda specification, Docker image, or Docker build context.
+Azure Machine Learning environments define the execution environments for your jobs or deployments and encapsulate the dependencies for your code. Azure Machine Learning uses the environment specification to create the Docker container that your training or scoring code runs in on the specified compute target. You can define an environment from a conda specification, Docker image, or Docker build context.
-In this article, learn how to create and manage Azure ML environments using the SDK & CLI (v2).
+In this article, learn how to create and manage Azure Machine Learning environments using the SDK & CLI (v2).
## Prerequisites
Note that `--depth 1` clones only the latest commit to the repository, which red
# [Azure CLI](#tab/cli)
-When using the Azure CLI, you need identifier parameters - a subscription, resource group, and workspace name. While you can specify these parameters for each command, you can also set defaults that will be used for all the commands. Use the following commands to set default values. Replace `<subscription ID>`, `<AzureML workspace name>`, and `<resource group>` with the values for your configuration:
+When using the Azure CLI, you need identifier parameters - a subscription, resource group, and workspace name. While you can specify these parameters for each command, you can also set defaults that will be used for all the commands. Use the following commands to set default values. Replace `<subscription ID>`, `<Azure Machine Learning workspace name>`, and `<resource group>` with the values for your configuration:
```azurecli az account set --subscription <subscription ID>
-az configure --defaults workspace=<AzureML workspace name> group=<resource group>
+az configure --defaults workspace=<Azure Machine Learning workspace name> group=<resource group>
``` # [Python SDK](#tab/python)
from azure.identity import DefaultAzureCredential
#import required libraries for environments examples from azure.ai.ml.entities import Environment, BuildContext
-#Enter details of your AzureML workspace
+#Enter details of your Azure Machine Learning workspace
subscription_id = '<SUBSCRIPTION_ID>' resource_group = '<RESOURCE_GROUP>' workspace = '<AZUREML_WORKSPACE_NAME>'
ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group,
## Curated environments
-There are two types of environments in Azure ML: curated and custom environments. Curated environments are predefined environments containing popular ML frameworks and tooling. Custom environments are user-defined and can be created via `az ml environment create`.
+There are two types of environments in Azure Machine Learning: curated and custom environments. Curated environments are predefined environments containing popular ML frameworks and tooling. Custom environments are user-defined and can be created via `az ml environment create`.
-Curated environments are provided by Azure ML and are available in your workspace by default. Azure ML routinely updates these environments with the latest framework version releases and maintains them for bug fixes and security patches. They're backed by cached Docker images, which reduce job preparation cost and model deployment time.
+Curated environments are provided by Azure Machine Learning and are available in your workspace by default. Azure Machine Learning routinely updates these environments with the latest framework version releases and maintains them for bug fixes and security patches. They're backed by cached Docker images, which reduce job preparation cost and model deployment time.
You can use these curated environments out of the box for training or deployment by referencing a specific environment using the `azureml:<curated-environment-name>:<version>` or `azureml:<curated-environment-name>@latest` syntax. You can also use them as reference for your own custom environments by modifying the Dockerfiles that back these curated environments.
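As an illustration, a command job specification might reference a curated environment using this syntax; a minimal sketch, in which the curated environment name and the compute target are placeholders (list the actual curated environment names with `az ml environment list`):

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
command: python train.py
code: ./src
# "AzureML-sklearn-1.0-ubuntu20.04-py38-cpu" is an illustrative curated environment name
environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
compute: azureml:cpu-cluster
```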
-You can see the set of available curated environments in the Azure ML studio UI, or by using the CLI (v2) via `az ml environments list`.
+You can see the set of available curated environments in the Azure Machine Learning studio UI, or by using the CLI (v2) via `az ml environments list`.
## Create an environment
ml_client.environments.create_or_update(env_docker_image)
> [!TIP]
-> Azure ML maintains a set of CPU and GPU Ubuntu Linux-based base images with common system dependencies. For example, the GPU images contain Miniconda, OpenMPI, CUDA, cuDNN, and NCCL. You can use these images for your environments, or use their corresponding Dockerfiles as reference when building your own custom images.
+> Azure Machine Learning maintains a set of CPU and GPU Ubuntu Linux-based base images with common system dependencies. For example, the GPU images contain Miniconda, OpenMPI, CUDA, cuDNN, and NCCL. You can use these images for your environments, or use their corresponding Dockerfiles as reference when building your own custom images.
>
> For the set of base images and their corresponding Dockerfiles, see the [AzureML-Containers repo](https://github.com/Azure/AzureML-Containers).
Instead of defining an environment from a prebuilt image, you can also define on
# [Azure CLI](#tab/cli)
-The following example is a YAML specification file for an environment defined from a build context. The local path to the build context folder is specified in the `build.path` field, and the relative path to the Dockerfile within that build context folder is specified in the `build.dockerfile_path` field. If `build.dockerfile_path` is omitted in the YAML file, Azure ML will look for a Dockerfile named `Dockerfile` at the root of the build context.
+The following example is a YAML specification file for an environment defined from a build context. The local path to the build context folder is specified in the `build.path` field, and the relative path to the Dockerfile within that build context folder is specified in the `build.dockerfile_path` field. If `build.dockerfile_path` is omitted in the YAML file, Azure Machine Learning will look for a Dockerfile named `Dockerfile` at the root of the build context.
In this example, the build context contains a Dockerfile named `Dockerfile` and a `requirements.txt` file that is referenced within the Dockerfile for installing Python packages.
az ml environment create --file assets/environment/docker-context.yml
# [Python SDK](#tab/python)
-In the following example, the local path to the build context folder is specified in the `path' parameter. Azure ML will look for a Dockerfile named `Dockerfile` at the root of the build context.
+In the following example, the local path to the build context folder is specified in the `path` parameter. Azure Machine Learning will look for a Dockerfile named `Dockerfile` at the root of the build context.
```python
env_docker_context = Environment(
ml_client.environments.create_or_update(env_docker_context)
-Azure ML will start building the image from the build context when the environment is created. You can monitor the status of the build and view the build logs in the studio UI.
+Azure Machine Learning will start building the image from the build context when the environment is created. You can monitor the status of the build and view the build logs in the studio UI.
### Create an environment from a conda specification
You can define an environment using a standard conda YAML configuration file that includes the dependencies for the conda environment. See [Creating an environment manually](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-file-manually) for information on this standard format.
-You must also specify a base Docker image for this environment. Azure ML will build the conda environment on top of the Docker image provided. If you install some Python dependencies in your Docker image, those packages won't exist in the execution environment thus causing runtime failures. By default, Azure ML will build a Conda environment with dependencies you specified, and will execute the job in that environment instead of using any Python libraries that you installed on the base image.
+You must also specify a base Docker image for this environment. Azure Machine Learning will build the conda environment on top of the Docker image provided. If you install some Python dependencies in your Docker image, those packages won't exist in the execution environment, thus causing runtime failures. By default, Azure Machine Learning will build a Conda environment with the dependencies you specified, and will execute the job in that environment instead of using any Python libraries that you installed on the base image.
## [Azure CLI](#tab/cli)
-The following example is a YAML specification file for an environment defined from a conda specification. Here the relative path to the conda file from the Azure ML environment YAML file is specified via the `conda_file` property. You can alternatively define the conda specification inline using the `conda_file` property, rather than defining it in a separate file.
+The following example is a YAML specification file for an environment defined from a conda specification. Here the relative path to the conda file from the Azure Machine Learning environment YAML file is specified via the `conda_file` property. You can alternatively define the conda specification inline using the `conda_file` property, rather than defining it in a separate file.
:::code language="yaml" source="~/azureml-examples-main/cli/assets/environment/docker-image-plus-conda.yml":::
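For reference, the conda file referenced by such a specification is a standard conda environment file; a minimal sketch (the environment name, channel, and package versions below are illustrative, not taken from the referenced sample):

```yaml
name: example-conda-env
channels:
  - conda-forge
dependencies:
  - python=3.8
  - pip
  - pip:
      - numpy
      - scikit-learn
```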
ml_client.environments.create_or_update(env_docker_conda)
-Azure ML will build the final Docker image from this environment specification when the environment is used in a job or deployment. You can also manually trigger a build of the environment in the studio UI.
+Azure Machine Learning will build the final Docker image from this environment specification when the environment is used in a job or deployment. You can also manually trigger a build of the environment in the studio UI.
## Manage environments
-The SDK and CLI (v2) also allow you to manage the lifecycle of your Azure ML environment assets.
+The SDK and CLI (v2) also allow you to manage the lifecycle of your Azure Machine Learning environment assets.
### List
ml_client.environments.archive(name="docker-image-example", version="1")
# [Azure CLI](#tab/cli)
-To use an environment for a training job, specify the `environment` field of the job YAML configuration. You can either reference an existing registered Azure ML environment via `environment: azureml:<environment-name>:<environment-version>` or `environment: azureml:<environment-name>@latest` (to reference the latest version of an environment), or define an environment specification inline. If defining an environment inline, don't specify the `name` and `version` fields, as these environments are treated as "unregistered" environments and aren't tracked in your environment asset registry.
+To use an environment for a training job, specify the `environment` field of the job YAML configuration. You can either reference an existing registered Azure Machine Learning environment via `environment: azureml:<environment-name>:<environment-version>` or `environment: azureml:<environment-name>@latest` (to reference the latest version of an environment), or define an environment specification inline. If defining an environment inline, don't specify the `name` and `version` fields, as these environments are treated as "unregistered" environments and aren't tracked in your environment asset registry.
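For example, an inline (unregistered) environment in a job YAML might look like the following sketch, in which the image, paths, and compute name are placeholders:

```yaml
command: python train.py
code: ./src
environment:
  # no name/version fields: this inline environment is treated as unregistered
  image: mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04
  conda_file: ./env/conda.yml
compute: azureml:cpu-cluster
```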
# [Python SDK](#tab/python)
machine-learning How To Manage Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-files.md
To create a new file in a different folder:
1. Select **Create new file**.

> [!IMPORTANT]
-> Content in notebooks and scripts can potentially read data from your sessions and access data without your organization in Azure. Only load files from trusted sources. For more information, see [Secure code best practices](concept-secure-code-best-practice.md#azure-ml-studio-notebooks).
+> Content in notebooks and scripts can potentially read data from your sessions and access data in your organization in Azure. Only load files from trusted sources. For more information, see [Secure code best practices](concept-secure-code-best-practice.md#azure-machine-learning-studio-notebooks).
## Manage files with Git
machine-learning How To Manage Kubernetes Instance Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-kubernetes-instance-types.md
Instance types are an Azure Machine Learning concept that allows targeting certain types of compute nodes for training and inference workloads. For an Azure VM, an example for an instance type is `STANDARD_D2_V3`.
-In Kubernetes clusters, instance types are represented in a custom resource definition (CRD) that is installed with the AzureML extension. Two elements in AzureML extension represent the instance types:
+In Kubernetes clusters, instance types are represented in a custom resource definition (CRD) that is installed with the Azure Machine Learning extension. Two elements in Azure Machine Learning extension represent the instance types:
[nodeSelector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector) and [resources](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
In short, a `nodeSelector` lets you specify which node a pod should run on. The
>[!IMPORTANT]
>
-> If you have [specified a nodeSelector when deploying the AzureML extension](./how-to-deploy-kubernetes-extension.md#review-azureml-extension-configuration-settings), the nodeSelector will be applied to all instance types. This means that:
+> If you have [specified a nodeSelector when deploying the Azure Machine Learning extension](./how-to-deploy-kubernetes-extension.md#review-azure-machine-learning-extension-configuration-settings), the nodeSelector will be applied to all instance types. This means that:
> - For each instance type you create, the specified nodeSelector should be a subset of the extension-specified nodeSelector.
> - If you use an instance type **with nodeSelector**, the workload will run on any node matching both the extension-specified nodeSelector and the instance type-specified nodeSelector.
> - If you use an instance type **without a nodeSelector**, the workload will run on any node matching the extension-specified nodeSelector.
In short, a `nodeSelector` lets you specify which node a pod should run on. The
## Default instance type
-By default, a `defaultinstancetype` with the following definition is created when you attach a Kubernetes cluster to an AzureML workspace:
+By default, a `defaultinstancetype` with the following definition is created when you attach a Kubernetes cluster to an Azure Machine Learning workspace:
- If you don't apply a `nodeSelector`, it means the pod can get scheduled on any node.
- The workload's pods are assigned default resources with 0.1 cpu cores, 2-GB memory and 0 GPU for request.
- The resources used by the workload's pods are limited to 2 cpu cores and 8-GB memory:
items:
memory: "1Gi" ```
-The above example creates two instance types: `cpusmall` and `defaultinstancetype`. This `defaultinstancetype` definition overrides the `defaultinstancetype` definition created when Kubernetes cluster was attached to AzureML workspace.
+The above example creates two instance types: `cpusmall` and `defaultinstancetype`. This `defaultinstancetype` definition overrides the `defaultinstancetype` definition created when the Kubernetes cluster was attached to the Azure Machine Learning workspace.
If you submit a training or inference workload without an instance type, it uses the `defaultinstancetype`. To specify a default instance type for a Kubernetes cluster, create an instance type with name `defaultinstancetype`. It's automatically recognized as the default.
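As a sketch, such a default instance type could be defined with the instance type custom resource; the resource values below are illustrative, and the exact schema should be checked against the extension documentation:

```yaml
apiVersion: amlarc.azureml.com/v1alpha1
kind: InstanceType
metadata:
  # naming it "defaultinstancetype" makes it the cluster default
  name: defaultinstancetype
spec:
  resources:
    requests:
      cpu: "500m"
      memory: "1Gi"
    limits:
      cpu: "1"
      memory: "2Gi"
```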
If you use the `resource section`, the valid resource definition need to meet th
## Next steps
-- [AzureML inference router and connectivity requirements](./how-to-kubernetes-inference-routing-azureml-fe.md)
+- [Azure Machine Learning inference router and connectivity requirements](./how-to-kubernetes-inference-routing-azureml-fe.md)
- [Secure AKS inferencing environment](./how-to-secure-kubernetes-inferencing-environment.md)
machine-learning How To Manage Models Mlflo