Updates from: 05/17/2023 01:10:08
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Active Directory Optional Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-optional-claims.md
The set of optional claims available by default for applications to use are list
| `acct` | User's account status in tenant | JWT, SAML | | If the user is a member of the tenant, the value is `0`. If they're a guest, the value is `1`. |
| `auth_time` | Time when the user last authenticated. See OpenID Connect spec. | JWT | | |
| `ctry` | User's country/region | JWT | | Azure AD returns the `ctry` optional claim if it's present and the value of the field is a standard two-letter country/region code, such as FR, JP, SZ, and so on. |
-| `email` | The reported email address for this user | JWT, SAML | MSA, Azure AD | This value is included by default if the user is a guest in the tenant. For managed users (the users inside the tenant), it must be requested through this optional claim or, on v2.0 only, with the OpenID scope. This value isn't guaranteed to be correct, and is mutable over time - never use it for authorization or to save data for a user. For more information, see [Validate the user has permission to access this data](access-tokens.md). If you are using the email claim for authorization, we recommend [performing a migration to move to a more secure claim](./migrate-off-email-claim-authorization.md). If you require an addressable email address in your app, request this data from the user directly, using this claim as a suggestion or prefill in your UX. |
+| `email` | The reported email address for this user | JWT, SAML | MSA, Azure AD | This value is included by default if the user is a guest in the tenant. For managed users (the users inside the tenant), it must be requested through this optional claim or, on v2.0 only, with the OpenID scope. This value isn't guaranteed to be correct, and is mutable over time - never use it for authorization or to save data for a user. For more information, see [Validate the user has permission to access this data](access-tokens.md). If you require an addressable email address in your app, request this data from the user directly, using this claim as a suggestion or prefill in your UX. |
| `fwd` | IP address. | JWT | | Adds the original IPv4 address of the requesting client (when inside a VNET) |
| `groups` | Optional formatting for group claims | JWT, SAML | | For details see [Group claims](#configuring-groups-optional-claims). For more information about group claims, see [How to configure group claims](../hybrid/how-to-connect-fed-group-claims.md). Used with the GroupMembershipClaims setting in the [application manifest](reference-app-manifest.md), which must be set as well. |
| `idtyp` | Token type | JWT access tokens | Special: only in app-only access tokens | Value is `app` when the token is an app-only token. This claim is the most accurate way for an API to determine if a token is an app token or an app+user token. |
Some optional claims can be configured to change the way the claim is returned.
| | `include_externally_authenticated_upn_without_hash` | Same as listed previously, except that the hash marks (`#`) are replaced with underscores (`_`), for example `foo_hometenant.com_EXT_@resourcetenant.com` |
| `aud` | | In v1 access tokens, this claim is used to change the format of the `aud` claim. This claim has no effect in v2 tokens or either version's ID tokens, where the `aud` claim is always the client ID. Use this configuration to ensure that your API can more easily perform audience validation. Like all optional claims that affect the access token, the resource in the request must set this optional claim, since resources own the access token. |
| | `use_guid` | Emits the client ID of the resource (API) in GUID format as the `aud` claim always instead of it being runtime dependent. For example, if a resource sets this flag, and its client ID is `bb0a297b-6a42-4a55-ac40-09a501456577`, any app that requests an access token for that resource will receive an access token with `aud` : `bb0a297b-6a42-4a55-ac40-09a501456577`. </br></br> Without this claim set, an API could get tokens with an `aud` claim of `api://MyApi.com`, `api://MyApi.com/`, `api://myapi.com/AdditionalRegisteredField` or any other value set as an app ID URI for that API, and the client ID of the resource. |
-| `email` | | Can be used for both SAML and JWT responses, and for v1.0 and v2.0 tokens. |
- | | `replace_unverified_email_with_upn` (Preview) | In scenarios where email ownership is not verified, the `email` claim returns the user's home tenant UPN instead, unless otherwise stated. For managed users, email is verified if the home tenant owns the email's domain as a custom domain name. For guest users, email is verified if either the home or resource tenants own the email's domain. If the user authenticates using Email OTP, MSA, or Google federation, the `email` claim remains the same. If the user authenticates using Facebook or SAML/WS-Fed IdP federation, the `email` claim isn't returned. The `email` claim isn't guaranteed to be mailbox addressable, regardless of whether it is verified. |
#### Additional properties example
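As a hedged illustration only (the application object ID is a placeholder, and the choice of patching the app registration through Microsoft Graph rather than editing the manifest in the portal is an assumption), the `email` optional claim with the `replace_unverified_email_with_upn` additional property could be set like this:

```azurecli
# Sketch: add the email optional claim with replace_unverified_email_with_upn to an app registration.
# <application-object-id> is a placeholder for the application's object ID (not the client ID).
az rest --method patch \
  --url "https://graph.microsoft.com/v1.0/applications/<application-object-id>" \
  --headers "Content-Type=application/json" \
  --body '{"optionalClaims":{"idToken":[{"name":"email","essential":false,"additionalProperties":["replace_unverified_email_with_upn"]}]}}'
```

The same property can be applied to the `accessToken` or `saml2Token` collections of `optionalClaims` if those token types are in use.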
active-directory Migrate Off Email Claim Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/migrate-off-email-claim-authorization.md
- Title: Migrate away from using email claims for authorization
-description: Learn how to migrate your application away from using insecure claims, such as email, for authorization purposes.
------- Previously updated : 05/11/2023------
-# Migrate away from using email claims for authorization
-
-This article is meant to provide guidance to developers whose applications are currently using a pattern where the email claim is used for authorization, which can lead to full account takeover by another user. Continue reading to learn whether your application is impacted and the steps for remediation.
-
-## How do I know if my application is impacted?
-
-Microsoft recommends reviewing application source code and determining whether the following patterns are present:
-
-- A mutable claim, such as `email`, is used for the purposes of uniquely identifying a user
-- A mutable claim, such as `email`, is used for the purposes of authorizing a user's access to resources
-
-These patterns are considered insecure, as users without a provisioned mailbox can have any email address set for their Mail (Primary SMTP) attribute. This attribute is **not guaranteed to come from a verified email address**. When an unverified email claim is used for authorization, any user without a provisioned mailbox has the potential to gain unauthorized access by changing their Mail attribute to impersonate another user.
-
-This risk of unauthorized access has only been found in multi-tenant apps, as a user from one tenant could escalate their privileges to access resources from another tenant through modification of their Mail attribute.
-
-## Migrate applications to more secure configurations
-
-You should never use mutable claims (such as `email`, `preferred_username`, etc.) as identifiers to perform authorization checks or index users in a database. These values are re-usable and could expose your application to privilege escalation attacks.
-
-If your application is currently using a mutable value for indexing users, you should migrate to a globally unique identifier, such as the object ID (referred to as `oid` in the token claims). Doing so ensures that each user is indexed on a value that can't be re-used (or abused to impersonate another user).
--
-If your application uses email (or any other mutable claim) for authorization purposes, you should read through the [Secure applications and APIs by validating claims](claims-validation.md) and implement the appropriate checks.
-
-## Short-term risk mitigation
-
-To mitigate the risk of unauthorized access before updating application code, you can use the `replace_unverified_email_with_upn` property for the optional `email` claim, which replaces (or removes) email claims, depending on account type, according to the following table:
-
-| **User type** | **Replaced with** |
-||-|
-| On Premise | Home tenant UPN |
-| Cloud Managed | Home tenant UPN |
-| Microsoft Account (MSA) | Email address the user signed up with |
-| Email OTP | Email address the user signed up with |
-| Social IDP: Google | Email address the user signed up with |
-| Social IDP: Facebook | Email claim isn't issued |
-| Direct Fed | Email claim isn't issued |
-
-Enabling `replace_unverified_email_with_upn` eliminates the most significant risk of cross-tenant privilege escalation by ensuring authorization doesn't occur against an arbitrarily set email value. While enabling this property prevents unauthorized access, it can also break access to users with unverified emails. Internal data suggests the overall percentage of users with unverified emails is low and this tradeoff is appropriate to secure applications in the short term.
-
-The `replace_unverified_email_with_upn` option is also documented under the documentation for [additional properties of optional claims](active-directory-optional-claims.md#additional-properties-of-optional-claims).
-
-Enabling `replace_unverified_email_with_upn` should be viewed mainly as a short-term risk mitigation strategy while migrating applications away from email claims, and not as a permanent solution for resolving account escalation risk related to email usage.
-
-## Next steps
-
-- To learn more about using claims-based authorization securely, see [Secure applications and APIs by validating claims](claims-validation.md)
-- For more information about optional claims, see [Provide optional claims to your application](./active-directory-optional-claims.md)
active-directory V2 Oauth2 Client Creds Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-client-creds-grant-flow.md
curl -X POST -H "Content-Type: application/x-www-form-urlencoded" -d 'client_id=
| | | -- |
| `tenant` | Required | The directory tenant the application plans to operate against, in GUID or domain-name format. |
| `client_id` | Required | The application ID that's assigned to your app. You can find this information in the portal where you registered your app. |
-| `scope` | Required | The value passed for the `scope` parameter in this request should be the resource identifier (application ID URI) of the resource you want, affixed with the `.default` suffix. For the Microsoft Graph example, the value is `https://graph.microsoft.com/.default`. <br/>This value tells the Microsoft identity platform that of all the direct application permissions you have configured for your app, the endpoint should issue a token for the ones associated with the resource you want to use. To learn more about the `/.default` scope, see the [consent documentation](v2-permissions-and-consent.md#the-default-scope). |
+| `scope` | Required | The value passed for the `scope` parameter in this request should be the resource identifier (application ID URI) of the resource you want, affixed with the `.default` suffix. All scopes included must be for a single resource. Including scopes for multiple resources will result in an error. <br/>For the Microsoft Graph example, the value is `https://graph.microsoft.com/.default`. This value tells the Microsoft identity platform that of all the direct application permissions you have configured for your app, the endpoint should issue a token for the ones associated with the resource you want to use. To learn more about the `/.default` scope, see the [consent documentation](v2-permissions-and-consent.md#the-default-scope). |
| `client_secret` | Required | The client secret that you generated for your app in the app registration portal. The client secret must be URL-encoded before being sent. The Basic auth pattern of instead providing credentials in the Authorization header, per [RFC 6749](https://datatracker.ietf.org/doc/html/rfc6749#section-2.3.1) is also supported. |
| `grant_type` | Required | Must be set to `client_credentials`. |
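For illustration, a complete secret-based token request could look like the following sketch; the tenant, client ID, and secret values are placeholders, and the secret must be URL-encoded:

```bash
# Sketch: client credentials token request with a shared secret (placeholder values throughout).
curl -X POST -H "Content-Type: application/x-www-form-urlencoded" \
  -d 'client_id=<client-id>&scope=https%3A%2F%2Fgraph.microsoft.com%2F.default&client_secret=<url-encoded-secret>&grant_type=client_credentials' \
  'https://login.microsoftonline.com/<tenant>/oauth2/v2.0/token'
```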
scope=https%3A%2F%2Fgraph.microsoft.com%2F.default
| -- | | -- |
| `tenant` | Required | The directory tenant the application plans to operate against, in GUID or domain-name format. |
| `client_id` | Required | The application (client) ID that's assigned to your app. |
-| `scope` | Required | The value passed for the `scope` parameter in this request should be the resource identifier (application ID URI) of the resource you want, affixed with the `.default` suffix. For the Microsoft Graph example, the value is `https://graph.microsoft.com/.default`. <br/>This value informs the Microsoft identity platform that of all the direct application permissions you have configured for your app, it should issue a token for the ones associated with the resource you want to use. To learn more about the `/.default` scope, see the [consent documentation](v2-permissions-and-consent.md#the-default-scope). |
+| `scope` | Required | The value passed for the `scope` parameter in this request should be the resource identifier (application ID URI) of the resource you want, affixed with the `.default` suffix. All scopes included must be for a single resource. Including scopes for multiple resources will result in an error. <br/>For the Microsoft Graph example, the value is `https://graph.microsoft.com/.default`. This value tells the Microsoft identity platform that of all the direct application permissions you have configured for your app, the endpoint should issue a token for the ones associated with the resource you want to use. To learn more about the `/.default` scope, see the [consent documentation](v2-permissions-and-consent.md#the-default-scope). |
| `client_assertion_type` | Required | The value must be set to `urn:ietf:params:oauth:client-assertion-type:jwt-bearer`. |
| `client_assertion` | Required | An assertion (a JSON web token) that you need to create and sign with the certificate you registered as credentials for your application. Read about [certificate credentials](active-directory-certificate-credentials.md) to learn how to register your certificate and the format of the assertion. |
| `grant_type` | Required | Must be set to `client_credentials`. |
active-directory Howto Manage Local Admin Passwords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-manage-local-admin-passwords.md
To enable Windows LAPS with Azure AD, you must take actions in Azure AD and the
## Recovering local administrator password and password metadata
-To view the local administrator password for a Windows device joined to Azure AD, you must be granted the *deviceLocalCredentials.Read.All* permission.
+To view the local administrator password for a Windows device joined to Azure AD, you must be granted the *microsoft.directory/deviceLocalCredentials/password/read* action.
-To view the local administrator password metadata for a Windows device joined to Azure AD, you must be granted the *deviceLocalCredentials.Read* permission.
+To view the local administrator password metadata for a Windows device joined to Azure AD, you must be granted the *microsoft.directory/deviceLocalCredentials/standard/read* action.
-The following built-in roles are granted these permissions by default:
+The following built-in roles are granted these actions by default:
-|Built-in role|DeviceLocalCredential.Read.All|DeviceLocalCredential.Read|
+|Built-in role|microsoft.directory/deviceLocalCredentials/standard/read and microsoft.directory/deviceLocalCredentials/password/read|microsoft.directory/deviceLocalCredentials/standard/read|
|---|---|---|
|[Global Administrator](../roles/permissions-reference.md#global-administrator)|Yes|Yes|
|[Cloud Device Administrator](../roles/permissions-reference.md#cloud-device-administrator)|Yes|Yes|
The following built-in roles are granted these permissions by default:
|[Security Administrator](../roles/permissions-reference.md#security-administrator)|No|Yes|
|[Security Reader](../roles/permissions-reference.md#security-reader)|No|Yes|
-Any roles not listed are granted neither permission.
+Any roles not listed are granted neither action.
You can also use the Microsoft Graph API [Get deviceLocalCredentialInfo](/graph/api/devicelocalcredentialinfo-get?view=graph-rest-beta&preserve-view=true) to recover the local administrator password. If you use the Microsoft Graph API, the password is returned as a Base64-encoded value that you need to decode before using it.
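As a sketch (the device ID is a placeholder, and the endpoint follows the linked Graph beta reference), the credentials can be retrieved with `az rest` and the password decoded afterwards:

```azurecli
# Sketch: read the local administrator credentials for a device; requires an identity
# granted the actions described above (placeholder device ID).
az rest --method get \
  --url 'https://graph.microsoft.com/beta/directory/deviceLocalCredentials/<deviceId>?$select=credentials'

# Decode a returned passwordBase64 value before using it (placeholder value shown).
echo '<passwordBase64>' | base64 --decode
```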
active-directory Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/sap.md
Customers who have yet to transition from applications such as SAP ECC to SAP S/
## SSO, workflows, and separation of duties

In addition to the native provisioning integrations that allow you to manage access to your SAP applications, Azure AD supports a rich set of integrations with SAP.
-* SSO: Once you've setup provisioning for your SAP application, you'll want to enable single sign-on for those applications. Azure AD can serve as the identity provider and server as the authentication authority for your SAP applications. Learn more about how you can [configure Azure AD as the corporate identity provider for your SAP applications](https://help.sap.com/docs/IDENTITY_AUTHENTICATION/6d6d63354d1242d185ab4830fc04feb1/058c7b14209f4f2d8de039da4330a1c1.html).
-Custom workflows: When a new employee is hired in your organization, you may need to trigger a workflow within your SAP server.
-* Using the [Entra Identity Governance Lifecycle Workflows](lifecycle-workflow-extensibility.md) in conjunction with the [SAP connector in Azure Logic apps](https://learn.microsoft.com/azure/logic-apps/logic-apps-using-sap-connector), you can trigger custom actions in SAP upon hiring a new employee.
-* Separation of duties: With separation of duties checks now available in preview in Azure AD [entitlement management](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/ensure-compliance-using-separation-of-duties-checks-in-access/ba-p/2466939), customers can now ensure that users don't take on excessive access rights. Admins and access managers can prevent users from requesting additional access packages if they're already assigned to other access packages or are a member of other groups that are incompatible with the requested access. Enterprises with critical regulatory requirements for SAP apps will have a single consistent view of access controls and enforce separation of duties checks across their financial and other business critical applications and Azure AD-integrated applications. With our [Pathlock](https://pathlock.com/), integration customers can leverage fine-grained separation of duties checks with access packages in Azure AD, and over time will help customers to address Sarbanes Oxley and other compliance requirements.
+* **SSO:** Once you've set up provisioning for your SAP application, you'll want to enable single sign-on for those applications. Azure AD can serve as the identity provider and the authentication authority for your SAP applications. Learn more about how you can [configure Azure AD as the corporate identity provider for your SAP applications](https://help.sap.com/docs/IDENTITY_AUTHENTICATION/6d6d63354d1242d185ab4830fc04feb1/058c7b14209f4f2d8de039da4330a1c1.html).
+* **Custom workflows:** When a new employee is hired in your organization, you may need to trigger a workflow within your SAP server. Using the [Entra Identity Governance Lifecycle Workflows](lifecycle-workflow-extensibility.md) in conjunction with the [SAP connector in Azure Logic apps](https://learn.microsoft.com/azure/logic-apps/logic-apps-using-sap-connector), you can trigger custom actions in SAP upon hiring a new employee.
+* **Separation of duties:** With separation of duties checks now available in preview in Azure AD [entitlement management](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/ensure-compliance-using-separation-of-duties-checks-in-access/ba-p/2466939), customers can now ensure that users don't take on excessive access rights. Admins and access managers can prevent users from requesting additional access packages if they're already assigned to other access packages or are a member of other groups that are incompatible with the requested access. Enterprises with critical regulatory requirements for SAP apps will have a single consistent view of access controls and enforce separation of duties checks across their financial and other business critical applications and Azure AD-integrated applications. With our [Pathlock](https://pathlock.com/) integration, customers can leverage fine-grained separation of duties checks with access packages in Azure AD; over time, this will help customers address Sarbanes-Oxley and other compliance requirements.
## Next steps
aks Deploy Application Az Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-application-az-cli.md
+
+ Title: Deploy an Azure Kubernetes application programmatically by using Azure CLI
+description: Learn how to deploy an Azure Kubernetes application programmatically by using Azure CLI.
+++ Last updated : 05/15/2023++
+# Deploy an Azure Kubernetes application programmatically by using Azure CLI
+
+To deploy a Kubernetes application programmatically through Azure CLI, you select the Kubernetes application and settings, accept legal terms and conditions, and finally deploy the application through CLI commands.
+
+## Select Kubernetes application
+
+First, you need to select the Kubernetes application that you want to deploy in the Azure portal. You'll also need to copy some of the details for later use.
+
+1. In the Azure portal, go to the [Marketplace page](https://ms.portal.azure.com/#view/Microsoft_Azure_Marketplace/MarketplaceOffersBlade/selectedMenuItemId/home/fromContext/AKS).
+1. Select your Kubernetes application.
+1. Select the required plan.
+1. Select the **Create** button.
+1. Fill out all the application (extension) details.
+1. In the **Review + Create** tab, select **Download a template for automation**. If all the validations are passed, you'll see the ARM template in the editor.
+1. Examine the ARM template:
+
+ 1. In the variables section, copy the `plan-name`, `plan-publisher`, `plan-offerID`, and `clusterExtensionTypeName` values for later use.
+
+ ```json
+ "variables": {
+ "plan-name": "DONOTMODIFY",
+ "plan-publisher": "DONOTMODIFY",
+ "plan-offerID": "DONOTMODIFY",
+ "releaseTrain": "DONOTMODIFY",
+ "clusterExtensionTypeName": "DONOTMODIFY"
+ },
+ ```
+
+ 1. In the `Microsoft.KubernetesConfiguration/extensions` resource section, copy the `configurationSettings` section for later use.
+
+ ```json
+ {
+ "type": "Microsoft.KubernetesConfiguration/extensions",
+ "apiVersion": "2022-11-01",
+ "name": "[parameters('extensionResourceName')]",
+ "properties": {
+ "extensionType": "[variables('clusterExtensionTypeName')]",
+ "autoUpgradeMinorVersion": true,
+ "releaseTrain": "[variables('releaseTrain')]",
+ "configurationSettings": {
+ "title": "[parameters('app-title')]",
+ "value1": "[parameters('app-value1')]",
+ "value2": "[parameters('app-value2')]"
+ },
+ ```
+
+ > [!NOTE]
+ > If there are no configuration settings in the ARM template, refer to the application-related documentation in Azure Marketplace or on the partner's website.
+
+## Accept terms and agreements
+
+Before you can deploy a Kubernetes application, you need to accept its terms and agreements. To do so, run the following command, using the values you copied for `plan-publisher`, `plan-offerID`, and `plan-name`.
+
+```azurecli
+az vm image terms accept --offer <plan-offerID> --plan <plan-name> --publisher <plan-publisher>
+```
+
+> [!NOTE]
+> Although this command is for VMs, it also works for containers. For more information, see the [`az vm image terms` reference](/cli/azure/vm/image/terms).
+
+## Deploy the application
+
+To deploy the application (extension) through Azure CLI, follow the steps outlined in [Deploy and manage cluster extensions by using Azure CLI](deploy-extensions-az-cli.md).
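As a hedged sketch (resource names and configuration values are placeholders taken from the details copied earlier, and the plan parameters are assumed to be supported on `create` the same way they're documented for `update`), the deployment command might look like:

```azurecli
# Sketch: install a Marketplace Kubernetes application as a cluster extension (placeholders throughout).
az k8s-extension create \
  --name <extensionResourceName> \
  --extension-type <clusterExtensionTypeName> \
  --cluster-name <clusterName> \
  --resource-group <resourceGroupName> \
  --cluster-type managedClusters \
  --plan-name <plan-name> \
  --plan-product <plan-offerID> \
  --plan-publisher <plan-publisher> \
  --configuration-settings title=<app-title> value1=<app-value1> value2=<app-value2>
```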
+
+## Next steps
+
+- Learn about [Kubernetes applications available through Marketplace](deploy-marketplace.md).
+- Learn about [cluster extensions](cluster-extensions.md).
aks Deploy Application Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-application-template.md
+
+ Title: Deploy an Azure Kubernetes application by using an ARM template
+description: Learn how to deploy an Azure Kubernetes application by using an ARM template.
+++ Last updated : 05/15/2023++
+# Deploy an Azure Kubernetes application by using an ARM template
+
+To deploy a Kubernetes application by using an ARM template, you select the Kubernetes application and settings, generate an ARM template, accept legal terms and conditions, and finally deploy the ARM template.
+
+## Select Kubernetes application
+
+First, you need to select the Kubernetes application that you want to deploy in the Azure portal.
+
+1. In the Azure portal, go to the [Marketplace page](https://ms.portal.azure.com/#view/Microsoft_Azure_Marketplace/MarketplaceOffersBlade/selectedMenuItemId/home/fromContext/AKS).
+1. Select your Kubernetes application.
+1. Select the required plan.
+1. Select the **Usage Information + Support** tab. Copy the values for `publisherID`, `productID`, and `planID`. You'll need these values later.
+
+ :::image type="content" source="media/deploy-application-template/usage-information.png" alt-text="Screenshot showing the Usage Information + Support tab for a Kubernetes application.":::
+
+## Generate ARM template
+
+Continue on to generate the ARM template for your deployment.
+
+1. Select the **Create** button.
+1. Fill out all the application (extension) details.
+1. At the bottom of the **Review + Create** tab, select **Download a template for automation**.
+
+ :::image type="content" source="media/deploy-application-template/download-template.png" alt-text="Screenshot showing the option to download a template for a Kubernetes application.":::
+
+ If all the validations are passed, you'll see the ARM template in the editor.
+
+ :::image type="content" source="media/deploy-application-template/download-arm-template.png" alt-text="Screenshot showing an ARM template for a Kubernetes application.":::
+
+1. Download the ARM template and save it to a file on your computer.
+
+## Accept terms and agreements
+
+Before you can deploy a Kubernetes application, you need to accept its terms and agreements. To do so, use [Azure CLI](/cli/azure/vm/image/terms) or [Azure PowerShell](/powershell/module/azurerm.marketplaceordering/). Be sure to use the **Publisher ID**, **Product ID**, and **Plan ID** values you copied earlier in your command.
+
+```azurecli
+az vm image terms accept --offer <Product ID> --plan <Plan ID> --publisher <Publisher ID>
+```
+
+> [!NOTE]
+> Although this Azure CLI command is for VMs, it also works for containers. For more information, see the [`az vm image terms` reference](/cli/azure/vm/image/terms).
+
+```azurepowershell
+## Get-AzureRmMarketplaceTerms -Publisher <Publisher ID> -Product <Product ID> -Name <Plan ID>
+```
+
+## Deploy ARM template
+
+Once you've accepted the terms, you can deploy your ARM template. For instructions, see [Tutorial: Create and deploy your first ARM template](/azure/azure-resource-manager/templates/template-tutorial-create-first-template).
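For example, a hedged sketch of deploying the downloaded template with Azure CLI (the file name, resource group, and parameter values are placeholders; `extensionResourceName` is the parameter name used in the template shown earlier):

```azurecli
# Sketch: deploy the downloaded ARM template to an existing resource group (placeholders throughout).
az deployment group create \
  --resource-group <resourceGroupName> \
  --template-file ./template.json \
  --parameters extensionResourceName=<extensionResourceName>
```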
+
+## Next steps
+
+- Learn about [Kubernetes applications available through Marketplace](deploy-marketplace.md).
+- Learn about [cluster extensions](cluster-extensions.md).
+
aks Deploy Extensions Az Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-extensions-az-cli.md
Title: Deploy and manage cluster extensions by using the Azure CLI description: Learn how to use Azure CLI to deploy and manage extensions for Azure Kubernetes Service clusters. Previously updated : 05/12/2023 Last updated : 05/15/2023
To list all extensions installed on a cluster, use `k8s-extension list`, passing
az k8s-extension list --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters
```
-### Update extension instance
+## Update extension instance
> [!NOTE]
> Refer to the documentation for the specific extension type to understand which settings in `--configuration-settings` and `--configuration-protected-settings` can be updated. For `--configuration-protected-settings`, all settings are expected to be provided, even if only one setting is being updated. If any of these settings are omitted, those settings will be considered obsolete and deleted.
az k8s-extension update --name azureml --extension-type Microsoft.AzureML.Kubern
| `--resource-group` | The resource group containing the AKS cluster |
| `--cluster-type` | The cluster type on which the extension instance has to be created. Specify `managedClusters` as it maps to AKS clusters |
+If updating a Kubernetes application procured through Marketplace, the following parameters are also required:
+
+| Parameter name | Description |
+|-||
+|`--plan-name` | **Plan ID** of the extension, found on the Marketplace page in the Azure portal under **Usage Information + Support**. |
+|`--plan-product` | **Product ID** of the extension, found on the Marketplace page in the Azure portal under **Usage Information + Support**. An example of this is the name of the ISV offering used. |
+|`--plan-publisher` | **Publisher ID** of the extension, found on the Marketplace page in the Azure portal under **Usage Information + Support**. |
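A hedged sketch of such an update (all names and values are placeholders):

```azurecli
# Sketch: update a Marketplace-procured extension, passing the plan details from the
# Usage Information + Support page (placeholders throughout).
az k8s-extension update \
  --name <extensionName> \
  --cluster-name <clusterName> \
  --resource-group <resourceGroupName> \
  --cluster-type managedClusters \
  --plan-name <planID> \
  --plan-product <productID> \
  --plan-publisher <publisherID> \
  --configuration-settings <key>=<value>
```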
+
### Optional parameters for update

| Parameter name | Description |
az k8s-extension update --name azureml --extension-type Microsoft.AzureML.Kubern
| `--configuration-protected-settings-file` | Path to the JSON file having key value pairs to be used for passing in sensitive settings to the extension. If this parameter is used in the command, then `--configuration-protected-settings` can't be used in the same command. |
| `--scope` | Scope of installation for the extension - `cluster` or `namespace` |
| `--release-train` | Extension authors can publish versions in different release trains such as `Stable`, `Preview`, etc. If this parameter isn't set explicitly, `Stable` is used as default. This parameter can't be used when `autoUpgradeMinorVersion` parameter is set to `false`. |
-|`--plan-name` | **Plan ID** of the extension, found on the Marketplace page in the Azure portal under **Usage Information + Support**. |
-|`--plan-product` | **Product ID** of the extension, found on the Marketplace page in the Azure portal under **Usage Information + Support**. An example of this is the name of the ISV offering used. |
-|`--plan-publisher` | **Publisher ID** of the extension, found on the Marketplace page in the Azure portal under **Usage Information + Support**. |
+ ## Delete extension instance
api-management Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/index.md
na Previously updated : 10/31/2017 Last updated : 05/15/2023 -+ # API Management policy samples
| [Add capabilities to a backend service and cache the response](./cache-response.md) | Shows how to add capabilities to a backend service. For example, accept a name of the place instead of latitude and longitude in a weather forecast API. |
| [Authorize access based on JWT claims](./authorize-request-based-on-jwt-claims.md) | Shows how to authorize access to specific HTTP methods on an API based on JWT claims. |
| [Authorize requests using external authorizer](./authorize-request-using-external-authorizer.md) | Shows how to use external authorizer for securing API access. |
-| [Authorize access using Google OAuth token](./use-google-as-oauth-token-provider.md) | Shows how to authorize access to your endpoints using Google as an OAuth token provider. |
| [Filter IP Addresses when using an Application Gateway](./filter-ip-addresses-when-using-appgw.md) | Shows how to filter IP addresses in policies when the API Management instance is accessed via an Application Gateway. |
| [Generate Shared Access Signature and forward request to Azure storage](./generate-shared-access-signature.md) | Shows how to generate a [Shared Access Signature](../../storage/common/storage-sas-overview.md) using expressions and forward the request to Azure storage with the rewrite-uri policy. |
-| [Get OAuth2 access token from AAD and forward it to the backend](./use-oauth2-for-authorization.md) | Provides and example of using OAuth2 for authorization between the gateway and a backend. It shows how to obtain an access token from AAD and forward it to the backend. |
+| [Get OAuth2 access token from Azure AD and forward it to the backend](./use-oauth2-for-authorization.md) | Provides an example of using OAuth2 for authorization between the gateway and a backend. It shows how to obtain an access token from Azure AD and forward it to the backend. |
| [Get X-CSRF token from SAP gateway using send request policy](./get-x-csrf-token-from-sap-gateway.md) | Shows how to implement X-CSRF pattern used by many APIs. This example is specific to SAP Gateway. |
| [Route the request based on the size of its body](./route-requests-based-on-size.md) | Demonstrates how to route requests based on the size of their bodies. |
| [Send request context information to the backend service](./send-request-context-info-to-backend-service.md) | Shows how to send some context information to the backend service for logging or processing. |
api-management Use Google As Oauth Token Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/use-google-as-oauth-token-provider.md
- Title: Sample API management policy - Authorize access using Google OAuth token-
-description: Azure API management policy sample - Demonstrates how to authorize access to your endpoints using Google as an OAuth token provider.
------- Previously updated : 10/13/2017---
-# Authorize access using Google OAuth token
-
-This article shows an Azure API management policy sample that demonstrates how to authorize access to your endpoints using Google as an OAuth token provider. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](../policy-reference.md).
-
-## Policy
-
-Paste the code into the **inbound** block.
-
-[!code-xml[Main](../../../api-management-policy-samples/examples/Simple Google OAuth validate-jwt.policy.xml)]
-
-## Next steps
-
-Learn more about APIM policies:
-
-+ [Transformation policies](../api-management-transformation-policies.md)
-+ [Policy samples](../policy-reference.md)
application-gateway Create Ssl Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-ssl-portal.md
Thumbprint Subject
E1E81C23B3AD33F9B4D1717B20AB65DBB91AC630 CN=www.contoso.com
```
-Use [Export-PfxCertificate](/powershell/module/pki/export-pfxcertificate) with the Thumbprint that was returned to export a pfx file from the certificate. Make sure your password is 4 - 12 characters long:
+Use [Export-PfxCertificate](/powershell/module/pki/export-pfxcertificate) with the Thumbprint that was returned to export a pfx file from the certificate. The supported PFX algorithms are listed at [PFXImportCertStore function](/windows/win32/api/wincrypt/nf-wincrypt-pfximportcertstore#remarks). Make sure your password is 4 - 12 characters long:
```powershell
application-gateway Ssl Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ssl-overview.md
Application Gateway supports TLS termination at the gateway, after which traffic
- **Intelligent routing** – By decrypting the traffic, the application gateway has access to the request content, such as headers, URI, and so on, and can use this data to route requests.
- **Certificate management** – Certificates only need to be purchased and installed on the application gateway and not all backend servers. This saves both time and money.
-To configure TLS termination, a TLS/SSL certificate must be added to the listener. This allows the Application Gateway to decrypt incoming traffic and encrypt response traffic to the client. The certificate provided to the Application Gateway must be in Personal Information Exchange (PFX) format, which contains both the private and public keys.
+To configure TLS termination, a TLS/SSL certificate must be added to the listener. This allows the Application Gateway to decrypt incoming traffic and encrypt response traffic to the client. The certificate provided to the Application Gateway must be in Personal Information Exchange (PFX) format, which contains both the private and public keys. The supported PFX algorithms are listed at [PFXImportCertStore function](/windows/win32/api/wincrypt/nf-wincrypt-pfximportcertstore#remarks).
> [!IMPORTANT]
> The certificate on the listener requires the entire certificate chain to be uploaded (the root certificate from the CA, the intermediates and the leaf certificate) to establish the chain of trust.

> [!NOTE]
> Application gateway doesn't provide any capability to create a new certificate or send a certificate request to a certification authority.
azure-arc Use Azure Policy Flux 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/use-azure-policy-flux-2.md
Title: "Deploy applications consistently at scale using Flux v2 configurations and Azure Policy" Previously updated : 8/23/2022 Last updated : 05/15/2023 description: "Use Azure Policy to apply Flux v2 configurations at scale on Azure Arc-enabled Kubernetes or AKS clusters."
You can use Azure Policy to apply Flux v2 configurations (`Microsoft.KubernetesC
To use Azure Policy, select a built-in policy definition and create a policy assignment. You can search for **flux** to find all of the Flux v2 policy definitions. When creating the policy assignment:
-1. Set the scope for the assignment.
- * The scope will be all resource groups in a subscription or management group or specific resource groups.
-2. Set the parameters for the Flux v2 configuration that will be created.
+1. Set the scope for the assignment to all resource groups in a subscription or management group, or to specific resource groups.
+2. Set the parameters for the Flux v2 configuration that will be created.
Once the assignment is created, the Azure Policy engine identifies all Azure Arc-enabled Kubernetes clusters located within the scope and applies the GitOps configuration to each cluster. To enable separation of concerns, you can create multiple policy assignments, each with a different Flux v2 configuration pointing to a different source. For example, one git repository may be used by cluster admins and other repositories may be used by application teams.

> [!TIP]
-> There are built-in policy definitions for these scenarios:
+> There are [built-in policy definitions](policy-reference.md) for these scenarios:
>
> * Flux extension install (required for all scenarios): `Configure installation of Flux extension on Kubernetes cluster`
> * Flux configuration using public Git repository (generally a test scenario): `Configure Kubernetes clusters with Flux v2 configuration using public Git repository`
Verify you have `Microsoft.Authorization/policyAssignments/write` permissions on
1. In the Azure portal, navigate to **Policy**.
1. In the **Authoring** section of the sidebar, select **Definitions**.
-1. In the "Kubernetes" category, choose the "Configure Kubernetes clusters with specified GitOps configuration using no secrets" built-in policy definition.
+1. In the "Kubernetes" category, choose the **Configure Kubernetes clusters with Flux v2 configuration using public Git repository** built-in policy definition.
1. Select **Assign**.
1. Set the **Scope** to the management group, subscription, or resource group to which the policy assignment will apply.
   * If you want to exclude any resources from the policy assignment scope, set **Exclusions**.
Verify you have `Microsoft.Authorization/policyAssignments/write` permissions on
   * When creating Flux configurations you must provide a value for one (and only one) of these parameters: `repositoryRefBranch`, `repositoryRefTag`, `repositoryRefSemver`, `repositoryRefCommit`.
1. Select **Next**.
1. Enable **Create a remediation task**.
-1. Verify **Create a managed identity** is checked, and that the identity will have **Contributor** permissions.
- * For more information, see the [Create a policy assignment quickstart](../../governance/policy/assign-policy-portal.md) and the [Remediate non-compliant resources with Azure Policy article](../../governance/policy/how-to/remediate-resources.md).
+1. Verify **Create a managed identity** is checked, and that the identity will have **Contributor** permissions.
+ * For more information, see [Quickstart: Create a policy assignment to identify non-compliant resources](../../governance/policy/assign-policy-portal.md) and [Remediate non-compliant resources with Azure Policy](../../governance/policy/how-to/remediate-resources.md).
1. Select **Review + create**.

After creating the policy assignment, the configuration is applied to new Azure Arc-enabled Kubernetes or AKS clusters created within the scope of the policy assignment.
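The same assignment can also be created with Azure CLI. The following is a sketch only; the assignment name, definition name or ID, scope, parameter values, and region are placeholders:

```azurecli
# Sketch: assign a built-in Flux v2 policy definition at resource-group scope, with a
# system-assigned managed identity for remediation (placeholders throughout).
az policy assignment create \
  --name <assignmentName> \
  --scope /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName> \
  --policy <builtInDefinitionNameOrId> \
  --params "<parameters-file-or-json>" \
  --mi-system-assigned \
  --location <region> \
  --role Contributor \
  --identity-scope /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>
```

A remediation task for existing clusters can then be created with `az policy remediation create`.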
If you have a scenario that differs from the built-in policies, you can overcome
## Next steps
-[Set up Azure Monitor for Containers with Azure Arc-enabled Kubernetes clusters](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md).
+* [Set up Azure Monitor for Containers with Azure Arc-enabled Kubernetes clusters](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md).
+* Learn more about [deploying applications using GitOps with Flux v2](tutorial-use-gitops-flux2.md).
azure-cache-for-redis Cache How To Import Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-import-export-data.md
Use import to bring Redis compatible RDB files from any Redis server running in
:::image type="content" source="./media/cache-how-to-import-export-data/cache-import-blobs.png" alt-text="Screenshot showing the Import button to select to begin the import.":::
- You can monitor the progress of the import operation by following the notifications from the Azure portal, or by viewing the events in the [audit log](../azure-monitor/essentials/activity-log.md).
+ You can monitor the progress of the import operation by following the notifications from the Azure portal, or by viewing the events in the [activity log](../azure-monitor/essentials/activity-log.md).
> [!IMPORTANT]
- > Audit log support is not yet available in the Enterprise tiers.
+ > Activity log support is not yet available in the Enterprise tiers.
> :::image type="content" source="./media/cache-how-to-import-export-data/cache-import-data-import-complete.png" alt-text="Screenshot showing the import progress in the notifications area.":::
This section contains frequently asked questions about the Import/Export feature
- [I received a timeout error during my Import/Export operation. What does it mean?](#i-received-a-timeout-error-during-my-importexport-operation-what-does-it-mean)
- [I got an error when exporting my data to Azure Blob Storage. What happened?](#i-got-an-error-when-exporting-my-data-to-azure-blob-storage-what-happened)
- [How to export if I have firewall enabled on my storage account?](#how-to-export-if-i-have-firewall-enabled-on-my-storage-account)
+- [Can I import or export data from a storage account in a different subscription than my cache?](#can-i-import-or-export-data-from-a-storage-account-in-a-different-subscription-than-my-cache)
### Which tiers support Import/Export?
For firewall-enabled storage accounts, we need to check "Allow Azure services
For more information, see [Managed identity for storage accounts - Azure Cache for Redis](cache-managed-identity.md).
+### Can I import or export data from a storage account in a different subscription than my cache?
+
+In the _Premium_ tier, you can import and export data from a storage account in a different subscription than your cache, but you must use [managed identity](cache-managed-identity.md) as the authentication method. You will need to select the subscription that holds the storage account when configuring the import or export.
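For reference, a hedged sketch of the equivalent CLI operations (cache name, resource group, and SAS URIs are placeholders; cross-subscription storage with managed identity is configured as described above):

```azurecli
# Sketch: import RDB blobs into a cache from a storage container (placeholders throughout).
az redis import --name <cacheName> --resource-group <resourceGroupName> --files "<blob-sas-uri>"

# Sketch: export the cache contents to a storage container.
az redis export --name <cacheName> --resource-group <resourceGroupName> --container "<container-sas-uri>" --prefix <blobPrefix>
```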
+
## Next steps

Learn more about Azure Cache for Redis features.
azure-cache-for-redis Cache How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-monitor.md
In contrast, for clustered caches, we recommend using the metrics with the suffi
## List of metrics
+- 99th Percentile Latency (preview)
+ - Depicts the worst-case (99th percentile) latency of server-side commands. Measured by issuing `PING` commands from the load balancer to the Redis server and tracking the time to respond.
+ - Useful for tracking the health of your Redis instance. Latency will increase if the cache is under heavy load or if there are long running commands that delay the execution of the `PING` command.
+ - This metric is only available in Standard and Premium tier caches
- Cache Latency (preview)
  - The latency of the cache calculated using the internode latency of the cache. This metric is measured in microseconds, and has three dimensions: `Avg`, `Min`, and `Max`. The dimensions represent the average, minimum, and maximum latency of the cache during the specified reporting interval.
- Cache Misses
In contrast, for clustered caches, we recommend using the metrics with the suffi
> [!CAUTION]
> The Server Load metric can present incorrect data for Enterprise and Enterprise Flash tier caches. Sometimes Server Load is represented as being over 100. We are investigating this issue. We recommend using the CPU metric instead in the meantime.

- Sets
  - The number of set operations to the cache during the specified reporting interval. This value is the sum of the following values from the Redis INFO all command: `cmdstat_set`, `cmdstat_hset`, `cmdstat_hmset`, `cmdstat_hsetnx`, `cmdstat_lset`, `cmdstat_mset`, `cmdstat_msetnx`, `cmdstat_setbit`, `cmdstat_setex`, `cmdstat_setrange`, and `cmdstat_setnx`.
- Total Keys
azure-cache-for-redis Cache How To Premium Persistence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-persistence.md
Persistence features are intended to be used to restore data to the same cache a
- RDB/AOF persisted data files can't be imported to a new cache. Use the [Import/Export](cache-how-to-import-export-data.md) feature instead.
- Persistence isn't supported with caches using [passive geo-replication](cache-how-to-geo-replication.md) or [active geo-replication](cache-how-to-active-geo-replication.md).
- On the _Premium_ tier, AOF persistence isn't supported with [multiple replicas](cache-how-to-multi-replicas.md).
-- On the _Premium_ tier, data must be persisted to a storage account in the same region as the cache instance.
+- On the _Premium_ tier, data must be persisted to a storage account in the same region as the cache instance.
+- On the _Premium_ tier, storage accounts in different subscriptions can be used to persist data if [managed identity](cache-managed-identity.md) is used to connect to the storage account.
## Differences between persistence in the Premium and Enterprise tiers
On the **Enterprise** and **Enterprise Flash** tiers, data is persisted to a man
| Setting | Suggested value | Description |
| --- | --- | --- |
+ | **Authentication Method** | Drop-down and select an authentication method. Choices are **Managed Identity** or **Storage Key**. | Choose your preferred authentication method. Using [managed identity](cache-managed-identity.md) allows you to use a storage account in a different subscription than the one in which your cache is located. |
+ | **Subscription** | Drop-down and select a subscription. | You can choose a storage account in a different subscription if you are using managed identity as the authentication method. |
| **Backup Frequency** | Drop-down and select a backup interval. Choices include **15 Minutes**, **30 minutes**, **60 minutes**, **6 hours**, **12 hours**, and **24 hours**. | This interval starts counting down after the previous backup operation successfully completes. When it elapses, a new backup starts. |
| **Storage Account** | Drop-down and select your storage account. | Choose a storage account in the same region and subscription as the cache. A **Premium Storage** account is recommended because it has higher throughput. Also, we _strongly_ recommend that you disable the soft delete feature on the storage account as it leads to increased storage costs. For more information, see [Pricing and billing](../storage/blobs/soft-delete-blob-overview.md). |
| **Storage Key** | Drop-down and choose either the **Primary key** or **Secondary key** to use. | If the storage key for your persistence account is regenerated, you must reconfigure the key from the **Storage Key** drop-down. |
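As a sketch under assumptions (all names are placeholders, and the `redisConfiguration` keys shown are the commonly documented RDB settings), RDB persistence can also be configured when creating a Premium cache from the CLI:

```azurecli
# Sketch: create a Premium cache with RDB persistence to a storage account (placeholders throughout).
az redis create \
  --name <cacheName> \
  --resource-group <resourceGroupName> \
  --location <region> \
  --sku Premium \
  --vm-size p1 \
  --redis-configuration '{"rdb-backup-enabled":"true","rdb-backup-frequency":"60","rdb-storage-connection-string":"<storage-connection-string>"}'
```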
On the **Enterprise** and **Enterprise Flash** tiers, data is persisted to a man
| Setting | Suggested value | Description |
| --- | --- | --- |
+ | **Authentication Method** | Drop-down and select an authentication method. Choices are **Managed Identity** or **Storage Key**. | Choose your preferred authentication method. Using [managed identity](cache-managed-identity.md) allows you to use a storage account in a different subscription than the one in which your cache is located. |
+ | **Subscription** | Drop-down and select a subscription. | You can choose a storage account in a different subscription if you are using managed identity as the authentication method. |
| **First Storage Account** | Drop-down and select your storage account. | Choose a storage account in the same region and subscription as the cache. A **Premium Storage** account is recommended because it has higher throughput. Also, we _strongly_ recommend that you disable the soft delete feature on the storage account as it leads to increased storage costs. For more information, see [Pricing and billing](/azure/storage/blobs/soft-delete-blob-overview). |
| **First Storage Key** | Drop-down and choose either the **Primary key** or **Secondary key** to use. | If the storage key for your persistence account is regenerated, you must reconfigure the key from the **Storage Key** drop-down. |
| **Second Storage Account** | (Optional) Drop-down and select your secondary storage account. | You can optionally configure another storage account. If a second storage account is configured, the writes to the replica cache are written to this second storage account. |
For both RDB and AOF persistence:
### Can I use the same storage account for persistence across two different caches?
-Yes, you can use the same storage account for persistence across two different caches.
+Yes, you can use the same storage account for persistence across two different caches. The [limitations on subscriptions and regions](#prerequisites-and-limitations) will still apply.
### Will I be charged for the storage being used in data persistence?
All RDB persistence backups, except for the most recent one, are automatically d
Use a second storage account for AOF persistence when you expect higher than usual set operations on the cache. Setting up the secondary storage account helps ensure your cache doesn't reach storage bandwidth limits. This option is only available for Premium tier caches.

### How can I remove the second storage account?

You can remove the AOF persistence secondary storage account by setting the second storage account to be the same as the first storage account. For existing caches, access **Data persistence** from the **Resource menu** for your cache. To disable AOF persistence, select **Disabled**.
azure-cache-for-redis Cache Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-managed-identity.md
Each type of managed identity has advantages, but in Azure Cache for Redis, the
Managed identity can be enabled either when you create a cache instance or after the cache has been created. During the creation of a cache, only a system-assigned identity can be assigned. Either identity type can be added to an existing cache.
-### Prerequisites and limitations
+## Scope of availability
+
+|Tier | Basic, Standard | Premium |Enterprise, Enterprise Flash |
+|||||
+|Available | No | Yes | No |
+
+## Prerequisites and limitations
Currently, managed identity for storage is used only with the import/export and data persistence features, which limits its use to the Premium tier of Azure Cache for Redis.
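As a sketch (assuming the `az redis identity` command group available in current Azure CLI versions; names are placeholders), a system-assigned identity can be added to an existing Premium cache like this:

```azurecli
# Sketch: enable a system-assigned managed identity on an existing cache (placeholders throughout).
az redis identity assign --name <cacheName> --resource-group <resourceGroupName> --mi-system-assigned
```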
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md
Azure Cache for Redis now supports clustered caches with up to 30 shards. Now, y
For more information, see [Configure clustering for Azure Cache for Redis instance](cache-how-to-premium-clustering.md#azure-cache-for-redis-now-supports-up-to-30-shards-preview).
+## April 2023
+
+### 99th percentile latency metric (preview)
+
+A new metric is available to track the worst-case latency of server-side commands in Azure Cache for Redis instances. Latency is measured by using `PING` commands and tracking response times. This metric can be used to track the health of your cache instance and to see if long-running commands are compromising latency performance.
+
+For more information, see [Monitor Azure Cache for Redis](cache-how-to-monitor.md#list-of-metrics).
+ ## March 2023 ### In-place scale up and scale out for the Enterprise tiers (preview)
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
Azure Functions supports the following Python versions:
| Functions version | Python\* versions | | -- | :--: |
-| 4.x | 3.10 (Preview)<br/>3.9<br/> 3.8<br/>3.7 |
+| 4.x | 3.10<br/>3.9<br/> 3.8<br/>3.7 |
| 3.x | 3.9<br/> 3.8<br/>3.7 | | 2.x | 3.7 |
azure-monitor Javascript Framework Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-framework-extensions.md
Install the npm package:
```bash
-npm install @microsoft/applicationinsights-react-js @microsoft/applicationinsights-web --save
+npm install @microsoft/applicationinsights-angularplugin-js @microsoft/applicationinsights-web --save
```
azure-netapp-files Azacsnap Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-introduction.md
Azure Application Consistent Snapshot tool (AzAcSnap) is a command-line tool tha
AzAcSnap leverages the volume snapshot and replication functionalities in Azure NetApp Files and Azure Large Instance. It provides the following benefits:
+- **Rapid backup snapshots independent of database size**
+ - AzAcSnap takes snapshot backups regardless of the size of the volumes or database by leveraging the snapshot technology of storage.
+ - It takes snapshots in parallel across all the volumes thereby allowing multiple volumes to be part of the database storage.
+ - Tests have shown that a 100+ TiB database stored across 16 volumes can be backed up with a snapshot in less than two minutes.
- **Application-consistent data protection**
- AzAcSnap is a centralized solution for backing up critical database files. It ensures database consistency before performing a storage volume snapshot. As a result, it ensures that the storage volume snapshot can be used for database recovery.
+ - AzAcSnap can be deployed as a centralized or distributed solution for backing up critical database files. It ensures database consistency before performing a storage volume snapshot. As a result, it ensures that the storage volume snapshot can be used for database recovery.
- **Database catalog management**
- When you use AzAcSnap with SAP HANA, the records within the backup catalog are kept current with storage snapshots. This capability allows a database administrator to see the backup activity.
+ - When you use AzAcSnap with SAP HANA, the records within the backup catalog are kept current with storage snapshots. This capability allows a database administrator to see the backup activity.
- **Ad hoc volume protection**
- This capability is helpful for non-database volumes that don't need application quiescing before taking a storage snapshot. Examples include SAP HANA log-backup volumes or SAPTRANS volumes.
+ - This capability is helpful for non-database volumes that don't need application quiescing before taking a storage snapshot. Examples include SAP HANA log-backup volumes or SAPTRANS volumes.
- **Cloning of storage volumes**
- This capability provides space-efficient storage volume clones for development and test purposes.
+ - This capability provides space-efficient storage volume clones for development and test purposes.
- **Support for disaster recovery**
- AzAcSnap leverages storage volume replication to provide options for recovering replicated application-consistent snapshots at a remote site.
+ - AzAcSnap leverages storage volume replication to provide options for recovering replicated application-consistent snapshots at a remote site.
AzAcSnap is a single binary. It does not need additional agents or plug-ins to interact with the database or the storage (Azure NetApp Files via Azure Resource Manager, and Azure Large Instance via SSH). AzAcSnap must be installed on a system that has connectivity to the database and the storage. However, the flexibility of installation and configuration allows for either a single centralized installation (Azure NetApp Files only) or a fully distributed installation (Azure NetApp Files and Azure Large Instance) with copies installed on each database installation.
azure-netapp-files Azure Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-government.md
All [Azure NetApp Files features](whats-new.md) available on Azure public cloud
| Azure NetApp Files features | Azure public cloud availability | Azure Government availability | |: |: |: | | Azure NetApp Files backup | Public preview | No |
-| Standard network features | Generally available (GA) | No |
| Azure NetApp Files datastores for AVS | Generally available (GA) | No | | Azure NetApp Files customer-managed keys | Public preview | No | | Azure NetApp Files large volumes | Public preview | No |
+| Edit network features for existing volumes | Public preview | No |
+| Standard network features | Generally available (GA) | No |
## Portal access
When connecting to Azure Government through PowerShell, you must specify an envi
| | | | [Azure](/powershell/module/az.accounts/Connect-AzAccount) commands |`Connect-AzAccount -EnvironmentName AzureUSGovernment` | | [Azure Active Directory](/powershell/module/azuread/connect-azuread) commands |`Connect-AzureAD -AzureEnvironmentName AzureUSGovernment` |
-| [Azure (Classic deployment model)](/powershell/module/servicemanagement/azure.service/add-azureaccount) commands |`Add-AzureAccount -Environment AzureUSGovernment` |
+| [Azure (Classic deployment model)](/powershell/module/servicemanagement/azure/add-azureaccount) commands |`Add-AzureAccount -Environment AzureUSGovernment` |
| [Azure Active Directory (Classic deployment model)](/previous-versions/azure/jj151815(v=azure.100)) commands |`Connect-MsolService -AzureEnvironment UsGovernment` | See [Connect to Azure Government with PowerShell](../azure-government/documentation-government-get-started-connect-with-ps.md) for details.
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
Azure NetApp Files volumes are designed to be contained in a special purpose sub
## Configurable network features
- You can create new volumes choosing *Standard* or *Basic* network features in supported regions. In regions where the Standard network features aren't supported, the volume defaults to using the Basic network features. For more information, see [Configure network features](configure-network-features.md).
+ In supported regions, you can create new volumes or modify existing volumes to use *Standard* or *Basic* network features. In regions where the Standard network features aren't supported, the volume defaults to using the Basic network features. For more information, see [Configure network features](configure-network-features.md).
* ***Standard*** Selecting this setting enables higher IP limits and standard VNet features such as [network security groups](../virtual-network/network-security-groups-overview.md) and [user-defined routes](../virtual-network/virtual-networks-udr-overview.md#user-defined) on delegated subnets, and additional connectivity patterns as indicated in this article.
Azure NetApp Files volumes are designed to be contained in a special purpose sub
### Supported regions
-Azure NetApp Files Standard network features are supported for the following regions:
+<a name="regions-standard-network-features"></a>Azure NetApp Files *Standard network features* are supported for the following regions:
* Australia Central * Australia Central 2
Azure NetApp Files Standard network features are supported for the following reg
* West US 2 * West US 3
+<a name="regions-edit-network-features"></a>The option to *[edit network features for existing volumes](configure-network-features.md#edit-network-features-option-for-existing-volumes)* is supported for the following regions:
+
+* Australia Central
+* Australia Central 2
+* Australia East
+* Brazil South
+* Canada Central
+* East Asia
+* Germany North
+* Japan West
+* Korea Central
+* North Central US
+* Norway East
+* South Africa North
+* South India
+* Sweden Central
+* UAE Central
+* UAE North
+ ## Considerations You should understand a few considerations when you plan for Azure NetApp Files network.
The following table describes the network topologies supported by each network f
| Connectivity to volume in a peered VNet (Same region) | Yes | Yes | | Connectivity to volume in a peered VNet (Cross region or global peering) | Yes* | No | | Connectivity to a volume over ExpressRoute gateway | Yes | Yes |
-| ExpressRoute (ER) FastPath | Yes | No |
+| [ExpressRoute (ER) FastPath](../expressroute/about-fastpath.md) | Yes | No |
| Connectivity from on-premises to a volume in a spoke VNet over ExpressRoute gateway and VNet peering with gateway transit | Yes | Yes | | Connectivity from on-premises to a volume in a spoke VNet over VPN gateway | Yes | Yes | | Connectivity from on-premises to a volume in a spoke VNet over VPN gateway and VNet peering with gateway transit | Yes | Yes |
azure-netapp-files Configure Network Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-network-features.md
na Previously updated : 01/31/2023 Last updated : 05/16/2023 # Configure network features for an Azure NetApp Files volume
-The **Network Features** functionality enables you to indicate whether you want to use VNet features for an Azure NetApp Files volume. With this functionality, you can set the option to ***Standard*** or ***Basic***. You can specify the setting when you create a new NFS, SMB, or dual-protocol volume. See [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md) for details about network features.
+The **Network Features** functionality enables you to indicate whether you want to use VNet features for an Azure NetApp Files volume. With this functionality, you can set the option to ***Standard*** or ***Basic***. You can specify the setting when you create a new NFS, SMB, or dual-protocol volume. You can also modify the network features option on existing volumes. See [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md) for details about network features.
This article helps you understand the options and shows you how to configure network features.
-The **Network Features** functionality is not available in Azure Government regions. See [supported regions](azure-netapp-files-network-topologies.md#supported-regions) for a full list.
+The **Network Features** functionality isn't available in Azure Government regions. See [supported regions](azure-netapp-files-network-topologies.md#supported-regions) for a full list.
## Options for network features
Two settings are available for network features:
* ***Basic*** This setting provides reduced IP limits (<1000) and no additional VNet features for the volumes.
- You should set **Network Features** to *Basic* if you do not require VNet features.
+ You should set **Network Features** to *Basic* if you don't require VNet features.
## Considerations
-* Regardless of the Network Features option you set (*Standard* or *Basic*), an Azure VNet can only have one subnet delegated to Azure NetApp files. See [Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md#considerations).
+* Regardless of the network features option you set (*Standard* or *Basic*), an Azure VNet can only have one subnet delegated to Azure NetApp files. See [Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md#considerations).
-* Currently, you can specify the network features setting only during the creation process of a new volume. You cannot modify the setting on existing volumes.
+* You can create or modify volumes with the Standard network features only if the corresponding [Azure region supports the Standard volume capability](azure-netapp-files-network-topologies.md#supported-regions).
-* You can create volumes with the Standard network features only if the corresponding [Azure region supports the Standard volume capability](azure-netapp-files-network-topologies.md#supported-regions).
* If the Standard volume capability is supported for the region, the Network Features field of the Create a Volume page defaults to *Standard*. You can change this setting to *Basic*.
- * If the Standard volume capability is not available for the region, the Network Features field of the Create a Volume page defaults to *Basic*, and you cannot modify the setting.
+ * If the Standard volume capability isn't available for the region, the Network Features field of the Create a Volume page defaults to *Basic*, and you can't modify the setting.
-* The ability to locate storage compatible with the desired type of network features depends on the VNet specified. If you cannot create a volume because of insufficient resources, you can try a different VNet for which compatible storage is available.
+* The ability to locate storage compatible with the desired type of network features depends on the VNet specified. If you can't create a volume because of insufficient resources, you can try a different VNet for which compatible storage is available.
-* You can create Basic volumes from Basic volume snapshots and Standard volumes from Standard volume snapshots. Creating a Basic volume from a Standard volume snapshot is not supported. Creating a Standard volume from a Basic volume snapshot is not supported.
+* You can create Basic volumes from Basic volume snapshots and Standard volumes from Standard volume snapshots. Creating a Basic volume from a Standard volume snapshot isn't supported. Creating a Standard volume from a Basic volume snapshot isn't supported.
-* When restoring a backup to a new volume, the new volume can be configure with Basic or Standard network features.
+* When you restore a backup to a new volume, you can configure the new volume with Basic or Standard network features.
-* Conversion between Basic and Standard network features in either direction is not currently supported.
-
-## Set the Network Features option
+## <a name="set-the-network-features-option"></a>Set network features option during volume creation
-This section shows you how to set the Network Features option.
+This section shows you how to set the network features option when you create a new volume.
1. During the process of creating a new [NFS](azure-netapp-files-create-volumes.md), [SMB](azure-netapp-files-create-volumes-smb.md), or [dual-protocol](create-volumes-dual-protocol.md) volume, you can set the **Network Features** option to **Basic** or **Standard** under the Basic tab of the Create a Volume screen.
This section shows you how to set the Network Features option.
[ ![Screenshot that shows the Volumes page displaying the network features setting.](../media/azure-netapp-files/network-features-volume-list.png)](../media/azure-netapp-files/network-features-volume-list.png#lightbox)
+## Edit network features option for existing volumes
+
+You can edit the network features option of existing volumes from *Basic* to *Standard* network features. The change you make applies to all volumes in the same *network sibling set* (or *siblings*). Siblings are determined by their network IP address relationship: they share the same NIC for mounting the volume to the client or connecting to the SMB share of the volume. When a volume is created, its siblings are determined by a placement algorithm that aims to reuse IP addresses where possible.
+
+You can also revert the option from *Standard* back to *Basic* network features, but this change requires careful planning because additional considerations apply. For example, you might need to change configurations for network security groups (NSGs), user-defined routes (UDRs), and IP limits if you revert. See [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md#constraints) for the constraints and supported network topologies that apply to Standard and Basic network features.
+
+This feature isn't currently supported in the SDK.
+
+> [!IMPORTANT]
+> The option to edit network features is currently in preview. To access the feature, you need to submit a waitlist request through the **[Azure NetApp Files standard networking features (edit volumes) Public Preview Request Form](https://aka.ms/anfeditnetworkfeaturespreview)**. The feature is expected to be enabled within a week after you submit the request. You can check the status of feature registration by using the following command:
+>
+> ```azurepowershell-interactive
+> Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFBasicToStdNetworkFeaturesUpgrade
+>
+> FeatureName                         ProviderName     RegistrationState
+> -----------                         ------------     -----------------
+> ANFBasicToStdNetworkFeaturesUpgrade Microsoft.NetApp Registered
+> ```
+
+> [!IMPORTANT]
+> Updating the network features option might cause a network disruption on the volumes for up to 5 minutes.
+
+1. Navigate to the volume for which you want to change the network features option.
+1. Select **Change network features**.
+1. The **Edit network features** window displays the volumes that are in the same network sibling set. Confirm whether you want to modify the network features option.
+
+ :::image type="content" source="../media/azure-netapp-files/edit-network-features.png" alt-text="Screenshot showing the Edit Network Features window." lightbox="../media/azure-netapp-files/edit-network-features.png":::
+ ## Next steps * [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md)
azure-netapp-files Large Volumes Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/large-volumes-requirements-considerations.md
To enroll in the preview for large volumes, use the [large volumes preview sign-
* Large volumes are not currently supported with cross-region replication. * You can't create a large volume with application volume groups. * Large volumes aren't currently supported with cross-zone replication.
-* The SDK for large volumes isn't currently available.
* Currently, large volumes are not suited for database (HANA, Oracle, SQL Server, etc) data and log volumes. For database workloads requiring more than a single volumeΓÇÖs throughput limit, consider deploying multiple regular volumes. * Throughput ceilings for the three performance tiers (Standard, Premium, and Ultra) of large volumes are based on the existing 100-TiB maximum capacity targets. You're able to grow to 500 TiB with the throughput ceiling per the following table:
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Azure NetApp Files is updated regularly. This article provides a summary about t
## May 2023
+* [Standard network features - Edit volumes](configure-network-features.md#edit-network-features-option-for-existing-volumes) (Preview)
+
+ Azure NetApp Files volumes have been supported with Standard network features since [October 2021](#october-2021), but only for newly created volumes. This new *edit volumes* capability lets you change *existing* volumes that were configured with Basic network features to use Standard network features. This capability provides an enhanced, more standard, Azure Virtual Network (VNet) experience through various security and connectivity features that are available on Azure VNets to Azure services. When you edit existing volumes to use Standard network features, you can start taking advantage of networking capabilities, such as (but not limited to):
+ * Increased number of client IPs in a virtual network (including immediately peered VNets) accessing Azure NetApp Files volumes - the [same as Azure VMs](azure-netapp-files-resource-limits.md#resource-limits)
+ * Enhanced network security with support for [network security groups](../virtual-network/network-security-groups-overview.md) on Azure NetApp Files delegated subnets
+ * Enhanced network control with support for [user-defined routes](../virtual-network/virtual-networks-udr-overview.md#user-defined) to and from Azure NetApp Files delegated subnets
+ * Connectivity over Active/Active VPN gateway setup
+ * [ExpressRoute FastPath](../expressroute/about-fastpath.md) connectivity to Azure NetApp Files
+
+ This feature is now in public preview, currently available in [16 Azure regions](azure-netapp-files-network-topologies.md#regions-edit-network-features). It will roll out to other regions. Stay tuned for further information as more regions become available.
+ * [Azure Application Consistent Snapshot tool (AzAcSnap) 8 (GA)](azacsnap-introduction.md)
- Version 8 of the AzAcSnap tool is now generally available. [Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (AzAcSnap) is a command-line tool that enables customers to simplify data protection for third-party databases in Linux environments. AzAcSnap 8 introduces the following new capabilities and improvements:
+ Version 8 of the AzAcSnap tool is now generally available. [Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (AzAcSnap) is a command-line tool that enables you to simplify data protection for third-party databases in Linux environments. AzAcSnap 8 introduces the following new capabilities and improvements:
* Restore change - ability to revert volume for Azure NetApp Files
- * New global settings file (.azacsnaprc) to control behavior of azacsnap
+ * New global settings file (`.azacsnaprc`) to control behavior of `azacsnap`
* Logging enhancements for failure cases and new "mainlog" for summarized monitoring
- * Backup (-c backup) and Details (-c details) fixes
+ * Backup (`-c backup`) and Details (`-c details`) fixes
Download the latest release of the installer [here](https://aka.ms/azacsnapinstaller).
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Recovery points on DPM or MABS disk | 64 for file servers, and 448 for app serve
Restore files across operating systems | You can restore files on any machine that has the same OS as the backed-up VM, or a compatible OS. See the [compatible OS table](backup-azure-restore-files-from-vm.md#step-3-os-requirements-to-successfully-run-the-script). Restore files from encrypted VMs | Not supported. Restore files from network-restricted storage accounts | Not supported.
-Restore files on VMs by using Windows Storage Spaces | Not supported on the same VM.<br/><br/> Instead, restore the files on a compatible VM.
+Restore files on VMs by using Windows Storage Spaces | Not supported.
Restore files on a Linux VM by using LVM or RAID arrays | Not supported on the same VM.<br/><br/> Restore on a compatible VM. Restore files with special network settings | Not supported on the same VM. <br/><br/> Restore on a compatible VM. Restore files from an ultra disk | Supported. <br/><br/>See [Azure VM storage support](#vm-storage-support).
Back up Gen2 VMs | Supported. <br><br/> Azure Backup supports backup and restore
Back up Azure VMs with locks | Supported for managed VMs. <br><br> Not supported for unmanaged VMs. [Restore spot VMs](../virtual-machines/spot-vms.md) | Not supported. <br><br/> Azure Backup restores spot VMs as regular Azure VMs. [Restore VMs in an Azure dedicated host](../virtual-machines/dedicated-hosts.md) | Supported.<br></br>When you're restoring an Azure VM through the [Create new](backup-azure-arm-restore-vms.md#create-a-vm) option, the VM can't be restored in the dedicated host, even when the restore is successful. To achieve this, we recommend that you [restore as disks](backup-azure-arm-restore-vms.md#restore-disks). While you're restoring as disks by using the template, create a VM in a dedicated host, and then attach the disks.<br></br>This is not applicable in a secondary region while you're performing [cross-region restore](backup-azure-arm-restore-vms.md#cross-region-restore).
-Configure standalone Azure VMs in Windows Storage Spaces | Supported.
+Configure standalone Azure VMs in Windows Storage Spaces | Not supported.
[Restore Virtual Machine Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) | Supported for the flexible orchestration model to back up and restore a single Azure VM. Restore with managed identities | Supported for managed Azure VMs. <br><br> Not supported for classic and unmanaged Azure VMs. <br><br> Cross-region restore isn't supported with managed identities. <br><br> Currently, this is available in all Azure public and national cloud regions. <br><br> [Learn more](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities). <a name="tvm-backup">Back up trusted launch VMs</a> | Backup is supported. <br><br> Backup of trusted launch VMs is supported through [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can enable backup through a [Recovery Services vault](./backup-azure-arm-vms-prepare.md), the [pane for managing a VM](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and the [pane for creating a VM](backup-during-vm-creation.md#create-a-vm-with-backup-configured). <br><br> **Feature details** <br><br> - Backup is supported in all regions where trusted launch VMs are available. <br><br> - Configuration of backups, alerts, and monitoring for trusted launch VMs is currently not supported through the backup center. <br><br> - Migration of an existing [Gen2 VM](../virtual-machines/generation-2.md) (protected with Azure Backup) to a trusted launch VM is currently not supported. [Learn how to create a trusted launch VM](../virtual-machines/trusted-launch-portal.md?tabs=portal#deploy-a-trusted-launch-vm). <br><br> - Item-level restore is not supported.
batch Batch Sig Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-sig-images.md
Title: Use the Azure Compute Gallery to create a custom image pool description: Custom image pools are an efficient way to configure compute nodes to run your Batch workloads. Previously updated : 03/04/2021 Last updated : 05/12/2023 ms.devlang: csharp, python
The following steps show how to prepare a VM, take a snapshot, and create an ima
### Prepare a VM
-If you are creating a new VM for the image, use a first party Azure Marketplace image supported by Batch as the base image for your managed image. Only first party images can be used as a base image. To get a full list of Azure Marketplace image references supported by Azure Batch, see the [List node agent SKUs](/java/api/com.microsoft.azure.batch.protocol.accounts.listnodeagentskus) operation.
+If you are creating a new VM for the image, use a first party Azure Marketplace image supported by Batch as the base image for your managed image. Only first party images can be used as a base image.
+
+To get a full list of current Azure Marketplace image references supported by Azure Batch, use one of the following APIs to return a list of Windows and Linux VM images, including the node agent SKU IDs for each image (a sample query follows the list):
+
+- PowerShell: [Azure Batch supported images](/powershell/module/az.batch/get-azbatchsupportedimage)
+- Azure CLI: [Azure Batch pool supported images](/cli/azure/batch/pool/supported-images)
+- Batch service APIs: [Batch service APIs](batch-apis-tools.md#batch-service-apis) and [Azure Batch service supported images](/rest/api/batchservice/account/listsupportedimages)
+- List node agent SKUs: [Node agent SKUs](/java/api/com.microsoft.azure.batch.protocol.accounts.listnodeagentskus)
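As a minimal sketch of the list operation these APIs expose, the following example uses the Batch Python SDK (`azure-batch`) to print each supported image reference with its node agent SKU ID. The account name, key, and endpoint values are placeholders, and constructor parameter names can vary slightly across SDK versions.

```python
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials

# Placeholder values: replace with your Batch account name, key, and endpoint.
credentials = SharedKeyCredentials("<batch-account-name>", "<batch-account-key>")
batch_client = BatchServiceClient(
    credentials, batch_url="https://<batch-account-name>.<region>.batch.azure.com")

# Each entry pairs a Marketplace image reference with the node agent SKU ID it requires.
for image in batch_client.account.list_supported_images():
    ref = image.image_reference
    print(f"{ref.publisher}/{ref.offer}/{ref.sku} -> {image.node_agent_sku_id}")
```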
> [!NOTE] > You can't use a third-party image that has additional license and purchase terms as your base image. For information about these Marketplace images, see the guidance for [Linux](../virtual-machines/linux/cli-ps-findimage.md#check-the-purchase-plan-information) or [Windows](../virtual-machines/windows/cli-ps-findimage.md#view-purchase-plan-properties) VMs.
A snapshot is a full, read-only copy of a VHD. To create a snapshot of a VM's OS
To create a managed image from a snapshot, use Azure command-line tools such as the [az image create](/cli/azure/image) command. Create an image by specifying an OS disk snapshot and optionally one or more data disk snapshots.
+To create an image from a VM in the portal, see [Capture an image of a VM](../virtual-machines/capture-image-portal.md).
+
+To create an image using a source other than a VM, see [Create an image](../virtual-machines/image-version.md).
### Create an Azure Compute Gallery Once you have successfully created your managed image, you need to create an Azure Compute Gallery to make your custom image available. To learn how to create an Azure Compute Gallery for your images, see [Create an Azure Compute Gallery](../virtual-machines/create-gallery.md).
To create a pool from your Shared Image using the Azure CLI, use the `az batch p
> [!NOTE] > You need to authenticate using Azure AD. If you use shared-key-auth, you will get an authentication error.
+> [!IMPORTANT]
+> The node agent SKU ID must align with the image's publisher/offer/SKU in order for the node to start.
+ ```azurecli az batch pool create \ --id mypool --vm-size Standard_A1_v2 \ --target-dedicated-nodes 2 \ --image "/subscriptions/{sub id}/resourceGroups/{resource group name}/providers/Microsoft.Compute/galleries/{gallery name}/images/{image definition name}/versions/{version id}" \
- --node-agent-sku-id "batch.node.ubuntu 16.04"
+    --node-agent-sku-id "<node agent SKU ID that matches your image>"
``` ## Create a pool from a Shared Image using C#
private static VirtualMachineConfiguration CreateVirtualMachineConfiguration(Ima
{ return new VirtualMachineConfiguration( imageReference: imageReference,
- nodeAgentSkuId: "batch.node.windows amd64");
+        nodeAgentSkuId: "<node agent SKU ID that matches your image>");
} private static ImageReference CreateImageReference()
ir = batchmodels.ImageReference(
# be installed on the node. vmc = batchmodels.VirtualMachineConfiguration( image_reference=ir,
- node_agent_sku_id="batch.node.ubuntu 18.04"
+    node_agent_sku_id="<node agent SKU ID that matches your image>"
) # Create the unbound pool
new_pool = batchmodels.PoolAddParameter(
client.pool.add(new_pool) ```
-## Create a pool from a Shared Image using the Azure portal
+## Create a pool from a Shared Image or Custom Image using the Azure portal
Use the following steps to create a pool from a Shared Image in the Azure portal.
Use the following steps to create a pool from a Shared Image in the Azure portal
1. In the **Image Type** section, select **Azure Compute Gallery**. 1. Complete the remaining sections with information about your managed image. 1. Select **OK**.
+1. Once the node is allocated, use **Connect** to generate a user and the RDP file for Windows, or use SSH for Linux, to sign in to the allocated node and verify it.
![Create a pool with from a Shared image with the portal.](media/batch-sig-images/create-custom-pool.png)
+
## Considerations for large pools
If you plan to create a pool with hundreds or thousands of VMs or more using a S
## Next steps - For an in-depth overview of Batch, see [Batch service workflow and resources](batch-service-workflow-features.md).-- Learn about the [Azure Compute Gallery](../virtual-machines/shared-image-galleries.md).
+- Learn about the [Azure Compute Gallery](../virtual-machines/shared-image-galleries.md).
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
To create a custom neural voice in Speech Studio, follow these steps for one of
1. Select **Next**. 1. Optionally, you can add up to 10 custom speaking styles: 1. Select **Add a custom style** and thoughtfully enter a custom style name of your choice. This name will be used by your application within the `style` element of [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup-voice.md#speaking-styles-and-roles). You can also use the custom style name as SSML via the [Audio Content Creation](how-to-audio-content-creation.md) tool in [Speech Studio](https://speech.microsoft.com/portal/audiocontentcreation).
- 1. Select style samples as training data. It's recommended that the style samples are all from the same voice talent profile.
+ 1. Select style samples as training data. The style samples should be all from the same voice talent profile.
1. Select **Next**. 1. Select a speaker file with the voice talent statement that corresponds to the speaker in your training data. 1. Select **Next**.
cognitive-services Chatgpt Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/chatgpt-quickstart.md
Title: 'Quickstart - Get started using ChatGPT (Preview) and GPT-4 (Preview) with Azure OpenAI Service'
+ Title: 'Quickstart - Get started using ChatGPT and GPT-4 with Azure OpenAI Service'
description: Walkthrough on how to get started with ChatGPT and GPT-4 on Azure OpenAI Service.
zone_pivot_groups: openai-quickstart-new
recommendations: false
-# Quickstart: Get started using ChatGPT (preview) and GPT-4 (preview) with Azure OpenAI Service
+# Quickstart: Get started using ChatGPT and GPT-4 with Azure OpenAI Service
Use this article to get started using Azure OpenAI.
cognitive-services Advanced Prompt Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/advanced-prompt-engineering.md
While the principles of prompt engineering can be generalized across many differ
- Chat Completion API. - Completion API.
-Each API requires input data to be formatted differently, which in turn impacts overall prompt design. The **Chat Completion API** supports the ChatGPT (preview) and GPT-4 (preview) models. These models are designed to take input formatted in a [specific chat-like transcript](../how-to/chatgpt.md) stored inside an array of dictionaries.
+Each API requires input data to be formatted differently, which in turn impacts overall prompt design. The **Chat Completion API** supports the ChatGPT and GPT-4 models. These models are designed to take input formatted in a [specific chat-like transcript](../how-to/chatgpt.md) stored inside an array of dictionaries.
-The **Completion API** supports the older GPT-3 models and has much more flexible input requirements in that it takes a string of text with no specific format rules. Technically the ChatGPT (preview) models can be used with either APIs, but we strongly recommend using the Chat Completion API for these models. To learn more, please consult our [in-depth guide on using these APIs](../how-to/chatgpt.md).
+The **Completion API** supports the older GPT-3 models and has much more flexible input requirements in that it takes a string of text with no specific format rules. Technically the ChatGPT models can be used with either APIs, but we strongly recommend using the Chat Completion API for these models. To learn more, please consult our [in-depth guide on using these APIs](../how-to/chatgpt.md).
The techniques in this guide will teach you strategies for increasing the accuracy and grounding of responses you generate with a Large Language Model (LLM). It is, however, important to remember that even when using prompt engineering effectively you still need to validate the responses the models generate. Just because a carefully crafted prompt worked well for a particular scenario doesn't necessarily mean it will generalize more broadly to certain use cases. Understanding the [limitations of LLMs](/legal/cognitive-services/openai/transparency-note?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext#limitations), is just as important as understanding how to leverage their strengths.
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
description: Learn about the different model capabilities that are available with Azure OpenAI. Previously updated : 05/11/2023 Last updated : 05/15/2023
Azure OpenAI provides access to many different models, grouped by family and cap
| Model family | Description | |--|--|
-| [GPT-4](#gpt-4-models) | A set of models that improve on GPT-3.5 and can understand as well as generate natural language and code. **These models are currently in preview.**|
-| [GPT-3](#gpt-3-models) | A series of models that can understand and generate natural language. This includes the new [ChatGPT model (preview)](#chatgpt-gpt-35-turbo-preview). |
+| [GPT-4](#gpt-4-models) | A set of models that improve on GPT-3.5 and can understand as well as generate natural language and code. |
+| [GPT-3](#gpt-3-models) | A series of models that can understand and generate natural language. This includes the new [ChatGPT model](#chatgpt-gpt-35-turbo). |
| [Codex](#codex-models) | A series of models that can understand and generate code, including translating natural language to code. | | [Embeddings](#embeddings-models) | A set of models that can understand and use embeddings. An embedding is a special format of data representation that can be easily utilized by machine learning models and algorithms. The embedding is an information dense representation of the semantic meaning of a piece of text. Currently, we offer three families of Embeddings models for different functionalities: similarity, text search, and code search. |
You can get a list of models that are available for both inference and fine-tuni
We recommend starting with the most capable model in a model family to confirm whether the model capabilities meet your requirements. Then you can stay with that model or move to a model with lower capability and cost, optimizing around that model's capabilities.
-## GPT-4 models (preview)
+## GPT-4 models
GPT-4 can solve difficult problems with greater accuracy than any of OpenAI's previous models. Like gpt-35-turbo, GPT-4 is optimized for chat but works well for traditional completions tasks.
- These models are currently in preview. For access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4).
+Due to high demand, access to this model series is currently available only by request. To request access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4).
+ - `gpt-4` - `gpt-4-32k`
Ada is usually the fastest model and can perform tasks like parsing text, addres
**Use for**: Parsing text, simple classification, address correction, keywords
-### ChatGPT (gpt-35-turbo) (preview)
+### ChatGPT (gpt-35-turbo)
The ChatGPT model (gpt-35-turbo) is a language model designed for conversational interfaces and the model behaves differently than previous GPT-3 models. Previous models were text-in and text-out, meaning they accepted a prompt string and returned a completion to append to the prompt. However, the ChatGPT model is conversation-in and message-out. The model expects a prompt string formatted in a specific chat-like transcript format, and returns a completion that represents a model-written message in the chat.
These models can be used with Completion API requests. `gpt-35-turbo` is the onl
| text-davinci-002 | East US, South Central US, West Europe | N/A | 4,097 | Jun 2021 | | text-davinci-003 | East US, West Europe | N/A | 4,097 | Jun 2021 | | text-davinci-fine-tune-002 | N/A | N/A | | |
-| gpt-35-turbo<sup>1</sup> (ChatGPT) (preview) | East US, France Central, South Central US, West Europe | N/A | 4,096 | Sep 2021 |
+| gpt-35-turbo<sup>1</sup> (ChatGPT) | East US, France Central, South Central US, West Europe | N/A | 4,096 | Sep 2021 |
<br><sup>1</sup> Currently, only version `0301` of this model is available. This version of the model will be deprecated on 8/1/2023 in favor of newer version of the gpt-35-model. See [ChatGPT model versioning](../how-to/chatgpt.md#model-versioning) for more details.
These models can only be used with the Chat Completion API.
| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | | | | | | |
-| `gpt-4` <sup>1,</sup><sup>2</sup> (preview) | East US, France Central, South Central US | N/A | 8,192 | September 2021 |
-| `gpt-4-32k` <sup>1,</sup><sup>2</sup> (preview) | East US, France Central, South Central US | N/A | 32,768 | September 2021 |
+| `gpt-4` <sup>1,</sup><sup>2</sup> | East US, France Central | N/A | 8,192 | September 2021 |
+| `gpt-4-32k` <sup>1,</sup><sup>2</sup> | East US, France Central | N/A | 32,768 | September 2021 |
-<sup>1</sup> The model is in preview and [only available by request](https://aka.ms/oai/get-gpt4).<br>
+<sup>1</sup> The model is [only available by request](https://aka.ms/oai/get-gpt4).<br>
<sup>2</sup> Currently, only version `0314` of this model is available. ### Codex Models
cognitive-services Chatgpt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/chatgpt.md
Title: How to work with the ChatGPT and GPT-4 models (preview)
+ Title: How to work with the ChatGPT and GPT-4 models
-description: Learn about the options for how to use the ChatGPT and GPT-4 models (preview)
+description: Learn about the options for how to use the ChatGPT and GPT-4 models
Previously updated : 03/21/2023 Last updated : 05/15/2023 keywords: ChatGPT zone_pivot_groups: openai-chat
-# Learn how to work with the ChatGPT and GPT-4 models (preview)
+# Learn how to work with the ChatGPT and GPT-4 models
The ChatGPT and GPT-4 models are language models that are optimized for conversational interfaces. The models behave differently than the older GPT-3 models. Previous models were text-in and text-out, meaning they accepted a prompt string and returned a completion to append to the prompt. However, the ChatGPT and GPT-4 models are conversation-in and message-out. The models expect input formatted in a specific chat-like transcript format, and return a completion that represents a model-written message in the chat. While this format was designed specifically for multi-turn conversations, you'll find it can also work well for non-chat scenarios too.
In Azure OpenAI there are two different options for interacting with these type
- Chat Completion API. - Completion API with Chat Markup Language (ChatML).
-The Chat Completion API is a new dedicated API for interacting with the ChatGPT and GPT-4 models. **Both sets of models are currently in preview**. This API is the preferred method for accessing these models. **It is also the only way to access the new GPT-4 models**.
+The Chat Completion API is a new dedicated API for interacting with the ChatGPT and GPT-4 models. This API is the preferred method for accessing these models. **It is also the only way to access the new GPT-4 models**.
ChatML uses the same [completion API](../reference.md#completions) that you use for other models like text-davinci-002, it requires a unique token based prompt format known as Chat Markup Language (ChatML). This provides lower level access than the dedicated Chat Completion API, but also requires additional input validation, only supports ChatGPT (gpt-35-turbo) models, and **the underlying format is more likely to change over time**.
cognitive-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/embeddings.md
Previously updated : 6/24/2022 Last updated : 5/9/2023 recommendations: false
To obtain an embedding vector for a piece of text, we make a request to the embe
# [console](#tab/console) ```console
-curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2022-12-01\
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2023-05-15\
-H 'Content-Type: application/json' \ -H 'api-key: YOUR_API_KEY' \ -d '{"input": "Sample Document goes here"}'
import openai
openai.api_type = "azure" openai.api_key = YOUR_API_KEY openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com"
-openai.api_version = "2022-12-01"
+openai.api_version = "2023-05-15"
response = openai.Embedding.create( input="Your text string goes here",
response = openai.Embedding.create(
embeddings = response['data'][0]['embedding'] print(embeddings) ```+
+# [C#](#tab/csharp)
+```csharp
+using Azure;
+using Azure.AI.OpenAI;
+
+Uri oaiEndpoint = new ("https://YOUR_RESOURCE_NAME.openai.azure.com");
+string oaiKey = "YOUR_API_KEY";
+
+AzureKeyCredential credentials = new (oaiKey);
+
+OpenAIClient openAIClient = new (oaiEndpoint, credentials);
+
+EmbeddingsOptions embeddingOptions = new ("Your text string goes here");
+
+var returnValue = openAIClient.GetEmbeddings("YOUR_DEPLOYMENT_NAME", embeddingOptions);
+
+foreach (float item in returnValue.Value.Data[0].Embedding)
+{
+ Console.WriteLine(item);
+}
+```
+
-## Best Practices
+## Best practices
### Verify inputs don't exceed the maximum length
cognitive-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/managed-identity.md
Title: How to Configure Azure OpenAI Service with Managed Identities
+ Title: How to configure Azure OpenAI Service with managed identities
description: Provides guidance on how to set managed identity with Azure Active Directory
recommendations: false
-# How to Configure Azure OpenAI Service with Managed Identities
+# How to configure Azure OpenAI Service with managed identities
More complex security scenarios require Azure role-based access control (Azure RBAC). This document covers how to authenticate to your OpenAI resource using Azure Active Directory (Azure AD).
Assigning yourself to the Cognitive Services User role will allow you to use you
Use the access token to authorize your API call by setting the `Authorization` header value. ```bash
- curl ${endpoint%/}/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2022-12-01 \
+ curl ${endpoint%/}/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15 \
-H "Content-Type: application/json" \ -H "Authorization: Bearer $accessToken" \ -d '{ "prompt": "Once upon a time" }'
cognitive-services Prepare Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/prepare-dataset.md
Classifiers are the easiest models to get started with. For classification probl
#### Case study: Is the model making untrue statements?
-Let's say you'd like to ensure that the text of the ads on your website mention the correct product and company. In other words, you want to ensure the model isn't making things up. You may want to fine-tune a classifier which filters out incorrect ads.
+Let's say you'd like to ensure that the text of the ads on your website mentions the correct product and company. In other words, you want to ensure the model isn't making things up. You may want to fine-tune a classifier which filters out incorrect ads.
The dataset might look something like the following:
For this use case we fine-tuned an ada model since it is faster and cheaper, and
Now we can query our model by making a Completion request. ```console
-curl https://YOUR_RESOURCE_NAME.openaiazure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2022-12-01\ \
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15 \
-H 'Content-Type: application/json' \ -H 'api-key: YOUR_API_KEY' \ -d '{
Once the model is fine-tuned, you can get back the log probabilities for the fir
Now we can query our model by making a Completion request. ```console
-curl https://YOUR_RESOURCE_NAME.openaiazure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2022-12-01\ \
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15 \
-H 'Content-Type: application/json' \ -H 'api-key: YOUR_API_KEY' \ -d '{
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/overview.md
Previously updated : 05/01/2023 Last updated : 05/15/2023 recommendations: false keywords:
keywords:
# What is Azure OpenAI Service?
-Azure OpenAI Service provides REST API access to OpenAI's powerful language models including the GPT-3, Codex and Embeddings model series. In addition, the new GPT-4 and ChatGPT (gpt-35-turbo) model series are now available in preview. These models can be easily adapted to your specific task including but not limited to content generation, summarization, semantic search, and natural language to code translation. Users can access the service through REST APIs, Python SDK, or our web-based interface in the Azure OpenAI Studio.
+Azure OpenAI Service provides REST API access to OpenAI's powerful language models including the GPT-3, Codex and Embeddings model series. In addition, the new GPT-4 and ChatGPT (gpt-35-turbo) model series have now reached general availability. These models can be easily adapted to your specific task including but not limited to content generation, summarization, semantic search, and natural language to code translation. Users can access the service through REST APIs, Python SDK, or our web-based interface in the Azure OpenAI Studio.
### Features overview | Feature | Azure OpenAI | | | |
-| Models available | **NEW GPT-4 series (preview)** <br> GPT-3 base series <br>**NEW ChatGPT (gpt-35-turbo) (preview)**<br> Codex series <br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.|
-| Fine-tuning | Ada <br> Babbage <br> Curie <br> Cushman* <br> Davinci* <br> \* Currently unavailable. \*\*East US and West Europe Fine-tuning is currently unavailable to new customers. Please use US South Central for US based training|
+| Models available | **NEW GPT-4 series** <br> GPT-3 base series <br>**NEW ChatGPT (gpt-35-turbo)**<br> Codex series <br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.|
+| Fine-tuning | Ada <br> Babbage <br> Curie <br> Cushman <br> Davinci <br>**Fine-tuning is currently unavailable to new customers**.|
| Price | [Available here](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) | | Virtual network support & private link support | Yes | | Managed Identity| Yes, via Azure Active Directory |
The number of examples typically range from 0 to 100 depending on how many can f
The service provides users access to several different models. Each model provides a different capability and price point.
-GPT-4 models are the latest available models. These models are currently in preview. For access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4).
+GPT-4 models are the latest available models. Due to high demand, access to this model series is currently available only by request. To request access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4).
The GPT-3 base models are known as Davinci, Curie, Babbage, and Ada in decreasing order of capability and increasing order of speed.
cognitive-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quotas-limits.md
Title: Azure OpenAI Service quotas and limits
description: Quick reference, detailed description, and best practices on the quotas and limits for the OpenAI service in Azure Cognitive Services. -+ Previously updated : 04/25/2023- Last updated : 05/15/2023+ # Azure OpenAI Service quotas and limits
The following sections provide you with a quick guide to the quotas and limits t
| Limit Name | Limit Value | |--|--| | OpenAI resources per region per Azure subscription | 3 |
-| Requests per minute per model* | Davinci-models (002 and later): 120 <br> ChatGPT model (preview): 300 <br> GPT-4 models (preview): 18 <br> All other models: 300 |
+| Requests per minute per model* | Davinci-models (002 and later): 120 <br> ChatGPT model: 300 <br> GPT-4 models: 18 <br> All other models: 300 |
| Tokens per minute per model* | Davinci-models (002 and later): 40,000 <br> ChatGPT model: 120,000 <br> GPT-4 8k model: 10,000 <br> GPT-4 32k model: 32,000 <br> All other models: 120,000 | | Max fine-tuned model deployments* | 2 | | Ability to deploy same model to multiple deployments | Not allowed |
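Because these limits are enforced per minute, callers commonly back off and retry when the service returns a rate-limit error. The following sketch is an illustrative pattern only, assuming the `openai` Python package (0.x) and placeholder resource, deployment, and key names; it isn't an official retry policy.

```python
import time
import openai

# Placeholder values: replace with your resource name, key, and deployment name.
openai.api_type = "azure"
openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com"
openai.api_version = "2023-05-15"
openai.api_key = "YOUR_API_KEY"

def complete_with_backoff(prompt, max_retries=5):
    """Retry with exponential backoff when the per-minute quota is exceeded."""
    for attempt in range(max_retries):
        try:
            return openai.Completion.create(
                engine="YOUR_DEPLOYMENT_NAME", prompt=prompt, max_tokens=50)
        except openai.error.RateLimitError:
            time.sleep(2 ** attempt)  # wait 1, 2, 4, 8, ... seconds
    raise RuntimeError("Request is still rate limited after retries")

print(complete_with_backoff("Once upon a time")["choices"][0]["text"])
```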
cognitive-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/reference.md
Previously updated : 04/06/2023 Last updated : 05/15/2023 recommendations: false
Azure OpenAI provides two methods for authentication. you can use either API Ke
The service APIs are versioned using the ```api-version``` query parameter. All versions follow the YYYY-MM-DD date structure. For example: ```http
-POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2022-12-01
+POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15
``` ## Completions
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
- `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json) - `2022-12-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2022-12-01/inference.json)
+- `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)
**Request body**
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
#### Example request ```console
-curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2022-12-01\
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15\
-H "Content-Type: application/json" \ -H "api-key: YOUR_API_KEY" \ -d "{
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
- `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json) - `2022-12-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2022-12-01/inference.json)
+- `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)
**Request body**
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
#### Example request ```console
-curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2022-12-01 \
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2023-05-15 \
-H "Content-Type: application/json" \ -H "api-key: YOUR_API_KEY" \ -d "{\"input\": \"The food was delicious and the waiter...\"}"
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYM
## Chat completions
-Create completions for chat messages with the ChatGPT (preview) and GPT-4 (preview) models. Chat completions are currently only available with `api-version=2023-03-15-preview`.
+Create completions for chat messages with the ChatGPT and GPT-4 models.
**Create chat completions**
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions** - `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
+- `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)
#### Example request ```console
-curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-03-15-preview \
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15 \
-H "Content-Type: application/json" \ -H "api-key: YOUR_API_KEY" \ -d '{"messages":[{"role": "system", "content": "You are a helpful assistant."},{"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},{"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},{"role": "user", "content": "Do other Azure Cognitive Services support this too?"}]}'
cognitive-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/tutorials/embeddings.md
RESOURCE_ENDPOINT = os.getenv("AZURE_OPENAI_ENDPOINT")
openai.api_type = "azure" openai.api_key = API_KEY openai.api_base = RESOURCE_ENDPOINT
-openai.api_version = "2022-12-01"
+openai.api_version = "2023-05-15"
-url = openai.api_base + "/openai/deployments?api-version=2022-12-01"
+url = openai.api_base + "/openai/deployments?api-version=2023-05-15"
r = requests.get(url, headers={"api-key": API_KEY})
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/whats-new.md
Previously updated : 05/11/2023 Last updated : 05/15/2023 recommendations: false keywords:
keywords:
## May 2023
+### Azure OpenAI Chat Completion General Availability (GA)
+
+- General availability support for:
+ - Chat Completion API version `2023-05-15`.
+ - GPT-35-Turbo models.
+  - GPT-4 model series. Due to high demand, access to this model series is currently available only by request. To request access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4).
+
+If you are currently using the `2023-03-15-preview` API, we recommend migrating to the GA `2023-05-15` API. If you are currently using API version `2022-12-01`, that API remains GA but doesn't include the latest Chat Completion capabilities.
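As a minimal sketch of calling the GA Chat Completion API version from Python, assuming the `openai` package (0.x) and placeholder resource, deployment, and key names:

```python
import openai

# Placeholder values: replace with your resource name, key, and deployment name.
openai.api_type = "azure"
openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com"
openai.api_version = "2023-05-15"  # GA Chat Completion API version
openai.api_key = "YOUR_API_KEY"

response = openai.ChatCompletion.create(
    engine="YOUR_DEPLOYMENT_NAME",  # a gpt-35-turbo or gpt-4 deployment
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```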
+
+> [!IMPORTANT]
+> Using the current versions of the GPT-35-Turbo models with the completion endpoint remains in preview.
+
+### France Central
+ - Azure OpenAI is now available in the France Central region. Check the [models page](concepts/models.md), for the latest information on model availability in each region. ## April 2023
communication-services Advisor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/advisor-overview.md
The following SDKs are supported for this feature, along with all their supporte
* Phone Numbers * Management * Network Traversal
+* Call Automation
## Next steps
connectors Connectors Native Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-webhook.md
tags: connectors
# Create and run automated event-based workflows by using HTTP webhooks in Azure Logic Apps
-With [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the built-in HTTP Webhook connector, you can create automated tasks and workflows that subscribe to a service endpoint, wait for specific events, and run based on those events, rather than regularly checking or *polling* that endpoint.
+With [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the HTTP Webhook built-in connector, you can create an automated workflow that subscribes to a service endpoint, waits for specific events, and runs specific actions, rather than regularly checking or *polling* the service endpoint.
Here are some example webhook-based workflows:
-* Wait for an item to arrive from an [Azure Event Hub](https://github.com/logicappsio/EventHubAPI) before triggering a logic app run.
+* Wait for an event to arrive from [Azure Event Hubs](https://github.com/logicappsio/EventHubAPI) before triggering a workflow run.
* Wait for an approval before continuing a workflow.
-This article shows how to use the Webhook trigger and Webhook action so that your logic app can receive and respond to events at a service endpoint.
+This how-to guide shows how to use the HTTP Webhook trigger and Webhook action so that your logic app workflow can receive and respond to events at a service endpoint.
## How do webhooks work?
-A webhook trigger is event-based, which doesn't depend on checking or polling regularly for new items. When you save a logic app that starts with a webhook trigger, or when you change your logic app from disabled to enabled, the webhook trigger *subscribes* to the specified service endpoint by registering a *callback URL* with that endpoint. The trigger then waits for that service endpoint to call the URL, which starts running the logic app. Similar to the [Request trigger](connectors-native-reqres.md), the logic app fires immediately when the specified event happens. The webhook trigger *unsubscribes* from the service endpoint if you remove the trigger and save your logic app, or when you change your logic app from enabled to disabled.
+A webhook trigger is event-based, which doesn't depend on checking or polling regularly for new data or events. After you add a webhook trigger to an empty workflow and then save the workflow, or after you re-enable a disabled logic app resource, the webhook trigger *subscribes* to the specified service endpoint by registering a *callback URL* with that endpoint. The trigger then waits for that service endpoint to call the URL, which fires the trigger and starts the workflow. Similar to the [Request trigger](connectors-native-reqres.md), a webhook trigger fires immediately. The webhook trigger also remains subscribed to the service endpoint unless you manually take one of the following actions:
-A webhook action is also event-based and *subscribes* to the specified service endpoint by registering a *callback URL* with that endpoint. The webhook action pauses the logic app's workflow and waits until the service endpoint calls the URL before the logic app resumes running. The webhook action *unsubscribes* from the service endpoint in these cases:
+* Change the trigger's parameter values.
+* Delete the trigger and then save your workflow.
+* Disable your logic app resource.
-* When the webhook action successfully finishes
-* If the logic app run is canceled while waiting for a response
-* Before the logic app times out
+Similar to the webhook trigger, a webhook action is also event-based. After you add a webhook action to an existing workflow and then save the workflow, or after you re-enable a disabled logic app resource, the webhook action *subscribes* to the specified service endpoint by registering a *callback URL* with that endpoint. When the workflow runs, the webhook action pauses the workflow and waits until the service endpoint calls the URL before the workflow resumes running. A webhook action *unsubscribes* from the service endpoint when any of the following conditions occur:
+
+* The webhook action successfully finishes.
+* The workflow run is canceled while waiting for a response.
+* The workflow run is about to time out.
+* You change any webhook action parameter values that are used as inputs by a webhook trigger.
The Office 365 Outlook connector's [**Send approval email**](connectors-create-api-office365-outlook.md) action is an example of a webhook action that follows this pattern. You can extend this pattern to any service by using the webhook action.
For information about encryption, security, and authorization for inbound calls
This built-in trigger calls the subscribe endpoint on the target service and registers a callback URL with the target service. Your logic app then waits for the target service to send an `HTTP POST` request to the callback URL. When this event happens, the trigger fires and passes any data in the request along to the workflow.
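To make the subscribe pattern concrete, here's a minimal Python sketch of a target service that accepts subscriptions and later notifies registered callback URLs, which is what fires the waiting trigger or resumes the waiting action. The route names and payload shape are illustrative assumptions, not a Logic Apps API:

```python
from flask import Flask, request
import requests

app = Flask(__name__)
callbacks = set()

@app.route("/subscribe", methods=["POST"])
def subscribe():
    # The caller (for example, the HTTP Webhook trigger) sends its callback URL here.
    callbacks.add(request.json["callbackUrl"])
    return "", 201

@app.route("/unsubscribe", methods=["POST"])
def unsubscribe():
    callbacks.discard(request.json["callbackUrl"])
    return "", 200

def notify(event: dict):
    # When the event of interest happens, POST to every registered callback URL.
    for url in callbacks:
        requests.post(url, json=event)
```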
-1. Sign in to the [Azure portal](https://portal.azure.com). Open your blank logic app in Logic App Designer.
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
1. In the designer's search box, enter `http webhook` as your filter. From the **Triggers** list, select the **HTTP Webhook** trigger.
container-apps Dapr Authentication Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-authentication-token.md
+
+ Title: Enable token authentication for Dapr requests
+description: Learn more about enabling token authentication for Dapr requests to your container app in Azure Container Apps.
++++ Last updated : 04/14/2023++
+# Enable token authentication for Dapr requests
+
+When [Dapr][dapr] is enabled for your application in Azure Container Apps, it injects the environment variable `APP_API_TOKEN` into your app's container. Dapr includes the same token in all requests sent to your app, as either:
+
+- An HTTP header (`dapr-api-token`)
+- A gRPC metadata option (`dapr-api-token[0]`)
+
+The token is randomly generated and unique to each app and app revision, and it can change at any time. Your application should read the token from the `APP_API_TOKEN` environment variable when it starts up to ensure that it's using the current value.
+
+You can use this token to authenticate that calls coming into your application are actually coming from the Dapr sidecar, even when listening on public endpoints.
+
+1. The `daprd` sidecar container reads the token and includes it in each call made from Dapr to your application.
+1. Your application can then use that token to validate that the request is coming from Dapr.
+
+## Prerequisites
+
+[Dapr-enabled Azure Container App][dapr-aca]
+
+## Authenticate requests from Dapr
+
+# [With Dapr SDKs](#tab/sdk)
+
+If you're using a [Dapr SDK](https://docs.dapr.io/developing-applications/sdks/), the SDK automatically validates the token in all incoming requests from Dapr. Requests that don't include the token, or that include an incorrect token, are rejected automatically. You don't need to take any other action.
+
+# [Without an SDK](#tab/nosdk)
+
+If you're not using a Dapr SDK, you need to check the HTTP header or gRPC metadata in all incoming requests to validate that they come from the Dapr sidecar.
+
+### HTTP
+
+In your code, look for the HTTP header `dapr-api-token` in incoming requests:
+
+```sh
+dapr-api-token: <token>
+```
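For example, here's a minimal sketch of this check in a Python (Flask) app; the route is a hypothetical placeholder:

```python
import os
from flask import Flask, request, abort

app = Flask(__name__)

# Dapr injects APP_API_TOKEN into the container; read it once at startup.
EXPECTED_TOKEN = os.getenv("APP_API_TOKEN")

@app.route("/orders", methods=["POST"])  # hypothetical route
def orders():
    # Reject any request that doesn't carry the token the Dapr sidecar sends.
    if request.headers.get("dapr-api-token") != EXPECTED_TOKEN:
        abort(401)
    return {"status": "accepted"}
```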
+
+### gRPC
+
+When using the gRPC protocol, inspect the incoming calls for the API token on the gRPC metadata:
+
+```sh
+dapr-api-token[0]
+```
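As a sketch, the equivalent check inside a Python gRPC service method could read the metadata from the call context; the servicer method shown in the comment is an illustrative assumption:

```python
import os
import grpc

EXPECTED_TOKEN = os.getenv("APP_API_TOKEN")

def _is_from_dapr(context: grpc.ServicerContext) -> bool:
    # invocation_metadata() returns (key, value) pairs; metadata keys are lowercase.
    metadata = dict(context.invocation_metadata())
    return metadata.get("dapr-api-token") == EXPECTED_TOKEN

# Inside a servicer method (hypothetical service):
# def SayHello(self, request, context):
#     if not _is_from_dapr(context):
#         context.abort(grpc.StatusCode.UNAUTHENTICATED, "missing or invalid dapr-api-token")
#     ...
```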
++++
+## Next steps
+
+[Learn more about the Dapr integration with Azure Container Apps.][dapr-aca]
++
+<!-- Links Internal -->
+
+[dapr-aca]: ./dapr-overview.md
+
+<!-- Links External -->
+
+[dapr]: https://docs.dapr.io/
container-apps Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-overview.md
Previously updated : 04/14/2023 Last updated : 05/15/2023 # Dapr integration with Azure Container Apps
Now that you've learned about Dapr and some of the challenges it solves:
- Try [Deploying a Dapr application to Azure Container Apps using the Azure CLI][dapr-quickstart] or [Azure Resource Manager][dapr-arm-quickstart].
- Walk through a tutorial [using GitHub Actions to automate changes for a multi-revision, Dapr-enabled container app][dapr-github-actions].
-- Learn how to [perform event-driven work using Dapr bindings][dapr-bindings-tutorial]
+- Learn how to [perform event-driven work using Dapr bindings][dapr-bindings-tutorial].
+- [Enable token authentication for Dapr requests.][dapr-token]
- [Scale your Dapr applications using KEDA scalers][dapr-keda]
- [Answer common questions about the Dapr integration with Azure Container Apps][dapr-faq]
+
<!-- Links Internal -->
[dapr-quickstart]: ./microservices-dapr.md
[dapr-arm-quickstart]: ./microservices-dapr-azure-resource-manager.md
[dapr-github-actions]: ./dapr-github-actions.md
[dapr-bindings-tutorial]: ./microservices-dapr-bindings.md
+[dapr-token]: ./dapr-authentication-token.md
[dapr-keda]: ./dapr-keda-scaling.md
[dapr-faq]: ./faq.yml#dapr
container-apps Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/firewall-integration.md
Network Security Groups (NSGs) needed to configure virtual networks closely rese
You can lock down a network via NSGs with more restrictive rules than the default NSG rules to control all inbound and outbound traffic for the Container Apps environment at the subscription level.
-In the workload profiles architecture, user-defined routes (UDRs) and securing outbound traffic with a firewall are supported. Learn more in the [networking concepts document](./networking.md#user-defined-routes-udrpreview).
+In the workload profiles environment, user-defined routes (UDRs) and securing outbound traffic with a firewall are supported. Learn more in the [networking concepts document](./networking.md#user-defined-routes-udrpreview).
-In the Consumption only architecture, custom user-defined routes (UDRs) and ExpressRoutes aren't supported.
+In the Consumption only environment, custom user-defined routes (UDRs) and ExpressRoutes aren't supported.
## NSG allow rules

The following tables describe how to configure a collection of NSG allow rules.

>[!NOTE]
-> The subnet associated with a Container App Environment on the Consumption only architecture requires a CIDR prefix of `/23` or larger. On the workload profiles architecture (preview), a `/27` or larger is required.
+> The subnet associated with a Container App Environment on the Consumption only environment requires a CIDR prefix of `/23` or larger. On the workload profiles environment (preview), a `/27` or larger is required.
### Inbound
The following tables describe how to configure a collection of NSG allow rules.
### Outbound with service tags
-The following service tags are required when using NSGs on the Consumption only architecture:
+The following service tags are required when using NSGs on the Consumption only environment:
| Protocol | Port | ServiceTag | Description |
|--|--|--|--|
The following service tags are required when using NSGs on the Consumption only
| TCP | `9000` | `AzureCloud.<REGION>` | Required for internal AKS secure connection between underlying nodes and control plane. Replace `<REGION>` with the region where your container app is deployed. |
| TCP | `443` | `AzureMonitor` | Allows outbound calls to Azure Monitor. |
-The following service tags are required when using NSGs on the workload profiles architecture:
+The following service tags are required when using NSGs on the workload profiles environment:
>[!Note] > If you are using Azure Container Registry (ACR) with NSGs configured on your virtual network, create a private endpoint on your ACR to allow Container Apps to pull images through the virtual network.
The following service tags are required when using NSGs on the workload profiles
### Outbound with wild card IP rules
-The following IP rules are required when using NSGs on both the Consumption only architecture and the workload profiles architecture:
+The following IP rules are required when using NSGs on both the Consumption only environment and the workload profiles environment:
| Protocol | Port | IP | Description |
|--|--|--|--|
container-apps Microservices Dapr Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-bindings.md
azd down
## Next steps - Learn more about [deploying Dapr applications to Azure Container Apps](./microservices-dapr.md).
+- [Enable token authentication for Dapr requests.](./dapr-authentication-token.md)
- Learn more about [Azure Developer CLI](/azure/developer/azure-developer-cli/overview) and [making your applications compatible with `azd`](/azure/developer/azure-developer-cli/make-azd-compatible). - [Scale your Dapr applications using KEDA scalers](./dapr-keda-scaling.md)
container-apps Microservices Dapr Pubsub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-pubsub.md
Previously updated : 04/11/2023 Last updated : 05/15/2023 zone_pivot_group_filename: container-apps/dapr-zone-pivot-groups.json zone_pivot_groups: dapr-languages-set
azd down
## Next steps - Learn more about [deploying Dapr applications to Azure Container Apps](./microservices-dapr.md).
+- [Enable token authentication for Dapr requests.](./dapr-authentication-token.md)
- Learn more about [Azure Developer CLI](/azure/developer/azure-developer-cli/overview) and [making your applications compatible with `azd`](/azure/developer/azure-developer-cli/make-azd-compatible).-- [Scale your Dapr applications using KEDA scalers](./dapr-keda-scaling.md)
+- [Scale your Dapr applications using KEDA scalers](./dapr-keda-scaling.md)
container-apps Microservices Dapr Service Invoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-service-invoke.md
Previously updated : 02/06/2023 Last updated : 05/15/2023 zone_pivot_group_filename: container-apps/dapr-zone-pivot-groups.json zone_pivot_groups: dapr-languages-set
azd down
## Next steps - Learn more about [deploying Dapr applications to Azure Container Apps](./microservices-dapr.md).
+- [Enable token authentication for Dapr requests.](./dapr-authentication-token.md)
- Learn more about [Azure Developer CLI](/azure/developer/azure-developer-cli/overview) and [making your applications compatible with `azd`](/azure/developer/azure-developer-cli/make-azd-compatible).
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
Title: Networking architecture in Azure Container Apps
+ Title: Networking environment in Azure Container Apps
description: Learn how to configure virtual networks in Azure Container Apps.
Last updated 03/29/2023
-# Networking architecture in Azure Container Apps
+# Networking environment in Azure Container Apps
-Azure Container Apps run in the context of an [environment](environment.md), which is supported by a virtual network (VNet). By default, your Container App Environment is created with a VNet that is automatically generated for you. Generated VNets are inaccessible to you as they're created in Microsoft's tenant. This VNet is publicly accessible over the internet, can only reach internet accessible endpoints, and supports a limited subset of networking capabilities such as ingress IP restrictions and container app level ingress controls.
+Azure Container Apps run in the context of an [environment](environment.md), which is supported by a virtual network (VNet). By default, your Container App environment is created with a VNet that is automatically generated for you. Generated VNets are inaccessible to you as they're created in Microsoft's tenant. This VNet is publicly accessible over the internet, can only reach internet accessible endpoints, and supports a limited subset of networking capabilities such as ingress IP restrictions and container app level ingress controls.
Use the Custom VNet configuration to provide your own VNet if you need more Azure networking features such as:
Use the Custom VNet configuration to provide your own VNet if you need more Azur
- Network Security Groups
- Communicating with resources behind private endpoints in your virtual network
-The features available depend on your architecture selection.
+The features available depend on your environment selection.
-## Architecture Selection
+## Environment Selection
-There are two architectures in Container Apps: the Consumption only architecture supports only the [Consumption plan (GA)](./plans.md) and the workload profiles architecture that supports both the [Consumption + Dedicated plan structure (preview)](./plans.md). The two architectures share many of the same networking characteristics. However, there are some key differences.
+There are two environments in Container Apps: the Consumption only environment, which supports only the [Consumption plan (GA)](./plans.md), and the workload profiles environment, which supports both the [Consumption + Dedicated plan structure (preview)](./plans.md). The two environments share many of the same networking characteristics. However, there are some key differences.
-| Architecture Type | Description |
+| Environment Type | Description |
|--|-|
-| Workload profiles architecture (preview) | Supports user defined routes (UDR) and egress through NAT Gateway. The minimum required subnet size is /27. <br /> <br /> As workload profiles are currently in preview, the number of supported regions is limited. To learn more, visit the [workload profiles overview](./workload-profiles-overview.md#supported-regions).|
-| Consumption only architecture | Doesn't support user defined routes (UDR) and egress through NAT Gateway. The minimum required subnet size is /23. |
+| Workload profiles environment (preview) | Supports user defined routes (UDR) and egress through NAT Gateway. The minimum required subnet size is /27. <br /> <br /> As workload profiles are currently in preview, the number of supported regions is limited. To learn more, visit the [workload profiles overview](./workload-profiles-overview.md#supported-regions).|
+| Consumption only environment | Doesn't support user defined routes (UDR) and egress through NAT Gateway. The minimum required subnet size is /23. |
## Accessibility Levels
IP addresses are broken down into the following types:
| Type | Description |
|--|--|
| Public inbound IP address | Used for app traffic in an external deployment, and management traffic in both internal and external deployments. |
-| Outbound public IP | Used as the "from" IP for outbound connections that leave the virtual network. These connections aren't routed down a VPN. Outbound IPs aren't guaranteed and may change over time. Using a NAT gateway or other proxy for outbound traffic from a Container App environment is only supported on the workload profile architecture. |
+| Outbound public IP | Used as the "from" IP for outbound connections that leave the virtual network. These connections aren't routed down a VPN. Outbound IPs aren't guaranteed and may change over time. Using a NAT gateway or other proxy for outbound traffic from a Container App environment is only supported on the workload profile environment. |
| Internal load balancer IP address | This address only exists in an internal deployment. |
| App-assigned IP-based TLS/SSL addresses | These addresses are only possible with an external deployment, and when IP-based TLS/SSL binding is configured. |
IP addresses are broken down into the following types:
Virtual network integration depends on a dedicated subnet. How IP addresses are allocated in a subnet and what subnet sizes are supported depends on which plan you're using in Azure Container Apps. Selecting an appropriately sized subnet for the scale of your Container Apps is important as subnet sizes can't be modified post creation in Azure.

-- **Consumption only architecture:**
+- Consumption only environment:
  - /23 is the minimum subnet size required for virtual network integration.
  - Container Apps reserves a minimum of 60 IPs for infrastructure in your VNet, and the amount may increase up to 256 addresses as your container environment scales.
  - As your app scales, a new IP address is allocated for each new replica.

-- **Workload profiles architecture:**
+- Workload profiles environment:
  - /27 is the minimum subnet size required for virtual network integration.
  - The subnet you're integrating your container app with must be delegated to `Microsoft.App/environments`.
  - 11 IP addresses are automatically reserved for integration with the subnet. When your apps are running on workload profiles, the number of IP addresses required for infrastructure integration doesn't vary based on the scale of your container apps.
  - More IP addresses are allocated depending on your Container App's workload profile:
- - When you're using Consumption workload profiles for your container app, IP address assignment behaves the same as when running on the Consumption only architecture. As your app scales, a new IP address is allocated for each new replica.
+ - When you're using Consumption workload profiles for your container app, IP address assignment behaves the same as when running on the Consumption only environment. As your app scales, a new IP address is allocated for each new replica.
  - When you're using the Dedicated workload profile for your container app, each node has 1 IP address assigned.

As a Container Apps environment is created, you provide resource IDs for a single subnet. If you're using the CLI, the parameter to define the subnet resource ID is `infrastructure-subnet-resource-id`. The subnet hosts infrastructure components and user app containers.
-In addition, if you're using the Azure CLI with the Consumption only architecture and the [platformReservedCidr](vnet-custom-internal.md#networking-parameters) range is defined, both subnets must not overlap with the IP range defined in `platformReservedCidr`.
+In addition, if you're using the Azure CLI with the Consumption only environment and the [platformReservedCidr](vnet-custom-internal.md#networking-parameters) range is defined, both subnets must not overlap with the IP range defined in `platformReservedCidr`.
### Subnet Address Range Restrictions
Subnet address ranges can't overlap with the following ranges reserved by AKS:
- 172.31.0.0/16
- 192.0.2.0/24
-In addition, Container Apps on the workload profiles architecture reserve the following addresses:
+In addition, Container Apps on the workload profiles environment reserve the following addresses:
- 100.100.0.0/17
- 100.100.128.0/19
In addition, Container Apps on the workload profiles architecture reserve the fo
## Routes
-User Defined Routes (UDR) and controlled egress through NAT Gateway are supported in the workload profiles architecture, which is in preview. In the Consumption only architecture, these features aren't supported.
+User Defined Routes (UDR) and controlled egress through NAT Gateway are supported in the workload profiles environment, which is in preview. In the Consumption only environment, these features aren't supported.
### User defined routes (UDR) - preview
Azure creates a default route table for your virtual networks upon create. By im
#### Configuring UDR with Azure Firewall - preview:
-UDR is only supported on the workload profiles architecture. The following application and network rules must be added to the allowlist for your firewall depending on which resources you are using.
+UDR is only supported on the workload profiles environment. The following application and network rules must be added to the allowlist for your firewall depending on which resources you are using.
> [!Note] > For a guide on how to setup UDR with Container Apps to restrict outbound traffic with Azure Firewall, visit the [how to for Container Apps and Azure Firewall](./user-defined-routes.md).
Network rules allow or deny traffic based on the network and transport layer. Th
### NAT gateway integration - preview
-You can use NAT Gateway to simplify outbound connectivity for your outbound internet traffic in your virtual network on the workload profiles architecture. NAT Gateway is used to provide a static public IP address, so when you configure NAT Gateway on your Container Apps subnet, all outbound traffic from your container app is routed through the NAT Gateway's static public IP address.
+You can use NAT Gateway to simplify outbound connectivity for your outbound internet traffic in your virtual network on the workload profiles environment. NAT Gateway is used to provide a static public IP address, so when you configure NAT Gateway on your Container Apps subnet, all outbound traffic from your container app is routed through the NAT Gateway's static public IP address.
### Lock down your Container App environment

:::image type="content" source="media/networking/locked-down-network.png" alt-text="Diagram of how to fully lock down your network for Container Apps.":::
-With the workload profiles architecture (preview), you can fully secure your ingress/egress networking traffic. To do so, you should use the following features:
-- Create your internal container app environment on the workload profiles architecture. For steps, see [here](./workload-profiles-manage-cli.md).
+With the workload profiles environment (preview), you can fully secure your ingress/egress networking traffic. To do so, you should use the following features:
+- Create your internal container app environment on the workload profiles environment. For steps, see [here](./workload-profiles-manage-cli.md).
- Integrate your Container Apps with an Application Gateway. For steps, see [here](./waf-app-gateway.md).
- Configure UDR to route all traffic through Azure Firewall. For steps, see [here](./user-defined-routes.md).
With the workload profiles architecture (preview), you can fully secure your ing
- **VNet-scope ingress**: If you plan to use VNet-scope [ingress](ingress-overview.md) in an internal Container Apps environment, configure your domains in one of the following ways:
- 1. **Non-custom domains**: If you don't plan to use custom domains, create a private DNS zone that resolves the Container Apps environment's default domain to the static IP address of the Container Apps environment. You can use [Azure Private DNS](../dns/private-dns-overview.md) or your own DNS server. If you use Azure Private DNS, create a Private DNS Zone named as the Container App EnvironmentΓÇÖs default domain (`<UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io`), with an `A` record. The A record contains the name `*<DNS Suffix>` and the static IP address of the Container Apps environment.
+ 1. **Non-custom domains**: If you don't plan to use custom domains, create a private DNS zone that resolves the Container Apps environment's default domain to the static IP address of the Container Apps environment. You can use [Azure Private DNS](../dns/private-dns-overview.md) or your own DNS server. If you use Azure Private DNS, create a Private DNS Zone named as the Container App environment's default domain (`<UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io`), with an `A` record. The A record contains the name `*<DNS Suffix>` and the static IP address of the Container Apps environment.
1. **Custom domains**: If you plan to use custom domains, use a publicly resolvable domain to [add a custom domain and certificate](./custom-domains-certificates.md#add-a-custom-domain-and-certificate) to the container app. Additionally, create a private DNS zone that resolves the apex domain to the static IP address of the Container Apps environment. You can use [Azure Private DNS](../dns/private-dns-overview.md) or your own DNS server. If you use Azure Private DNS, create a Private DNS Zone named as the apex domain, with an `A` record that points to the static IP address of the Container Apps environment.
The static IP address of the Container Apps environment can be found in the Azur
When you deploy an internal or an external environment into your own network, a new resource group is created in the Azure subscription where your environment is hosted. This resource group contains infrastructure components managed by the Azure Container Apps platform, and it shouldn't be modified.
-#### Consumption only architecture
+#### Consumption only environment
The name of the resource group created in the Azure subscription where your environment is hosted is prefixed with `MC_` by default, and the resource group name *cannot* be customized during container app creation. The resource group contains Public IP addresses used specifically for outbound connectivity from your environment and a load balancer. In addition to the [Azure Container Apps billing](./billing.md), you're billed for:

- Two standard static [public IPs](https://azure.microsoft.com/pricing/details/ip-addresses/), one for ingress and one for egress. If you need more IPs for egress due to SNAT issues, [open a support ticket to request an override](https://azure.microsoft.com/support/create-ticket/).
- Two standard [Load Balancers](https://azure.microsoft.com/pricing/details/load-balancer/) if using an internal environment, or one standard [Load Balancer](https://azure.microsoft.com/pricing/details/load-balancer/) if using an external environment. Each load balancer has fewer than six rules. The cost of data processed (GB) includes both ingress and egress for management operations.
-#### Workload profiles architecture
+#### Workload profiles environment
The name of the resource group created in the Azure subscription where your environment is hosted is prefixed with `ME_` by default, and the resource group name *can* be customized during container app environment creation. For external environments, the resource group contains a public IP address used specifically for inbound connectivity to your external environment and a load balancer. For internal environments, the resource group only contains a Load Balancer. In addition to the [Azure Container Apps billing](./billing.md), you're billed for:
container-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quotas.md
The *Is Configurable* column in the following tables denotes a feature maximum m
| Feature | Scope | Default | Is Configurable | Remarks |
|--|--|--|--|--|
| Cores | Replica | 2 | No | Maximum number of cores available to a revision replica. |
-| Cores | Environment | 40 | Yes | Maximum number of cores an environment can accommodate. Calculated by the sum of cores requested by each active replica of all revisions in an environment. |
+| Cores | Environment | 100 | Yes | Maximum number of cores an environment can accommodate. Calculated by the sum of cores requested by each active replica of all revisions in an environment. |
## Consumption + Dedicated plan structure
The *Is Configurable* column in the following tables denotes a feature maximum m
For more information regarding quotas, see the [Quotas roadmap](https://github.com/microsoft/azure-container-apps/issues/503) in the Azure Container Apps GitHub repository.

> [!NOTE]
-> [Free trial](https://azure.microsoft.com/offers/ms-azr-0044p) and [Azure for Students](https://azure.microsoft.com/free/students/) subscriptions are limited to one environment per subscription globally.
+> [Free trial](https://azure.microsoft.com/offers/ms-azr-0044p) and [Azure for Students](https://azure.microsoft.com/free/students/) subscriptions are limited to one environment per subscription globally and ten (10) cores per environment.
## Considerations

* If an environment runs out of allowed cores:
  * Provisioning times out with a failure
  * The app may be restricted from scaling out
+* If you encounter unexpected capacity limits, open a support ticket.
container-apps User Defined Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/user-defined-routes.md
Last updated 03/29/2023
# Control outbound traffic with user defined routes (preview)

>[!Note]
-> This feature is in preview and is only supported for the workload profiles architecture. User defined routes only work with an internal Azure Container Apps environment.
+> This feature is in preview and is only supported for the workload profiles environment. User defined routes only work with an internal Azure Container Apps environment.
This article shows you how to use user defined routes (UDR) with [Azure Firewall](../firewall/overview.md) to lock down outbound traffic from your Container Apps to back-end Azure resources or other network resources.
Azure creates a default route table for your virtual networks on create. By impl
You can also use a NAT gateway or any other third party appliances instead of Azure Firewall.
-For more information on networking concepts in Container Apps, see [Networking Architecture in Azure Container Apps](./networking.md).
+For more information on networking concepts in Container Apps, see [Networking Environment in Azure Container Apps](./networking.md).
## Prerequisites
-* **Internal environment**: An internal container app environment on the workload profiles architecture that's integrated with a custom virtual network. When you create an internal container app environment, your container app environment has no public IP addresses, and all traffic is routed through the virtual network. For more information, see the [guide for how to create a container app environment on the workload profiles architecture](./workload-profiles-manage-cli.md).
+* **Internal environment**: An internal container app environment on the workload profiles environment that's integrated with a custom virtual network. When you create an internal container app environment, your container app environment has no public IP addresses, and all traffic is routed through the virtual network. For more information, see the [guide for how to create a container app environment on the workload profiles environment](./workload-profiles-manage-cli.md).
* **`curl` support**: Your container app must have a container that supports `curl` commands. In this how-to, you use `curl` to verify the container app is deployed correctly. If you don't have a container app with `curl` deployed, you can deploy the following container, which supports `curl`: `mcr.microsoft.com/k8se/quickstart:latest`.
container-apps Vnet Custom Internal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom-internal.md
The following example shows you how to create a Container Apps environment in an
[!INCLUDE [container-apps-create-portal-steps.md](../../includes/container-apps-create-portal-steps.md)]

> [!NOTE]
-> You can use an existing virtual network, but a dedicated subnet with a CIDR range of `/23` or larger is required for use with Container Apps when using the Consumption only Architecture. When using the Workload Profiles Architecture, a `/27` or larger is required. To learn more about subnet sizing, see the [networking architecture overview](./networking.md#subnet).
+> You can use an existing virtual network, but a dedicated subnet with a CIDR range of `/23` or larger is required for use with Container Apps when using the Consumption only environment. When using the Workload Profiles environment, a `/27` or larger is required. To learn more about subnet sizing, see the [networking environment overview](./networking.md#subnet).
7. Select the **Networking** tab to create a VNET.
8. Select **Yes** next to *Use your own virtual network*.
$VnetName = 'my-custom-vnet'
Now create an instance of the virtual network to associate with the Container Apps environment. The virtual network must have two subnets available for the container app instance.

> [!NOTE]
-> Network subnet address prefix requires a minimum CIDR range of `/23` for use with Container Apps when using the Consumption only Architecture. When using the Workload Profiles Architecture, a `/27` or larger is required. To learn more about subnet sizing, see the [networking architecture overview](./networking.md#subnet).
+> Network subnet address prefix requires a minimum CIDR range of `/23` for use with Container Apps when using the Consumption only environment. When using the Workload Profiles environment, a `/27` or larger is required. To learn more about subnet sizing, see the [networking environment overview](./networking.md#subnet).
# [Azure CLI](#tab/azure-cli)
You must either provide values for all three of these properties, or none of the
| Parameter | Description |
|||
-| `platform-reserved-cidr` | The address range used internally for environment infrastructure services. Must have a size between `/23` and `/12` when using the [Consumption only architecture](./networking.md)|
+| `platform-reserved-cidr` | The address range used internally for environment infrastructure services. Must have a size between `/23` and `/12` when using the [Consumption only environment](./networking.md)|
| `platform-reserved-dns-ip` | An IP address from the `platform-reserved-cidr` range that is used for the internal DNS server. The address can't be the first address in the range, or the network address. For example, if `platform-reserved-cidr` is set to `10.2.0.0/16`, then `platform-reserved-dns-ip` can't be `10.2.0.0` (the network address), or `10.2.0.1` (infrastructure reserves use of this IP). In this case, the first usable IP for the DNS would be `10.2.0.2`. |
| `docker-bridge-cidr` | The address range assigned to the Docker bridge network. This range must have a size between `/28` and `/12`. |
You must either provide values for all three of these properties, or none of the
| Parameter | Description |
|||
-| `VnetConfigurationPlatformReservedCidr` | The address range used internally for environment infrastructure services. Must have a size between `/23` and `/12` when using the [Consumption only architecture](./networking.md) |
+| `VnetConfigurationPlatformReservedCidr` | The address range used internally for environment infrastructure services. Must have a size between `/23` and `/12` when using the [Consumption only environment](./networking.md) |
| `VnetConfigurationPlatformReservedDnsIP` | An IP address from the `VnetConfigurationPlatformReservedCidr` range that is used for the internal DNS server. The address can't be the first address in the range, or the network address. For example, if `VnetConfigurationPlatformReservedCidr` is set to `10.2.0.0/16`, then `VnetConfigurationPlatformReservedDnsIP` can't be `10.2.0.0` (the network address), or `10.2.0.1` (infrastructure reserves use of this IP). In this case, the first usable IP for the DNS would be `10.2.0.2`. |
| `VnetConfigurationDockerBridgeCidr` | The address range assigned to the Docker bridge network. This range must have a size between `/28` and `/12`. |
container-apps Waf App Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/waf-app-gateway.md
Reverse proxies allow you to place services in front of your apps that supports
This article demonstrates how to protect your container apps using a [Web Application Firewall (WAF) on Azure Application Gateway](../web-application-firewall/ag/ag-overview.md) with an internal Container Apps environment.
-For more information on networking concepts in Container Apps, see [Networking Architecture in Azure Container Apps](./networking.md).
+For more information on networking concepts in Container Apps, see [Networking Environment in Azure Container Apps](./networking.md).
## Prerequisites
ddos-protection Ddos View Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-view-diagnostic-logs.md
Previously updated : 03/22/2023 Last updated : 05/11/2023
Attack mitigation flow logs allow you to review the dropped traffic, forwarded t
| where Category == "DDoSMitigationFlowLogs" ```
+The following table lists the field names and descriptions:
| Field name | Description |
| | |
| **TimeGenerated** | The date and time in UTC when the flow log was created. |
Attack mitigation flow logs allow you to review the dropped traffic, forwarded t
| **DestPort** | Port number ranging from 0 to 65535. |
| **Protocol** | Type of protocol. Possible values include `tcp`, `udp`, `other`. |
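As a sketch of how these records might be retrieved programmatically, the following Python example runs the same query with the `azure-monitor-query` library. The workspace ID placeholder and the `AzureDiagnostics` table name are assumptions that depend on how your diagnostic settings are configured:

```python
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Hypothetical placeholder -- replace with your Log Analytics workspace ID.
WORKSPACE_ID = "<LOG_ANALYTICS_WORKSPACE_ID>"

client = LogsQueryClient(DefaultAzureCredential())
query = 'AzureDiagnostics | where Category == "DDoSMitigationFlowLogs"'
response = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(days=1))

# Print every returned row from each result table.
for table in response.tables:
    for row in table.rows:
        print(row)
```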
-### DDoS Mitigation FlowLogs
+### DDoS Mitigation Reports
Attack mitigation reports use the Netflow protocol data, which is aggregated to provide detailed information about the attack on your resource. Anytime a public IP resource is under attack, report generation starts as soon as the mitigation starts. An incremental report is generated every 5 minutes, and a post-mitigation report covers the whole mitigation period. This ensures that if the DDoS attack continues for a longer duration, you can view the most current snapshot of the mitigation report every 5 minutes and a complete summary once the attack mitigation is over.
Attack mitigation reports use the Netflow protocol data, which is aggregated to
| where Category == "DDoSMitigationReports" ```
+The following table lists the field names and descriptions:
| Field name | Description |
| | |
| **TimeGenerated** | The date and time in UTC when the notification was created. |
| **ResourceId** | The resource ID of your public IP address. |
-| **Category** | For notifications, this will be `DDoSProtectionNotifications`.|
+| **Category** | For mitigation reports, this will be `DDoSMitigationReports`. |
| **ResourceGroup** | The resource group that contains your public IP address and virtual network. |
| **SubscriptionId** | Your DDoS protection plan subscription ID. |
| **Resource** | The name of your public IP address. |
| **ResourceType** | This will always be `PUBLICIPADDRESS`. |
-| **OperationName** | For notifications, this will be `DDoSProtectionNotifications`. |
-| **Message** | Details of the attack. |
+| **OperationName** | For mitigation reports, this will be `DDoSMitigationReports`.  |
+| **ReportType** | Possible values are `Incremental` and `PostMitigation`. |
+| **MitigationPeriodStart** | The date and time in UTC when the mitigation started. |
+| **MitigationPeriodEnd** | The date and time in UTC when the mitigation ended. |
+| **IPAddress** | Your public IP Address. |
+| **AttackVectors** | Breakdown of attack types. The keys include `TCP SYN flood`, `TCP flood`, `UDP flood`, `UDP reflection`, and `Other packet flood`. |
+| **TrafficOverview** | Breakdown of attack traffic. The keys include `Total packets`, `Total packets dropped`, `Total TCP packets`, `Total TCP packets dropped`, `Total UDP packets`, `Total UDP packets dropped`, `Total Other packets`, and `Total Other packets dropped`. |
+| **Protocols** | Breakdown of protocols included. The keys include `TCP`, `UDP`, and `Other`. |
+| **DropReasons** | Analysis of the causes of dropped packets. The keys include `Protocol violation invalid TCP`, `syn Protocol violation invalid TCP`, `Protocol violation invalid UDP`, `UDP reflection`, `TCP rate limit exceeded`, `UDP rate limit exceeded`, `Destination limit exceeded`, `Other packet flood Rate limit exceeded`, and `Packet was forwarded to service`. |
+| **TopSourceCountries** | Breakdown of the top 10 source countries for inbound traffic. |
+| **TopSourceCountriesForDroppedPackets** | Breakdown of the top 10 source countries for attack traffic that was throttled. |
+| **TopSourceASNs** | Breakdown of the top 10 source autonomous system numbers (ASNs) of incoming traffic. |
+| **SourceContinents** | Breakdown of the source continents for inbound traffic. |
| **Type** | Type of notification. Possible values include `MitigationStarted` and `MitigationStopped`. |
-| **PublicIpAddress** | Your public IP address. |
## Next steps
defender-for-cloud Support Matrix Cloud Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-cloud-environment.md
Title: Microsoft Defender for Cloud support across cloud types
+ Title: Microsoft Defender for Cloud support across Azure clouds
description: Review Defender for Cloud features and plans supported across different clouds Last updated 05/01/2023
-# Defender for Cloud support for commercial/government clouds
+# Defender for Cloud support for Azure commercial/other clouds
This article indicates which Defender for Cloud features are supported in Azure commercial and government clouds. ## Cloud support
-In the support table, **NA** indicates that the feature is not available.
+In the support table, **NA** indicates that the feature isn't available.
-**Feature/Plan** | **Azure** | **Azure Government** | **Azure China**<br/><br/>**21Vianet**
+**Feature/Plan** | **Azure** | **Azure Government** | **Azure China**<br/>**21Vianet**
| | |
-**FOUNDATIONAL CSPM FEATURES** | | |
-[Continuous export](./continuous-export.md) | GA | GA | GA
-[Workflow automation](./workflow-automation.md) | GA | GA | GA
-[Recommendation exemption rules](./exempt-resource.md) | Public preview | NA | NA
-[Alert suppression rules](./alerts-suppression-rules.md) | GA | GA | GA
-[Alert email notifications](./configure-email-notifications.md) | GA | GA | GA
-[Agent/extension deployment](monitoring-components.md) | GA | GA | GA
-[Asset inventory](./asset-inventory.md) | GA | GA | GA
-[Azure Workbooks support](./custom-dashboards-azure-workbooks.md) | GA | GA | GA
-**DEFENDER FOR CLOUD PLANS** | | |
-**[Agentless discovery for Kubernetes](concept-agentless-containers.md)** | Public preview | NA | NA
-**[Agentless vulnerability assessments for container images.](concept-agentless-containers.md)**<br/><br/> Including registry scanning (up to 20 unique images per billable resources) | Public preview | NA | NA
-**[Defender CSPM](concept-cloud-security-posture-management.md)** | GA | NA | NA
-**[Defender for APIs](defender-for-apis-introduction.md)** | Public preview | NA | NA
-**[Defender for App Service](defender-for-app-service-introduction.md)** | GA | NA | NA
-**[Defender for Azure Cosmos DB](concept-defender-for-cosmos.md)** | Public preview | NA | NA
-**[Defender for Azure SQL database servers](defender-for-sql-introduction.md)**<br/><br/> Partial GA in Vianet21<br/> - A subset of alerts/vulnerability assessments is available.<br/>- Behavioral threat protection isn't available.| GA | GA | GA
-**[Defender for Containers](defender-for-containers-introduction.md)**<br/><br/>Support for Arc-enabled Kubernetes clusters (and therefore AWS EKS too) is in public preview and not available on Azure Government.<br/>Run-time visibility of vulnerabilities in container images is also a preview feature. | GA | GA | GA
-[Defender extension for Azure Arc-enabled Kubernetes clusters/servers/data services](defender-for-kubernetes-azure-arc.md). Requires Defender for Containers/Defender for Kubernetes. | Public preview | NA | NA
-**[Defender for DNS](defender-for-dns-introduction.md)** | GA | GA | GA
-**[Defender for Key Vault](./defender-for-key-vault-introduction.md)** | GA | NA | NA
-**[Defender for Kubernetes](./defender-for-kubernetes-introduction.md)**<br/><br/> Defender for Kubernetes is deprecated and replaced by Defender for Containers. Support for Azure Arc-enabled clusters is in public preview and not available in government clouds. [Learn more](defender-for-kubernetes-introduction.md). | GA | GA | GA
-**[Defender for open-source relational databases](defender-for-databases-introduction.md)** | GA | NA | NA
-**[Defender for Resource Manager](./defender-for-resource-manager-introduction.md)** | GA | GA | GA
-**DEFENDER FOR SERVERS FEATURES** | | |
-[Just-in-time VM access](./just-in-time-access-usage.md) | GA | GA | GA
-[File integrity monitoring](./file-integrity-monitoring-overview.md) | GA | GA | GA
-[Adaptive application controls](./adaptive-application-controls.md) | GA | GA | GA
-[Adaptive network hardening](./adaptive-network-hardening.md) | GA | GA | NA
-[Docker host hardening](./harden-docker-hosts.md) | GA | GA | GA
-[Integrated Qualys scanner](./deploy-vulnerability-assessment-vm.md) | GA | NA | NA
-[Compliance dashboard/reports](./regulatory-compliance-dashboard.md)<br/><br/> Compliance standards might differ depending on the cloud type.| GA | GA | GA
-[Defender for Endpoint integration](./integration-defender-for-endpoint.md) | GA | GA | NA
-[Connect AWS account](./quickstart-onboard-aws.md) | GA | NA | NA
-[Connect GCP project](./quickstart-onboard-gcp.md) | GA | NA | NA
-**[Defender for Storage](./defender-for-storage-introduction.md)**<br/><br/> Some threat protection alerts for Defender for Storage are in public preview. | GA | GA (activity monitoring) | NA
-**[Defender for SQL servers on machines](./defender-for-sql-introduction.md)** | GA | GA | NA
-**[Kubernetes workload protection](kubernetes-workload-protections.md)** | GA | GA | GA
-**[Microsoft Sentinel bi-directional alert synchronization](../sentinel/connect-azure-security-center.md)** | Public preview | NA | NA
+**GENERAL FEATURES** | | |
+[Continuous data export](continuous-export.md) | GA | GA | GA
+[Response automation with Azure Logic Apps ](./workflow-automation.md) | GA | GA | GA
+[Security alerts](alerts-overview.md)<br/> Generated when one or more Defender for Cloud plans is enabled. | GA | GA | GA
+[Alert email notifications](configure-email-notifications.md) | GA | GA | GA
+[Alert suppression rules](alerts-suppression-rules.md) | GA | GA | GA
+[Alert bi-directional synchronization with Microsoft Sentinel](../sentinel/connect-azure-security-center.md) | Preview | NA | NA
+[Azure Workbooks integration for reporting](custom-dashboards-azure-workbooks.md) | GA | GA | GA
+[Automatic component/agent/extension provisioning](monitoring-components.md) | GA | GA | GA
+**FOUNDATIONAL CSPM FEATURES (FREE)** | | |
+[Asset inventory](asset-inventory.md) | GA | GA | GA
+[Security recommendations](security-policy-concept.md) based on the [Microsoft Cloud Security Benchmark](concept-regulatory-compliance.md) | GA | GA | GA
+[Recommendation exemptions](exempt-resource.md) | Preview | NA | NA
+[Secure score](secure-score-security-controls.md) | GA | GA | GA
+**DEFENDER FOR CLOUD PLANS** | | |
+[Defender CSPM](concept-cloud-security-posture-management.md)| GA | NA | NA
+[Defender for APIs](defender-for-apis-introduction.md). [Review support preview regions](defender-for-apis-prepare.md#cloud-and-region-support). | Preview | NA | NA
+[Defender for App Service](defender-for-app-service-introduction.md) | GA | NA | NA
+[Defender for Azure Cosmos DB](concept-defender-for-cosmos.md) | Preview | NA | NA
+[Defender for Azure SQL database servers](defender-for-sql-introduction.md) | GA | GA | GA<br/><br/>A subset of alerts/vulnerability assessments is available.<br/>Behavioral threat protection isn't available.
+[Defender for Containers](defender-for-containers-introduction.md)<br/>[Review detailed feature support](support-matrix-defender-for-containers.md) | GA | GA | GA
+[Defender for DevOps](defender-for-devops-introduction.md) |Preview | NA | NA
+[Defender for DNS](defender-for-dns-introduction.md) | GA | GA | GA
+[Defender for Key Vault](defender-for-key-vault-introduction.md) | GA | NA | NA
+[Defender for Open-Source Relational Databases](defender-for-databases-introduction.md) | GA | NA | NA
+[Defender for Resource Manager](defender-for-resource-manager-introduction.md) | GA | GA | GA
+[Defender for Servers](plan-defender-for-servers.md)<br/>[Review detailed feature support](support-matrix-defender-for-servers.md). | GA | GA | GA
+[Defender for Storage](defender-for-storage-introduction.md) | GA | GA (activity monitoring) | NA
+[Defender for SQL Servers on Machines](defender-for-sql-introduction.md) | GA | GA | NA
+
defender-for-cloud Support Matrix Defender For Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-cloud.md
Defender for Cloud depends on the [Azure Monitor Agent](../azure-monitor/agents/
Also ensure your Log Analytics agent is [properly configured to send data to Defender for Cloud](working-with-log-analytics-agent.md#manual-agent).
-To learn more about the specific Defender for Cloud features available on Windows and Linux, see:
+To learn more about the specific Defender for Cloud features available on Windows and Linux, review:
-- Defender for Servers support for [Windows](support-matrix-defender-for-servers.md#windows-machines) and [Linux](support-matrix-defender-for-servers.md#linux-machines) machines
-- Defender for Containers [support for Windows and Linux containers](support-matrix-defender-for-containers.md#defender-for-containers-feature-availability)
+- [Defender for Servers support](support-matrix-defender-for-servers.md)
+- [Defender for Containers support](support-matrix-defender-for-containers.md)
> [!NOTE] > Even though Microsoft Defender for Servers is designed to protect servers, most of its features are supported for Windows 10 machines. One feature that isn't currently supported is [Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md).
defender-for-cloud Support Matrix Defender For Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-containers.md
Title: Matrices of Defender for Containers features in Azure, multicloud, and on-premises environments
-description: Learn about the container and Kubernetes services that you can protect with Defender for Containers.
+ Title: Support for the Defender for Containers plan in Microsoft Defender for Cloud
+description: Review support requirements for the Defender for Containers plan in Microsoft Defender for Cloud.
Last updated 01/01/2023
-# Defender for Containers feature availability
+# Defender for Containers support
-These tables show the features that are available, by environment, for Microsoft Defender for Containers. For more information about Defender for Containers, see [Microsoft Defender for Containers](defender-for-containers-introduction.md).
+This article summarizes support information for the [Defender for Containers plan](defender-for-containers-introduction.md) in Microsoft Defender for Cloud.
-## Azure (AKS)
-
-| Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing Tier | Azure clouds availability |
-|--|--|--|--|--|--|--|--|
-| Compliance | Docker CIS | VM, Virtual Machine Scale Set | GA | - | Log Analytics agent | Defender for Servers Plan 2 | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Vulnerability Assessment <sup>[2](#footnote2)</sup> | Registry scan - OS packages | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Vulnerability Assessment <sup>[3](#footnote3)</sup> | Registry scan - language specific packages | ACR, Private ACR | Preview | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Vulnerability Assessment | View vulnerabilities for running images | AKS | GA | Preview | Defender profile | Defender for Containers | Commercial clouds |
-| Hardening | Control plane recommendations | ACR, AKS | GA | Preview | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Hardening | Kubernetes data plane recommendations | AKS | GA | - | Azure Policy | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Runtime protection| Threat detection (control plane)| AKS | GA | GA | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Runtime protection| Threat detection (workload) | AKS | GA | - | Defender profile | Defender for Containers | Commercial clouds |
-| Discovery and provisioning | Discovery of unprotected clusters | AKS | GA | GA | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Discovery and provisioning | Collection of control plane threat data | AKS | GA | GA | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Discovery and provisioning | Auto provisioning of Defender profile | AKS | GA | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Discovery and provisioning | Auto provisioning of Azure policy add-on | AKS | GA | - | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-
-<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-<sup><a name="footnote2"></a>2</sup> VA can detect vulnerabilities for these [OS packages](#registries-and-images).
-
-<sup><a name="footnote3"></a>3</sup> VA can detect vulnerabilities for these [language specific packages](#registries-and-images).
+> [!NOTE]
+> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-### Additional environment information
+## Azure (AKS)
-#### Registries and images
+| Feature | Supported Resources | Linux release state | Windows release state | Agentless/Agent-based | Pricing Tier | Azure clouds availability |
+|--|--|--|--|--|--|--|
+| Compliance-Docker CIS | VM, Virtual Machine Scale Set | GA | - | Log Analytics agent | Defender for Servers Plan 2 | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| [Vulnerability assessment](defender-for-containers-vulnerability-assessment-azure.md)-registry scan [OS packages](#registries-and-images-support-aks)| ACR, Private ACR | GA | Preview | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| [Vulnerability assessment](defender-for-containers-vulnerability-assessment-azure.md)-registry scan [language packages](#registries-and-images-support-aks) | ACR, Private ACR | Preview | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| [Vulnerability assessment-running images](defender-for-containers-vulnerability-assessment-azure.md#view-vulnerabilities-for-images-running-on-your-aks-clusters) | AKS | GA | Preview | Defender profile | Defender for Containers | Commercial clouds |
+| [Hardening (control plane)](defender-for-containers-architecture.md) | ACR, AKS | GA | Preview | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| [Hardening (Kubernetes data plane)](kubernetes-workload-protections.md) | AKS | GA | - | Azure Policy | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| [Runtime threat detection](defender-for-containers-introduction.md#run-time-protection-for-kubernetes-nodes-and-clusters) (control plane)| AKS | GA | GA | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Runtime threat detection (workload) | AKS | GA | - | Defender profile | Defender for Containers | Commercial clouds |
+| Discovery/provisioning-Unprotected clusters | AKS | GA | GA | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Discovery/provisioning-Collecting control plane threat data | AKS | GA | GA | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Discovery/provisioning-Defender profile auto provisioning | AKS | GA | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Discovery/provisioning-Azure policy add-on auto provisioning | AKS | GA | - | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+
+### Registries and images support-AKS
| Aspect | Details |
|--|--|
These tables show the features that are available, by environment, for Microsoft
| OS Packages | **Supported** <br> • Alpine Linux 3.12-3.16 <br> • Red Hat Enterprise Linux 6, 7, 8 <br> • CentOS 6, 7 <br> • Oracle Linux 6, 7, 8 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap 42, 15 <br> • SUSE Enterprise Linux 11, 12, 15 <br> • Debian GNU/Linux wheezy, jessie, stretch, buster, bullseye <br> • Ubuntu 10.10-22.04 <br> • FreeBSD 11.1-13.1 <br> • Fedora 32, 33, 34, 35|
| Language specific packages (Preview) <br><br> (**Only supported for Linux images**) | **Supported** <br> • Python <br> • Node.js <br> • .NET <br> • JAVA <br> • Go |
-#### Kubernetes distributions and configurations
+### Kubernetes distributions and configurations
| Aspect | Details |
|--|--|
These tables show the features that are available, by environment, for Microsoft
<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you'll need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.

> [!NOTE]
-> For additional requirements for Kuberenetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
+> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
-#### Network restrictions
-##### Private link
+### Private link restrictions
Defender for Containers relies on the Defender profile/extension for several features. The Defender profile/extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that only machines that are configured to send traffic through Azure Monitor Private Link can send data to that workspace. You can configure a private link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
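
If you'd rather script this network isolation setting than use the portal, a minimal Azure PowerShell sketch might look like the following. The public network access parameters shown for `Set-AzOperationalInsightsWorkspace` are an assumption based on the Az.OperationalInsights module, not something this article documents, so verify them against your module version; the resource group and workspace names are placeholders.

```azurepowershell
# Minimal sketch (assumption): restrict public ingestion on the Log Analytics workspace
# so that only traffic arriving over Azure Monitor Private Link can be ingested.
$rg        = "myResourceGroup"      # placeholder resource group
$workspace = "myLogAnalyticsWs"     # placeholder workspace name

Set-AzOperationalInsightsWorkspace -ResourceGroupName $rg -Name $workspace `
    -PublicNetworkAccessForIngestion "Disabled" `
    -PublicNetworkAccessForQuery "Enabled"
```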
Learn how to [use Azure Private Link to connect networks to Azure Monitor](../az
## AWS (EKS)
-| Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing tier |
+| Domain | Feature | Supported Resources | Linux release state | Windows release state | Agentless/Agent-based | Pricing tier |
|--|--| -- | -- | -- | -- | --|
| Compliance | Docker CIS | EC2 | Preview | - | Log Analytics agent | Defender for Servers Plan 2 |
| Vulnerability Assessment | Registry scan | ECR | Preview | - | Agentless | Defender for Containers |
Learn how to [use Azure Private Link to connect networks to Azure Monitor](../az
| Discovery and provisioning | Auto provisioning of Defender extension | - | - | - | - | - |
| Discovery and provisioning | Auto provisioning of Azure policy extension | - | - | - | - | - |
-<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-### Additional environment information
-#### Images
+### Images support-EKS
| Aspect | Details |
|--|--|
| Registries and images | **Unsupported** <br>• Images that have at least one layer over 2 GB<br> • Public repositories and manifest lists <br>• Images in the AWS management account aren't scanned so that we don't create resources in the management account. |
-#### Kubernetes distributions and configurations
+### Kubernetes distributions/configurations support-EKS
| Aspect | Details |
|--|--|
Learn how to [use Azure Private Link to connect networks to Azure Monitor](../az
> [!NOTE]
> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
-#### Network restrictions
-##### Private link
+### Private link restrictions
Defender for Containers relies on the Defender profile/extension for several features. The Defender profile/extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that only machines that are configured to send traffic through Azure Monitor Private Link can send data to that workspace. You can configure a private link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
Allowing data ingestion to occur only through Private Link Scope on your workspa
Learn how to [use Azure Private Link to connect networks to Azure Monitor](../azure-monitor/logs/private-link-security.md).
-##### Outbound proxy support
+### Outbound proxy support
Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported. ## GCP (GKE)
-| Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing tier |
+| Domain | Feature | Supported Resources | Linux release state | Windows release state | Agentless/Agent-based | Pricing tier |
|--|--| -- | -- | -- | -- | --|
| Compliance | Docker CIS | GCP VMs | Preview | - | Log Analytics agent | Defender for Servers Plan 2 |
| Vulnerability Assessment | Registry scan | - | - | - | - | - |
Outbound proxy without authentication and outbound proxy with basic authenticati
| Discovery and provisioning | Auto provisioning of Defender extension | GKE | Preview | - | Agentless | Defender for Containers |
| Discovery and provisioning | Auto provisioning of Azure policy extension | GKE | Preview | - | Agentless | Defender for Containers |
-<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-### Additional information
-
-#### Kubernetes distributions and configurations
+### Kubernetes distributions/configurations support-GKE
| Aspect | Details |
|--|--|
Outbound proxy without authentication and outbound proxy with basic authenticati
> [!NOTE]
> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
-#### Network restrictions
-##### Private link
+### Private link restrictions
Defender for Containers relies on the Defender profile/extension for several features. The Defender profile/extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that only machines that are configured to send traffic through Azure Monitor Private Link can send data to that workspace. You can configure a private link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
Allowing data ingestion to occur only through Private Link Scope on your workspa
Learn how to [use Azure Private Link to connect networks to Azure Monitor](../azure-monitor/logs/private-link-security.md).
-##### Outbound proxy support
+### Outbound proxy support
Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported. ## On-premises Arc-enabled machines
-| Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing tier |
+| Domain | Feature | Supported Resources | Linux release state | Windows release state | Agentless/Agent-based | Pricing tier |
|--|--| -- | -- | -- | -- | --|
| Compliance | Docker CIS | Arc enabled VMs | Preview | - | Log Analytics agent | Defender for Servers Plan 2 |
-| Vulnerability Assessment <sup>[2](#footnote2)</sup> | Registry scan - OS packages | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers |
-| Vulnerability Assessment <sup>[3](#footnote3)</sup> | Registry scan - language specific packages | ACR, Private ACR | Preview | - | Agentless | Defender for Containers |
+| Vulnerability Assessment | Registry scan - [OS packages](#registries-and-images-support--on-premises) | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers |
+| Vulnerability Assessment | Registry scan - [language specific packages](#registries-and-images-support--on-premises) | ACR, Private ACR | Preview | - | Agentless | Defender for Containers |
| Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - |
| Hardening | Control plane recommendations | - | - | - | - | - |
| Hardening | Kubernetes data plane recommendations | Arc enabled K8s clusters | Preview | - | Azure Policy extension | Defender for Containers |
| Runtime protection| Threat detection (control plane)| Arc enabled K8s clusters | Preview | Preview | Defender extension | Defender for Containers |
-| Runtime protection <sup>[4](#footnote4)</sup> | Threat detection (workload)| Arc enabled K8s clusters | Preview | - | Defender extension | Defender for Containers |
+| Runtime protection for [supported OS](#registries-and-images-support--on-premises) | Threat detection (workload)| Arc enabled K8s clusters | Preview | - | Defender extension | Defender for Containers |
| Discovery and provisioning | Discovery of unprotected clusters | Arc enabled K8s clusters | Preview | - | Agentless | Free |
| Discovery and provisioning | Collection of control plane threat data | Arc enabled K8s clusters | Preview | Preview | Defender extension | Defender for Containers |
| Discovery and provisioning | Auto provisioning of Defender extension | Arc enabled K8s clusters | Preview | Preview | Agentless | Defender for Containers |
| Discovery and provisioning | Auto provisioning of Azure policy extension | Arc enabled K8s clusters | Preview | - | Agentless | Defender for Containers |
-<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-<sup><a name="footnote2"></a>2</sup> VA can detect vulnerabilities for these [OS packages](#registries-and-images-1).
-
-<sup><a name="footnote3"></a>3</sup> VA can detect vulnerabilities for these [language specific packages](#registries-and-images-1).
-
-<sup><a name="footnote4"></a>4</sup> Runtime protection can detect threats for these [Supported host operating systems](#supported-host-operating-systems).
--
-### Additional information
-#### Registries and images
+### Registries and images support -on-premises
| Aspect | Details |
|--|--|
Outbound proxy without authentication and outbound proxy with basic authenticati
- Learn how [Defender for Cloud collects data using the Log Analytics Agent](monitoring-components.md). - Learn how [Defender for Cloud manages and safeguards data](data-security.md).-- Review the [platforms that support Defender for Cloud](security-center-os-coverage.md).
+- Review the [platforms that support Defender for Cloud](security-center-os-coverage.md).
defender-for-cloud Support Matrix Defender For Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-servers.md
Title: Matrices of Defender for Servers features in foundational CSPM, Azure Arc, multicloud, and endpoint protection solutions
-description: Learn about the environments where you can protect servers and virtual machines with Defender for Servers.
+ Title: Support for the Defender for Servers plan in Microsoft Defender for Cloud
+description: Review support requirements for the Defender for Servers plan in Microsoft Defender for Cloud.
Last updated 01/01/2023
-# Support matrices for Defender for Servers
+# Defender for Servers support
-This article provides information about the environments where you can protect servers and virtual machines with Defender for Servers and the endpoint protections that you can use to protect them.
+This article summarizes support information for the Defender for Servers plan in Microsoft Defender for Cloud.
-## Supported features for virtual machines and servers<a name="vm-server-features"></a>
+## Azure cloud support
-The following tables show the features that are supported for virtual machines and servers in Azure, Azure Arc, and other clouds.
-- [Windows machines](#windows-machines)-- [Linux machines](#linux-machines)-- [Multicloud machines](#multicloud-machines)
+This table summarizes Azure cloud support for Defender for Servers features.
-### Windows machines
+**Feature/Plan** | **Azure** | **Azure Government** | **Azure China**<br/>**21Vianet**
+--- | --- | --- | ---
+[Microsoft Defender for Endpoint integration](./integration-defender-for-endpoint.md) | GA | GA | NA
+[Compliance standards](./regulatory-compliance-dashboard.md)<br/>Compliance standards might differ depending on the cloud type.| GA | GA | GA
+[Microsoft Cloud Security Benchmark recommendations for OS hardening](apply-security-baseline.md) | GA | GA | GA
+[VM vulnerability scanning-agentless](concept-agentless-data-collection.md) | GA | NA | NA
+[VM vulnerability scanning - Microsoft Defender for Endpoint sensor](deploy-vulnerability-assessment-defender-vulnerability-management.md) | GA | NA | NA
+[VM vulnerability scanning - Qualys](deploy-vulnerability-assessment-vm.md) | GA | NA | NA
+[Just-in-time VM access](./just-in-time-access-usage.md) | GA | GA | GA
+[File integrity monitoring](./file-integrity-monitoring-overview.md) | GA | GA | GA
+[Adaptive application controls](./adaptive-application-controls.md) | GA | GA | GA
+[Adaptive network hardening](./adaptive-network-hardening.md) | GA | NA | NA
+[Docker host hardening](./harden-docker-hosts.md) | GA | GA | GA
-| **Feature** | **Azure Virtual Machines and [Virtual Machine Scale Sets with Flexible orchestration](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration)** | **Azure Arc-enabled machines** | **Defender for Servers required** |
+
+## Windows machine support
+
+The following table shows feature support for Windows machines in Azure, Azure Arc, and other clouds.
+
+| **Feature** | **Azure VMs**<br/> **[VM Scale Sets (Flexible orchestration)](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration)** | **Azure Arc-enabled machines** | **Defender for Servers required** |
| -- | :--: | :-: | :-: |
| [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) | ✔</br>(on supported versions) | ✔ | Yes |
| [Virtual machine behavioral analytics (and security alerts)](alerts-reference.md) | ✔ | ✔ | Yes |
The following tables show the features that are supported for virtual machines a
| Third-party vulnerability assessment (BYOL) | ✔ | - | No |
| [Network security assessment](protect-network-resources.md) | ✔ | - | No |
-### Linux machines
+## Linux machine support
+
+The following table shows feature support for Linux machines in Azure, Azure Arc, and other clouds.
-| **Feature** | **Azure Virtual Machines and [Virtual Machine Scale Sets with Flexible orchestration](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration)** | **Azure Arc-enabled machines** | **Defender for Servers required** |
+| **Feature** | **Azure VMs**<br/> **[VM Scale Sets (Flexible orchestration)](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration)** | **Azure Arc-enabled machines** | **Defender for Servers required** |
| -- | :--: | :-: | :-: |
| [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) | ✔ | ✔ | Yes |
| [Virtual machine behavioral analytics (and security alerts)](./azure-defender.md) | ✔</br>(on supported versions) | ✔ | Yes |
The following tables show the features that are supported for virtual machines a
| Third-party vulnerability assessment (BYOL) | ✔ | - | No |
| [Network security assessment](protect-network-resources.md) | ✔ | - | No |
-### Multicloud machines
+## Multicloud machines
+
+The following table shows feature support for AWS and GCP machines.
| **Feature** | **Availability in AWS** | **Availability in GCP** |
|--|:-:|:-:|
The following tables show the features that are supported for virtual machines a
| [Network security assessment](protect-network-resources.md) | - | - |
| [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | ✔ | - |
-> [!TIP]
->To experiment with features that are only available with enhanced security features enabled, you can enroll in a 30-day trial. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
-
-<a name="endpoint-supported"></a>
-## Supported endpoint protection solutions
+## Endpoint protection support
-The following table provides a matrix of supported endpoint protection solutions and whether you can use Microsoft Defender for Cloud to install each solution for you.
-
-For information about when recommendations are generated for each of these solutions, see [Endpoint Protection Assessment and Recommendations](endpoint-protection-recommendations-technical.md).
+The following table provides a matrix of supported endpoint protection solutions. The table indicates whether you can use Defender for Cloud to install each solution for you.
| Solution | Supported platforms | Defender for Cloud installation |
|--|--|--|
For information about when recommendations are generated for each of these solut
## Next steps -- Learn how [Defender for Cloud collects data using the Log Analytics agent](monitoring-components.md#log-analytics-agent).-- Learn how [Defender for Cloud manages and safeguards data](data-security.md).
+Start planning your [Defender for Servers deployment](plan-defender-for-servers.md).
+
dev-box How To Get Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-get-help.md
description: Learn how to choose the appropriate channel to get support for Micr
-+ Last updated 04/25/2023
event-grid Communication Services Chat Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/communication-services-chat-events.md
This section contains an example of what that data would look like for each even
}] ```
-### Microsoft.Communication.ChatMemberAddedToThreadWithUser event
-
-```json
-[{
- "id": "4abd2b49-d1a9-4fcc-9cd7-170fa5d96443",
- "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/{thread-id}/memberAdded/{rawId}/recipient/{rawId}",
- "data": {
- "time": "2020-09-18T00:47:13.1867087Z",
- "addedBy": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f1",
- "memberAdded": {
- "displayName": "John Smith",
- "memberId": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003fe"
- },
- "createTime": "2020-09-18T00:46:41.559Z",
- "version": 1600390033176,
- "recipientId": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f0",
- "transactionId": "pVIjw/pHEEKUOUJ2DAAl5A.1.1.1.1.1818361951.1.1",
- "threadId": "19:6d20c2f921cd402ead7d1b31b0d030cd@thread.v2"
- },
- "eventType": "Microsoft.Communication.ChatMemberAddedToThreadWithUser",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2020-09-18T00:47:13.2342692Z"
-}]
-```
-
-### Microsoft.Communication.ChatMemberRemovedFromThreadWithUser event
-
-```json
-[{
- "id": "b3701976-1ea2-4d66-be68-4ec4fc1b4b96",
- "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "thread/{thread-id}/memberRemoved/{rawId}/recipient/{rawId}",
- "data": {
- "time": "2020-09-18T00:47:51.1461742Z",
- "removedBy": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f1",
- "memberRemoved": {
- "displayName": "John",
- "memberId": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003fe"
- },
- "createTime": "2020-09-18T00:46:41.559Z",
- "version": 1600390071131,
- "recipientId": "8:acs:5354158b-17b7-489c-9380-95d8821ff76b_00000005-3e5f-1bc6-f40f-343a0d0003f0",
- "transactionId": "G9Y+UbjVmEuxAG3O4bEyvw.1.1.1.1.1819803816.1.1",
- "threadId": "19:6d20c2f921cd402ead7d1b31b0d030cd@thread.v2"
- },
- "eventType": "Microsoft.Communication.ChatMemberRemovedFromThreadWithUser",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2020-09-18T00:47:51.2244511Z"
-}]
-```
- ### Microsoft.Communication.ChatThreadCreated event ```json
expressroute Expressroute Monitoring Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-monitoring-metrics-alerts.md
Aggregation type: *Avg*
Split by: Gateway Instance
-This metric displays a count of the total number of active flows on the ExpressRoute Gateway. Through split at instance level, you can see active flow count per gateway instance. For more information, see [understand network flow limits](../virtual-network/virtual-machine-network-throughput.md#network-flow-limits).
+This metric displays the total number of active flows on the ExpressRoute Gateway. Only inbound traffic from on-premises is captured for active flows. Through a split at the instance level, you can see the active flow count per gateway instance. For more information, see [understand network flow limits](../virtual-network/virtual-machine-network-throughput.md#network-flow-limits).
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/active-flows.png" alt-text="Screenshot of number of active flows per second metrics dashboard.":::
expressroute Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/planned-maintenance.md
Title: Planned maintenance for ExpressRoute
+ Title: Planned maintenance guidance for ExpressRoute
description: Learn how to plan for ExpressRoute maintenance events.
Last updated 05/10/2023
-# Planned maintenance for ExpressRoute
+# Planned maintenance guidance for ExpressRoute
ExpressRoute circuits and Direct Ports are configured with a primary and a secondary connection to Microsoft Enterprise Edge (MSEE) devices at Microsoft peering locations. These connections are established on physically different devices to offer reliable connectivity from on-premises to your Azure resources if there are planned or unplanned events.
firewall-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/overview.md
Azure Firewall Manager has the following known issues:
|||| |Traffic splitting|Microsoft 365 and Azure Public PaaS traffic splitting isn't currently supported. As such, selecting a third-party provider for V2I or B2I also sends all Azure Public PaaS and Microsoft 365 traffic via the partner service.|Investigating traffic splitting at the hub. |Base policies must be in same region as local policy|Create all your local policies in the same region as the base policy. You can still apply a policy that was created in one region on a secured hub from another region.|Investigating|
-|Filtering inter-hub traffic in secure virtual hub deployments|Secured Virtual Hub to Secured Virtual Hub communication filtering isn't yet supported. However, hub to hub communication still works if private traffic filtering via Azure Firewall isn't enabled.|Investigating|
-|Branch to branch traffic with private traffic filtering enabled|Branch to branch traffic isn't supported when private traffic filtering is enabled. |Investigating.<br><br>Don't secure private traffic if branch to branch connectivity is critical.|
+|Filtering inter-hub traffic in secure virtual hub deployments|Secured Virtual Hub to Secured Virtual Hub communication filtering is supported with the Routing Intent feature.|Enable Routing Intent on your Virtual WAN Hub by setting Inter-hub to **Enabled** in Azure Firewall Manager. See [Routing Intent documentation](../virtual-wan/how-to-routing-policies.md) for more information about this feature.|
+|Branch to branch traffic with private traffic filtering enabled|Branch to branch traffic can be inspected by Azure Firewall in secured hub scenarios if Routing Intent is enabled. |Enable Routing Intent on your Virtual WAN Hub by setting Inter-hub to **Enabled** in Azure Firewall Manager. See [Routing Intent documentation](../virtual-wan/how-to-routing-policies.md) for more information about this feature.|
|All Secured Virtual Hubs sharing the same virtual WAN must be in the same resource group.|This behavior is aligned with Virtual WAN Hubs today.|Create multiple Virtual WANs to allow Secured Virtual Hubs to be created in different resource groups.| |Bulk IP address addition fails|The secure hub firewall goes into a failed state if you add multiple public IP addresses.|Add smaller public IP address increments. For example, add 10 at a time.| |DDoS Protection not supported with secured virtual hubs|DDoS Protection is not integrated with vWANs.|Investigating|
firewall-manager Secure Cloud Network Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/secure-cloud-network-powershell.md
Set-AzDiagnosticSetting -ResourceId $AzFW.Id -Enabled $True -Category AzureFirew
## Deploy Azure Firewall and configure custom routing
+> [!NOTE]
+> This is the configuration deployed when securing connectivity from the Azure Portal with Azure Firewall Manager when the "Inter-hub" setting is set to **disabled**. For instructions on how to configure routing using powershell when "Inter-hub" is set to **enabled**, see [Enabling routing intent](#routingintent).
+
Now you have an Azure Firewall in the hub, but you still need to modify routing so that the Virtual WAN sends traffic from the virtual networks and from the branches through the firewall. You do this in two steps:

1. Configure all virtual network connections (and branch connections, if there are any) to propagate to the `None` Route Table. The effect of this configuration is that other virtual networks and branches won't learn their prefixes, and so have no routes to reach them.
1. Insert static routes in the `Default` Route Table (where all virtual networks and branches are associated by default), so that all traffic is sent to the Azure Firewall.
-> [!NOTE]
-> This is the configuration deployed when securing connectivity from the Azure Portal with Azure Firewall Manager
+ Start with the first step, to configure your virtual network connections to propagate to the `None` Route Table:
$DefaultRT = Update-AzVHubRouteTable -Name "defaultRouteTable" -ResourceGroupNam
> [!NOTE] > String "***all_traffic***" as value for parameter "-Name" in the New-AzVHubRoute command above has a special meaning: if you use this exact string, the configuration applied in this article will be properly reflected in the Azure Portal (Firewall Manager --> Virtual hubs --> [Your Hub] --> Security Configuration). If a different name will be used, the desired configuration will be applied, but will not be reflected in the Azure Portal.
+## <a name="routingintent"></a> Enabling routing intent
+
+If you want to send inter-hub and inter-region traffic via Azure Firewall deployed in the Virtual WAN hub, you can instead enable the routing intent feature. For more information on routing intent, see [Routing Intent documentation](../virtual-wan/how-to-routing-policies.md).
+
+> [!NOTE]
+> This is the configuration deployed when securing connectivity from the Azure Portal with Azure Firewall Manager when the "Interhub" setting is set to **enabled**.
+
+```azurepowershell
+# Get the Azure Firewall resource ID
+$AzFWId = $(Get-AzVirtualHub -ResourceGroupName <rgname> -Name $HubName).AzureFirewall.Id
+
+# Create routing policy and routing intent
+$policy1 = New-AzRoutingPolicy -Name "PrivateTraffic" -Destination @("PrivateTraffic") -NextHop $AzFWId
+$policy2 = New-AzRoutingPolicy -Name "PublicTraffic" -Destination @("Internet") -NextHop $AzFWId
+New-AzRoutingIntent -ResourceGroupName "<rgname>" -VirtualHubName "<hubname>" -Name "hubRoutingIntent" -RoutingPolicy @($policy1, $policy2)
+```
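
As a quick sanity check after the configuration completes, you can list the routing intent on the hub. `Get-AzRoutingIntent` is assumed to be available in recent Az.Network versions; confirm the cmdlet and its parameters in your environment before relying on this sketch.

```azurepowershell
# Sketch (assumes Get-AzRoutingIntent exists in your Az.Network version):
# list the routing intent configured on the hub and its routing policies.
Get-AzRoutingIntent -ResourceGroupName "<rgname>" -VirtualHubName "<hubname>"
```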
+
+If you're using non-RFC1918 prefixes in your Virtual WAN, such as 40.0.0.0/24 in your virtual network or on-premises, add an additional route in the defaultRouteTable after the routing intent configuration completes. Make sure you name this route **private_traffic**. If the route is named otherwise, the desired configuration is applied but isn't reflected in the Azure portal.
+
+```azurepowershell
+# Get the defaultRouteTable
+$defaultRouteTable = Get-AzVHubRouteTable -ResourceGroupName routingIntent-Demo -HubName wus_hub1 -Name defaultRouteTable
+
+# Get the routes automatically created by routing intent. If private routing policy is enabled, this is the route named _policy_PrivateTraffic. If internet routing policy is enabled, this is the route named _policy_InternetTraffic.
+$privatepolicyroute = $defaultRouteTable.Routes[1]
+
+# Create new route named private_traffic for non-RFC1918 prefixes
+$private_traffic = New-AzVHubRoute -Name "private_traffic" -Destination @("40.0.0.0/24") -DestinationType "CIDR" -NextHop $AzFWId -NextHopType ResourceId
+
+# Create new routes for route table
+$newroutes = @($privatepolicyroute, $private_traffic)
+
+# Update route table
+Update-AzVHubRouteTable -ResourceGroupName <rgname> -ParentResourceName <hubname> -Name defaultRouteTable -Route $newroutes
+
+```
+
## Test connectivity Now you have a fully operational secure hub. To test connectivity, you need one virtual machine in each spoke virtual network connected to the hub:
firewall-manager Secure Cloud Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/secure-cloud-network.md
Now you must ensure that network traffic gets routed through your firewall.
3. Under **Settings**, select **Security configuration**. 4. Under **Internet traffic**, select **Azure Firewall**. 5. Under **Private traffic**, select **Send via Azure Firewall**.
-6. Select **Save**.
-7. Select **OK** on the **Warning** dialog.
+6. Under **Inter-hub**, select **Enabled** to enable the Virtual WAN routing intent feature. Routing intent is the mechanism through which you can configure Virtual WAN to route branch-to-branch (on-premises to on-premises) traffic via Azure Firewall deployed in the Virtual WAN hub. For more information about prerequisites and considerations for the routing intent feature, see the [Routing Intent documentation](../virtual-wan/how-to-routing-policies.md).
+7. Select **Save**.
+8. Select **OK** on the **Warning** dialog.
:::image type="content" source="./media/secure-cloud-network/9a-firewall-warning.png" alt-text="Screenshot of Secure Connections." lightbox="./media/secure-cloud-network/9a-firewall-warning.png":::
firewall-manager Secured Virtual Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/secured-virtual-hub.md
You can choose the required security providers to protect and govern your networ
Using Firewall Manager in the Azure portal, you can either create a new secured virtual hub, or convert an existing virtual hub that you previously created using Azure Virtual WAN.
-## Public preview features
-
-The following features are in public preview:
-
-| Feature | Description |
-| - | |
-| Routing Intent and Policies enabling Inter-hub security | This feature allows you to configure internet-bound, private or inter-hub traffic flow through Azure Firewall. For more information, see [Routing Intent and Policies](../virtual-wan/how-to-routing-policies.md). |
+You can configure Virtual WAN to enable inter-region security use cases in the hub by configuring routing intent. For more information on routing intent, see the [Routing Intent documentation](../virtual-wan/how-to-routing-policies.md).
## Next steps
firewall Firewall Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-preview.md
With Structured Firewall Logs, you'll be able to choose to use Resource Specific
For more information, see [Azure Structured Firewall Logs (preview)](firewall-structured-logs.md).
-### Policy Analytics (preview)
-
-Policy Analytics provides insights, centralized visibility, and control to Azure Firewall. IT teams today are challenged to keep Firewall rules up to date, manage existing rules, and remove unused rules. Any accidental rule updates can lead to a significant downtime for IT teams.
### Explicit proxy (preview)

With the Azure Firewall Explicit proxy set on the outbound path, you can configure a proxy setting on the sending application (such as a web browser) with Azure Firewall configured as the proxy. As a result, traffic from a sending application goes to the firewall's private IP address, and therefore egresses directly from the firewall without using a user defined route (UDR).
firewall Policy Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/policy-analytics.md
Title: Azure Firewall Policy Analytics (preview)
-description: Learn about Azure Firewall Policy Analytics (preview)
+ Title: Azure Firewall Policy Analytics
+description: Learn about Azure Firewall Policy Analytics
Previously updated : 01/26/2023 Last updated : 05/09/2023
-# Azure Firewall Policy Analytics (preview)
+# Azure Firewall Policy Analytics
-> [!IMPORTANT]
-> This feature is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- Policy Analytics provides insights, centralized visibility, and control to Azure Firewall. IT teams today are challenged to keep Firewall rules up to date, manage existing rules, and remove unused rules. Any accidental rule updates can lead to a significant downtime for IT teams. For large, geographically dispersed organizations, manually managing Firewall rules and policies is a complex and sometimes error-prone process. The new Policy Analytics feature is the answer to this common challenge faced by IT teams.
You can now refine and update Firewall rules and policies with confidence in jus
## Pricing
-Enabling Policy Analytics on a Firewall Policy associated with a single firewall is billed per policy as described on the [Azure Firewall Manager pricing](https://azure.microsoft.com/pricing/details/firewall-manager/) page. Enabling Policy Analytics on a Firewall Policy associated with more than one firewall is offered at no added cost.
+New pricing for policy analytics is now in effect. See the [Azure Firewall Manager pricing](https://azure.microsoft.com/pricing/details/firewall-manager/) page for the latest pricing details.
## Key Policy Analytics features
Enabling Policy Analytics on a Firewall Policy associated with a single firewall
- **Traffic flow analysis**: Maps traffic flow to rules by identifying top traffic flows and enabling an integrated experience. - **Single Rule analysis**: Analyzes a single rule to learn what traffic hits that rule to refine the access it provides and improve the overall security posture.
-## Prerequisites
--- An Azure Firewall Standard or Premium-- An Azure Firewall Standard or Premium policy attached to the Firewall-- The [Azure Firewall network rule name logging (preview)](firewall-network-rule-logging.md) must be enabled to view network rules analysis.-- The [Azure Structured Firewall Logs (preview)](firewall-structured-logs.md) must be enabled on Firewall Standard or Premium.- ## Enable Policy Analytics Policy analytics starts monitoring the flows in the DNAT, Network, and Application rule analysis only after you enable the feature. It can't analyze rules hit before the feature is enabled.
-### Firewall with no diagnostics settings configured
-
-1. Once all prerequisites are met, select **Policy analytics (preview)** in the table of contents.
+1. Select **Policy analytics** in the table of contents.
2. Next, select **Configure Workspaces**.
3. In the pane that opens, select the **Enable Policy Analytics** checkbox.
4. Next, choose a log analytics workspace. The log analytics workspace should be the same as the Firewall attached to the policy.
5. Select **Save** after you choose the log analytics workspace.
-6. Go to the Firewall attached to the policy and enter the **Diagnostic settings** page. You'll see the **FirewallPolicySetting** added there as part of the policy analytics feature.
-7. Select **Edit Setting**, and ensure the **Resource specific** toggle is checked, and the highlighted tables are checked. In the previous example, all logs are written to the log analytics workspace.
-
-### Firewall with Diagnostics settings already configured
-
-1. Ensure that the Firewall attached to the policy is logging to **Resource Specific** tables, and that the following three tables are also selected:
- - AZFWApplicationRuleAggregation
- - AZFWNetworkRuleAggregation
- - AZFWNatRuleAggregation
-2. Next, select **Policy Analytics (preview)** in the table of contents. Once inside the feature, select **Configure Workspaces**.
-3. Now, select **Enable Policy Analytics**.
-4. Next, choose a log analytics workspace. The log analytics workspace should be the same as the Firewall attached to the policy.
-5. Select **Save** after you choose the log analytics workspace.
-
- During the save process, you might see the following error message: **Failed to update Diagnostic Settings**
-
- You can disregard this error message if the policy was successfully updated.
> [!TIP]
> Policy Analytics has a dependency on both Log Analytics and Azure Firewall resource-specific logging. Verify that the Firewall is configured appropriately, or follow the previous instructions. Be aware that logs take 60 minutes to appear after you enable them for the first time, because logs are aggregated in the backend every hour. You can check that logs are configured appropriately by running a log analytics query on the resource-specific tables, such as **AZFWNetworkRuleAggregation**, **AZFWApplicationRuleAggregation**, and **AZFWNatRuleAggregation**.
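
One way to run that check from PowerShell, assuming the Az.OperationalInsights module is installed, is to query one of the aggregation tables directly. The workspace ID below is a placeholder (use the workspace's customer ID, not its resource ID).

```azurepowershell
# Sketch: confirm that aggregated firewall logs are flowing into the workspace.
$workspaceId = "00000000-0000-0000-0000-000000000000"   # placeholder workspace (customer) ID
$query       = "AZFWNetworkRuleAggregation | take 10"

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
$result.Results
```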
Policy analytics starts monitoring the flows in the DNAT, Network, and Applicati
## Next steps -- To learn more about Azure Firewall logs and metrics, see [Azure Firewall logs and metrics](logs-and-metrics.md).
+- To learn more about Azure Firewall logs and metrics, see [Azure Firewall logs and metrics](logs-and-metrics.md).
+- To learn more about Azure Firewall structured logs, see [Azure Firewall structured logs](firewall-structured-logs.md).
frontdoor Health Probes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/health-probes.md
Title: Backend health monitoring
+ Title: Health probes
description: This article helps you understand how Azure Front Door monitors the health of your origins.
Previously updated : 03/17/2022 Last updated : 05/15/2023 # Health probes > [!NOTE]
-> An *Origin* and a *origin group* in this article refers to the backend and backend pool of the Azure Front Door (classic) configuration.
+> An *origin* and an *origin group* in this article refer to the backend and backend pool of an Azure Front Door (classic) configuration.
>
-To determine the health and proximity of each backend for a given Azure Front Door environment, each Front Door environment periodically sends a synthetic HTTP/HTTPS request to each of your configured origins. Azure Front Door then uses these responses from the probe to determine the "best" origin to route your client requests.
+To determine the health and proximity of each origin for a given Azure Front Door environment, each Front Door profile periodically sends a synthetic HTTP/HTTPS request to all your configured origins. Front Door then uses responses from the health probe to determine the *best* origin to route your client requests to.
> [!WARNING]
-> Since each Azure Front Door edge POP emits health probes to your origins, the health probe volume for your origins can be quite high. The number of probes depends on your customer's traffic location and your health probe frequency. If the Azure Front Door edge POP doesn't receive real traffic from your end users, the frequency of the health probe from the edge POP is decreased from the configured frequency. If there is customer traffic to all the Azure Front Door edge POP, the health probe volume can be high depending on your health probes frequency.
+> Since each Azure Front Door edge location sends health probes to your origins, the health probe volume for your origins can be quite high. The number of probes depends on your customer's traffic location and your health probe frequency. If the Azure Front Door edge locations don't receive real traffic from your end users, the frequency of the health probes from those edge locations is decreased from the configured frequency. If there is traffic to all the Azure Front Door edge locations, the health probe volume can be high depending on your health probe frequency.
>
-> An example to roughly estimate the health probe volume per minute to your origin when using the default probe frequency of 30 seconds. The probe volume on each of your origin is equal to the number of edge POPs times two requests per minute. The probing requests will be less if there is no traffic sent to all of the edge POPs. For a list of edge locations, see [edge locations by region](edge-locations-by-region.md) for Azure Front Door. There could be more than one POP in each edge location.
-
-> [!NOTE]
-> Azure Front Door HTTP/HTTPS probes are sent with `User-Agent` header set with value: `Edge Health Probe`.
+> As an example, you can roughly estimate the health probe volume per minute to an origin when using the default probe frequency of 30 seconds: the probe volume on each of your origins is equal to the number of edge locations times two requests per minute. The probing requests are fewer if no traffic is sent to all of the edge locations. For a list of edge locations, see [edge locations by region](edge-locations-by-region.md).
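
To make the estimate concrete, here's a small sketch of the arithmetic. The edge-location count is a made-up example, not a published figure; substitute the current count from the edge locations article.

```powershell
# Rough estimate: probes per minute = edge locations * (60 / probe interval in seconds).
$edgeLocations        = 120   # hypothetical count; check the edge locations article for current numbers
$probeIntervalSeconds = 30    # default health probe frequency

$probesPerMinute = $edgeLocations * (60 / $probeIntervalSeconds)
$probesPerMinute              # 240 probes per minute for this example
```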
## Supported protocols
-Azure Front Door supports sending probes over either HTTP or HTTPS protocols. These probes are sent over the same TCP ports configured for routing client requests, and cannot be overridden.
+Azure Front Door supports sending probes over either HTTP or HTTPS protocols. These probes are sent over the same TCP ports configured for routing client requests, and can't be overridden. Front Door HTTP/HTTPS probes are sent with the `User-Agent` header set to the value `Edge Health Probe`.
## Supported HTTP methods for health probes

Azure Front Door supports the following HTTP methods for sending the health probes:
-1. **GET:** The GET method means retrieve whatever information (in the form of an entity) is identified by the Request-URI.
-2. **HEAD:** The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response. For new Front Door profiles, by default, the probe method is set as HEAD.
+1. **GET:** The GET method retrieves whatever information (in the form of an entity) is identified by the Request-URI.
+2. **HEAD:** The HEAD method is identical to GET except that the server **MUST NOT** return a message-body in the response. For new Front Door profiles, by default, the probe method is set as HEAD.
-> [!NOTE]
-> For lower load and cost on your backends, Front Door recommends using HEAD requests for health probes.
+> [!TIP]
+> To lower the load and cost to your origins, Front Door recommends using HEAD requests for health probes.
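
If you want to see how your own origin responds to a HEAD request similar to a health probe, a quick manual check might look like the following sketch. The URL and probe path are placeholders, and the user agent string only mimics the header this article describes.

```powershell
# Sketch: manually send a HEAD request to a health probe path on your origin.
$probeUri = "https://myorigin.example.com/health"   # placeholder origin and probe path

$response = Invoke-WebRequest -Uri $probeUri -Method Head -UserAgent "Edge Health Probe"
$response.StatusCode   # Front Door treats 200 as healthy; anything else counts as a failure
```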
## Health probe responses

| Responses | Description |
| - | - |
-| Determining Health | A 200 OK status code indicates the backend is healthy. Everything else is considered a failure. If for any reason (including network failure) a valid HTTP response isn't received for a probe, the probe is counted as a failure.|
-| Measuring Latency | Latency is the wall-clock time measured from the moment immediately before we send the probe request to the moment when we receive the last byte of the response. We use a new TCP connection for each request, so this measurement isn't biased towards backends with existing warm connections. |
+| Determining health | A **200 OK** status code indicates the origin is healthy. Any other status code is considered a failure. If for any reason a valid HTTP response isn't received for a probe, the probe is counted as a failure. |
+| Measuring latency | Latency is the wall-clock time measured from the moment immediately before the probe request gets sent to the moment when Front Door receives the last byte of the response. Front Door uses a new TCP connection for each request. The measurement isn't biased towards origins with existing warm connections. |
-## How Front Door determines backend health
+## How Front Door determines origin health
-Azure Front Door uses the same three-step process below across all algorithms to determine health.
+Azure Front Door uses a three-step process across all algorithms to determine health.
-1. Exclude disabled backends.
+1. Exclude disabled origins.
-1. Exclude backends that have health probes errors:
+1. Exclude origins that have health probes errors:
- * This selection is done by looking at the last _n_ health probe responses. If at least _x_ are healthy, the backend is considered healthy.
+ * This selection is done by looking at the last _n_ health probe responses. If at least _x_ are healthy, the origin is considered healthy.
- * _n_ is configured by changing the SampleSize property in load-balancing settings.
+ * _n_ is configured by changing the **SampleSize** property in load-balancing settings.
- * _x_ is configured by changing the SuccessfulSamplesRequired property in load-balancing settings.
+ * _x_ is configured by changing the **SuccessfulSamplesRequired** property in load-balancing settings.
-1. For the sets of healthy backends in the backend pool, Front Door additionally measures and maintains the latency (round-trip time) for each backend.
+1. For sets of healthy origins in an origin group, Front Door measures and maintains the latency for each origin.
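
As an illustration only, not Front Door's actual implementation, the sample-based check in step 2 can be sketched as a small function: given the last *n* probe results, the origin counts as healthy when at least *x* of them succeeded.

```powershell
# Illustrative sketch of the SampleSize / SuccessfulSamplesRequired check; not Front Door's real code.
function Test-OriginHealthy {
    param(
        [bool[]] $ProbeResults,                    # most recent probe outcomes, $true = 200 OK
        [int]    $SampleSize = 4,                  # n: how many recent samples to consider
        [int]    $SuccessfulSamplesRequired = 3    # x: how many of those samples must be healthy
    )

    # Look only at the last $SampleSize results.
    $recent  = $ProbeResults | Select-Object -Last $SampleSize
    $healthy = @($recent | Where-Object { $_ }).Count

    return $healthy -ge $SuccessfulSamplesRequired
}

# Example: 3 of the last 4 probes succeeded, so the origin is considered healthy.
Test-OriginHealthy -ProbeResults @($true, $false, $true, $true, $true)
```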
> [!NOTE]
-> If a single endpoint is a member of multiple backend pools, Azure Front Door optimizes the number of health probes sent to the backend to reduce the load on the backend. Health probe requests will be sent based on the lowest configured sample interval. The health of the endpoint in all pools will be determined by the responses from same health probes.
+> If a single endpoint is a member of multiple origin groups, Front Door will optimize the number of health probes sent to the origin to reduce the load on the origin. Health probe requests will be sent based on the lowest configured sample interval. The health of the endpoint in all origin groups will be determined by the responses from same health probes.
## Complete health probe failure
-If health probes fail for every backend in a backend pool, then Front Door considers all backends unhealthy and routes traffic in a round robin distribution across all of them.
+If health probes fail for every origin in an origin group, then Front Door considers all origins unhealthy and routes traffic in a round robin distribution across all of them.
-Once any backend returns to a healthy state, then Front Door will resume the normal load-balancing algorithm.
+Once an origin returns to a healthy state, Front Door resumes the normal load-balancing algorithm.
## Disabling health probes
-If you have a single backend in your backend pool, you can choose to disable the health probes reducing the load on your application backend. Even if you have multiple backends in the backend pool but only one of them is in enabled state, you can disable health probes.
+If you have a single origin in your origin group, you can choose to disable health probes to reduce the load on your application. If you have multiple origins in your origin group and more than one of them is in an enabled state, you can't disable health probes.
## Next steps -- Learn how to [create an Front Door profile](create-front-door-portal.md).-- Learn about Azure Front Door [routing architecture](front-door-routing-architecture.md).
+- Learn how to [create an Azure Front Door profile](create-front-door-portal.md).
+- Learn about [Front Door routing architecture](front-door-routing-architecture.md).
hdinsight How To Use Hbck2 Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/how-to-use-hbck2-tool.md
Title: How to use Apache HBase HBCK2 Tool
-description: Learn how to use HBase HBCK2 Tool
+ Title: Use the Apache HBase HBCK2 tool
+description: Learn how to use the HBase HBCK2 tool.
Last updated 05/05/2023
-# How to use Apache HBase HBCK2 Tool
+# Use the Apache HBase HBCK2 tool
-Learn how to use HBase HBCK2 Tool.
+This article shows you how to use the HBase HBCK2 tool. HBCK2 is the repair tool for Apache HBase clusters.
-## HBCK2 Overview
+## HBCK2 overview
-HBCK2 is currently a simple tool that does one thing at a time only. In hbase-2.x, the Master is the final arbiter of all state, so a general principle for most HBCK2 commands is that it asks the Master to affect all repair. A Master must be up and running, before you can run HBCK2 commands. While HBCK1 performed analysis reporting your cluster GOOD or BAD, HBCK2 is less presumptuous. In hbase-2.x, the operator figures what needs fixing and then uses tooling including HBCK2 to do fixup.
+HBCK2 is currently a simple tool that does only one thing at a time. In hbase-2.x, the Master is the final arbiter of all state, so a general principle for most HBCK2 commands is that it asks the Master to make all repairs.
+A Master must be up and running before you can run HBCK2 commands. HBCK1 performed analysis and reported your cluster as good or bad, but HBCK2 is less presumptuous. In hbase-2.x, the operator determines what needs to be fixed and then uses tooling, including HBCK2, to make repairs.
-## HBCK2 vs HBCK1
+## HBCK2 vs. HBCK1
-HBCK2 is the successor to HBCK, the repair tool that shipped with hbase-1.x (A.K.A HBCK1). Use HBCK2 in place of HBCK1 making repairs against hbase-2.x clusters. HBCK1 shouldn't be run against a hbase-2.x install. It may do damage. Its write-facility (-fix) has been removed. It can report on the state of a hbase-2.x cluster but its assessments are inaccurate since it doesn't understand the internal workings of a hbase-2.x. HBCK2 doesn't work the way HBCK1 used to, even for the case where commands are similarly named across the two versions.
+HBCK2 is the successor to HBCK, the repair tool that shipped with hbase-1.x (also known as HBCK1). You can use HBCK2 in place of HBCK1 to make repairs against hbase-2.x clusters. HBCK1 shouldn't be run against an hbase-2.x installation because it might do damage. Its write-facility (`-fix`) has been removed. It can report on the state of an hbase-2.x cluster, but its assessments are inaccurate because it doesn't understand the internal workings of an hbase-2.x.
-## Obtaining HBCK2
+HBCK2 doesn't work the way HBCK1 used to, even in cases where commands are similarly named across the two versions.
-You can find the release under the HBase distribution directory. See the [HBASE Downloads Page](https://dlcdn.apache.org/hbase/hbase-operator-tools-1.2.0/hbase-operator-tools-1.2.0-bin.tar.gz).
+## Obtain HBCK2
+You can find the release under the HBase distribution directory. For more information, see the [HBase downloads page](https://dlcdn.apache.org/hbase/hbase-operator-tools-1.2.0/hbase-operator-tools-1.2.0-bin.tar.gz).
### Master UI: The HBCK Report
-An HBCK Report page added to the Master in 2.1.6 at `/hbck.jsp`, which shows output from two inspections run by the master on an interval. One is the output by the `CatalogJanitor` whenever it runs. If overlaps or holes in, `hbase:meta`, the `CatalogJanitor` lists what it has found. Another background 'chore' process added to compare `hbase:meta` and filesystem content; if any anomaly, it makes note in its HBCK Report section.
+An HBCK Report page added to the Master in 2.1.6 at `/hbck.jsp` shows output from two inspections run by the Master on an interval. One is the output by the `CatalogJanitor` whenever it runs. If overlaps or holes are found in `hbase:meta`, the `CatalogJanitor` lists what it has found. Another background `chore` process compares the `hbase:meta` and file-system content. If an anomaly is found, it makes a note in its HBCK Report section.
-To run the CatalogJanitor, execute the command in hbase shell: `catalogjanitor_run`
+To run the `CatalogJanitor`, execute the command in the hbase shell: `catalogjanitor_run`.
-To run hbck chore, execute the command in hbase shell: `hbck_chore_run`
+To run the `hbck chore`, execute the command in the hbase shell: `hbck_chore_run`.
Neither command takes any inputs.
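As a minimal illustration, both inspections can be triggered from the command line by piping the shell commands into the hbase shell, following the same pattern used later in this article:
```
# Trigger the CatalogJanitor inspection, then the hbck chore.
echo "catalogjanitor_run" | hbase shell
echo "hbck_chore_run" | hbase shell
```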
-## Running HBCK2
+## Run HBCK2
+
+You can run the `hbck` command by launching it via the `$HBASE_HOME/bin/hbase` script. By default, when you run `bin/hbase hbck`, the built-in HBCK1 tooling is run. To run HBCK2, you need to point at a built HBCK2 jar by using the `-j` option, as in this example:
-We can run the hbck command by launching it via the $HBASE_HOME/bin/hbase script. By default, running bin/hbase hbck, the built-in HBCK1 tooling is run. To run HBCK2, you need to point at a built HBCK2 jar using the -j option as in:
`hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar`
-This command with no options or arguments passed prints the HBCK2 help.
+Run with no options or arguments, this command prints the HBCK2 help.
-## HBCK2 Commands
+## HBCK2 commands
> [!NOTE]
-> Test these commands on a test cluster to understand the functionality before running in production environment
+> Test these commands on a test cluster to understand the functionality before you run them in a production environment.
-**assigns [OPTIONS] <ENCODED_REGIONNAME/INPUTFILES_FOR_REGIONNAMES>... | -i <INPUT_FILE>...**
+`assigns [OPTIONS] <ENCODED_REGIONNAME/INPUTFILES_FOR_REGIONNAMES>... | -i <INPUT_FILE>...`
-**Options:**
+Options:
-`-o,--override` - override ownership by another procedure
+* `-o,--override`: Overrides ownership by another procedure.
+* `-i,--inputFiles`: Takes one or more encoded region names.
-`-i,--inputFiles` - takes one or more encoded region names
+This `raw` assign can be used even during Master initialization (if the `-skip` flag is specified). It skirts coprocessors and passes one or more encoded region names. `de00010733901a05f5a2a3a382e27dd4` is an example of what a user-space encoded region name looks like. For example:
-A 'raw' assign that can be used even during Master initialization (if the -skip flag is specified). Skirts Coprocessors. Pass one or more encoded region names. de00010733901a05f5a2a3a382e27dd4 is an example of what a user-space encoded region name looks like. For example:
``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar assigns de00010733901a05f5a2a3a382e27dd4 ```
-Returns the PID(s) of the created AssignProcedure(s) or -1 if none. If `-i or --inputFiles` is specified, pass one or more input file names. Each file contains encoded region names, one per line. For example:
+
+It returns the PIDs of the created `AssignProcedures` or -1 if none. If `-i` or `--inputFiles` is specified, pass one or more input file names. Each file contains encoded region names, one per line. For example:
+ ``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar assigns -i fileName1 fileName2 ```
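As a sketch, an input file is nothing more than encoded region names, one per line. The names below are hypothetical placeholders:
```
# fileName1: encoded region names, one per line (placeholder values)
de00010733901a05f5a2a3a382e27dd4
1595e783b53d99cd5eef43b6debb2682
```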
-**unassigns [OPTIONS] <ENCODED_REGIONNAME>...| -i <INPUT_FILE>...**
+`unassigns [OPTIONS] <ENCODED_REGIONNAME>...| -i <INPUT_FILE>...`
-**Options:**
+Options:
-`-o,--override` - override ownership by another procedure
+* `-o,--override`: Overrides ownership by another procedure.
+* `-i,--inputFiles`: Takes one or more input files of encoded names.
-`-i,--inputFiles` - takes ones or more input files of encoded names
+This `raw` unassign can be used even during Master initialization (if the `-skip` flag is specified). It skirts coprocessors and passes one or more encoded region names. `de00010733901a05f5a2a3a382e27dd4` is an example of what a user-space encoded region name looks like. For example:
-A 'raw' unassign that can be used even during Master initialization (if the -skip flag is specified). Skirts Coprocessors. Pass one or more encoded region names. de00010733901a05f5a2a3a382e27dd4 is an example of what a user override space encoded region name looks like. For example:
``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar unassign de00010733901a05f5a2a3a382e27dd4 ```
-Returns the PID(s) of the created UnassignProcedure(s) or -1 if none. If `-i or --inputFiles` is specified, pass one or more input file names. Each file contains encoded region names, one per line. For example:
+
+It returns the PIDs of the created `UnassignProcedures` or -1 if none. If `-i` or `--inputFiles` is specified, pass one or more input file names. Each file contains encoded region names, one per line. For example:
+ ``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar unassigns fileName1 -i fileName2 ``` `bypass [OPTIONS] <PID>...`
-**Options:**
+Options:
-`-o,--override` - override if procedure is running/stuck
+* `-o,--override`: Overrides if the procedure is running or stuck.
+* `-r,--recursive`: Bypasses the parent and its children. *This option is slow and expensive.*
+* `-w,--lockWait`: Waits the specified number of milliseconds before giving up. Default=1.
+* `-i,--inputFiles`: Takes one or more input files of PIDs.
-`-r,--recursive` - bypass parent and its children. SLOW! EXPENSIVE!
+Pass one or more procedure PIDs to skip to the procedure finish. The parent of the bypassed procedure skips to the finish. Entities are left in an inconsistent state and require manual repair. A Master restart might be needed to clear locks that are still held. Bypass fails if the procedure has children. Add `recursive` if all you have is a parent PID to finish the parent and children. *This option is slow and dangerous, so use it selectively. It doesn't always work*.
-`-w,--lockWait` - milliseconds to wait before giving up; default=1
-
-`-i,--inputFiles` - takes one or more input files of PIDs
-
-Pass one (or more) procedure 'PIDs to skip to procedure finish. Parent of bypassed procedure skips to the finish. Entities are left in an inconsistent state and require manual fixup May need Master restart to clear locks still held. Bypass fails if procedure has children. Add 'recursive' if all you have is a parent PID to finish parent and children. *This is SLOW, and dangerous so use selectively. Doesn't always work*.
``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar bypass <PID> ``` If `-i` or `--inputFiles` is specified, pass one or more input file names. Each file contains PIDs, one per line. For example: ``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar bypass -i fileName1 fileName2 ``` `reportMissingRegionsInMeta <NAMESPACE|NAMESPACE:TABLENAME>... | -i <INPUT_FILE>...`
-**Options:**
+Option:
-`i,--inputFiles` takes one or more input files of namespace or table names
+* `-i,--inputFiles`: Takes one or more input files of namespace or table names.
+
+Use this command when regions are missing from `hbase:meta` but their directories are still present in HDFS. This command is only a check method. It's designed for reporting purposes and doesn't perform any fixes. It provides a view of which regions (if any) would get readded to `hbase:meta`, grouped by respective table or namespace.
+
+To effectively readd regions in meta, run `addFsRegionsMissingInMeta`. This command needs `hbase:meta` to be online. For each namespace or table passed as a parameter, it performs a diff between regions available in `hbase:meta` and existing region dirs on HDFS. Region dirs with no matches are printed grouped under their related table names. Tables with no missing regions show a "no missing regions" message. If no namespace or table is specified, it verifies all existing regions.
+
+It accepts a combination of multiple namespaces and tables. Table names should include the namespace portion, even for tables in the default namespace. Otherwise, the value is assumed to be a namespace. This example triggers a missing regions report for the tables `table_1` and `table_2` under the default namespace:
-To be used when regions missing from `hbase:meta` but directories are present still in HDFS. This command is an only a check method, designed for reporting purposes and doesn't perform any fixes, providing a view of which regions (if any) would get readded to `hbase:meta`, grouped by respective table/namespace. To effectively readd regions in meta, run addFsRegionsMissingInMeta. This command needs `hbase:meta` to be online. For each namespace/table passed as parameter, it performs a diff between regions available in `hbase:meta` against existing regions dirs on HDFS. Region dirs with no matches are printed grouped under its related table name. Tables with no missing regions show a 'no missing regions' message. If no namespace or table is specified, it verifies all existing regions. It accepts a combination of multiple namespace and tables. Table names should include the namespace portion, even for tables in the default namespace, otherwise it assumes as a namespace value. An example triggering missing regions report for tables 'table_1' and 'table_2', under default namespace:
``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar reportMissingRegionsInMeta default:table_1 default:table_2 ```
-An example triggering missing regions report for table 'table_1' under default namespace, and for all tables from namespace 'ns1':
+
+This example triggers a missing regions report for the table `table_1` under the default namespace, and for all tables from namespace `ns1`:
+ ``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar reportMissingRegionsInMeta default:table_1 ns1 ```
-Returns list of missing regions for each table passed as parameter, or for each table on namespaces specified as parameter. If `-i or --inputFiles` is specified, pass one or more input file names. Each file contains `<NAMESPACE|NAMESPACE:TABLENAME>`, one per line. For example:
+
+It returns a list of missing regions for each table passed as a parameter, or for each table in the namespaces passed as parameters. If `-i` or `--inputFiles` is specified, pass one or more input file names. Each file contains `<NAMESPACE|NAMESPACE:TABLENAME>`, one per line. For example:
+ ``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar reportMissingRegionsInMeta -i fileName1 fileName2 ``` `addFsRegionsMissingInMeta <NAMESPACE|NAMESPACE:TABLENAME>... | -i <INPUT_FILE>...`
-**Options**
+Option:
-`-i,--inputFiles` takes one or more input files of namespace of table names to be used when regions missing from `hbase:meta` but directories are present still in HDFS. **Needs `hbase:meta` to be online**. For each table name passed as parameter, performs diff between regions available in `hbase:meta` and region dirs on HDFS. Then for dirs with no `hbase:meta` matches, it reads the 'regioninfo' metadata file and re-creates given region in `hbase:meta`. Regions are re-created in 'CLOSED' state in the `hbase:meta` table, but not in the Masters' cache, and they aren't assigned either. To get these regions online, run the HBCK2 'assigns' command printed when this command-run completes.
+* `-i,--inputFiles`: Takes one or more input files of namespace or table names to be used when regions are missing from `hbase:meta` but directories are still present in HDFS. *Needs `hbase:meta` to be online.*
-> [!NOTE]
-> If using hbase releases older than 2.3.0, a rolling restart of HMasters is needed prior to executing the set of 'assigns' output. An example adding missing regions for tables 'tbl_1' in the default namespace, 'tbl_2' in namespace 'n1' and for all tables from namespace 'n2':
+For each table name passed as a parameter, it performs a diff between regions available in `hbase:meta` and region dirs on HDFS. Then for dirs with no `hbase:meta` matches, it reads the `regioninfo` metadata file and re-creates the given region in `hbase:meta`. Regions are re-created in the CLOSED state in the `hbase:meta` table, but not in the Master's cache. They aren't assigned either. To get these regions online, run the HBCK2 `assigns` command printed when this command run finishes.
+
+If you're using hbase releases older than 2.3.0, a rolling restart of HMasters is needed prior to executing the set of `assigns` output. This example adds missing regions for tables `tbl_1` in the default namespace, `tbl_2` in namespace `n1`, and for all tables from namespace `n2`:
``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar addFsRegionsMissingInMeta default:tbl_1 n1:tbl_2 n2 ```
-Returns HBCK2 an 'assigns' command with all reinserted regions. If `-i or --inputFiles` is specified, pass one or more input file names. Each file contains `<NAMESPACE|NAMESPACE:TABLENAME>`, one per line. For example:
+
+It returns an HBCK2 `assigns` command with all reinserted regions. If `-i` or `--inputFiles` is specified, pass one or more input file names. Each file contains `<NAMESPACE|NAMESPACE:TABLENAME>`, one per line. For example:
+ ``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar addFsRegionsMissingInMeta -i fileName1 fileName2 ``` `extraRegionsInMeta <NAMESPACE|NAMESPACE:TABLENAME>... | -i <INPUT_FILE>...`
-**Options**
+Options:
-`-f, --fix`- fix meta by removing all extra regions found.
+* `-f, --fix`: Fixes meta by removing all extra regions found.
+* `-i,--inputFiles`: Takes one or more input files of namespace or table names.
-`-i,--inputFiles`- take one or more input files of namespace or table names
-
-Reports regions present on `hbase:meta`, but with no related directories on the file system. Needs `hbase:meta` to be online. For each table name passed as parameter, performs diff between regions available in `hbase:meta` and region dirs on the given file system. Extra regions would get deleted from Meta if passed the --fix option.
+It reports regions present on `hbase:meta` but with no related directories on the file system. *Needs `hbase:meta` to be online.* For each table name passed as a parameter, it performs a diff between regions available in `hbase:meta` and region dirs on the given file system. Extra regions are deleted from meta if the `--fix` option is passed.
> [!NOTE]
-> Before deciding on use the "--fix" option, it's worth check if reported extra regions are overlapping with existing valid regions. If so, then `extraRegionsInMeta --fix` is indeed the optimal solution. Otherwise, "assigns" command is the simpler solution, as it recreates regions dirs in the filesystem, if not existing.
+> Before you decide to use the `--fix` option, it's worth checking if reported extra regions are overlapping with existing valid regions. If so, then `extraRegionsInMeta --fix` is the optimal solution. Otherwise, the `assigns` command is the simpler solution. It re-creates the regions' dirs in the file system, if they don't exist.
+
+This example triggers extra regions reports for `table_1` under the default namespace, and for all tables from the namespace `ns1`:
-An example triggering extra regions report for table 'table_1' under default namespace, and for all tables from namespace 'ns1':
``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar extraRegionsInMeta default:table_1 ns1 ```
-An example triggering extra regions report for table 'table_1' under default namespace, and for all tables from namespace 'ns1' with the fix option:
+
+This example triggers extra regions reports for `table_1` under the default namespace, and for all tables from the namespace `ns1` with the fix option:
+ ``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar extraRegionsInMeta -f default:table_1 ns1 ```
-Returns list of extra regions for each table passed as parameter, or for each table on namespaces specified as parameter. If `-i or --inputFiles` is specified, pass one or more input file names. Each file contains `<NAMESPACE|NAMESPACE:TABLENAME>`, one per line. For example:
+
+It returns a list of extra regions for each table passed as a parameter, or for each table in the namespaces passed as parameters. If `-i` or `--inputFiles` is specified, pass one or more input file names. Each file contains `<NAMESPACE|NAMESPACE:TABLENAME>`, one per line. For example:
+ ``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar extraRegionsInMeta -i fileName1 fileName2 ```
-**fixMeta**
+`fixMeta`
> [!NOTE]
-> This doesn't work well with HBase 2.1.6. Not recommended to be used on a 2.1.6 HBase Cluster.
+> This option doesn't work well with HBase 2.1.6. We don't recommend it for use on a 2.1.6 HBase cluster.
+
+Do a server-side fix of bad or inconsistent state in `hbase:meta`. The Master UI has a matching new `HBCK Report` tab that dumps reports generated by the most recent run of `catalogjanitor` and a new `hbck chore`.
+
+*It's critical that `hbase:meta` first be made healthy before you make any other repairs*. It fixes `holes` and `overlaps`, creating (empty) region directories in HDFS to match regions added to `hbase:meta`.
+
+ *This command isn't the same as the old _hbck1_ command that's similarly named*. It works against the reports generated by the last `catalog_janitor` and `hbck chore` runs. If there's nothing to fix, the run is a no-op. Otherwise, if the `HBCK Report` UI reports problems, a run of `fixMeta` clears up the `hbase:meta` issues.
-Do a server-side fix of bad or inconsistent state in `hbase:meta`. Master UI has matching, new 'HBCK Report' tab that dumps reports generated by most recent run of catalogjanitor and a new 'HBCK Chore'. **It's critical that `hbase:meta` first be made healthy before making any other repairs**. Fixes 'holes', 'overlaps', etc., creating (empty) region directories in HDFS to match regions added to `hbase:meta`. **Command isn't the same as the old _hbck1_ command named similarly**. Works against the reports generated by the last catalog_janitor and hbck chore runs. If nothing to fix, run is a loop. Otherwise, if 'HBCK Report' UI reports problems, a run of fixMeta clears up`hbase:meta` issues.
``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar fixMeta ``` `generateMissingTableDescriptorFile <NAMESPACE:TABLENAME>`
-Trying to fix an orphan table by generating a missing table descriptor file. This command has no effect if the table folder is missing or if the `.tableinfo` is present (we don't override existing table descriptors). This command first checks if the TableDescriptor is cached in HBase Master in which case it recovers the `.tableinfo` accordingly. If TableDescriptor isn't cached in master, then it creates a default `.tableinfo` file with the following items:
-- the table name-- the column family list determined based on the file system-- the default properties for both TableDescriptor and `ColumnFamilyDescriptors`
-If the `.tableinfo` file was generated using default parameters then make sure you check the table / column family properties later (and change them if needed). This method doesn't change anything in HBase, only writes the new `.tableinfo` file to the file system. Orphan tables, for example, ServerCrashProcedures to stick, you might need to fix the error still after you generated the missing table info files.
+This command tries to fix an orphan table by generating a missing table descriptor file. This command has no effect if the table folder is missing or if `.tableinfo` is present. (We don't override existing table descriptors.) This command first checks if `TableDescriptor` is cached in HBase Master, in which case it recovers `.tableinfo` accordingly. If `TableDescriptor` isn't cached in Master, it creates a default `.tableinfo` file with the following items:
+
+- The table name.
+- The column family list determined based on the file system.
+- The default properties for both `TableDescriptor` and `ColumnFamilyDescriptors`.
+If the `.tableinfo` file was generated by using default parameters, make sure you check the table or column family properties later. (Change them if needed.) This method doesn't change anything in HBase. It only writes the new `.tableinfo` file to the file system. Because orphan tables can cause other problems, for example, `ServerCrashProcedures` that stick, you might still need to fix those errors after you've generated the missing table info files.
+ ``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar generateMissingTableDescriptorFile namespace:table_name ``` `replication [OPTIONS] [<NAMESPACE:TABLENAME>... | -i <INPUT_FILE>...]`
-**Options**
+Options:
-`-f, --fix` - fix any replication issues found.
+* `-f, --fix`: Fixes any replication issues found.
+* `-i,--inputFiles`: Takes one or more input files of table names.
-`-i,--inputFiles` - take one or more input files of table names
+It looks for undeleted replication queues and deletes them if the `--fix` option is passed. Pass a table name to check for a replication barrier and purge it if `--fix` is specified.
-Looks for undeleted replication queues and deletes them if passed the '--fix' option. Pass a table name to check for replication barrier and purge if '--fix'.
``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar replication namespace:table_name ```
-If `-i or --inputFiles` is specified, pass one or more input file names. Each file contains `<TABLENAME>`, one per line. For example:
+
+If `-i` or `--inputFiles` is specified, pass one or more input file names. Each file contains `<TABLENAME>`, one per line. For example:
``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar replication -i fileName1 fileName2
hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target
`setRegionState [<ENCODED_REGIONNAME> <STATE> | -i <INPUT_FILE>...]`
-**Options**
+Option:
-`-i,--inputFiles` take one or more input files of encoded region names and states
-
-**Possible region states:**
+* `-i,--inputFiles`: Takes one or more input files of encoded region names and states.
+
+Possible region states:
* OFFLINE
* OPENING
* OPEN
-* CLOSIN
-* CLOSED
-* SPLITTING
+* CLOSING
+* CLOSED
+* SPLITTING
* SPLIT
-* FAILED_OPEN
-* FAILED_CLOSE
+* FAILED_OPEN
+* FAILED_CLOSE
* MERGING
-* MERGED
+* MERGED
* SPLITTING_NEW
* MERGING_NEW
* ABNORMALLY_CLOSED
-
+ > [!WARNING]
-> This is a very risky option intended for use as last resort.
+> This risky option is intended for use only as a last resort.
-Example scenarios include unassigns/assigns that can't move forward because region is in an inconsistent state in 'hbase:meta'. For example, the 'unassigns' command can only proceed if passed a region in one of the following states: **SPLITTING|SPLIT|MERGING|OPEN|CLOSING**.
+Example scenarios include unassigns or assigns that can't move forward because the region is in an inconsistent state in `hbase:meta`. For example, the `unassigns` command can only proceed if it's passed a region in one of the following states: SPLITTING, SPLIT, MERGING, OPEN or CLOSING.
- Before manually setting a region state with this command, certify that this region not handled by a running procedure, such as 'assign' or 'split'. You can get a view of running procedures in the hbase shell using the 'list_procedures' command. An example
-setting region 'de00010733901a05f5a2a3a382e27dd4' to CLOSING:
+ Before you manually set a region state with this command, certify that this region isn't handled by a running procedure, such as `assign` or `split`. You can get a view of running procedures in the hbase shell by using the `list_procedures` command. This example sets the region `de00010733901a05f5a2a3a382e27dd4` to CLOSING:
``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar setRegionState de00010733901a05f5a2a3a382e27dd4 CLOSING ```
-Returns "0" if region state changed and "1" otherwise.
-If `-i or --inputFiles` is specified, pass one or more input file names.
-Each file contains `<ENCODED_REGIONNAME> <STATE>` one pair per line.
-For example,
+
+It returns `0` if the region state changed and `1` otherwise. If `-i or --inputFiles` is specified, pass one or more input file names. Each file contains `<ENCODED_REGIONNAME> <STATE>`, one pair per line. For example:
+ ``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar setRegionState -i fileName1 fileName2 ``` `setTableState [<TABLENAME> <STATE> | -i <INPUT_FILE>...]`
-**Options**
+Option:
-`-i,--inputFiles` take one or more input files of table names and states
+* `-i,--inputFiles`: Takes one or more input files of table names and states.
-Possible table states: **ENABLED, DISABLED, DISABLING, ENABLING**.
+Possible table states are ENABLED, DISABLED, DISABLING, and ENABLING.
+
+To read the current table state, in the hbase shell, run:
-To read current table state, in the hbase shell run:
-
``` hbase> get 'hbase:meta', '<TABLENAME>', 'table:state' ```
-A value of x08x00 == ENABLED, x08x01 == DISABLED, etc.
-Can also run a 'describe `<TABLENAME>` at the shell prompt. An example making table name user ENABLED:
+
+A value of `\x08\x00` means ENABLED, `\x08\x01` means DISABLED, and so on. You can also run `describe '<TABLENAME>'` at the shell prompt. This example sets the table `users` to ENABLED:
+ ``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar setTableState users ENABLED ```
-Returns whatever the previous table state was. If `-i or --inputFiles` is specified, pass one or more input file names. Each file contains `<TABLENAME> <STATE>`, one pair per line.
-For example:
+
+It returns whatever the previous table state was. If `-i` or `--inputFiles` is specified, pass one or more input file names. Each file contains `<TABLENAME> <STATE>`, one pair per line. For example:
+ ``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar setTableState -i fileName1 fileName2 ``` `scheduleRecoveries <SERVERNAME>... | -i <INPUT_FILE>...`
-**Options**
+Option:
+
+* `-i,--inputFiles`: Takes one or more input files of server names.
-`-i,--inputFiles` take one or more input files of server names
+Schedule `ServerCrashProcedure(SCP)` for a list of `RegionServers`. Format the server name as `<HOSTNAME>,<PORT>,<STARTCODE>`. (See HBase UI/logs.)
-Schedule `ServerCrashProcedure(SCP)` for list of `RegionServers`. Format server name as `<HOSTNAME>,<PORT>,<STARTCODE>` (See HBase UI/logs).
-
-Example using RegionServer 'a.example.org, 29100,1540348649479'
+This example uses the `RegionServer` `a.example.org,29100,1540348649479`:
``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar scheduleRecoveries a.example.org,29100,1540348649479 ```
-Returns the PID(s) of the created ServerCrashProcedure(s) or -1 if no procedure created (see master logs for why not).
-Command support added in hbase versions 2.0.3, 2.1.2, 2.2.0 or newer. If `-i or --inputFiles` is specified, pass one or more input file names. Each file contains `<SERVERNAME>`, one per line. For example:
+
+It returns the PIDs of the created `ServerCrashProcedures` or -1 if no procedure is created. (See the Master logs for the reason.) Command support was added in HBase versions 2.0.3, 2.1.2, 2.2.0, and newer. If `-i` or `--inputFiles` is specified, pass one or more input file names. Each file contains `<SERVERNAME>`, one per line. For example:
+ ``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar scheduleRecoveries -i fileName1 fileName2 ```
-## Fixing Problems
-### Some General Principals
-When making repair, **make sure `hbase:meta` is consistent first before you go about fixing any other issue type** such as a filesystem deviance. Deviance in the filesystem or problems with assign should be addressed after the `hbase:meta` has been put in order. If `hbase:meta` has issues, the Master can't make proper placements when adopting orphan filesystem data or making region assignments.
+## Fix problems
-Other general principals to keep in mind include a Region can't be assigned if it's in CLOSING state (or the inverse, unassigned if in OPENING state) without first transitioning via CLOSED: Regions must always move from CLOSED, to OPENING, to OPEN, and then to CLOSING, CLOSED.
+This section helps you troubleshoot common issues.
-When making repair, do fixup of a table-at-a-time.
+### General principles
-If a table is DISABLED, you cant' assign a Region. In the Master logs, you see that the Master reports skipped because the table is DISABLED. You can assign a Region because, currently in the OPENING state and you want it in the CLOSED state so it agrees with the table's DISABLED state. In this situation, you may have to temporarily set the table status to ENABLED, so you can do the assign, and then set it back again after the unassign statement. HBCK2 has facility to allow you to do this change. See the HBCK2 usage output.
+When you make a repair, *make sure that `hbase:meta` is consistent first before you fix any other issue type*, such as a file-system deviance. Deviance in the file system or problems with assign should be addressed after the `hbase:meta` is put in order. If `hbase:meta` has issues, the Master can't make proper placements when it adopts orphan file-system data or makes region assignments.
-### Assigning/Unassigning
-
-Generally, on assign, the Master persists until successful. An assign takes an exclusive lock on the Region. This precludes a concurrent assign or unassign from running. An assign against a locked Region waits until the lock is released before making progress. See the [Procedures & Locks] section for current list of outstanding Locks.
+A region can't be assigned if it's in the CLOSING state (or the inverse, unassigned if in the OPENING state) without first transitioning via CLOSED. Regions must always move from CLOSED, to OPENING, to OPEN, and then to CLOSING and CLOSED.
+
+When you make a repair, fix tables one at a time.
+
+If a table is DISABLED, you can't assign a region. In the Master logs, you see that the Master reports the assign as skipped because the table is DISABLED. You might want to assign a region because it's currently in the OPENING state and you want it in the CLOSED state so that it agrees with the table's DISABLED state. In this situation, you might have to temporarily set the table status to ENABLED so that you can do the assign, and then set it back again after the unassign. HBCK2 has the facility to allow you to make this change, as in the sketch that follows. See the HBCK2 usage output.
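The following sketch assumes a hypothetical table `my_ns:my_table` and reuses the example encoded region name from earlier in this article. It temporarily enables the table, unassigns the region so that it lands in CLOSED, and then restores the DISABLED state:
```
# Temporarily enable the table so the Master accepts the region transition.
hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar setTableState my_ns:my_table ENABLED

# Unassign the region so it moves to CLOSED.
hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar unassigns de00010733901a05f5a2a3a382e27dd4

# Restore the table state so it agrees with its intended DISABLED status.
hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar setTableState my_ns:my_table DISABLED
```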
+
+### Assign and unassign
+
+Generally, on assign, the Master persists until it's successful. An assign takes an exclusive lock on the region. The lock precludes a concurrent assign or unassign from running. An assign against a locked region waits until the lock is released before making progress.
-**Master startup cannot progress, in holding-pattern until region online**
+`Master startup cannot progress, in holding-pattern until region online`:
``` 2018-10-01 22:07:42,792 WARN org.apache.hadoop.hbase.master.HMaster: hbase:meta,1.1588230740 isn't online; state={1588230740 state=CLOSING, ts=1538456302300, server=ve1017.example.org,22101,1538449648131}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region online. ```
-The Master is unable to continue startup because there's no Procedure to assign `hbase:meta` (or `hbase:namespace`). To inject one, use the HBCK2 tool:
+
+The Master is unable to continue startup because there's no procedure to assign `hbase:meta` (or `hbase:namespace`). To inject one, use the HBCK2 tool:
+ ``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar assigns -skip 1588230740 ```
-where **1588230740 is the encoded name of the `hbase:meta` Region**. Pass the '-skip' option to stop HBCK2 doing a version check against the remote master. If the remote master isn't up, the version check prompts a 'Master is initializing response', or 'PleaseHoldException' and drop the assign attempt. The '-skip' command avoid the version check and lands the scheduled assign.
-The same may happen to the `hbase:namespace` system table. Look for the encoded Region name of the `hbase:namespace` Region and do similar to what we did for `hbase:meta`. In this latter case, the Master actually prints a helpful message that looks like
+In this example, 1588230740 is the encoded name of the `hbase:meta` region. Pass the `-skip` option to stop HBCK2 from doing a version check against the remote Master. If the remote Master isn't up, the version check prompts a "Master is initializing" response or a `PleaseHoldException` and drops the assign attempt. The `-skip` option avoids the version check and lands the scheduled assign.
+
+The same might happen to the `hbase:namespace` system table. Look for the encoded region name of the `hbase:namespace` region and take similar steps to what we did for `hbase:meta`. In this latter case, the Master actually prints a helpful message that looks like this example:
``` 2019-07-09 22:08:38,966 WARN [master/localhost:16000:becomeActiveMaster] master.HMaster: hbase:namespace,,1562733904278.9559cf72b8e81e1291c626a8e781a6ae. isn't online; state={9559cf72b8e81e1291c626a8e781a6ae state=CLOSED, ts=1562735318897, server=null}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined. ```
-To schedule an assign for the hbase:namespace table noted in the above log line, you would do:
+
+To schedule an assign for the `hbase:namespace` table noted in the preceding log line:
+ ``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar -skip assigns 9559cf72b8e81e1291c626a8e781a6ae ```
-passing the encoded name for the namespace region (the encoded name differs per deploy).
-### Missing Regions in `hbase:meta` region/table restore/rebuild
-There have been some unusual cases where table regions have been removed from `hbase:meta` table. Some triage on such cases revealed these were operator-induced. Users would have run the obsolete hbck1 OfflineMetaRepair tool against an HBCK2 cluster. OfflineMetaRepair is a well known tool for fixing `hbase:meta` table related issues on HBase 1.x versions. The original version isn't compatible with HBase 2.x or higher versions, and it has undergone some adjustments so in the extreme, it can now be run via HBCK2.
+Pass the encoded name for the namespace region. (The encoded name differs per deployment.)
-In most of these cases, regions end up missing in `hbase:meta` at random, but hbase may still be operational. In such situations, problem can be addressed with the Master online, using the addFsRegionsMissingInMeta command in HBCK2. This command is less disruptive to hbase than a full `hbase:meta` rebuild covered later, and it can be used even for recovering the namespace table region.
+### Missing regions in hbase:meta region/table restore/rebuild
-### Extra Regions in `hbase:meta` region/table restore/rebuild
-There can also be situations where table regions have been removed in file system, but still have related entries on `hbase:meta` table. This may happen due to problems on splitting, manual operation mistakes (like deleting/moving the region dir manually), or even meta info data loss issues such as HBASE-21843.
+Some unusual cases had table regions removed from the `hbase:meta` table. Triage on these cases revealed that they were operator induced. Users ran the obsolete HBCK1 OfflineMetaRepair tool against an hbase-2.x cluster. OfflineMetaRepair is a well-known tool for fixing `hbase:meta` table-related issues on HBase 1.x versions. The original version isn't compatible with HBase 2.x or higher versions, and it has undergone some adjustments. In extreme situations, it can now be run via HBCK2.
-Such problem can be addressed with the Master online, using the **extraRegionsInMeta --fix** command in HBCK2. This command is less disruptive to hbase than a full `hbase:meta` rebuild covered later. Also useful when this happens on versions that don't support fixMeta hbck2 option (any prior to "2.0.6", "2.1.6", "2.2.1", "2.3.0","3.0.0").
+In most of these cases, regions end up missing in `hbase:meta` at random, but hbase might still be operational. In such situations, the problem can be addressed with the Master online by using the `addFsRegionsMissingInMeta` command in HBCK2. This command is less disruptive to hbase than a full `hbase:meta` rebuild, which is covered later. It can be used even for recovering the namespace table region.
+
+### Extra regions in hbase:meta region/table restore/rebuild
+
+There can also be situations where table regions were removed in the file system but still have related entries on the `hbase:meta` table. This scenario might happen because of problems on splitting, manual operation mistakes (like deleting or moving the region dir manually), or even meta info data loss issues such as HBASE-21843.
+
+Such problems can be addressed with the Master online by using the `extraRegionsInMeta --fix` command in HBCK2. This command is less disruptive to hbase than a full `hbase:meta` rebuild, which is covered later. It's also useful when this happens on versions that don't support the `fixMeta` HBCK2 option (any versions prior to 2.0.6, 2.1.6, 2.2.1, 2.3.0, or 3.0.0).
-### Online `hbase:meta` rebuild recipe
-If `hbase:meta` corruption isn't too critical, hbase would still be able to bring it online. Even if namespace region is among the missing regions, it's possible to scan `hbase:meta` during the initialization period, where Master is waiting for namespace to be assigned. To verify this situation, a` hbase:meta` scan command can be executed. If it doesn't time out or shows any errors, the `hbase:meta` is online:
+### Online hbase:meta rebuild recipe
+
+If `hbase:meta` corruption isn't too critical, hbase can still bring it online. Even if the namespace region is among the missing regions, it's possible to scan `hbase:meta` during the initialization period, while the Master is waiting for the namespace to be assigned. To verify this situation, you can execute an `hbase:meta` scan command. If it doesn't time out or show any errors, `hbase:meta` is online:
+ ``` echo "scan 'hbase:meta', {COLUMN=>'info:regioninfo'}" | hbase shell ```
-HBCK2 **addFsRegionsMissingInMeta** can be used if the message doesn't show any errors. It reads region metadata info available on the FS region directories in order to recreate regions in `hbase:meta`. Since it can run with hbase partially operational, it attempts to disable online tables that are affected the reported problem and it's going to readd regions to `hbase:meta`. It can check for specific tables/namespaces, or all tables from all namespaces. An example shows adding missing regions for tables 'tbl_1' in the default namespace, 'tbl_2' in namespace 'n1', and for all tables from namespace 'n2':
+
+HBCK2 `addFsRegionsMissingInMeta` can be used if the scan doesn't show any errors. It reads region metadata info available in the FS region directories to re-create regions in `hbase:meta`. Because it can run with hbase partially operational, it attempts to disable online tables that are affected by the reported problem, and then it readds regions to `hbase:meta`. It can check specific tables or namespaces, or all tables from all namespaces. This example shows adding missing regions for tables `tbl_1` in the default namespace, `tbl_2` in namespace `n1`, and for all tables from the namespace `n2`:
+ ``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar addFsRegionsMissingInMeta default:tbl_1 n1:tbl_2 n2 ```
-As it operates independently from Master, once it finishes successfully, more steps are required to actually have the readded regions assigned. These messages are listed as
-**addFsRegionsMissingInMeta** outputs an assigns command with all regions that got readded. This command needs to be executed later, so copy and save it for convenience.
+Because it operates independently from the Master, after it finishes successfully, more steps are required to actually get the readded regions assigned:
-**For HBase versions prior to 2.3.0, after addFsRegionsMissingInMeta finished successfully and output has been saved, restart all running HBase Masters.**
+- `addFsRegionsMissingInMeta` outputs an assigns command with all regions that got readded. This command must be executed later, so copy and save it for convenience.
+- For HBase versions prior to 2.3.0, after `addFsRegionsMissingInMeta` finished successfully and output has been saved, restart all running HBase Masters.
-Once Master's are restarted and `hbase:meta` is already online (check if Web UI is accessible), run assigns command from addFsRegionsMissingInMeta output saved earlier.
+After Masters are restarted and `hbase:meta` is already online (check if the web UI is accessible), run the assigns command from `addFsRegionsMissingInMeta` output saved earlier.
> [!NOTE]
-> If namespace region is among the missing regions, you will need to add --skip flag at the beginning of assigns command returned.
+> If the namespace region is among the missing regions, you need to add the `--skip` flag at the beginning of the assigns command returned.
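For illustration, a saved `assigns` command might be replayed as follows. The encoded region names are placeholders copied from the earlier examples, and `-skip` is added when the namespace region was among the missing ones, as the preceding note describes:
```
# Replay the assigns command saved from the addFsRegionsMissingInMeta output.
hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar -skip assigns 9559cf72b8e81e1291c626a8e781a6ae de00010733901a05f5a2a3a382e27dd4
```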
-Should a cluster suffer a catastrophic loss of the `hbase:meta` table, a rough rebuild is possible using the following recipe. In outline, we stop the cluster. Run the HBCK2 OfflineMetaRepair tool, which reads directories and metadata dropped into the filesystem makes the best effort at reconstructing a viable `hbase:met` table; restart your cluster. Inject an assign to bring the system namespace table online; and then finally, reassign user space tables you'd like enabled (the rebuilt `hbase:meta` creates a table with all tables offline and no regions assigned).
+If a cluster suffers a catastrophic loss of the `hbase:meta` table, a rough rebuild is possible by using the following recipe. In outline, we stop the cluster. Run the HBCK2 OfflineMetaRepair tool, which reads directories and metadata dropped into the file system and makes the best effort at reconstructing a viable `hbase:meta` table. Restart your cluster. Inject an assign to bring the system namespace table online. Finally, reassign user space tables you want enabled. (The rebuilt `hbase:meta` lists all tables offline, with no regions assigned.)
### Detailed rebuild recipe

> [!NOTE]
-> Use it only as a last resort. Not recommended.
+> Use this option only as a last resort. We don't recommend it.
* Stop the cluster.
+* Run the rebuild `hbase:meta` command from HBCK2. This command moves aside the original `hbase:meta` and puts in place a newly rebuilt one. This example shows how to run the tool. It adds the `-details` flag so that the tool dumps info on the regions it found in HDFS:
-* Run the rebuild `hbase:meta` command from HBCK2. This moves aside the original `hbase:meta` and puts in place a newly rebuilt one. As an example of how to run the tool. It adds the -details flag so the tool dumps info on the regions its found in hdfs:
``` hbase --config /etc/hbase/conf -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar org.apache.hbase.hbck1.OfflineMetaRepair -details ```
-* Start up the cluster. It won't up fully. It's stuck because the namespace table isn't online and there's no assign procedure in the procedure store for this contingency. The hbase master log shows this state. Here's an example of what it logs:
+* Start up the cluster. It won't start up fully. It's stuck because the namespace table isn't online and there's no assign procedure in the procedure store for this contingency. The HBase Master log shows this state. This example shows what it logs:
+ ``` 2019-07-10 18:30:51,090 WARN [master/localhost:16000:becomeActiveMaster] master.HMaster: hbase:namespace,,1562808216225.725a0fe6c2c869d3d0a9ed82bfa80fa3. isn't online; state={725a0fe6c2c869d3d0a9ed82bfa80fa3 state=CLOSED, ts=1562808619952, server=null}; ServerCrashProcedures=false. Master startup can't progress, in holding-pattern until region onlined. ```
- To assign the namespace table region, you can't use the shell. If you use the shell, it fails with a PleaseHoldException because the master isn't yet up (it's waiting for the namespace table to come online before it declares itself ΓÇÿupΓÇÖ). You have to use the HBCK2 assigns command. To assign, you need the namespace encoded name. It shows in the log quoted. That is, 725a0fe6c2c869d3d0a9ed82bfa80fa3 in this case. You have to pass the -skip command to ΓÇÿskipΓÇÖ the master version check (without it, your HBCK2 invocation elicits the PleaseHoldException because the master isn't yet up). Here's an example adding an assign of the namespace table:
+
+ To assign the namespace table region, you can't use the shell. If you use the shell, it fails with `PleaseHoldException` because the Master isn't yet up. (It's waiting for the namespace table to come online before it declares itself "up.") You have to use the HBCK2 assigns command. To assign, you need the namespace encoded name. It shows in the log quoted. That's `725a0fe6c2c869d3d0a9ed82bfa80fa3` in this case. You have to pass the `-skip` command to skip the Master version check. (Without it, your HBCK2 invocation elicits the `PleaseHoldException` because the Master isn't up yet.) This example adds an assign of the namespace table:
+ ``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar -skip assigns 725a0fe6c2c869d3d0a9ed82bfa80fa3 ```
- If the invocation comes back with ΓÇÿConnection refusedΓÇÖ, is the Master up? The Master will shut down after a while if it canΓÇÖt initialize itself. Just restart the cluster/master and rerun the assigns command.
+
+ If the invocation comes back with `Connection refused`, is the Master up? The Master shuts down after a while if it can't initialize itself. Restart the cluster/Master and rerun the assigns command.
+
+* When the assigns run successfully, you see it emit something similar to the following example. The `48` on the end is the PID of the assign procedure schedule. If the PID returned is `-1`, the Master startup hasn't progressed sufficiently, so retry. Or, the encoded region name might be incorrect, so check for this issue.
-* When the assigns run successfully, you see it emit the likes of the following. The ‘48’ on the end is the PID of the assign procedure schedule. If the PID returned is ‘-1’, then the master startup hasn't progressed sufficently… retry. Or, the encoded regionname is incorrect. Check.
``` hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar -skip assigns 725a0fe6c2c869d3d0a9ed82bfa80fa3 ```
Should a cluster suffer a catastrophic loss of the `hbase:meta` table, a rough r
18:40:44.315 [main] INFO org.apache.hbase.HBCK2 - hbck sufpport check skipped [48] ```
-* Check the master logs. The master should have come up. You see successful completion of PID=48. Look for a line like this to verify successful master launch:
+* Check the Master logs. The Master should have come up. You see successful completion of PID=48. Look for a line like this example to verify a successful Master launch:
+ ``` master.HMaster: Master has completed initialization 132.515sec ```
+
It might take a while to appear.
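One way to watch for that line is to grep the active Master log; the log path below is an assumption and varies by installation:
```
# The Master log location is installation-specific; adjust the path for your deployment.
grep "Master has completed initialization" /var/log/hbase/hbase-*-master-*.log
```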
- The rebuild of `hbase:meta` adds the user tables in DISABLED state and the regions in CLOSED mode. Re-enable tables via the shell to bring all table regions back online. Do it one-at-a-time or see the enable all ".*" command to enable all tables in one shot.
+ The rebuild of `hbase:meta` adds the user tables in DISABLED state and the regions in CLOSED mode. Reenable tables via the shell to bring all table regions back online. Do it one at a time, or use the `enable_all '.*'` shell command to enable all tables at once, as in the sketch that follows.
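A minimal sketch of both approaches from the command line; the table name is a hypothetical placeholder:
```
# Re-enable a single rebuilt table...
echo "enable 'my_ns:my_table'" | hbase shell
# ...or re-enable every table in one shot with a regex.
echo "enable_all '.*'" | hbase shell
```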
- The rebuild meta is missing edits and may need subsequent repair and cleaning using facility outlined higher up in this TSG.
+ The rebuild meta is missing edits and might need subsequent repair and cleaning by using the facility outlined previously in this article.
### Dropped reference files, missing hbase.version file, and corrupted files
-HBCK2 can check for hanging references and corrupt files. You can ask it to sideline bad files, which may be needed to get over humps where regions won't online or reads are failing. See the filesystem command in the HBCK2 listing. Pass one or more tablename (or 'none' to check all tables). It reports bad files. Pass the `--fix` option to effect repairs.
+HBCK2 can check for hanging references and corrupt files. You can ask it to sideline bad files, which might be needed to get over humps where regions won't come online or reads are failing. See the `filesystem` command in the HBCK2 listing. Pass one or more table names (or use `none` to check all tables). Bad files are reported. Pass the `--fix` option to make repairs, as in the sketch that follows.
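The sketch below shows a check-then-fix pass for a hypothetical table, following the same invocation pattern as the other HBCK2 commands in this article:
```
# Report hanging references and corrupt files for one table (pass 'none' to check all tables).
hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar filesystem my_ns:my_table

# Re-run with --fix to sideline the reported bad files.
hbase --config /etc/hbase/conf hbck -j ~/hbase-operator-tools/hbase-hbck2/target/hbase-hbck2-1.x.x-SNAPSHOT.jar filesystem --fix my_ns:my_table
```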
+
+### Procedure restart
-### Procedure Start-over
+As a last resort, if the Master is distraught and all attempts at repair only turn up undoable locks or procedures that can't finish, or if the set of `MasterProcWALs` is growing without bounds, it's possible to wipe the Master state clean. Move aside the `/hbase/MasterProcWALs/` directory under your HBase installation and restart the Master process. It comes back as a tabula rasa, without memory of the earlier state.
-At an extreme, as a last resource, if the Master is distraught and all attempts at fixup only turn up undoable locks or Procedures that can't finish, and/or the set of MasterProcWALs is growing without bound. It's possible to wipe the Master state clean. Just move aside the `/hbase/MasterProcWALs/` directory under your HBase install and restart the Master process. It comes back as a tabular format without memory.
+If at the time of the erasure all regions were happily assigned or offlined, on Master restart, the Master should pick up and continue as though nothing happened. But if there were regions in transition at the time, the operator has to intervene to bring outstanding assigns or unassigns to their terminal point.
-If at the time of the erasure, all Regions were happily assigned or off lined, then on Master restart, the Master should pick up and continue as though nothing happened. But if there were Regions-In-Transition at the time, then the operator has to intervene to bring outstanding assigns/unassigns to their terminal point. Read the `hbase:meta` `info:state` columns as described to figure what needs assigning/unassigning. Having erased all history moving aside the MasterProcWALs, none of the entities should be locked so you 'Improved free to bulk assign/unassign.
+Read the `hbase:meta` `info:state` columns as described to determine what needs to be assigned or unassigned. After all history is erased by moving aside the `MasterProcWALs`, none of the entities should be locked, so you're free to bulk assign or unassign.
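A sketch of one way to read those columns, following the same scan pattern used earlier in this article:
```
# Dump the state column for every region recorded in hbase:meta.
echo "scan 'hbase:meta', {COLUMN=>'info:state'}" | hbase shell
```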
healthcare-apis Dicom Services Conformance Statement V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement-v2.md
Retrieving metadata won't return attributes with the following value representat
Retrieved metadata includes the null character when the attribute was padded with nulls and stored as is.
-### Retrieve metadata cache validation for (study, series, or instance)
+### Retrieve metadata cache validation (for study, series, or instance)
Cache validation is supported by using the `ETag` mechanism. In the response to a metadata request, the ETag is returned as one of the headers. This ETag can be cached and added as the `If-None-Match` header in later requests for the same metadata. Two types of responses are possible if the data exists:

* Data hasn't changed since the last request: an `HTTP 304 (Not Modified)` response is sent with no response body.
* Data has changed since the last request: an `HTTP 200 (OK)` response is sent with an updated ETag. Required data is returned as part of the body.
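For illustration, a client might replay the metadata request with the cached ETag by using curl. The service URL, study UID, and ETag value below are hypothetical placeholders:
```
# Returns HTTP 304 with no body if the metadata is unchanged, or HTTP 200 with a new ETag otherwise.
curl -i \
  -H "Accept: application/dicom+json" \
  -H 'If-None-Match: "a1b2c3d4"' \
  "https://<your-dicom-service>.dicom.azurehealthcareapis.com/v2/studies/1.2.826.0.1.3680043.8.498.1/metadata"
```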
-### Retrieve Rendered Image (For Instance or Frame)
+### Retrieve rendered image (for instance or frame)
The following `Accept` headers are supported for retrieving a rendered image of an instance or a frame:

- `image/jpeg`
healthcare-apis Dicom Services Conformance Statement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement.md
Retrieving metadata doesn't return attributes with the following value represent
| OW | Other Word | | UN | Unknown |
-### Retrieve metadata cache validation for (study, series, or instance)
+### Retrieve metadata cache validation (for study, series, or instance)
Cache validation is supported by using the `ETag` mechanism. In the response to a metadata request, the ETag is returned as one of the headers. This ETag can be cached and added as the `If-None-Match` header in later requests for the same metadata. Two types of responses are possible if the data exists:

* Data hasn't changed since the last request: an `HTTP 304 (Not Modified)` response is sent with no response body.
* Data has changed since the last request: an `HTTP 200 (OK)` response is sent with an updated ETag. Required data is also returned as part of the body.
-### Retrieve Rendered Image (For Instance or Frame)
+### Retrieve rendered image (for instance or frame)
The following `Accept` headers are supported for retrieving a rendered image of an instance or a frame:

- `image/jpeg`
healthcare-apis Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/frequently-asked-questions.md
Title: Frequently asked questions about the MedTech service - Azure Health Data
description: Learn about the MedTech service frequently asked questions. - - Previously updated : 04/28/2023+ Last updated : 05/15/2023
To learn about the MedTech service open-source projects, see [Open-source projec
## Next steps
-In this article, you learned about the MedTech service frequently asked questions (FAQs)
+In this article, you learned about the MedTech service frequently asked questions (FAQs).
For an overview of the MedTech service, see
iot-develop Quickstart Devkit Nxp Mimxrt1050 Evkb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-nxp-mimxrt1050-evkb.md
- Title: Connect an NXP MIMXRT1050-EVKB to Azure IoT Central quickstart
-description: Use Azure RTOS embedded software to connect an NXP MIMXRT1050-EVKB device to Azure IoT and send telemetry.
---- Previously updated : 10/21/2022---
-# Quickstart: Connect an NXP MIMXRT1050-EVKB Evaluation kit to IoT Central
-
-**Applies to**: [Embedded device development](about-iot-develop.md#embedded-device-development)<br>
-**Total completion time**: 30 minutes
-
-[![Browse code](media/common/browse-code.svg)](https://github.com/azure-rtos/getting-started/tree/master/NXP/MIMXRT1050-EVKB/)
-
-In this quickstart, you use Azure RTOS to connect an NXP MIMXRT1050-EVKB Evaluation kit (from now on, NXP EVK) to Azure IoT.
-
-You'll complete the following tasks:
-
-* Install a set of embedded development tools for programming an NXP EVK in C
-* Build an image and flash it onto the NXP EVK
-* Use Azure IoT Central to create cloud components, view properties, view device telemetry, and call direct commands
-
-## Prerequisites
-
-* A PC running Windows 10
-* [Git](https://git-scm.com/downloads) for cloning the repository
-* Hardware
-
- * The [NXP MIMXRT1050-EVKB](https://www.nxp.com/design/development-boards/i-mx-evaluation-and-development-boards/i-mx-rt1050-evaluation-kit:MIMXRT1050-EVK) (NXP EVK)
- * USB 2.0 A male to Micro USB male cable
- * Wired Ethernet access
- * Ethernet cable
-
-## Prepare the development environment
-
-To set up your development environment, first you clone a GitHub repo that contains all the assets you need for the quickstart. Then you install a set of programming tools.
-
-### Clone the repo for the quickstart
-
-Clone the following repo to download all sample device code, setup scripts, and offline versions of the documentation. If you previously cloned this repo in another quickstart, you don't need to do it again.
-
-To clone the repo, run the following command:
-
-```shell
-git clone --recursive https://github.com/azure-rtos/getting-started.git
-```
-
-### Install the tools
-
-The cloned repo contains a setup script that installs and configures the required tools. If you installed these tools in another embedded device quickstart, you don't need to do it again.
-
-> [!NOTE]
-> The setup script installs the following tools:
-> * [CMake](https://cmake.org): Build
-> * [ARM GCC](https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/gnu-rm): Compile
-> * [Termite](https://www.compuphase.com/software_termite.htm): Monitor serial port output for connected devices
-
-To install the tools:
-
-1. From File Explorer, navigate to the following path in the repo and run the setup script named *get-toolchain.bat*:
-
- *getting-started\tools\get-toolchain.bat*
-
-1. After the installation, open a new console window to recognize the configuration changes made by the setup script. Use this console to complete the remaining programming tasks in the quickstart. You can use Windows CMD, PowerShell, or Git Bash for Windows.
-1. Run the following code to confirm that CMake version 3.14 or later is installed.
-
- ```shell
- cmake --version
- ```
--
-## Prepare the device
-
-To connect the NXP EVK to Azure, you'll modify a configuration file for Wi-Fi and Azure IoT settings, rebuild the image, and flash the image to the device.
-
-### Add configuration
-
-1. Open the following file in a text editor:
-
- *getting-started\NXP\MIMXRT1050-EVKB\app\azure_config.h*
-
-1. Set the Azure IoT device information constants to the values that you saved after you created Azure resources.
-
- |Constant name|Value|
- |-|--|
- |`IOT_DPS_ID_SCOPE` |{*Your ID scope value*}|
- |`IOT_DPS_REGISTRATION_ID` |{*Your Device ID value*}|
- |`IOT_DEVICE_SAS_KEY` |{*Your Primary key value*}|
-
-1. Save and close the file.
-
-### Build the image
-
-1. In your console or in File Explorer, run the script *rebuild.bat* at the following path to build the image:
-
- *getting-started\NXP\MIMXRT1050-EVKB\tools\rebuild.bat*
-
-2. After the build completes, confirm that the binary file was created in the following path:
-
- *getting-started\NXP\MIMXRT1050-EVKB\build\app\mimxrt1050_azure_iot.bin*
-
-### Flash the image
-
-1. On the NXP EVK, locate the **Reset** button, the Micro USB port, and the Ethernet port. You use these components in the following steps. All three are highlighted in the following picture:
-
- :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1050-evkb/nxp-1050-evkb-board.png" alt-text="Locate key components on the NXP EVK board":::
-
-1. Connect the Micro USB cable to the Micro USB port on the NXP EVK, and then connect it to your computer. After the device powers up, a solid green LED shows the power status.
-1. Use the Ethernet cable to connect the NXP EVK to an Ethernet port.
-1. In File Explorer, find the binary file that you created in the previous section.
-1. Copy the binary file *mimxrt1050_azure_iot.bin*
-1. In File Explorer, find the NXP EVK device connected to your computer. The device appears as a drive on your system with the drive label **RT1050-EVK**.
-1. Paste the binary file into the root folder of the NXP EVK. Flashing starts automatically and completes in a few seconds.
-
- > [!NOTE]
- > During the flashing process, a red LED blinks rapidly on the NXP EVK.
-
-### Confirm device connection details
-
-You can use the **Termite** app to monitor communication and confirm that your device is set up correctly.
-
-1. Start **Termite**.
- > [!TIP]
- > If you have issues getting your device to initialize or connect after flashing, see [Troubleshooting](troubleshoot-embedded-device-quickstarts.md).
-1. Select **Settings**.
-1. In the **Serial port settings** dialog, check the following settings and update if needed:
- * **Baud rate**: 115,200
- * **Port**: The port that your NXP EVK is connected to. If there are multiple port options in the dropdown, you can find the correct port to use. Open Windows **Device Manager**, and view **Ports** to identify which port to use.
-
- :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1050-evkb/termite-settings.png" alt-text="Screenshot of serial port settings in the Termite app":::
-
-1. Select OK.
-1. Press the **Reset** button on the device. The button is labeled on the device and located near the Micro USB connector.
-1. In the **Termite** app, check the following checkpoint values to confirm that the device is initialized and connected to Azure IoT.
-
- ```output
- Starting Azure thread
-
- Initializing DHCP
- IP address: 10.0.0.77
- Mask: 255.255.255.0
- Gateway: 10.0.0.1
- SUCCESS: DHCP initialized
-
- Initializing DNS client
- DNS address: 10.0.0.1
- SUCCESS: DNS client initialized
-
- Initializing SNTP client
- SNTP server 0.pool.ntp.org
- SNTP IP address: 142.147.92.5
- SNTP time update: May 28, 2021 17:36:33.325 UTC
- SUCCESS: SNTP initialized
-
- Initializing Azure IoT DPS client
- DPS endpoint: global.azure-devices-provisioning.net
- DPS ID scope: ***
- Registration ID: mydevice
- SUCCESS: Azure IoT DPS client initialized
-
- Initializing Azure IoT Hub client
- Hub hostname: ***.azure-devices.net
- Device id: mydevice
- Model id: dtmi:azurertos:devkit:gsg;1
- Connected to IoT Hub
- SUCCESS: Azure IoT Hub client initialized
- ```
-
-Keep Termite open to monitor device output in the following steps.
-
-## Verify the device status
-
-To view the device status in IoT Central portal:
-1. From the application dashboard, select **Devices** on the side navigation menu.
-1. Confirm that the **Device status** is updated to **Provisioned**.
-1. Confirm that the **Device template** is updated to **Getting Started Guide**.
-
- :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1050-evkb/iot-central-device-view-status.png" alt-text="Screenshot of device status in IoT Central":::
-
-## View telemetry
-
-With IoT Central, you can view the flow of telemetry from your device to the cloud.
-
-To view telemetry in IoT Central portal:
-
-1. From the application dashboard, select **Devices** on the side navigation menu.
-1. Select the device from the device list.
-1. View the telemetry as the device sends messages to the cloud in the **Overview** tab.
-1. The temperature is measured from the MCU wafer.
-
- :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1050-evkb/iot-central-device-telemetry.png" alt-text="Screenshot of device telemetry in IoT Central":::
-
- > [!NOTE]
- > You can also monitor telemetry from the device by using the Termite app.
-
-## Call a direct method on the device
-
-You can also use IoT Central to call a direct method that you've implemented on your device. Direct methods have a name, and can optionally have a JSON payload, configurable connection, and method timeout. In this section, you call a method that enables you to turn an LED on or off.
-
-To call a method in IoT Central portal:
-
-1. Select the **Command** tab from the device page.
-1. In the **State** dropdown, select **True**, and then select **Run**. There will be no change on the device as there isn't an available LED to toggle. You can view the output in Termite to monitor the status of the methods.
-
- :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1050-evkb/iot-central-invoke-method.png" alt-text="Screenshot of calling a direct method on a device in IoT Central":::
-
-1. In the **State** dropdown, select **False**, and then select **Run**.
-
-## View device information
-
-You can view the device information from IoT Central.
-
-Select **About** tab from the device page.
--
-> [!TIP]
-> To customize these views, edit the [device template](../iot-central/core/howto-edit-device-template.md).
-
-## Troubleshoot and debug
-
-If you experience issues building the device code, flashing the device, or connecting, see [Troubleshooting](troubleshoot-embedded-device-quickstarts.md).
-
-For debugging the application, see [Debugging with Visual Studio Code](https://github.com/azure-rtos/getting-started/blob/master/docs/debugging.md).
-
-## Clean up resources
-
-If you no longer need the Azure resources created in this quickstart, you can delete them from the IoT Central portal.
-
-To remove the entire Azure IoT Central sample application and all its devices and resources:
-1. Select **Administration** > **Your application**.
-1. Select **Delete**.
-
-## Next steps
-
-In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the NXP EVK device. You also used the IoT Central portal to create Azure resources, connect the NXP EVK securely to Azure, view telemetry, and send messages.
-
-As a next step, explore the following articles to learn more about using the IoT device SDKs to connect devices to Azure IoT.
-
-> [!div class="nextstepaction"]
-> [Connect a simulated device to IoT Central](quickstart-send-telemetry-central.md)
-> [!div class="nextstepaction"]
-> [Connect a simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
-
-> [!IMPORTANT]
-> Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
iot-develop Quickstart Devkit Renesas Rx65n 2Mb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-renesas-rx65n-2mb.md
- Title: Connect a Renesas RX65N-2MB to Azure IoT Central quickstart
-description: Use Azure RTOS embedded software to connect a Renesas RX65N-2MB device to Azure IoT and send telemetry.
---- Previously updated : 10/21/2022---
-# Quickstart: Connect a Renesas Starter Kit+ for RX65N-2MB to IoT Central
-
-**Applies to**: [Embedded device development](about-iot-develop.md#embedded-device-development)<br>
-**Total completion time**: 30 minutes
-
-[![Browse code](media/common/browse-code.svg)](https://github.com/azure-rtos/getting-started/tree/master/Renesas/RSK_RX65N_2MB)
-
-In this quickstart, you use Azure RTOS to connect the Renesas Starter Kit+ for RX65N-2MB (hereafter, the Renesas RX65N) to Azure IoT.
-
-You will complete the following tasks:
-
-* Install a set of embedded development tools for programming a Renesas RX65N in C
-* Build an image and flash it onto the Renesas RX65N
-* Use Azure IoT Central to create cloud components, view properties, view device telemetry, and call direct commands
-
-## Prerequisites
-
-* A PC running Windows 10
-* [Git](https://git-scm.com/downloads) for cloning the repository
-* Hardware
-
- * The [Renesas Starter Kit+ for RX65N-2MB](https://www.renesas.com/products/microcontrollers-microprocessors/rx-32-bit-performance-efficiency-mcus/rx65n-2mb-starter-kit-plus-renesas-starter-kit-rx65n-2mb) (Renesas RX65N)
- * The [Renesas E2 emulator Lite](https://www.renesas.com/software-tool/e2-emulator-lite-rte0t0002lkce00000r)
- * 2 USB 2.0 A male to Mini USB male cables
- * The included 5V power supply
- * Ethernet cable
- * Wired Ethernet access
-* An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-## Prepare the development environment
-
-To set up your development environment, first you clone a GitHub repo that contains all the assets you need for the quickstart. Then you install a set of programming tools.
-
-### Clone the repo for the quickstart
-
-Clone the following repo to download all sample device code, setup scripts, and offline versions of the documentation. If you previously cloned this repo in another quickstart, you don't need to do it again.
-
-To clone the repo, run the following command:
-
-```shell
-git clone --recursive https://github.com/azure-rtos/getting-started.git
-```
-
-### Install the tools
-
-The cloned repo contains a setup script that installs and configures the required tools. If you installed these tools in another embedded device quickstart, you don't need to do it again.
-
-> [!NOTE]
-> The setup script installs the following tools:
-> * [CMake](https://cmake.org): Build
-> * [RX GCC](http://gcc-renesas.com/downloads/get.php?f=rx/8.3.0.202004-gnurx/gcc-8.3.0.202004-GNURX-ELF.exe): Compile
-> * [Termite](https://www.compuphase.com/software_termite.htm): Monitor serial port output for connected devices
-
-To install the tools:
-
-1. From File Explorer, navigate to the following path in the repo and run the setup script named *get-toolchain.bat*:
-
- *getting-started\tools\get-toolchain-rx.bat*
-
-1. Add the RX compiler to the Windows Path:
-
- *%USERPROFILE%\AppData\Roaming\GCC for Renesas RX 8.3.0.202004-GNURX-ELF\rx-elf\rx-elf\bin*
-
-1. After the installation, open a new console window to recognize the configuration changes made by the setup script. Use this console to complete the remaining programming tasks in the quickstart. You can use Windows CMD, PowerShell, or Git Bash for Windows.
-1. Run the following commands to confirm that CMake version 3.14 or later is installed and the RX compiler path is set up correctly.
-
- ```shell
- cmake --version
- rx-elf-gcc
- ```
-To install the remaining tools:
-
-* Install [Renesas Flash Programmer](https://www.renesas.com/software-tool/renesas-flash-programmer-programming-gui). The Renesas Flash Programmer contains the drivers and tools needed to flash the Renesas RX65N via the Renesas E2 Lite.
--
-## Prepare the device
-
-To connect the Renesas RX65N to Azure, you'll modify a configuration file for Wi-Fi and Azure IoT settings, rebuild the image, and flash the image to the device.
-
-### Add configuration
-
-1. Open the following file in a text editor:
-
- *getting-started\Renesas\RSK_RX65N_2MB\app\azure_config.h*
-
-1. Set the Azure IoT device information constants to the values that you saved after you created Azure resources.
-
- |Constant name|Value|
- |-|--|
- |`IOT_DPS_ID_SCOPE` |{*Your ID scope value*}|
- |`IOT_DPS_REGISTRATION_ID` |{*Your Device ID value*}|
- |`IOT_DEVICE_SAS_KEY` |{*Your Primary key value*}|
-
-1. Save and close the file.
-
-### Build the image
-
-1. In your console or in File Explorer, run the script *rebuild.bat* at the following path to build the image:
-
- *getting-started\Renesas\RSK_RX65N_2MB\tools\rebuild.bat*
-
-2. After the build completes, confirm that the binary file was created in the following path:
-
- *getting-started\Renesas\RSK_RX65N_2MB\build\app\rx65n_azure_iot.hex*
-
-### Connect the device
-
-> [!NOTE]
-> For more information about setting up and getting started with the Renesas RX65N, see [Renesas Starter Kit+ for RX65N-2MB Quick Start](https://www.renesas.com/document/man/e2studio-renesas-starter-kit-rx65n-2mb-quick-start-guide).
-
-1. Complete the following steps using the following image as a reference.
-
- :::image type="content" source="media/quickstart-devkit-renesas-rx65n-2mb/renesas-rx65n.jpg" alt-text="Locate reset, power, ethernet, USB, and E1/E2Lite on the Renesas RX65N board":::
-
-1. Using the 5V power supply, connect the **Power Input** on the Renesas RX65N to an electrical outlet.
-
-1. Using the Ethernet cable, connect the **Ethernet** on the Renesas RX65N to your router.
-
-1. Using the first Mini USB cable, connect the **USB Serial** on the Renesas RX65N to your computer.
-
-1. Using the second Mini USB cable, connect the **E2 Lite USB Serial** on the Renesas E2 Lite to your computer.
-
-1. Using the supplied ribbon cable, connect the **E1/E2Lite** on the Renesas RX65N to the Renesas E2 Lite.
-
-### Flash the image
-
-1. Launch the *Renesas Flash Programmer* application from the Start menu.
-
-2. Select *New Project...* from the *File* menu, and enter the following settings:
- * **Microcontroller**: RX65x
- * **Project Name**: RX65N
- * **Tool**: E2 emulator Lite
-
- :::image type="content" source="media/quickstart-devkit-renesas-rx65n-2mb/rfp-new.png" alt-text="Screenshot of Renesas Flash Programmer, New Project":::
-
-3. Select the *Tool Details* button, and navigate to the *Reset Settings* tab.
-
-4. Select *Reset Pin as Hi-Z* and press the *OK* button.
-
- :::image type="content" source="media/quickstart-devkit-renesas-rx65n-2mb/rfp-reset.png" alt-text="Screenshot of Renesas Flash Programmer, Reset Settings":::
-
-5. Press the *Connect* button and when prompted, check the *Auto Authentication* checkbox and then press *OK*.
-
- :::image type="content" source="media/quickstart-devkit-renesas-rx65n-2mb/rfp-auth.png" alt-text="Screenshot of Renesas Flash Programmer, Authentication":::
-
-6. Select the *Browse...* button and locate the *rx65n_azure_iot.hex* file created in the previous section.
-
-7. Press *Start* to begin flashing. This process will take approximately 10 seconds.
-
-### Confirm device connection details
-
-You can use the **Termite** app to monitor communication and confirm that your device is set up correctly.
-> [!TIP]
-> If you have issues getting your device to initialize or connect after flashing, see [Troubleshooting](troubleshoot-embedded-device-quickstarts.md).
-
-1. Start **Termite**.
-1. Select **Settings**.
-1. In the **Serial port settings** dialog, check the following settings and update if needed:
- * **Baud rate**: 115,200
- * **Port**: The port that your Renesas RX65N is connected to. If there are multiple port options in the dropdown, you can find the correct port to use. Open Windows **Device Manager**, and view **Ports** to identify which port to use.
-
- :::image type="content" source="media/quickstart-devkit-renesas-rx65n-2mb/termite-settings.png" alt-text="Screenshot of serial port settings in the Termite app":::
-
-1. Select OK.
-1. Press the **Reset** button on the device.
-1. In the **Termite** app, check the following checkpoint values to confirm that the device is initialized and connected to Azure IoT.
-
- ```output
- Starting Azure thread
-
- Initializing DHCP
- IP address: 10.0.0.81
- Mask: 255.255.255.0
- Gateway: 10.0.0.1
- SUCCESS: DHCP initialized
-
- Initializing DNS client
- DNS address: 10.0.0.1
- SUCCESS: DNS client initialized
-
- Initializing SNTP client
- SNTP server 0.pool.ntp.org
- SNTP IP address: 104.194.242.237
- SNTP time update: May 28, 2021 22:53:27.54 UTC
- SUCCESS: SNTP initialized
-
- Initializing Azure IoT DPS client
- DPS endpoint: global.azure-devices-provisioning.net
- DPS ID scope: ***
- Registration ID: mydevice
- SUCCESS: Azure IoT DPS client initialized
-
- Initializing Azure IoT Hub client
- Hub hostname: ***.azure-devices.net
- Device id: mydevice
- Model id: dtmi:azurertos:devkit:gsg;1
- Connected to IoT Hub
- SUCCESS: Azure IoT Hub client initialized
- ```
-
-Keep Termite open to monitor device output in the following steps.
-
-## Verify the device status
-
-To view the device status in IoT Central portal:
-1. From the application dashboard, select **Devices** on the side navigation menu.
-1. Confirm that the **Device status** is updated to **Provisioned**.
-1. Confirm that the **Device template** is updated to **Getting Started Guide**.
-
- :::image type="content" source="media/quickstart-devkit-renesas-rx65n-2mb/iot-central-device-view-status.png" alt-text="Screenshot of device status in IoT Central":::
-
-## View telemetry
-
-With IoT Central, you can view the flow of telemetry from your device to the cloud.
-
-To view telemetry in IoT Central portal:
-
-1. From the application dashboard, select **Devices** on the side navigation menu.
-1. Select the device from the device list.
-1. View the telemetry as the device sends messages to the cloud in the **Overview** tab.
-
- :::image type="content" source="media/quickstart-devkit-renesas-rx65n-2mb/iot-central-device-telemetry.png" alt-text="Screenshot of device telemetry in IoT Central":::
-
- > [!NOTE]
- > You can also monitor telemetry from the device by using the Termite app.
-
-## Call a direct method on the device
-
-You can also use IoT Central to call a direct method that you've implemented on your device. Direct methods have a name, and can optionally have a JSON payload, configurable connection, and method timeout. In this section, you call a method that enables you to turn an LED on or off.
-
-To call a method in IoT Central portal:
-
-1. Select the **Command** tab from the device page.
-1. In the **State** dropdown, select **True**, and then select **Run**. The LED light should turn on.
-
- :::image type="content" source="media/quickstart-devkit-renesas-rx65n-2mb/iot-central-invoke-method.png" alt-text="Screenshot of calling a direct method on a device in IoT Central":::
-
-1. In the **State** dropdown, select **False**, and then select **Run**. The LED light should turn off.
-
-## View device information
-
-You can view the device information from IoT Central.
-
-Select **About** tab from the device page.
--
-> [!TIP]
-> To customize these views, edit the [device template](../iot-central/core/howto-edit-device-template.md).
-
-## Troubleshoot
-
-If you experience issues building the device code, flashing the device, or connecting, see [Troubleshooting](troubleshoot-embedded-device-quickstarts.md).
-
-## Clean up resources
-
-If you no longer need the Azure resources created in this quickstart, you can delete them from the IoT Central portal.
-
-To remove the entire Azure IoT Central sample application and all its devices and resources:
-1. Select **Administration** > **Your application**.
-1. Select **Delete**.
-
-## Next steps
-
-In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the Renesas RX65N device. You also used the IoT Central portal to create Azure resources, connect the Renesas RX65N securely to Azure, view telemetry, and send messages.
-
-As a next step, explore the following articles to learn more about using the IoT device SDKs to connect devices to Azure IoT.
-
-> [!div class="nextstepaction"]
-> [Connect a simulated device to IoT Central](quickstart-send-telemetry-central.md)
-> [!div class="nextstepaction"]
-> [Connect a simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
-
-> [!IMPORTANT]
-> Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
logic-apps Create Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-managed-service-identity.md
The following table lists the connectors that support using a managed identity i
| Connector type | Supported connectors | |-|-|
-| Built-in | - Azure Automation <br>- Azure Blob Storage <br>- Azure Event Hubs <br>- Azure Service Bus <br>- Azure Queues <br>- Azure Tables <br>- HTTP <br>- HTTP + Webhook <br>- SQL Server <br><br>**Note**: HTTP operations can authenticate connections to Azure Storage accounts behind Azure firewalls with the system-assigned identity. |
+| Built-in | - Azure Automation <br>- Azure Blob Storage <br>- Azure Event Hubs <br>- Azure Service Bus <br>- Azure Queues <br>- Azure Tables <br>- HTTP <br>- HTTP + Webhook <br>- SQL Server <br><br>**Note**: Currently, most [built-in, service provider-based connectors](/azure/logic-apps/connectors/built-in/reference/) don't support selecting user-assigned managed identities for authentication. HTTP operations can authenticate connections to Azure Storage accounts behind Azure firewalls with the system-assigned identity. |
| Managed | - Azure AD Identity Protection <br>- Azure App Service <br>- Azure Automation <br>- Azure Blob Storage <br>- Azure Container Instance <br>- Azure Cosmos DB <br>- Azure Data Explorer <br>- Azure Data Factory <br>- Azure Data Lake <br>- Azure Event Grid <br>- Azure Event Hubs <br>- Azure IoT Central V2 <br>- Azure IoT Central V3 <br>- Azure Key Vault <br>- Azure Log Analytics <br>- Azure Queues <br>- Azure Resource Manager <br>- Azure Service Bus <br>- Azure Sentinel <br>- Azure Table Storage <br>- Azure VM <br>- HTTP with Azure AD <br>- SQL Server |
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md
The following table identifies the authentication types that are available on th
| [Client Certificate](#client-certificate-authentication) | Azure API Management, Azure App Services, HTTP, HTTP + Swagger, HTTP Webhook | | [Active Directory OAuth](#azure-active-directory-oauth-authentication) | - **Consumption**: Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP + Swagger, HTTP Webhook <br><br>- **Standard**: Azure Automation, Azure Blob Storage, Azure Event Hubs, Azure Queues, Azure Service Bus, Azure Tables, HTTP, HTTP Webhook, SQL Server | | [Raw](#raw-authentication) | Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP + Swagger, HTTP Webhook |
-| [Managed identity](#managed-identity-authentication) | **Built-in connectors**: <br><br>- **Consumption**: Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP Webhook <br><br>- **Standard**: Azure Automation, Azure Blob Storage, Azure Event Hubs, Azure Queues, Azure Service Bus, Azure Tables, HTTP, HTTP Webhook, SQL Server <br><br>**Managed connectors**: Azure AD Identity Protection, Azure App Service, Azure Automation, Azure Blob Storage, Azure Container Instance, Azure Cosmos DB, Azure Data Explorer, Azure Data Factory, Azure Data Lake, Azure Event Grid, Azure Event Hubs, Azure IoT Central V2, Azure IoT Central V3, Azure Key Vault, Azure Log Analytics, Azure Queues, Azure Resource Manager, Azure Service Bus, Azure Sentinel, Azure Table Storage, Azure VM, HTTP with Azure AD, SQL Server |
+| [Managed identity](#managed-identity-authentication) | **Built-in connectors**: <br><br>- **Consumption**: Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP Webhook <br><br>- **Standard**: Azure Automation, Azure Blob Storage, Azure Event Hubs, Azure Queues, Azure Service Bus, Azure Tables, HTTP, HTTP Webhook, SQL Server <br><br>**Note**: Currently, most [built-in, service provider-based connectors](/azure/logic-apps/connectors/built-in/reference/) don't support selecting user-assigned managed identities for authentication. <br><br>**Managed connectors**: Azure AD Identity Protection, Azure App Service, Azure Automation, Azure Blob Storage, Azure Container Instance, Azure Cosmos DB, Azure Data Explorer, Azure Data Factory, Azure Data Lake, Azure Event Grid, Azure Event Hubs, Azure IoT Central V2, Azure IoT Central V3, Azure Key Vault, Azure Log Analytics, Azure Queues, Azure Resource Manager, Azure Service Bus, Azure Sentinel, Azure Table Storage, Azure VM, HTTP with Azure AD, SQL Server |
<a name="secure-inbound-requests"></a>
logic-apps Single Tenant Overview Compare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/single-tenant-overview-compare.md
The single-tenant model and **Logic App (Standard)** resource type include many
* **Logic App (Standard)** resources can run anywhere because Azure Logic Apps generates Shared Access Signature (SAS) connection strings that these logic apps can use for sending requests to the cloud connection runtime endpoint. Azure Logic Apps service saves these connection strings with other application settings so that you can easily store these values in Azure Key Vault when you deploy in Azure.
- * The **Logic App (Standard)** resource type supports having the [system-assigned managed identity *and* multiple user-assigned managed identities](create-managed-service-identity.md) enabled at the same time, though you still can only select one identity to use at any time.
+ * The **Logic App (Standard)** resource type supports having the [system-assigned managed identity *and* multiple user-assigned managed identities](create-managed-service-identity.md) enabled at the same time, though you still can only select one identity to use at any time. However, most [built-in, service provider-based connectors](/azure/logic-apps/connectors/built-in/reference/) currently don't support selecting user-assigned managed identities for authentication.
> [!NOTE] > By default, the system-assigned identity is already enabled to authenticate connections at run time.
For the **Logic App (Standard)** resource, these capabilities have changed, or t
* Azure Active Directory Open Authentication (Azure AD OAuth) for inbound calls to request-based triggers, such as the Request trigger and HTTP Webhook trigger.
- * User-assigned managed identity. Currently, only the system-assigned managed identity is available and automatically enabled.
+ * Managed identity authentication: Both system-assigned and user-assigned managed identity support is available. By default, the system-assigned managed identity is automatically enabled. However, most [built-in, service provider-based connectors](/azure/logic-apps/connectors/built-in/reference/) don't currently support selecting user-assigned managed identities for authentication.
* **XML transformation**: Support for referencing assemblies from maps is currently unavailable. Also, only XSLT 1.0 is currently supported.
machine-learning Concept Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-designer.md
Compute targets are attached to your [Azure Machine Learning workspace](concept-
## Deploy
-To perform real-time inferencing, you must deploy a pipeline as an [online endpoint](concept-endpoints.md#what-are-online-endpoints). The online endpoint creates an interface between an external application and your scoring model. A call to an online endpoint returns prediction results to the application in real time. To make a call to an online endpoint, you pass the API key that was created when you deployed the endpoint. The endpoint is based on REST, a popular architecture choice for web programming projects.
+To perform real-time inferencing, you must deploy a pipeline as an [online endpoint](concept-endpoints-online.md). The online endpoint creates an interface between an external application and your scoring model. A call to an online endpoint returns prediction results to the application in real time. To make a call to an online endpoint, you pass the API key that was created when you deployed the endpoint. The endpoint is based on REST, a popular architecture choice for web programming projects.
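As a rough illustration (not the exact payload shape the designer generates), a scoring call passes the key as a bearer credential in the `Authorization` header; the endpoint URL, key, and input JSON below are placeholders:

```shell
# Hypothetical scoring request to a deployed online endpoint; replace the URL, key, and body
# with the values shown on the endpoint's consume page.
curl --request POST "https://<endpoint-name>.<region>.inference.ml.azure.com/score" \
  --header "Authorization: Bearer <api-key>" \
  --header "Content-Type: application/json" \
  --data '{"Inputs": {"data": [{"feature1": 1.0, "feature2": 2.0}]}}'
```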
Online endpoints must be deployed to an Azure Kubernetes Service cluster.
machine-learning Concept Endpoints Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints-batch.md
+
+ Title: What are batch endpoints?
+
+description: Learn how Azure Machine Learning uses batch endpoints to simplify machine learning deployments.
++++++++ Last updated : 04/01/2023
+#Customer intent: As an MLOps administrator, I want to understand what a managed endpoint is and why I need it.
++
+# Batch endpoints
+
+After you train a machine learning model, you need to deploy it so that others can consume its predictions. This way of running a model is called *inference*. Azure Machine Learning uses the concept of [endpoints and deployments](concept-endpoints.md) for machine learning model inference.
+
+> [!IMPORTANT]
+> Items marked (preview) in this article are currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+**Batch endpoints** are endpoints that are used to do batch inferencing on large volumes of data in an asynchronous way. Batch endpoints receive pointers to data and run jobs asynchronously to process the data in parallel on compute clusters. Batch endpoints store outputs in a data store for further analysis.
+
+We recommend using them when:
+
+> [!div class="checklist"]
+> * You have expensive models or pipelines that require a longer time to run.
+> * You want to operationalize machine learning pipelines and reuse components.
+> * You need to perform inference over large amounts of data, distributed in multiple files.
+> * You don't have low latency requirements.
+> * Your model's inputs are stored in a Storage Account or in an Azure Machine Learning data asset.
+> * You can take advantage of parallelization.
+
+## Batch deployments
+
+A deployment is a set of resources and computes required to implement the functionality the endpoint provides. Each endpoint can host multiple deployments with different configurations, which helps *decouple the interface* indicated by the endpoint from *the implementation details* indicated by the deployment. Batch endpoints automatically route the client to the default deployment, which can be configured and changed at any time.
++
+There are two types of deployments in batch endpoints:
+
+* [Model deployments](#model-deployments)
+* [Pipeline component deployment (preview)](#pipeline-component-deployment-preview)
+
+### Model deployments
+
+Model deployments allow operationalizing model inference at scale, processing large amounts of data in a low-latency, asynchronous way. Azure Machine Learning provides scalability automatically by parallelizing the inferencing processes across multiple nodes in a compute cluster.
+
+Use __Model deployments__ when:
+
+> [!div class="checklist"]
+> * You have expensive models that require a longer time to run inference.
+> * You need to perform inference over large amounts of data, distributed in multiple files.
+> * You don't have low latency requirements.
+> * You can take advantage of parallelization.
+
+The main benefit of this kind of deployment is that you can use the very same assets deployed in the online world (online endpoints) and run them at scale in batch. If your model requires simple pre- or post-processing, you can [author a scoring script](how-to-batch-scoring-script.md) that performs the required data transformations.
+
+To create a model deployment in a batch endpoint, you need to specify the following elements:
+
+- Model
+- Compute cluster
+- Scoring script (optional for MLflow models)
+- Environment (optional for MLflow models)
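The following Azure CLI (v2 `ml` extension) commands are a minimal sketch of how these elements come together; the endpoint name and YAML file names are placeholder assumptions:

```shell
# Create the batch endpoint, then a model deployment under it and make it the default.
# endpoint.yml and deployment.yml are hypothetical files that reference the model,
# compute cluster, and (optionally) the scoring script and environment listed above.
az ml batch-endpoint create --name heart-classifier-batch --file endpoint.yml
az ml batch-deployment create --file deployment.yml \
  --endpoint-name heart-classifier-batch --set-default
```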
+
+> [!div class="nextstepaction"]
+> [Create your first model deployment](how-to-use-batch-model-deployments.md)
+
+### Pipeline component deployment (preview)
++
+Pipeline component deployments allow operationalizing entire processing graphs (pipelines) to perform batch inference in a low-latency, asynchronous way.
+
+Use __Pipeline component deployments__ when:
+
+> [!div class="checklist"]
+> * You need to operationalize complete compute graphs that can be decomposed into multiple steps.
+> * You need to reuse components from training pipelines in your inference pipeline.
+> * You don't have low latency requirements.
+
+The main benefit of this kind of deployment is the reusability of components that already exist in your platform and the capability to operationalize complex inference routines.
+
+To create a pipeline component deployment in a batch endpoint, you need to specify the following elements:
+
+- Pipeline component
+- Compute cluster configuration
+
+> [!div class="nextstepaction"]
+> [Create your first pipeline component deployment](how-to-use-batch-pipeline-deployments.md)
+
+Batch endpoints also allow you to [create pipeline component deployments from an existing pipeline job (preview)](how-to-use-batch-pipeline-from-job.md). When you do so, Azure Machine Learning automatically creates a pipeline component out of the job, which simplifies the use of these kinds of deployments. However, it's a best practice to always [create pipeline components explicitly to streamline your MLOps practice](how-to-use-batch-pipeline-deployments.md).
+
+## Cost management
+
+Invoking a batch endpoint triggers an asynchronous batch inference job. Compute resources are automatically provisioned when the job starts, and automatically de-allocated as the job completes. So you only pay for compute when you use it.
+
+> [!TIP]
+> When deploying models, you can [override compute resource settings](how-to-use-batch-endpoint.md#overwrite-deployment-configuration-per-each-job) (like instance count) and advanced settings (like mini batch size, error threshold, and so on) for each individual batch inference job to speed up execution and reduce cost if you know that you can take advantage of specific configurations.
+
+Batch endpoints can also run on low-priority VMs. When deploying models for inference, batch endpoints can automatically recover from deallocated VMs and resume the work from where it left off. See [Use low-priority VMs in batch endpoints](how-to-use-low-priority-batch.md).
+
+Finally, Azure Machine Learning doesn't charge for batch endpoints or batch deployments themselves, so you can organize your endpoints and deployments as best suits your scenario. Endpoints and deployments can use independent or shared clusters, so you can achieve fine-grained control over which compute the produced jobs consume. Use __scale-to-zero__ in clusters to ensure no resources are consumed when they're idle.
++
+## Flexible data sources and storage
+
+Batch endpoints read and write data directly from storage. You can indicate Azure Machine Learning datastores, Azure Machine Learning data assets, or Storage Accounts as inputs. For more information on supported input options and how to indicate them, see [Create jobs and input data to batch endpoints](how-to-access-data-batch-endpoints-jobs.md).
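For example, with the Azure CLI you can point an invocation at a registered data asset or directly at a storage URI; the endpoint name, asset name, and path below are placeholders:

```shell
# Invoke the endpoint using a registered data asset as input.
az ml batch-endpoint invoke --name heart-classifier-batch \
  --input azureml:heart-dataset-unlabeled@latest

# Invoke the endpoint using a folder in cloud storage as input.
az ml batch-endpoint invoke --name heart-classifier-batch \
  --input https://<account>.blob.core.windows.net/<container>/<folder>
```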
+
+## Security
+
+Batch endpoints provide all the capabilities required to operate production-level workloads in an enterprise setting. They support [private networking](how-to-secure-batch-endpoint.md) on secured workspaces and [Azure Active Directory authentication](how-to-authenticate-batch-endpoint.md), either using a user principal (like a user account) or a service principal (like a managed or unmanaged identity). Jobs generated by a batch endpoint run under the identity of the invoker, which gives you the flexibility to implement any scenario. See [How to authenticate to batch endpoints](how-to-authenticate-batch-endpoint.md) for details.
+
+## Next steps
+
+- [Deploy models with batch endpoints](how-to-use-batch-model-deployments.md)
+- [Deploy pipelines with batch endpoints (preview)](how-to-use-batch-pipeline-deployments.md)
+- [Deploy MLFlow models in batch deployments](how-to-mlflow-batch.md)
+- [Create jobs and input data to batch endpoints](how-to-access-data-batch-endpoints-jobs.md)
+- [Network isolation for Batch Endpoints](how-to-secure-batch-endpoint.md)
machine-learning Concept Endpoints Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints-online.md
+
+ Title: What are online endpoints?
+
+description: Learn how Azure Machine Learning uses online endpoints to simplify machine learning deployments.
++++++++ Last updated : 04/01/2023
+#Customer intent: As an MLOps administrator, I want to understand what a managed endpoint is and why I need it.
++
+# Online endpoints
++
+After you train a machine learning model, you need to deploy it so that others can consume its predictions. This way of running a model is called *inference*. Azure Machine Learning uses the concept of [endpoints and deployments](concept-endpoints.md) for machine learning model inference.
+
+**Online endpoints** are endpoints that are used for online (real-time) inferencing. They deploy models behind a web server that can return predictions over the HTTP protocol.
+
+The following diagram shows an online endpoint that has two deployments, 'blue' and 'green'. The blue deployment uses VMs with a CPU SKU, and runs version 1 of a model. The green deployment uses VMs with a GPU SKU, and uses version 2 of the model. The endpoint is configured to route 90% of incoming traffic to the blue deployment, while green receives the remaining 10%.
++
+## Online deployment requirements
+
+To create an online endpoint, you need to specify the following elements:
+
+- Model to deploy
+- Scoring script - code needed to do scoring/inferencing
+- Environment - a Docker image with Conda dependencies, or a dockerfile
+- Compute instance & scale settings
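As a minimal Azure CLI sketch of wiring these elements together (the endpoint name and YAML file names are placeholder assumptions; the YAML files would declare the model, scoring script, environment, and instance settings):

```shell
# Create the endpoint, then a deployment that receives all traffic.
az ml online-endpoint create --name my-endpoint -f endpoint.yml
az ml online-deployment create --name blue --endpoint-name my-endpoint \
  -f blue-deployment.yml --all-traffic
```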
+
+Learn how to deploy online endpoints from the [CLI/SDK](how-to-deploy-online-endpoints.md) and the [studio web portal](how-to-use-managed-online-endpoint-studio.md).
+
+## Test and deploy locally for faster debugging
+
+Deploy locally to test your endpoints without deploying to the cloud. Azure Machine Learning creates a local Docker image that mimics the Azure Machine Learning image. Azure Machine Learning will build and run deployments for you locally, and cache the image for rapid iterations.
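For example, the same CLI commands accept a `--local` flag to create and exercise the deployment in local Docker first (a sketch; names and files are placeholders):

```shell
# Create the endpoint and deployment locally, then send a test request to the local copy.
az ml online-endpoint create --local --name my-endpoint -f endpoint.yml
az ml online-deployment create --local --name blue --endpoint-name my-endpoint -f blue-deployment.yml
az ml online-endpoint invoke --local --name my-endpoint --request-file sample-request.json
```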
+
+## Native blue/green deployment
+
+Recall that a single endpoint can have multiple deployments. The online endpoint can do load balancing to give any percentage of traffic to each deployment.
+
+Traffic allocation can be used to do safe rollout blue/green deployments by balancing requests between different instances.
+
+> [!TIP]
+> A request can bypass the configured traffic load balancing by including an HTTP header of `azureml-model-deployment`. Set the header value to the name of the deployment you want the request to route to.
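As an illustrative sketch, the traffic split can be set with the CLI and the header bypass exercised with curl; the endpoint name, deployment names, URL, and key are placeholders:

```shell
# Send 90% of traffic to 'blue' and 10% to 'green'.
az ml online-endpoint update --name my-endpoint --traffic "blue=90 green=10"

# Force a single request to the 'green' deployment, bypassing the traffic split.
curl --request POST "https://<endpoint-name>.<region>.inference.ml.azure.com/score" \
  --header "Authorization: Bearer <api-key>" \
  --header "Content-Type: application/json" \
  --header "azureml-model-deployment: green" \
  --data @sample-request.json
```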
+++
+Traffic to one deployment can also be mirrored (or copied) to another deployment. Mirroring traffic (also called shadowing) is useful when you want to test for things like response latency or error conditions without impacting live clients; for example, when implementing a blue/green deployment where 100% of the traffic is routed to blue and 10% is mirrored to the green deployment. With mirroring, the results of the traffic to the green deployment aren't returned to the clients but metrics and logs are collected. Testing the new deployment with traffic mirroring/shadowing is also known as [shadow testing](https://microsoft.github.io/code-with-engineering-playbook/automated-testing/shadow-testing/), and the functionality is currently a __preview__ feature.
++
+Learn how to [safely rollout to online endpoints](how-to-safely-rollout-online-endpoints.md).
+
+## Application Insights integration
+
+All online endpoints integrate with Application Insights to monitor SLAs and diagnose issues.
+
+However, [managed online endpoints](#managed-online-endpoints-vs-kubernetes-online-endpoints) also include out-of-box integration with Azure Logs and Azure Metrics.
+
+## Security
+
+- Authentication: Key and Azure Machine Learning Tokens
+- Managed identity: User assigned and system assigned
+- SSL by default for endpoint invocation
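For example, a key can be retrieved with the CLI and passed as a bearer credential on invocation. This is a sketch under the assumption that the `get-credentials` command and its `primaryKey` field match your installed `ml` CLI extension; the endpoint name and URL are placeholders:

```shell
# Fetch the endpoint's scoring key (assumed command and output field), then call the endpoint.
KEY=$(az ml online-endpoint get-credentials --name my-endpoint --query primaryKey -o tsv)
curl --request POST "https://<endpoint-name>.<region>.inference.ml.azure.com/score" \
  --header "Authorization: Bearer $KEY" \
  --header "Content-Type: application/json" \
  --data @sample-request.json
```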
+
+## Autoscaling
+
+Autoscale automatically runs the right amount of resources to handle the load on your application. Managed endpoints support autoscaling through integration with the [Azure monitor autoscale](../azure-monitor/autoscale/autoscale-overview.md) feature. You can configure metrics-based scaling (for instance, CPU utilization >70%), schedule-based scaling (for example, scaling rules for peak business hours), or a combination.
++
+## Visual Studio Code debugging
+
+Visual Studio Code enables you to interactively debug endpoints.
++
+## Private endpoint support
+
+Optionally, you can secure communication with a managed online endpoint by using private endpoints.
+
+You can configure security for inbound scoring requests and outbound communications with the workspace and other services separately. Inbound communications use the private endpoint of the Azure Machine Learning workspace. Outbound communications use private endpoints created per deployment.
+
+For more information, see [Secure online endpoints](how-to-secure-online-endpoint.md).
+
+## Managed online endpoints vs Kubernetes online endpoints
+
+There are two types of online endpoints: **managed online endpoints** and **Kubernetes online endpoints**.
+
+Managed online endpoints help to deploy your ML models in a turnkey manner. Managed online endpoints work with powerful CPU and GPU machines in Azure in a scalable, fully managed way. Managed online endpoints take care of serving, scaling, securing, and monitoring your models, freeing you from the overhead of setting up and managing the underlying infrastructure. The main example in this doc uses managed online endpoints for deployment.
+
+Kubernetes online endpoints allow you to deploy models and serve online endpoints on your fully configured and managed [Kubernetes cluster anywhere](./how-to-attach-kubernetes-anywhere.md), with CPUs or GPUs.
+
+The following table highlights the key differences between managed online endpoints and Kubernetes online endpoints.
+
+| | Managed online endpoints | Kubernetes online endpoints |
+| -- | | -- |
+| **Recommended users** | Users who want a managed model deployment and enhanced MLOps experience | Users who prefer Kubernetes and can self-manage infrastructure requirements |
+| **Node provisioning** | Managed compute provisioning, update, removal | User responsibility |
+| **Node maintenance** | Managed host OS image updates, and security hardening | User responsibility |
+| **Cluster sizing (scaling)** | [Managed manual and autoscale](how-to-autoscale-endpoints.md), supporting additional node provisioning | [Manual and autoscale](how-to-kubernetes-inference-routing-azureml-fe.md#autoscaling), supporting scaling the number of replicas within fixed cluster boundaries |
+| **Compute type** | Managed by the service | Customer-managed Kubernetes cluster (Kubernetes) |
+| **Managed identity** | [Supported](how-to-access-resources-from-endpoints-managed-identities.md) | Supported |
+| **Virtual Network (VNET)** | [Supported via managed network isolation](how-to-secure-online-endpoint.md) | User responsibility |
+| **Out-of-box monitoring & logging** | [Azure Monitor and Log Analytics powered](how-to-monitor-online-endpoints.md) (includes key metrics and log tables for endpoints and deployments) | User responsibility |
+| **Logging with Application Insights (legacy)** | Supported | Supported |
+| **View costs** | [Detailed to endpoint / deployment level](how-to-view-online-endpoints-costs.md) | Cluster level |
+| **Cost applied to** | VMs assigned to the deployments | VMs assigned to the cluster |
+| **Mirrored traffic** | [Supported](how-to-safely-rollout-online-endpoints.md#test-the-deployment-with-mirrored-traffic-preview) (preview) | Unsupported |
+| **No-code deployment** | Supported ([MLflow](how-to-deploy-mlflow-models-online-endpoints.md) and [Triton](how-to-deploy-with-triton.md) models) | Supported ([MLflow](how-to-deploy-mlflow-models-online-endpoints.md) and [Triton](how-to-deploy-with-triton.md) models) |
+
+### Managed online endpoints
+
+Managed online endpoints can help streamline your deployment process. Managed online endpoints provide the following benefits over Kubernetes online endpoints:
+
+- Managed infrastructure
+ - Automatically provisions the compute and hosts the model (you just need to specify the VM type and scale settings)
+ - Automatically updates and patches the underlying host OS image
+ - Automatic node recovery if there's a system failure
+
+- Monitoring and logs
+ - Monitor model availability, performance, and SLA using [native integration with Azure Monitor](how-to-monitor-online-endpoints.md).
+ - Debug deployments using the logs and native integration with Azure Log Analytics.
+
+ :::image type="content" source="media/concept-endpoints/log-analytics-and-azure-monitor.png" alt-text="Screenshot showing Azure Monitor graph of endpoint latency.":::
+
+- View costs
+ - Managed online endpoints let you [monitor cost at the endpoint and deployment level](how-to-view-online-endpoints-costs.md)
+
+ :::image type="content" source="media/concept-endpoints/endpoint-deployment-costs.png" alt-text="Screenshot cost chart of an endpoint and deployment.":::
+
+ > [!NOTE]
+ > Managed online endpoints are based on Azure Machine Learning compute. When using a managed online endpoint, you pay for the compute and networking charges. There is no additional surcharge.
+ >
+ > If you use a virtual network and secure outbound (egress) traffic from the managed online endpoint, there is an additional cost. For egress, three private endpoints are created _per deployment_ for the managed online endpoint. These are used to communicate with the default storage account, Azure Container Registry, and workspace. Additional networking charges may apply. For more information on pricing, see the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
+
+For a step-by-step tutorial, see [How to deploy online endpoints](how-to-deploy-online-endpoints.md).
+
+## Next steps
+
+- [How to deploy online endpoints with the Azure CLI and Python SDK](how-to-deploy-online-endpoints.md)
+- [How to deploy batch endpoints with the Azure CLI and Python SDK](batch-inference/how-to-use-batch-endpoint.md)
+- [How to use online endpoints with the studio](how-to-use-managed-online-endpoint-studio.md)
+- [Deploy models with REST](how-to-deploy-with-rest.md)
+- [How to monitor managed online endpoints](how-to-monitor-online-endpoints.md)
+- [How to view managed online endpoint costs](how-to-view-online-endpoints-costs.md)
+- [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints)
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md
Title: What are endpoints?
+ Title: Use endpoints for inference
description: Learn how Azure Machine Learning uses endpoints to simplify machine learning deployments.
-+ Last updated 02/07/2023 #Customer intent: As an MLOps administrator, I want to understand what a managed endpoint is and why I need it.
-# What are Azure Machine Learning endpoints?
+# Use endpoints for inference
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
+After you train a machine learning model or a machine learning pipeline, you need to deploy it so others can consume its predictions. This way of running a model is called *inference*. Azure Machine Learning uses the concept of __endpoints and deployments__ for machine learning inference.
-Use Azure Machine Learning endpoints to streamline model deployments for both real-time and batch inference deployments. Endpoints provide a unified interface to invoke and manage model deployments across compute types.
+Endpoints and deployments are two constructs that allow you to decouple the interface of your production workload from the implementation that serves it.
-> [!IMPORTANT]
-> Items marked (preview) in this article are currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+## Intuition
-In this article, you learn about:
-> [!div class="checklist"]
-> * Endpoints
-> * Deployments
-> * Managed online endpoints
-> * Kubernetes online endpoints
-> * Batch inference endpoints
-
-## What are endpoints and deployments?
-
-After you train a machine learning model, you need to deploy the model so that others can use it to do inferencing. In Azure Machine Learning, you can use **endpoints** and **deployments** to do so.
-
-An **endpoint**, in this context, is an HTTPS path that provides an interface for clients to send requests (input data) and receive the inferencing (scoring) output of a trained model. An endpoint provides:
-- Authentication using "key & token" based auth -- SSL termination -- A stable scoring URI (endpoint-name.region.inference.ml.azure.com)--
-A **deployment** is a set of resources required for hosting the model that does the actual inferencing.
-
-A single endpoint can contain multiple deployments. Endpoints and deployments are independent Azure Resource Manager resources that appear in the Azure portal.
-
-Azure Machine Learning allows you to implement both [online endpoints](#what-are-online-endpoints) and [batch endpoints](#what-are-batch-endpoints).
-
-### Multiple developer interfaces
-
-Create and manage batch and online endpoints with multiple developer tools:
-- The Azure CLI and the Python SDK-- Azure Resource Manager/REST API-- Azure Machine Learning studio web portal-- Azure portal (IT/Admin)-- Support for CI/CD MLOps pipelines using the Azure CLI interface & REST/ARM interfaces-
-## What are online endpoints?
-
-**Online endpoints** are endpoints that are used for online (real-time) inferencing. Compared to **batch endpoints**, **online endpoints** contain **deployments** that are ready to receive data from clients and can send responses back in real time.
-
-The following diagram shows an online endpoint that has two deployments, 'blue' and 'green'. The blue deployment uses VMs with a CPU SKU, and runs version 1 of a model. The green deployment uses VMs with a GPU SKU, and uses version 2 of the model. The endpoint is configured to route 90% of incoming traffic to the blue deployment, while green receives the remaining 10%.
--
-### Online deployments requirements
-
-To create an online endpoint, you need to specify the following elements:
-- Model files (or specify a registered model in your workspace) -- Scoring script - code needed to do scoring/inferencing-- Environment - a Docker image with Conda dependencies, or a dockerfile -- Compute instance & scale settings -
-Learn how to deploy online endpoints from the [CLI/SDK](how-to-deploy-online-endpoints.md) and the [studio web portal](how-to-use-managed-online-endpoint-studio.md).
-
-### Test and deploy locally for faster debugging
-
-Deploy locally to test your endpoints without deploying to the cloud. Azure Machine Learning creates a local Docker image that mimics the Azure Machine Learning image. Azure Machine Learning will build and run deployments for you locally, and cache the image for rapid iterations.
-
-### Native blue/green deployment
-
-Recall, that a single endpoint can have multiple deployments. The online endpoint can do load balancing to give any percentage of traffic to each deployment.
-
-Traffic allocation can be used to do safe rollout blue/green deployments by balancing requests between different instances.
-
-> [!TIP]
-> A request can bypass the configured traffic load balancing by including an HTTP header of `azureml-model-deployment`. Set the header value to the name of the deployment you want the request to route to.
+Let's imagine you're working on an application that needs to predict the type and color of a car given its photo. The application only needs to know that it can make an HTTP request to a URL using some credentials, provide a picture of a car, and get the type and color of the car back as string values. What we've just described is __an endpoint__.
+Now, let's imagine that a data scientist, Alice, is working on its implementation. Alice is well versed in TensorFlow, so she decides to implement the model using a Keras sequential classifier with a ResNet architecture that she consumed from TensorFlow Hub. She tests the model and is happy with the results, so she decides to use that model to solve the car prediction problem. The model is large; it requires 8 GB of memory and 4 cores to run. What we've just described is __a deployment__.
-Traffic to one deployment can also be mirrored (or copied) to another deployment. Mirroring traffic (also called shadowing) is useful when you want to test for things like response latency or error conditions without impacting live clients; for example, when implementing a blue/green deployment where 100% of the traffic is routed to blue and 10% is mirrored to the green deployment. With mirroring, the results of the traffic to the green deployment aren't returned to the clients but metrics and logs are collected. Testing the new deployment with traffic mirroring/shadowing is also known as [shadow testing](https://microsoft.github.io/code-with-engineering-playbook/automated-testing/shadow-testing/), and the functionality is currently a __preview__ feature.
+Finally, let's imagine that after running for a couple of months, the organization discovers that the application performs poorly on images with less than ideal illumination conditions. Bob, another data scientist, knows a lot about data augmentation techniques that can help the model build robustness to that factor. However, he feels more comfortable using Torch than TensorFlow. He trains another model using those techniques and is happy with the results. He would like to roll this model out to production gradually until the organization is ready to retire the old one. The new model performs better when deployed to a GPU, so the deployment needs one. We have just described __another deployment under the same endpoint__.
-Learn how to [safely rollout to online endpoints](how-to-safely-rollout-online-endpoints.md).
-### Application Insights integration
+## Endpoints and deployments
-All online endpoints integrate with Application Insights to monitor SLAs and diagnose issues.
+An **endpoint** is a stable and durable URL that can be used to request or invoke the model, provide the required inputs, and get the outputs back. An endpoint provides:
-However [managed online endpoints](#managed-online-endpoints-vs-kubernetes-online-endpoints) also include out-of-box integration with Azure Logs and Azure Metrics.
+- A stable and durable URL (like endpoint-name.region.inference.ml.azure.com).
+- An authentication and authorization mechanism.
-### Security
+A **deployment** is a set of resources required for hosting the model or component that does the actual inferencing. A single endpoint can contain multiple deployments, which can host independent assets and consume different resources based on what the actual assets require. Endpoints have a routing mechanism that can route requests generated by clients to specific deployments under the endpoint.
-- Authentication: Key and Azure Machine Learning Tokens-- Managed identity: User assigned and system assigned-- SSL by default for endpoint invocation
+To function properly, __each endpoint needs to have at least one deployment__. Endpoints and deployments are independent Azure Resource Manager resources that appear in the Azure portal.
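To make these two concepts concrete, here's a minimal sketch using the Azure Machine Learning Python SDK v2. The names used (`car-classifier-endpoint`, the registered model, the VM size) are illustrative placeholders, and details a real deployment also needs, such as the environment and the scoring script for custom models, are omitted:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineDeployment, ManagedOnlineEndpoint
from azure.identity import DefaultAzureCredential

# Connect to the workspace (placeholder identifiers).
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# The endpoint: a stable URL plus the authentication mechanism.
endpoint = ManagedOnlineEndpoint(name="car-classifier-endpoint", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# A deployment under that endpoint: a specific model hosted on specific compute.
# A real deployment also needs an environment and, for custom models, a scoring script.
model = ml_client.models.get(name="car-classifier", version="1")
blue = ManagedOnlineDeployment(
    name="blue",
    endpoint_name=endpoint.name,
    model=model,
    instance_type="Standard_DS3_v2",
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(blue).result()

# Route all client traffic to the 'blue' deployment.
endpoint.traffic = {"blue": 100}
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```

The endpoint keeps its URL over time; deployments can then be added, updated, or removed underneath it without clients having to change how they call it.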
-### Autoscaling
+## Online and batch endpoints
-Autoscale automatically runs the right amount of resources to handle the load on your application. Managed endpoints support autoscaling through integration with the [Azure monitor autoscale](../azure-monitor/autoscale/autoscale-overview.md) feature. You can configure metrics-based scaling (for instance, CPU utilization >70%), schedule-based scaling (for example, scaling rules for peak business hours), or a combination.
+Azure Machine Learning allows you to implement [online endpoints](concept-endpoints-online.md) and [batch endpoints](concept-endpoints-batch.md). Online endpoints are designed for real-time inference so the results are returned in the response of the invocation. Batch endpoints, on the other hand, are designed for long-running batch inference so each time you invoke the endpoint you generate a batch job that performs the actual work.
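As a rough sketch of how that difference shows up when invoking from the Python SDK (the endpoint names, input name, and paths below are placeholders):

```python
from azure.ai.ml import Input, MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Online endpoint: the prediction is returned in the response of the call.
response = ml_client.online_endpoints.invoke(
    endpoint_name="<online-endpoint-name>",
    request_file="sample-request.json",
)

# Batch endpoint: invoking it creates a job that processes the data asynchronously.
job = ml_client.batch_endpoints.invoke(
    endpoint_name="<batch-endpoint-name>",
    inputs={
        "heart_dataset": Input(
            type="uri_folder",
            path="azureml://datastores/<data-store>/paths/<data-path>",
        )
    },
)
print(job.name)  # monitor this job and collect the results once it completes
```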
+### When to use what
-### Visual Studio Code debugging
+Use [online endpoints](concept-endpoints-online.md) to operationalize models for real-time inference in synchronous low-latency requests. We recommend using them when:
-Visual Studio Code enables you to interactively debug endpoints.
--
-### Private endpoint support
-
-Optionally, you can secure communication with a managed online endpoint by using private endpoints.
-
-You can configure security for inbound scoring requests and outbound communications with the workspace and other services separately. Inbound communications use the private endpoint of the Azure Machine Learning workspace. Outbound communications use private endpoints created per deployment.
-
-For more information, see [Secure online endpoints](how-to-secure-online-endpoint.md).
-
-## Managed online endpoints vs Kubernetes online endpoints
-
-There are two types of online endpoints: **managed online endpoints** and **Kubernetes online endpoints**.
-
-Managed online endpoints help to deploy your ML models in a turnkey manner. Managed online endpoints work with powerful CPU and GPU machines in Azure in a scalable, fully managed way. Managed online endpoints take care of serving, scaling, securing, and monitoring your models, freeing you from the overhead of setting up and managing the underlying infrastructure. The main example in this doc uses managed online endpoints for deployment.
-
-Kubernetes online endpoint allows you to deploy models and serve online endpoints at your fully configured and managed [Kubernetes cluster anywhere](./how-to-attach-kubernetes-anywhere.md),with CPUs or GPUs.
-
-The following table highlights the key differences between managed online endpoints and Kubernetes online endpoints.
-
-| | Managed online endpoints | Kubernetes online endpoints |
-| -- | | -- |
-| **Recommended users** | Users who want a managed model deployment and enhanced MLOps experience | Users who prefer Kubernetes and can self-manage infrastructure requirements |
-| **Node provisioning** | Managed compute provisioning, update, removal | User responsibility |
-| **Node maintenance** | Managed host OS image updates, and security hardening | User responsibility |
-| **Cluster sizing (scaling)** | [Managed manual and autoscale](how-to-autoscale-endpoints.md), supporting additional nodes provisioning | [Manual and autoscale](how-to-kubernetes-inference-routing-azureml-fe.md#autoscaling), supporting scaling the number of replicas within fixed cluster boundaries |
-| **Compute type** | Managed by the service | Customer-managed Kubernetes cluster (Kubernetes) |
-| **Managed identity** | [Supported](how-to-access-resources-from-endpoints-managed-identities.md) | Supported |
-| **Virtual Network (VNET)** | [Supported via managed network isolation](how-to-secure-online-endpoint.md) | User responsibility |
-| **Out-of-box monitoring & logging** | [Azure Monitor and Log Analytics powered](how-to-monitor-online-endpoints.md) (includes key metrics and log tables for endpoints and deployments) | User responsibility |
-| **Logging with Application Insights (legacy)** | Supported | Supported |
-| **View costs** | [Detailed to endpoint / deployment level](how-to-view-online-endpoints-costs.md) | Cluster level |
-| **Cost applied to** | VMs assigned to the deployments | VMs assigned to the cluster |
-| **Mirrored traffic** | [Supported](how-to-safely-rollout-online-endpoints.md#test-the-deployment-with-mirrored-traffic-preview) (preview) | Unsupported |
-| **No-code deployment** | Supported ([MLflow](how-to-deploy-mlflow-models-online-endpoints.md) and [Triton](how-to-deploy-with-triton.md) models) | Supported ([MLflow](how-to-deploy-mlflow-models-online-endpoints.md) and [Triton](how-to-deploy-with-triton.md) models) |
-
-### Managed online endpoints
-
-Managed online endpoints can help streamline your deployment process. Managed online endpoints provide the following benefits over Kubernetes online endpoints:
--- Managed infrastructure
- - Automatically provisions the compute and hosts the model (you just need to specify the VM type and scale settings)
- - Automatically updates and patches the underlying host OS image
- - Automatic node recovery if there's a system failure
--- Monitoring and logs
- - Monitor model availability, performance, and SLA using [native integration with Azure Monitor](how-to-monitor-online-endpoints.md).
- - Debug deployments using the logs and native integration with Azure Log Analytics.
-
- :::image type="content" source="media/concept-endpoints/log-analytics-and-azure-monitor.png" alt-text="Screenshot showing Azure Monitor graph of endpoint latency.":::
--- View costs
- - Managed online endpoints let you [monitor cost at the endpoint and deployment level](how-to-view-online-endpoints-costs.md)
-
- :::image type="content" source="media/concept-endpoints/endpoint-deployment-costs.png" alt-text="Screenshot cost chart of an endpoint and deployment.":::
-
- > [!NOTE]
- > Managed online endpoints are based on Azure Machine Learning compute. When using a managed online endpoint, you pay for the compute and networking charges. There is no additional surcharge.
- >
- > If you use a virtual network and secure outbound (egress) traffic from the managed online endpoint, there is an additional cost. For egress, three private endpoints are created _per deployment_ for the managed online endpoint. These are used to communicate with the default storage account, Azure Container Registry, and workspace. Additional networking charges may apply. For more information on pricing, see the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
-
-For a step-by-step tutorial, see [How to deploy online endpoints](how-to-deploy-online-endpoints.md).
-
-## What are batch endpoints?
-
-**Batch endpoints** are endpoints that are used to do batch inferencing on large volumes of data over a period of time. **Batch endpoints** receive pointers to data and run jobs asynchronously to process the data in parallel on compute clusters. Batch endpoints store outputs to a data store for further analysis.
+> [!div class="checklist"]
+> * You have low-latency requirements.
+> * Your model can answer the request in a relatively short amount of time.
+> * Your model's inputs fit in the HTTP payload of the request.
+> * You need to scale up in terms of the number of requests.
+Use [batch endpoints](concept-endpoints-batch.md) to operationalize models or pipelines (preview) for long-running asynchronous inference. We recommend using them when:
-### Batch deployment requirements
+> [!div class="checklist"]
+> * You have expensive models or pipelines that require a longer time to run.
+> * You want to operationalize machine learning pipelines and reuse components.
+> * You need to perform inference over large amounts of data, distributed in multiple files.
+> * You don't have low latency requirements.
+> * Your model's inputs are stored in a Storage Account or in an Azure Machine Learning data asset.
+> * You can take advantage of parallelization.
-To create a batch deployment, you need to specify the following elements:
+### Comparison
-- Model files (or specify a model registered in your workspace)-- Compute-- Scoring script - code needed to do the scoring/inferencing-- Environment - a Docker image with Conda dependencies
+Both online and batch endpoints are based on the idea of endpoints and deployments, which helps you transition easily from one to the other. However, when moving from one to the other, there are some important differences to take into account. Some of these differences are due to the nature of the work:
-If you're deploying [MLFlow models in batch deployments](batch-inference/how-to-mlflow-batch.md), there's no need to provide a scoring script and execution environment, as both are autogenerated.
+#### Endpoints
-Learn more about how to [deploy and use batch endpoints](batch-inference/how-to-use-batch-endpoint.md).
+The following table shows a summary of the different features in Online and Batch endpoints.
-### Managed cost with autoscaling compute
+| Feature | [Online Endpoints](concept-endpoints-online.md) | [Batch endpoints](concept-endpoints-batch.md) |
+|---|---|---|
+| Stable invocation URL | Yes | Yes |
+| Multiple deployments support | Yes | Yes |
+| Deployment's routing | Traffic split | Switch to default |
+| Mirror traffic to all deployments | Yes | No |
+| Swagger support | Yes | No |
+| Authentication | Key and token | Azure AD |
+| Private network support | Yes | Yes |
+| Managed network isolation<sup>1</sup> | Yes | No |
+| Customer-managed keys | Yes | No |
-Invoking a batch endpoint triggers an asynchronous batch inference job. Compute resources are automatically provisioned when the job starts, and automatically de-allocated as the job completes. So you only pay for compute when you use it.
+<sup>1</sup> [*Managed network isolation*](how-to-secure-online-endpoint.md) allows managing the networking configuration of the endpoint independently from the Azure Machine Learning workspace configuration.
-You can [override compute resource settings](batch-inference/how-to-use-batch-endpoint.md#overwrite-deployment-configuration-per-each-job) (like instance count) and advanced settings (like mini batch size, error threshold, and so on) for each individual batch inference job to speed up execution and reduce cost.
+#### Deployments
-### Flexible data sources and storage
+The following table shows a summary of the different features available to online and batch endpoints at the deployment level. These concepts apply to each deployment under the endpoint.
-You can use the following options for input data when invoking a batch endpoint:
+| Feature | [Online Endpoints](concept-endpoints-online.md) | [Batch endpoints](concept-endpoints-batch.md) |
+|---|---|---|
+| Deployment types | Models | Models and pipeline components (preview) |
+| MLflow model deployment | Yes (requires public networking) | Yes |
+| Custom model deployment | Yes, with scoring script | Yes, with scoring script |
+| Inference server <sup>1</sup> | - Azure Machine Learning Inferencing Server<br /> - Triton<br /> - Custom (using BYOC) | Batch Inference |
+| Compute resource consumed | Instances or granular resources | Cluster instances |
+| Compute type | Managed compute and Kubernetes | Managed compute and Kubernetes |
+| Low-priority compute | No | Yes |
+| Scales compute to zero | No | Yes |
+| Autoscale compute<sup>2</sup> | Yes, based on resources' load | Yes, based on jobs count |
+| Overcapacity management | Throttling | Queuing |
+| Test deployments locally | Yes | No |
-- Cloud data: Either a path on Azure Machine Learning registered datastore, a reference to Azure Machine Learning registered V2 data asset, or a public URI. For more information, see [Data in Azure Machine Learning](concept-data.md).-- Data stored locally: The data will be automatically uploaded to the Azure Machine Learning registered datastore and passed to the batch endpoint.
+<sup>1</sup> *Inference server* refers to the serving technology that takes requests, processes them, and creates responses. The inference server also dictates the format of the inputs and the expected outputs.
-> [!NOTE]
-> - If you're using existing V1 FileDatasets for batch endpoints, we recommend migrating them to V2 data assets. You can then refer to the V2 data assets directly when invoking batch endpoints. Currently, only data assets of type `uri_folder` or `uri_file` are supported. Batch endpoints created with GA CLIv2 (2.4.0 and newer) or GA REST API (2022-05-01 and newer) will not support V1 Datasets.
-> - You can also extract the datastores' URI or path from V1 FileDatasets. For this, you'll use the `az ml dataset show` command with the `--query` parameter and use that information for invoke.
-> - While batch endpoints created with earlier APIs will continue to support V1 FileDatasets, we'll be adding more support for V2 data assets in the latest API versions for better usability and flexibility. For more information on V2 data assets, see [Work with data using SDK v2](how-to-read-write-data-v2.md). For more information on the new V2 experience, see [What is v2](concept-v2.md).
+<sup>2</sup> *Autoscale* refers to the ability to dynamically scale the deployment's allocated resources up or down based on its load. Online and batch deployments use different strategies. While online deployments scale up and down based on resource utilization (like CPU, memory, requests, and so on), batch endpoints scale up or down based on the number of jobs created.
-For more information on supported input options, see [Accessing data from batch endpoints jobs](batch-inference/how-to-access-data-batch-endpoints-jobs.md).
+## Developer interfaces
-Specify the storage output location to any datastore and path. By default, batch endpoints store their output to the workspace's default blob store, organized by the Job Name (a system-generated GUID).
+Endpoints are designed to help organizations operationalize production-level workloads in Azure Machine Learning. They're robust and scalable resources, and they provide the best capabilities to implement MLOps workflows.
-### Security
+Create and manage batch and online endpoints with multiple developer tools:
-- Authentication: Azure Active Directory Tokens-- SSL: enabled by default for endpoint invocation-- VNET support: Batch endpoints support ingress protection. A batch endpoint with ingress protection will accept scoring requests only from hosts inside a virtual network but not from the public internet. A batch endpoint that is created in a private-link enabled workspace will have ingress protection. To create a private-link enabled workspace, see [Create a secure workspace](tutorial-create-secure-workspace.md).
+- The Azure CLI and the Python SDK
+- Azure Resource Manager/REST API
+- Azure Machine Learning studio web portal
+- Azure portal (IT/Admin)
+- Support for CI/CD MLOps pipelines using the Azure CLI interface & REST/ARM interfaces
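For example, here's a small hypothetical sketch of managing both endpoint types from the Python SDK; the workspace identifiers are placeholders:

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# The same client object is used to manage both online and batch endpoints.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

for endpoint in ml_client.online_endpoints.list():
    print("online:", endpoint.name)

for endpoint in ml_client.batch_endpoints.list():
    print("batch:", endpoint.name)
```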
-> [!NOTE]
-> Creating batch endpoints in a private-link enabled workspace is only supported in the following versions.
-> - CLI - version 2.15.1 or higher.
-> - REST API - version 2022-05-01 or higher.
-> - SDK V2 - version 0.1.0b3 or higher.
## Next steps

- [How to deploy online endpoints with the Azure CLI and Python SDK](how-to-deploy-online-endpoints.md)
-- [How to deploy batch endpoints with the Azure CLI and Python SDK](batch-inference/how-to-use-batch-endpoint.md)
+- [How to deploy models with batch endpoints](how-to-use-batch-model-deployments.md)
+- [How to deploy pipelines with batch endpoints (preview)](how-to-use-batch-pipeline-deployments.md)
- [How to use online endpoints with the studio](how-to-use-managed-online-endpoint-studio.md)
-- [Deploy models with REST](how-to-deploy-with-rest.md)
- [How to monitor managed online endpoints](how-to-monitor-online-endpoints.md)
-- [How to view managed online endpoint costs](how-to-view-online-endpoints-costs.md)
- [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints)
machine-learning How To Access Data Batch Endpoints Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-batch-endpoints-jobs.md
Previously updated : 10/10/2022 Last updated : 5/01/2023 # Create jobs and input data for batch endpoints
-Batch endpoints can be used to perform batch scoring on large amounts of data. Such data can be placed in different places. In this tutorial we'll cover the different places where batch endpoints can read data from and how to reference it.
+Batch endpoints can be used to perform long-running batch operations over large amounts of data. Such data can be placed in different places. Some types of batch endpoints can also receive literal parameters as inputs. In this tutorial, we'll cover how you can specify those inputs, and the different types and locations supported.
## Prerequisites
-* This example assumes that you've a model correctly deployed as a batch endpoint. Particularly, we're using the *heart condition classifier* created in the tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+* This example assumes that you've created a batch endpoint with at least one deployment. To create an endpoint, follow the steps at [How to use batch endpoints for production workloads](how-to-use-batch-endpoints.md).
-## Supported data inputs
+* You need permissions to run a batch endpoint deployment. Read [Authorization on batch endpoints](how-to-authenticate-batch-endpoint.md) for details.
+
+## Understanding inputs and outputs
+
+Batch endpoints provide a durable API that consumers can use to create batch jobs. The same interface can be used to indicate the inputs and the outputs your deployment expects. Use inputs to pass any information your endpoint needs to perform the job.
++
+The number and type of inputs and outputs depend on the [type of batch deployment](concept-endpoints-batch.md#batch-deployments). Model deployments always require 1 data input and produce 1 data output. However, pipeline component deployments provide a more general construct to build endpoints. You can indicate any number of inputs and outputs.
+
+The following table summarizes the inputs and outputs supported by each deployment type:
+
+| Deployment type | Number of inputs | Supported input types | Number of outputs | Supported output types |
+|--|--|--|--|--|
+| [Model deployment](concept-endpoints-batch.md#model-deployments) | 1 | [Data inputs](#data-inputs) | 1 | [Data outputs](#data-outputs) |
+| [Pipeline component deployment (preview)](concept-endpoints-batch.md#pipeline-component-deployment-preview) | [0..N] | [Data inputs](#data-inputs) and [literal inputs](#literal-inputs) | [0..N] | [Data outputs](#data-outputs) |
+++
+> [!TIP]
+> Inputs and outputs are always named. Those names serve as keys to identify them and to pass the actual values during invocation. For model deployments, since they always require one input and one output, the name is ignored during invocation. You can assign any name that best describes your use case, like "sales_estimations".
+
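As an illustration only, here's a sketch of how those names are used when invoking from the Python SDK; the endpoint name, input and output names, and datastore paths are placeholders, and the same patterns are shown in detail later in this article:

```python
from azure.ai.ml import Input, MLClient, Output
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Hypothetical invocation of a pipeline component deployment that takes a data
# input, a literal input, and writes one named data output.
job = ml_client.batch_endpoints.invoke(
    endpoint_name="<endpoint-name>",
    inputs={
        "heart_dataset": Input(
            type="uri_folder",
            path="azureml://datastores/<data-store>/paths/<data-path>",
        ),
        "score_mode": Input(type="string", default="append"),
    },
    outputs={
        "score": Output(
            type="uri_folder",
            path="azureml://datastores/<data-store>/paths/<output-path>",
        ),
    },
)
```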
+## Data inputs
+
+Data inputs refer to inputs that point to a location where data is placed. Since batch endpoints usually consume large amounts of data, you can't pass the input data as part of the invocation request. Instead, you indicate the location where the batch endpoint should go to look for the data. Input data is mounted and streamed on the target compute to improve performance.
Batch endpoints support reading files located in the following storage options:
Batch endpoints support reading files located in the following storage options:
> __Deprecation notice__: Datasets of type `FileDataset` (V1) are deprecated and will be retired in the future. Existing batch endpoints relying on this functionality will continue to work but batch endpoints created with GA CLIv2 (2.4.0 and newer) or GA REST API (2022-05-01 and newer) will not support V1 dataset.
-## Input data from a data asset
+#### Input data from a data asset
Azure Machine Learning data assets (formerly known as datasets) are supported as inputs for jobs. Follow these steps to run a batch endpoint job using data stored in a registered data asset in Azure Machine Learning:
Azure Machine Learning data assets (formerly known as datasets) are supported as
# [REST](#tab/rest)
- Use the Azure Machine Learning CLI, Azure Machine Learning SDK for Python, or Studio to get the location (region), workspace, and data asset name and version. You will need them later.
+ Use the Azure Machine Learning CLI, Azure Machine Learning SDK for Python, or Studio to get the location (region), workspace, and data asset name and version. You need them later.
1. Create a data input:
Azure Machine Learning data assets (formerly known as datasets) are supported as
# [Python](#tab/sdk) ```python
- input = Input(type=AssetTypes.URI_FOLDER, path=heart_dataset_unlabeled.id)
+ input = Input(path=heart_dataset_unlabeled.id)
``` # [REST](#tab/rest)
Azure Machine Learning data assets (formerly known as datasets) are supported as
{ "properties": { "InputData": {
- "mnistinput": {
+ "heart_dataset": {
"JobInputType" : "UriFolder", "Uri": "azureml://locations/<location>/workspaces/<workspace>/data/<dataset_name>/versions/labels/latest" }
Azure Machine Learning data assets (formerly known as datasets) are supported as
> [!NOTE]
- > Data assets ID would look like `/subscriptions/<subscription>/resourcegroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace>/data/<data-asset>/versions/<version>`.
+   > Data asset IDs would look like `/subscriptions/<subscription>/resourcegroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace>/data/<data-asset>/versions/<version>`. You can also use `azureml:/<dataset_name>@latest` as a way to indicate the input.
1. Run the deployment: # [Azure CLI](#tab/cli)
- ```bash
- INVOKE_RESPONSE = $(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input $DATASET_ID)
+ Use the argument `--set` to indicate the input:
+
+ ```azurecli
+ az ml batch-endpoint invoke --name $ENDPOINT_NAME \
+ --set inputs.heart_dataset.type uri_folder inputs.heart_dataset.path $DATASET_ID
```
- > [!TIP]
- > You can also use `--input azureml:/<dataasset_name>@latest` as a way to indicate the input.
+ If your endpoint serves a model deployment, you can use the short form which supports only 1 input:
+
+ ```azurecli
+ az ml batch-endpoint invoke --name $ENDPOINT_NAME --input $DATASET_ID
+ ```
+
+   The argument `--set` tends to produce long commands when multiple inputs are indicated. In those cases, place your inputs in a `YAML` file and use `--file` to indicate the inputs you need for your endpoint invocation.
+
+ __inputs.yml__
+
+ ```yml
+ inputs:
+     heart_dataset: azureml:/<dataset_name>@latest
+ ```
+
+ ```azurecli
+ az ml batch-endpoint invoke --name $ENDPOINT_NAME --file inputs.yml
+ ```
# [Python](#tab/sdk)+
+ ```python
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ inputs={
+ "heart_dataset": input,
+ }
+ )
+ ```
+
+ If your endpoint serves a model deployment, you can use the short form which supports only 1 input:
```python job = ml_client.batch_endpoints.invoke(
Azure Machine Learning data assets (formerly known as datasets) are supported as
Content-Type: application/json ```
-## Input data from data stores
+#### Input data from data stores
Data from Azure Machine Learning registered data stores can be directly referenced by batch deployments jobs. In this example, we're going to first upload some data to the default data store in the Azure Machine Learning workspace and then run a batch deployment on it. Follow these steps to run a batch endpoint job using data stored in a data store:
Data from Azure Machine Learning registered data stores can be directly referenc
> [!TIP] > The default blob data store in a workspace is called __workspaceblobstore__. You can skip this step if you already know the resource ID of the default data store in your workspace.
-1. We'll need to upload some sample data to it. This example assumes you've uploaded the sample data included in the repo in the folder `sdk/python/endpoints/batch/heart-classifier/data` in the folder `heart-classifier/data` in the blob storage account. Ensure you have done that before moving forward.
+1. We'll need to upload some sample data to it. This example assumes you've uploaded the sample data included in the repo folder `sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/data` to the folder `heart-disease-uci-unlabeled` in the blob storage account. Ensure you have done that before moving forward.
1. Create a data input:
Data from Azure Machine Learning registered data stores can be directly referenc
# [Python](#tab/sdk) ```python
- data_path = "heart-classifier/data"
+ data_path = "heart-disease-uci-unlabeled"
   input = Input(type=AssetTypes.URI_FOLDER, path=f"{default_ds.id}/paths/{data_path}") ```
+   If your data is a file, use `type=AssetTypes.URI_FILE` instead.
+ # [REST](#tab/rest) __Body__
Data from Azure Machine Learning registered data stores can be directly referenc
{ "properties": { "InputData": {
- "mnistinput": {
+ "heart_dataset": {
"JobInputType" : "UriFolder", "Uri": "azureml:/subscriptions/<subscription>/resourceGroups/<resource-group/providers/Microsoft.MachineLearningServices/workspaces/<workspace>/datastores/<data-store>/paths/<data-path>" }
Data from Azure Machine Learning registered data stores can be directly referenc
} } ```+
+ If your data is a file, use `UriFile` as type instead.
+ > [!NOTE]
Data from Azure Machine Learning registered data stores can be directly referenc
# [Azure CLI](#tab/cli)
- ```bash
- INVOKE_RESPONSE = $(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input $INPUT_PATH)
+ Use the argument `--set` to indicate the input:
+
+ ```azurecli
+ az ml batch-endpoint invoke --name $ENDPOINT_NAME \
+ --set inputs.heart_dataset.type uri_folder inputs.heart_dataset.path $INPUT_PATH
+ ```
+
+ If your endpoint serves a model deployment, you can use the short form which supports only 1 input:
+
+ ```azurecli
+ az ml batch-endpoint invoke --name $ENDPOINT_NAME --input $INPUT_PATH --input-type uri_folder
+ ```
+
+   The argument `--set` tends to produce long commands when multiple inputs are indicated. In those cases, place your inputs in a `YAML` file and use `--file` to indicate the inputs you need for your endpoint invocation.
+
+ __inputs.yml__
+
+ ```yml
+ inputs:
+ heart_dataset:
+ type: uri_folder
+ path: azureml://datastores/<data-store>/paths/<data-path>
+ ```
+
+ ```azurecli
+ az ml batch-endpoint invoke --name $ENDPOINT_NAME --file inputs.yml
```+
+ If your data is a file, use `uri_file` as type instead.
# [Python](#tab/sdk)+
+ ```python
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ inputs={
+ "heart_dataset": input,
+ }
+ )
+ ```
+
+ If your endpoint serves a model deployment, you can use the short form which supports only 1 input:
```python job = ml_client.batch_endpoints.invoke(
Data from Azure Machine Learning registered data stores can be directly referenc
Content-Type: application/json ```
-## Input data from Azure Storage Accounts
+#### Input data from Azure Storage Accounts
Azure Machine Learning batch endpoints can read data from cloud locations in Azure Storage Accounts, both public and private. Use the following steps to run a batch endpoint job using data stored in a storage account:
Azure Machine Learning batch endpoints can read data from cloud locations in Azu
{ "properties": { "InputData": {
- "mnistinput": {
+ "heart_dataset": {
"JobInputType" : "UriFolder", "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data" }
Azure Machine Learning batch endpoints can read data from cloud locations in Azu
{ "properties": { "InputData": {
- "mnistinput": {
- "JobInputType" : "UriFolder",
+ "heart_dataset": {
+ "JobInputType" : "UriFile",
"Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data/heart.csv" } }
Azure Machine Learning batch endpoints can read data from cloud locations in Azu
# [Azure CLI](#tab/cli)
- If your data is a folder, use `--input-type uri_folder`:
-
- ```bash
- INVOKE_RESPONSE = $(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input-type uri_folder --input $INPUT_DATA)
+ Use the argument `--set` to indicate the input:
+
+ ```azurecli
+ az ml batch-endpoint invoke --name $ENDPOINT_NAME \
+ --set inputs.heart_dataset.type uri_folder inputs.heart_dataset.path $INPUT_DATA
```
- If your data is a file, use `--input-type uri_file`:
+ If your endpoint serves a model deployment, you can use the short form which supports only 1 input:
- ```bash
- INVOKE_RESPONSE = $(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input-type uri_file --input $INPUT_DATA)
+ ```azurecli
+ az ml batch-endpoint invoke --name $ENDPOINT_NAME --input $INPUT_DATA --input-type uri_folder
+ ```
+
+   The argument `--set` tends to produce long commands when multiple inputs are indicated. In those cases, place your inputs in a `YAML` file and use `--file` to indicate the inputs you need for your endpoint invocation.
+
+ __inputs.yml__
+
+ ```yml
+ inputs:
+ heart_dataset:
+ type: uri_folder
+ path: https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data
+ ```
+
+ ```azurecli
+ az ml batch-endpoint invoke --name $ENDPOINT_NAME --file inputs.yml
```
+ If your data is a file, use `uri_file` as type instead.
+ # [Python](#tab/sdk)+
+ ```python
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ inputs={
+ "heart_dataset": input,
+ }
+ )
+ ```
+
+ If your endpoint serves a model deployment, you can use the short form which supports only 1 input:
```python job = ml_client.batch_endpoints.invoke(
Azure Machine Learning batch endpoints can read data from cloud locations in Azu
```
-## Security considerations when reading data
+### Security considerations when reading data
-Batch endpoints ensure that only authorized users are able to invoke batch deployments and generate jobs. However, depending on how the input data is configured, other credentials may be used to read the underlying data. Use the following table to understand which credentials are used and any additional requirements.
+Batch endpoints ensure that only authorized users are able to invoke batch deployments and generate jobs. However, depending on how the input data is configured, other credentials may be used to read the underlying data. Use the following table to understand which credentials are used:
| Data input type | Credential in store | Credentials used | Access granted by | ||||-|
The managed identity of the compute cluster is used for mounting and configuring
> [!NOTE]
> To assign an identity to the compute used by a batch deployment, follow the instructions at [Set up authentication between Azure Machine Learning and other services](how-to-identity-based-service-authentication.md#compute-cluster). Configure the identity on the compute cluster associated with the deployment. Notice that all the jobs running on such compute are affected by this change. However, different deployments (even under the same endpoint) can be configured to run under different clusters so you can administer the permissions accordingly depending on your requirements.
+
+## Literal inputs
+
+Literal inputs refer to inputs that can be represented and resolved at invocation time, like strings, numbers, and boolean values. You typically use literal inputs to pass parameters to your endpoint as part of a pipeline component deployment.
+
+Batch endpoints support the following literal types:
+
+- `string`
+- `boolean`
+- `float`
+- `integer`
+
+The following example shows how to indicate an input named `score_mode`, of type `string`, with a value of `append`:
+
+# [Azure CLI](#tab/cli)
+
+Place your inputs in a `YAML` file and use `--file` to indicate the inputs you need for your endpoint invocation.
+
+__inputs.yml__
+
+```yml
+inputs:
+ score_mode:
+ type: string
+ default: append
+```
+
+```azurecli
+az ml batch-endpoint invoke --name $ENDPOINT_NAME --file inputs.yml
+```
+
+You can also use the argument `--set` to indicate the value. However, it tends to produce long commands when multiple inputs are indicated:
+
+```azurecli
+az ml batch-endpoint invoke --name $ENDPOINT_NAME \
+ --set inputs.score_mode.type string inputs.score_mode.default append
+```
+
+# [Python](#tab/sdk)
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ inputs = {
+ 'score_mode': Input(type="string", default="append")
+ }
+)
+```
+
+# [REST](#tab/rest)
+
+__Body__
+
+```json
+{
+ "properties": {
+ "InputData": {
+ "score_mode": {
+ "JobInputType" : "Literal",
+ "Value": "append"
+ }
+ }
+ }
+}
+```
+
+__Request__
+
+```http
+POST jobs HTTP/1.1
+Host: <ENDPOINT_URI>
+Authorization: Bearer <TOKEN>
+Content-Type: application/json
+```
++
+## Data outputs
+
+Data outputs refer to the location where the results of a batch job should be placed. Outputs are identified by name, and Azure Machine Learning automatically assigns a unique path to each named output. However, you can indicate another path if required. Batch endpoints only support writing outputs to blob Azure Machine Learning data stores.
+
+The following example shows how to change the location where an output named `score` is placed. For completeness, these examples also configure an input named `heart_dataset`.
+
+1. Let's use the default data store in the Azure Machine Learning workspace to save the outputs. You can use any other data store in your workspace as long as it's a blob storage account.
+
+ # [Azure CLI](#tab/cli)
+
+ ```azurecli
+ DATASTORE_ID=$(az ml datastore show -n workspaceblobstore | jq -r '.id')
+ ```
+
+ > [!NOTE]
+   > Data store IDs would look like `/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace>/datastores/<data-store>`.
+
+ # [Python](#tab/sdk)
+
+ ```python
+ default_ds = ml_client.datastores.get_default()
+ ```
+
+ # [REST](#tab/rest)
+
+ Use the Azure Machine Learning CLI, Azure Machine Learning SDK for Python, or Studio to get the data store information.
+
+
+1. Create a data output:
+
+ # [Azure CLI](#tab/cli)
+
+ ```azurecli
+ DATA_PATH="batch-jobs/my-unique-path"
+ OUTPUT_PATH="$DATASTORE_ID/paths/$DATA_PATH"
+ ```
+
+ For completeness, let's also create a data input:
+
+ ```azurecli
+ INPUT_PATH="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data"
+ ```
+
+ # [Python](#tab/sdk)
+
+ ```python
+ data_path = "batch-jobs/my-unique-path"
+    output = Output(type=AssetTypes.URI_FOLDER, path=f"{default_ds.id}/paths/{data_path}")
+ ```
+
+ For completeness, let's also create a data input:
+
+ ```python
+ input="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data"
+ ```
+
+ # [REST](#tab/rest)
+
+ __Body__
+
+ ```json
+ {
+ "properties": {
+ "InputData": {
+ "heart_dataset": {
+ "JobInputType" : "UriFolder",
+ "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data"
+ }
+ },
+ "OutputData": {
+ "score": {
+ "JobOutputType" : "UriFolder",
+ "Uri": "azureml:/subscriptions/<subscription>/resourceGroups/<resource-group/providers/Microsoft.MachineLearningServices/workspaces/<workspace>/datastores/<data-store>/paths/<data-path>"
+ }
+ }
+ }
+ }
+ ```
+
+
+ > [!NOTE]
+   > Notice how the path segment `paths` is appended to the resource ID of the data store to indicate that what follows is a path inside of it.
+
+1. Run the deployment:
+
+ # [Azure CLI](#tab/cli)
+
+   Use the argument `--set` to indicate the input and the output:
+
+ ```azurecli
+ az ml batch-endpoint invoke --name $ENDPOINT_NAME \
+ --set inputs.heart_dataset.path $INPUT_PATH \
+ --set outputs.score.path $OUTPUT_PATH
+ ```
+
+ # [Python](#tab/sdk)
+
+ ```python
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ inputs={ "heart_dataset": input },
+ outputs={ "score": output }
+ )
+ ```
+
+ # [REST](#tab/rest)
+
+ __Request__
+
+ ```http
+ POST jobs HTTP/1.1
+ Host: <ENDPOINT_URI>
+ Authorization: Bearer <TOKEN>
+ Content-Type: application/json
+ ```
++ ## Next steps * [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md).
machine-learning How To Batch Scoring Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-batch-scoring-script.md
[!INCLUDE [cli v2](../../includes/machine-learning-dev-v2.md)]
-Batch endpoints allow you to deploy models to perform long-running inference at scale. To indicate how batch endpoints should use your model over the input data to create predictions, you need to create and specify a scoring script (also known as batch driver script). In this article, you will learn how to use scoring scripts in different scenarios and their best practices.
+Batch endpoints allow you to deploy models to perform long-running inference at scale. When deploying models, you need to create and specify a scoring script (also known as a batch driver script) to indicate how the model should be used over the input data to create predictions. In this article, you'll learn how to use scoring scripts in model deployments for different scenarios, along with their best practices.
> [!TIP]
> MLflow models don't require a scoring script as it is autogenerated for you. For more details about how batch endpoints work with MLflow models, see the dedicated tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
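For orientation, here's a minimal sketch of what such a scoring script can look like, assuming (purely for illustration) a scikit-learn model stored as a `.pkl` file and CSV input files; the required structure is an `init` function that loads the model and a `run` function that receives a mini-batch of file paths:

```python
# score.py - a minimal sketch of a batch scoring script. The model format
# (a scikit-learn .pkl file) and the CSV input handling are illustrative
# assumptions; adapt them to your own model and data.
import glob
import os
from typing import List

import joblib
import pandas as pd


def init():
    """Runs once per worker when the deployment starts; load the model here."""
    global model
    # AZUREML_MODEL_DIR points to the folder where the registered model is placed.
    model_dir = os.environ["AZUREML_MODEL_DIR"]
    model_path = glob.glob(os.path.join(model_dir, "**", "*.pkl"), recursive=True)[0]
    model = joblib.load(model_path)


def run(mini_batch: List[str]) -> pd.DataFrame:
    """Runs once per mini-batch; `mini_batch` is a list of paths to the files to score."""
    results = []
    for file_path in mini_batch:
        data = pd.read_csv(file_path)
        predictions = model.predict(data)
        results.append(
            pd.DataFrame(
                {"file": os.path.basename(file_path), "prediction": predictions}
            )
        )
    # Whatever is returned here is appended to the deployment's output file.
    return pd.concat(results)
```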
The resulting DataFrame or array is appended to the output file indicated. There
Any library that your scoring script requires to run needs to be indicated in the environment where your batch deployment runs. As for scoring scripts, environments are indicated per deployment. Usually, you indicate your requirements using a `conda.yml` dependencies file, which may look as follows:
-__mnist/environment/conda.yml__
+__mnist/environment/conda.yaml__
:::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-torch/environment/conda.yaml":::
machine-learning How To Create Component Pipeline Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipeline-python.md
Using `ml_client.components.get()`, you can get a registered component by name a
* For more examples of how to build pipelines by using the machine learning SDK, see the [example repository](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/pipelines). * For how to use studio UI to submit and debug your pipeline, refer to [how to create pipelines using component in the UI](how-to-create-component-pipelines-ui.md). * For how to use Azure Machine Learning CLI to create components and pipelines, refer to [how to create pipelines using component with CLI](how-to-create-component-pipelines-cli.md).
+* For how to deploy pipelines into production using Batch Endpoints, see [how to deploy pipelines with batch endpoints (preview)](how-to-use-batch-pipeline-deployments.md).
machine-learning How To Deploy Batch With Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-batch-with-rest.md
- Title: "Deploy models using batch endpoints with REST APIs"-
-description: Learn how to deploy models using batch endpoints with REST APIs.
- Previously updated : 05/24/2022
-# Deploy models with REST for batch scoring
--
-Learn how to use the Azure Machine Learning REST API to deploy models for batch scoring.
---
-The REST API uses standard HTTP verbs to create, retrieve, update, and delete resources. The REST API works with any language or tool that can make HTTP requests. REST's straightforward structure makes it a good choice in scripting environments and for MLOps automation.
-
-In this article, you learn how to use the new REST APIs to:
-
-> [!div class="checklist"]
-> * Create machine learning assets
-> * Create a batch endpoint and a batch deployment
-> * Invoke a batch endpoint to start a batch scoring job
-
-## Prerequisites
-- An **Azure subscription** for which you have administrative rights. If you don't have such a subscription, try the [free or paid personal subscription](https://azure.microsoft.com/free/).
-- An [Azure Machine Learning workspace](quickstart-create-resources.md).
-- A service principal in your workspace. Administrative REST requests use [service principal authentication](how-to-setup-authentication.md#use-service-principal-authentication).
-- A service principal authentication token. Follow the steps in [Retrieve a service principal authentication token](./how-to-manage-rest.md#retrieve-a-service-principal-authentication-token) to retrieve this token.
-- The **curl** utility. The **curl** program is available in the [Windows Subsystem for Linux](/windows/wsl/install-win10) or any UNIX distribution. In PowerShell, **curl** is an alias for **Invoke-WebRequest** and `curl -d "key=val" -X POST uri` becomes `Invoke-WebRequest -Body "key=val" -Method POST -Uri uri`.
-- The [jq](https://stedolan.github.io/jq/) JSON processor.
-> [!IMPORTANT]
-> The code snippets in this article assume that you are using the Bash shell.
->
-> The code snippets are pulled from the `/cli/batch-score-rest.sh` file in the [Azure Machine Learning Example repository](https://github.com/Azure/azureml-examples).
-
-## Set endpoint name
-
-> [!NOTE]
-> Batch endpoint names need to be unique at the Azure region level. For example, there can be only one batch endpoint with the name mybatchendpoint in westus2.
--
-## Azure Machine Learning batch endpoints
-
-[Batch endpoints](concept-endpoints.md#what-are-batch-endpoints) simplify the process of hosting your models for batch scoring, so you can focus on machine learning, not infrastructure. In this article, you'll create a batch endpoint and deployment, and invoking it to start a batch scoring job. But first you'll have to register the assets needed for deployment, including model, code, and environment.
-
-There are many ways to create an Azure Machine Learning batch endpoint, including the Azure CLI, Azure Machine Learning SDK for Python, and visually with the studio. The following example creates a batch endpoint and a batch deployment with the REST API.
-
-## Create machine learning assets
-
-First, set up your Azure Machine Learning assets to configure your job.
-
-In the following REST API calls, we use `SUBSCRIPTION_ID`, `RESOURCE_GROUP`, `LOCATION`, and `WORKSPACE` as placeholders. Replace the placeholders with your own values.
-
-Administrative REST requests a [service principal authentication token](how-to-manage-rest.md#retrieve-a-service-principal-authentication-token). Replace `TOKEN` with your own value. You can retrieve this token with the following command:
--
-The service provider uses the `api-version` argument to ensure compatibility. The `api-version` argument varies from service to service. Set the API version as a variable to accommodate future versions:
--
-### Create compute
-Batch scoring runs only on cloud computing resources, not locally. The cloud computing resource is a reusable virtual computer cluster where you can run batch scoring workflows.
-
-Create a compute cluster:
--
-> [!TIP]
-> If you want to use an existing compute instead, you must specify the full Azure Resource Manager ID when [creating the batch deployment](#create-batch-deployment). The full ID uses the format `/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.MachineLearningServices/workspaces/$WORKSPACE/computes/<your-compute-name>`.
-
-### Get storage account details
-
-To register the model and code, first they need to be uploaded to a storage account. The details of the storage account are available in the data store. In this example, you get the default datastore and Azure Storage account for your workspace. Query your workspace with a GET request to get a JSON file with the information.
-
-You can use the tool [jq](https://stedolan.github.io/jq/) to parse the JSON result and get the required values. You can also use the Azure portal to find the same information:
--
-### Upload & register code
-
-Now that you have the datastore, you can upload the scoring script. For more information about how to author the scoring script, see [Understanding the scoring script](batch-inference/how-to-batch-scoring-script.md#understanding-the-scoring-script). Use the Azure Storage CLI to upload a blob into your default container:
--
-> [!TIP]
-> You can also use other methods to upload, such as the Azure portal or [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/).
-
-Once you upload your code, you can specify your code with a PUT request:
--
-### Upload and register model
-
-Similar to the code, upload the model files:
--
-Now, register the model:
--
-### Create environment
-The deployment needs to run in an environment that has the required dependencies. Create the environment with a PUT request. Use a docker image from Microsoft Container Registry. You can configure the docker image with `image` and add conda dependencies with `condaFile`.
-
-Run the following code to read the `condaFile` defined in json. The source file is at `/cli/endpoints/batch/mnist/environment/conda.json` in the example repository:
--
-Now, run the following snippet to create an environment:
--
-## Deploy with batch endpoints
-
-Next, create a batch endpoint, a batch deployment, and set the default deployment for the endpoint.
-
-### Create batch endpoint
-
-Create the batch endpoint:
--
-### Create batch deployment
-
-Create a batch deployment under the endpoint:
--
-### Set the default batch deployment under the endpoint
-
-There's only one default batch deployment under one endpoint, which will be used when invoke to run batch scoring job.
--
-## Run batch scoring
-
-Invoking a batch endpoint triggers a batch scoring job. A job `id` is returned in the response, and can be used to track the batch scoring progress. In the following snippets, `jq` is used to get the job `id`.
-
-### Invoke the batch endpoint to start a batch scoring job
-
-#### Getting the Scoring URI and access token
-
-Get the scoring uri and access token to invoke the batch endpoint. First get the scoring uri:
--
-Get the batch endpoint access token:
--
-#### Invoke the batch endpoint with different input options
-
-It's time to invoke the batch endpoint to start a batch scoring job. If your data is a folder (potentially with multiple files) publicly available from the web, you can use the following snippet:
-
-```rest-api
-response=$(curl --location --request POST $SCORING_URI \
- --header "Authorization: Bearer $SCORING_TOKEN" \
- --header "Content-Type: application/json" \
- --data-raw "{
- \"properties\": {
- \"InputData\": {
- \"mnistinput\": {
- \"JobInputType\" : \"UriFolder\",
- \"Uri\": \"https://pipelinedata.blob.core.windows.net/sampledata/mnist\"
- }
- }
- }
-}")
-
-JOB_ID=$(echo $response | jq -r '.id')
-JOB_ID_SUFFIX=$(echo ${JOB_ID##/*/})
-```
-
-Now, let's look at other options for invoking the batch endpoint. When it comes to input data, there are multiple scenarios you can choose from, depending on the input type (whether you are specifying a folder or a single file), and the URI type (whether you are using a path on Azure Machine Learning registered datastore, a reference to Azure Machine Learning registered V2 data asset, or a public URI).
-- An `InputData` property has `JobInputType` and `Uri` keys. When you are specifying a single file, use `"JobInputType": "UriFile"`, and when you are specifying a folder, use `"JobInputType": "UriFolder"`.
-- When the file or folder is on Azure Machine Learning registered datastore, the syntax for the `Uri` is `azureml://datastores/<datastore-name>/paths/<path-on-datastore>` for a folder, and `azureml://datastores/<datastore-name>/paths/<path-on-datastore>/<file-name>` for a specific file. You can also use the longer form to represent the same path, such as `azureml://subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/workspaces/<workspace-name>/datastores/<datastore-name>/paths/<path-on-datastore>/`.
-- When the file or folder is registered as a V2 data asset as `uri_folder` or `uri_file`, the syntax for the `Uri` is `"azureml://locations/<location-name>/workspaces/<workspace-name>/data/<data-name>/versions/<data-version>"` (Asset ID form) or `"/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.MachineLearningServices/workspaces/<workspace-name>/data/<data-name>/versions/<data-version>"` (ARM ID form).
-- When the file or folder is a publicly accessible path, the syntax for the URI is `https://<public-path>` for a folder, and `https://<public-path>/<file-name>` for a specific file.
-> [!NOTE]
-> For more information about data URI, see [Azure Machine Learning data reference URI](reference-yaml-core-syntax.md#azure-machine-learning-data-reference-uri).
-
-Below are some examples using different types of input data.
--- If your data is a folder on the Azure Machine Learning registered datastore, you can either:-
- - Use the short form to represent the URI:
-
- ```rest-api
- response=$(curl --location --request POST $SCORING_URI \
- --header "Authorization: Bearer $SCORING_TOKEN" \
- --header "Content-Type: application/json" \
- --data-raw "{
- \"properties\": {
- \"InputData\": {
- \"mnistInput\": {
- \"JobInputType\" : \"UriFolder\",
- \"Uri": \"azureml://datastores/workspaceblobstore/paths/$ENDPOINT_NAME/mnist\"
- }
- }
- }
- }")
-
- JOB_ID=$(echo $response | jq -r '.id')
- JOB_ID_SUFFIX=$(echo ${JOB_ID##/*/})
- ```
-
- - Or use the long form for the same URI:
-
- ```rest-api
- response=$(curl --location --request POST $SCORING_URI \
- --header "Authorization: Bearer $SCORING_TOKEN" \
- --header "Content-Type: application/json" \
- --data-raw "{
- \"properties\": {
- \"InputData\": {
- \"mnistinput\": {
- \"JobInputType\" : \"UriFolder\",
- \"Uri\": \"azureml://subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/workspaces/$WORKSPACE/datastores/workspaceblobstore/paths/$ENDPOINT_NAME/mnist\"
- }
- }
- }
- }")
-
- JOB_ID=$(echo $response | jq -r '.id')
- JOB_ID_SUFFIX=$(echo ${JOB_ID##/*/})
- ```
--- If you want to manage your data as Azure Machine Learning registered V2 data asset as `uri_folder`, you can follow the two steps below:-
- 1. Create the V2 data asset:
-
- ```rest-api
- DATA_NAME="mnist"
- DATA_VERSION=$RANDOM
-
- response=$(curl --location --request PUT https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.MachineLearningServices/workspaces/$WORKSPACE/data/$DATA_NAME/versions/$DATA_VERSION?api-version=$API_VERSION \
- --header "Content-Type: application/json" \
- --header "Authorization: Bearer $TOKEN" \
- --data-raw "{
- \"properties\": {
- \"dataType\": \"uri_folder\",
- \"dataUri\": \"https://pipelinedata.blob.core.windows.net/sampledata/mnist\",
- \"description\": \"Mnist data asset\"
- }
- }")
- ```
-
- 2. Reference the data asset in the batch scoring job:
-
- ```rest-api
- response=$(curl --location --request POST $SCORING_URI \
- --header "Authorization: Bearer $SCORING_TOKEN" \
- --header "Content-Type: application/json" \
- --data-raw "{
- \"properties\": {
- \"InputData\": {
- \"mnistInput\": {
- \"JobInputType\" : \"UriFolder\",
- \"Uri": \"azureml://locations/$LOCATION_NAME/workspaces/$WORKSPACE_NAME/data/$DATA_NAME/versions/$DATA_VERSION/\"
- }
- }
- }
- }")
-
- JOB_ID=$(echo $response | jq -r '.id')
- JOB_ID_SUFFIX=$(echo ${JOB_ID##/*/})
- ```
--- If your data is a single file publicly available from the web, you can use the following snippet:-
- ```rest-api
- response=$(curl --location --request POST $SCORING_URI \
- --header "Authorization: Bearer $SCORING_TOKEN" \
- --header "Content-Type: application/json" \
- --data-raw "{
- \"properties\": {
- \"InputData\": {
- \"mnistInput\": {
- \"JobInputType\" : \"UriFile\",
- \"Uri": \"https://pipelinedata.blob.core.windows.net/sampledata/mnist/0.png\"
- }
- }
- }
- }")
-
- JOB_ID=$(echo $response | jq -r '.id')
- JOB_ID_SUFFIX=$(echo ${JOB_ID##/*/})
- ```
-
-> [!NOTE]
-> We strongly recommend using the latest REST API version for batch scoring.
-> - If you want to use local data, you can upload it to Azure Machine Learning registered datastore and use REST API for Cloud data.
-> - If you are using existing V1 FileDataset for batch endpoint, we recommend migrating them to V2 data assets and refer to them directly when invoking batch endpoints. Currently only data assets of type `uri_folder` or `uri_file` are supported. Batch endpoints created with GA CLIv2 (2.4.0 and newer) or GA REST API (2022-05-01 and newer) will not support V1 Dataset.
-> - You can also extract the URI or path on datastore extracted from V1 FileDataset by using `az ml dataset show` command with `--query` parameter and use that information for invoke.
-> - While Batch endpoints created with earlier APIs will continue to support V1 FileDataset, we will be adding further V2 data assets support with the latest API versions for even more usability and flexibility. For more information on V2 data assets, see [Work with data using SDK v2](how-to-read-write-data-v2.md). For more information on the new V2 experience, see [What is v2](concept-v2.md).
-
-#### Configure the output location and overwrite settings
-
-The batch scoring results are by default stored in the workspace's default blob store within a folder named by job name (a system-generated GUID). You can configure where to store the scoring outputs when you invoke the batch endpoint. Use `OutputData` to configure the output file path on an Azure Machine Learning registered datastore. `OutputData` has `JobOutputType` and `Uri` keys. `UriFile` is the only supported value for `JobOutputType`. The syntax for `Uri` is the same as that of `InputData`, i.e., `azureml://datastores/<datastore-name>/paths/<path-on-datastore>/<file-name>`.
-
-Following is the example snippet for configuring the output location for the batch scoring results.
-
-```rest-api
-response=$(curl --location --request POST $SCORING_URI \
- --header "Authorization: Bearer $SCORING_TOKEN" \
- --header "Content-Type: application/json" \
- --data-raw "{
- \"properties\": {
- \"InputData\":
- {
- \"mnistInput\": {
- \"JobInputType\" : \"UriFolder\",
- \"Uri": \"azureml://datastores/workspaceblobstore/paths/$ENDPOINT_NAME/mnist\"
- }
- },
- \"OutputData\":
- {
- \"mnistOutput\": {
- \"JobOutputType\": \"UriFile\",
- \"Uri\": \"azureml://datastores/workspaceblobstore/paths/$ENDPOINT_NAME/mnistOutput/$OUTPUT_FILE_NAME\"
- }
- }
- }
-}")
-
-JOB_ID=$(echo $response | jq -r '.id')
-JOB_ID_SUFFIX=$(echo ${JOB_ID##/*/})
-```
-
-> [!IMPORTANT]
-> You must use a unique output location. If the output file exists, the batch scoring job will fail.
-
-### Check the batch scoring job
-
-Batch scoring jobs usually take some time to process the entire set of inputs. Monitor the job status and check the results after it's completed:
-
-> [!TIP]
-> The example invokes the default deployment of the batch endpoint. To invoke a non-default deployment, use the `azureml-model-deployment` HTTP header and set the value to the deployment name. For example, using a parameter of `--header "azureml-model-deployment: $DEPLOYMENT_NAME"` with curl.
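If you script against the workspace with the Python SDK v2 rather than the REST API, a minimal status-polling sketch could look like the following. The job name placeholder is illustrative, and treating `Completed`, `Failed`, and `Canceled` as the terminal states is an assumption:

```python
import time

from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Assumes workspace details are available in a local config.json.
ml_client = MLClient.from_config(credential=DefaultAzureCredential())

# Use the job name returned when you invoked the batch endpoint.
job_name = "<your-batch-scoring-job-name>"

while True:
    job = ml_client.jobs.get(job_name)
    print(f"Job {job_name} is {job.status}")
    if job.status in ("Completed", "Failed", "Canceled"):
        break
    time.sleep(30)
```
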
--
-### Check batch scoring results
-
-For information on checking the results, see [Check batch scoring results](batch-inference/how-to-use-batch-endpoint.md#check-batch-scoring-results).
-
-## Delete the batch endpoint
-
-If you aren't going to use the batch endpoint, delete it with the following command (it deletes the batch endpoint and all the underlying deployments):
--
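As a rough equivalent with the Python SDK v2, a sketch like the following deletes the endpoint together with its deployments (the endpoint name is a placeholder):

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Assumes workspace details are available in a local config.json.
ml_client = MLClient.from_config(credential=DefaultAzureCredential())

# Deleting the endpoint also removes its deployments; completed batch scoring jobs are kept.
poller = ml_client.batch_endpoints.begin_delete(name="<your-endpoint-name>")
poller.wait()
```
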
-## Next steps
-
-* Learn [how to deploy your model for batch scoring](batch-inference/how-to-use-batch-endpoint.md).
-* Learn to [Troubleshoot batch endpoints](batch-inference/how-to-troubleshoot-batch-endpoints.md)
machine-learning How To Deploy Model Custom Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-model-custom-output.md
You can follow along this sample in a Jupyter Notebook. In the cloned repository
## Prerequisites -
-* A model registered in the workspace. In this tutorial, we'll use an MLflow model. Particularly, we are using the *heart condition classifier* created in the tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
-* You must have an endpoint already created. If you don't, follow the instructions at [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md). This example assumes the endpoint is named `heart-classifier-batch`.
-* You must have a compute cluster created on which to create the deployment. If you don't, follow the instructions at [Create compute](how-to-use-batch-endpoint.md#create-compute). This example assumes the name of the compute is `cpu-cluster`.
## Creating a batch deployment with a custom output
Batch Endpoint can only deploy registered models. In this case, we already have
# [Azure CLI](#tab/cli)
-```azurecli
-MODEL_NAME='heart-classifier-sklpipe'
-az ml model create --name $MODEL_NAME --type "custom_model" --path "model"
-```
-# [Python](#tab/sdk)
+# [Python](#tab/python)
-```python
-model_name = 'heart-classifier'
-model = ml_client.models.create_or_update(
- Model(name=model_name, path='model', type=AssetTypes.CUSTOM_MODEL)
-)
-```
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet/custom-output-batch.ipynb?name=register_model)]
### Creating a scoring script
We need to create a scoring script that can read the input data provided by the
__code/batch_driver.py__ __Remarks:__ * Notice how the environment variable `AZUREML_BI_OUTPUT_PATH` is used to get access to the output path of the deployment job.
__Remarks:__
> [!WARNING] > Take into account that all the batch executors will have write access to this path at the same time. This means that you need to account for concurrency. In this case, we are ensuring each executor writes its own file by using the input file name as the name of the output folder.
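To make those remarks concrete, the following is a simplified sketch of what such a scoring script could look like. It assumes the standard `init()`/`run(mini_batch)` contract for batch scoring scripts, an MLflow-logged model, and CSV inputs; unlike the sample, it names the output *file* (rather than an output folder) after the input file, and all paths and names are illustrative:

```python
import os
from pathlib import Path

import mlflow
import pandas as pd

model = None


def init():
    global model
    # AZUREML_MODEL_DIR points at the folder where the registered model is mounted.
    # The "model" subfolder is an assumption about how the model was packaged.
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
    model = mlflow.pyfunc.load_model(model_path)


def run(mini_batch):
    # mini_batch is the list of input file paths assigned to this executor.
    output_path = os.environ["AZUREML_BI_OUTPUT_PATH"]
    processed = []
    for file_path in mini_batch:
        data = pd.read_csv(file_path)
        data["prediction"] = model.predict(data)
        # Name the output after the input file so that concurrent executors
        # never write to the same file (to_parquet requires pyarrow or fastparquet).
        output_file = Path(output_path) / f"{Path(file_path).stem}.parquet"
        data.to_parquet(output_file)
        processed.append(os.path.basename(file_path))
    # Return one entry per processed file so the job can track progress.
    return processed
```
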
+## Creating the endpoint
+
+We are going to create a batch endpoint named `heart-classifier-batch` where we'll deploy the model.
+
+1. Decide on the name of the endpoint. The name of the endpoint will end up in the URI associated with your endpoint. Because of that, __batch endpoint names need to be unique within an Azure region__. For example, there can be only one batch endpoint with the name `mybatchendpoint` in `westus2`.
+
+ # [Azure CLI](#tab/cli)
+
+ In this case, let's place the name of the endpoint in a variable so we can easily reference it later.
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/custom-outputs-parquet/deploy-and-run.sh" ID="name_endpoint" :::
+
+ # [Python](#tab/python)
+
+ In this case, let's place the name of the endpoint in a variable so we can easily reference it later.
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet/custom-output-batch.ipynb?name=name_endpoint)]
+
+1. Configure your batch endpoint
+
+ # [Azure CLI](#tab/cli)
+
+ The following YAML file defines a batch endpoint:
+
+ __endpoint.yml__
+
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/custom-outputs-parquet/endpoint.yml":::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet/custom-output-batch.ipynb?name=configure_endpoint)]
+
+1. Create the endpoint:
+
+ # [Azure CLI](#tab/cli)
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/custom-outputs-parquet/deploy-and-run.sh" ID="create_endpoint" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet/custom-output-batch.ipynb?name=create_endpoint)]
+ ### Creating the deployment Follow the next steps to create a deployment using the previous scoring script:
Follow the next steps to create a deployment using the previous scoring script:
No extra step is required for the Azure Machine Learning CLI. The environment definition will be included in the deployment file.
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/custom-outputs-parquet/deployment.yml" range="6-9":::
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/custom-outputs-parquet/deployment.yml" range="7-10":::
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
Let's get a reference to the environment:
- ```python
- environment = Environment(
- name="batch-mlflow-xgboost",
- conda_file="environment/conda.yaml",
- image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
- )
- ```
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet/custom-output-batch.ipynb?name=configure_environment)]
-2. Create the deployment
+2. Create the deployment. Notice that now `output_action` is set to `SUMMARY_ONLY`.
> [!NOTE]
- > This example assumes you have an endpoint created with the name `heart-classifier-batch` and a compute cluster with name `cpu-cluster`. If you don't, please follow the steps in the doc [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md).
+ > This example assumes you have a compute cluster with the name `batch-cluster`. Change that name accordingly.
# [Azure CLI](#tab/cli) To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/custom-outputs-parquet/deployment.yml":::
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/custom-outputs-parquet/deployment.yml":::
Then, create the deployment with the following command:
- :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/custom-outputs-parquet/deploy-and-run.sh" ID="create_batch_deployment_set_default" :::
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/custom-outputs-parquet/deploy-and-run.sh" ID="create_deployment" :::
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
To create a new deployment under the created endpoint, use the following script:
- ```python
- deployment = BatchDeployment(
- name="classifier-xgboost-parquet",
- description="A heart condition classifier based on XGBoost",
- endpoint_name=endpoint.name,
- model=model,
- environment=environment,
- code_configuration=CodeConfiguration(
- code="code/",
- scoring_script="batch_driver.py",
- ),
- compute=compute_name,
- instance_count=2,
- max_concurrency_per_instance=2,
- mini_batch_size=2,
- output_action=BatchDeploymentOutputAction.SUMMARY_ONLY,
- retry_settings=BatchRetrySettings(max_retries=3, timeout=300),
- logging_level="info",
- )
- ```
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet/custom-output-batch.ipynb?name=configure_deployment)]
Then, create the deployment with the following command:
- ```python
- ml_client.batch_deployments.begin_create_or_update(deployment)
- ```
-
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet/custom-output-batch.ipynb?name=create_deployment)]
- > [!IMPORTANT]
- > Notice that now `output_action` is set to `SUMMARY_ONLY`.
- 3. At this point, our batch endpoint is ready to be used. ## Testing out the deployment For testing our endpoint, we are going to use a sample of unlabeled data located in this repository and that can be used with the model. Batch endpoints can only process data that is located in the cloud and that is accessible from the Azure Machine Learning workspace. In this example, we are going to upload it to an Azure Machine Learning data store. Particularly, we are going to create a data asset that can be used to invoke the endpoint for scoring. However, notice that batch endpoints accept data that can be placed in multiple type of locations.
-1. Let's create the data asset first. This data asset consists of a folder with multiple CSV files that we want to process in parallel using batch endpoints. You can skip this step if your data is already registered as a data asset or you want to use a different input type.
+1. Let's invoke the endpoint with data from a storage account:
# [Azure CLI](#tab/cli)
- Create a data asset definition in `YAML`:
-
- __heart-dataset-unlabeled.yml__
-
- ```yaml
- $schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
- name: heart-dataset-unlabeled
- description: An unlabeled dataset for heart classification.
- type: uri_folder
- path: heart-dataset
- ```
-
- Then, create the data asset:
-
- ```azurecli
- az ml data create -f heart-dataset-unlabeled.yml
- ```
-
- # [Python](#tab/sdk)
-
- ```python
- data_path = "resources/heart-dataset/"
- dataset_name = "heart-dataset-unlabeled"
-
- heart_dataset_unlabeled = Data(
- path=data_path,
- type=AssetTypes.URI_FOLDER,
- description="An unlabeled dataset for heart classification",
- name=dataset_name,
- )
- ```
-
- Then, create the data asset:
-
- ```python
- ml_client.data.create_or_update(heart_dataset_unlabeled)
- ```
-
- To get the newly created data asset, use:
-
- ```python
- heart_dataset_unlabeled = ml_client.data.get(name=dataset_name, label="latest")
- ```
-
-1. Now that the data is uploaded and ready to be used, let's invoke the endpoint:
-
- # [Azure CLI](#tab/cli)
-
- ```azurecli
- JOB_NAME = $(az ml batch-endpoint invoke --name $ENDPOINT_NAME --deployment-name $DEPLOYMENT_NAME --input azureml:heart-dataset-unlabeled@latest | jq -r '.name')
- ```
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/custom-outputs-parquet/deploy-and-run.sh" ID="start_batch_scoring_job" :::
> [!NOTE] > The utility `jq` may not be installed on every installation. You can get instructions in [this link](https://stedolan.github.io/jq/download/).
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
+
+ Configure the inputs:
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet/custom-output-batch.ipynb?name=configure_inputs)]
+
+ Create a job:
- ```python
- input = Input(type=AssetTypes.URI_FOLDER, path=heart_dataset_unlabeled.id)
- job = ml_client.batch_endpoints.invoke(
- endpoint_name=endpoint.name,
- deployment_name=deployment.name,
- input=input,
- )
- ```
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet/custom-output-batch.ipynb?name=start_batch_scoring_job)]
1. A batch job is started as soon as the command returns. You can monitor the status of the job until it finishes: # [Azure CLI](#tab/cli)
- ```azurecli
- az ml job show --name $JOB_NAME
- ```
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/custom-outputs-parquet/deploy-and-run.sh" ID="show_job_in_studio" :::
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
- ```python
- ml_client.jobs.get(job.name)
- ```
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet/custom-output-batch.ipynb?name=get_job)]
## Analyzing the outputs
You can download the results of the job by using the job name:
To download the predictions, use the following command:
-```azurecli
-az ml job download --name $JOB_NAME --output-name score --download-path ./
-```
-# [Python](#tab/sdk)
+# [Python](#tab/python)
-```python
-ml_client.jobs.download(name=job.name, output_name='score', download_path='./')
-```
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet/custom-output-batch.ipynb?name=download_outputs)]
Once the file is downloaded, you can open it using your favorite tool. The following example loads the predictions using `Pandas` dataframe.
-```python
-import pandas as pd
-import glob
-
-output_files = glob.glob("named-outputs/score/*.parquet")
-score = pd.concat((pd.read_parquet(f) for f in output_files))
-```
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet/custom-output-batch.ipynb?name=read_outputs)]
The output looks as follows:
| 67 | 1 | ... | reversible | 0 | | 37 | 1 | ... | normal | 0 |
+## Clean up resources
+
+# [Azure CLI](#tab/cli)
+
+Run the following code to delete the batch endpoint and all the underlying deployments. Batch scoring jobs won't be deleted.
++
+# [Python](#tab/python)
+
+Run the following code to delete the batch endpoint and all the underlying deployments. Batch scoring jobs won't be deleted.
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet/custom-output-batch.ipynb?name=delete_endpoint)]
+++ ## Next steps
machine-learning How To Deploy Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-online-endpoints.md
Learn how to use an online endpoint to deploy your model, so you don't have to c
You'll also learn how to view the logs and monitor the service-level agreement (SLA). You start with a model and end up with a scalable HTTPS/REST endpoint that you can use for real-time scoring.
-Online endpoints are endpoints that are used for real-time inferencing. There are two types of online endpoints: **managed online endpoints** and **Kubernetes online endpoints**. For more information on endpoints and differences between managed online endpoints and Kubernetes online endpoints, see [What are Azure Machine Learning endpoints?](concept-endpoints.md#managed-online-endpoints-vs-kubernetes-online-endpoints).
+Online endpoints are endpoints that are used for real-time inferencing. There are two types of online endpoints: **managed online endpoints** and **Kubernetes online endpoints**. For more information on endpoints and differences between managed online endpoints and Kubernetes online endpoints, see [What are Azure Machine Learning endpoints?](concept-endpoints-online.md#managed-online-endpoints-vs-kubernetes-online-endpoints).
Managed online endpoints help to deploy your ML models in a turnkey manner. Managed online endpoints work with powerful CPU and GPU machines in Azure in a scalable, fully managed way. Managed online endpoints take care of serving, scaling, securing, and monitoring your models, freeing you from the overhead of setting up and managing the underlying infrastructure.
machine-learning How To Deploy With Triton https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-with-triton.md
ms.devlang: azurecli
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-Learn how to use [NVIDIA Triton Inference Server](https://aka.ms/nvidia-triton-docs) in Azure Machine Learning with [online endpoints](concept-endpoints.md#what-are-online-endpoints).
+Learn how to use [NVIDIA Triton Inference Server](https://aka.ms/nvidia-triton-docs) in Azure Machine Learning with [online endpoints](concept-endpoints-online.md).
-Triton is multi-framework, open-source software that is optimized for inference. It supports popular machine learning frameworks like TensorFlow, ONNX Runtime, PyTorch, NVIDIA TensorRT, and more. It can be used for your CPU or GPU workloads. No-code deployment for Triton models is supported in both [managed online endpoints and Kubernetes online endpoints](concept-endpoints.md#managed-online-endpoints-vs-kubernetes-online-endpoints).
+Triton is multi-framework, open-source software that is optimized for inference. It supports popular machine learning frameworks like TensorFlow, ONNX Runtime, PyTorch, NVIDIA TensorRT, and more. It can be used for your CPU or GPU workloads. No-code deployment for Triton models is supported in both [managed online endpoints and Kubernetes online endpoints](concept-endpoints-online.md#managed-online-endpoints-vs-kubernetes-online-endpoints).
-In this article, you will learn how to deploy Triton and a model to a [managed online endpoint](concept-endpoints.md#managed-online-endpoints). Information is provided on using the CLI (command line), Python SDK v2, and Azure Machine Learning studio.
+In this article, you will learn how to deploy Triton and a model to a [managed online endpoint](concept-endpoints-online.md#managed-online-endpoints). Information is provided on using the CLI (command line), Python SDK v2, and Azure Machine Learning studio.
> [!NOTE] > * [NVIDIA Triton Inference Server](https://aka.ms/nvidia-triton-docs) is an open-source third-party software that is integrated in Azure Machine Learning.
machine-learning How To Image Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-image-processing-batch.md
Title: "Image processing with batch deployments"
+ Title: "Image processing with batch model deployments"
description: Learn how to deploy a model in batch endpoints that process images
-# Image processing with batch deployments
+# Image processing with batch model deployments
[!INCLUDE [ml v2](../../includes/machine-learning-dev-v2.md)]
-Batch Endpoints can be used for processing tabular data, but also any other file type like images. Those deployments are supported in both MLflow and custom models. In this tutorial, we will learn how to deploy a model that classifies images according to the ImageNet taxonomy.
+Batch model deployments can process not only tabular data, but also any other file type, like images. These deployments are supported for both MLflow and custom models. In this tutorial, we will learn how to deploy a model that classifies images according to the ImageNet taxonomy.
## About this sample
You can follow along this sample in a Jupyter Notebook. In the cloned repository
## Prerequisites
-* You must have a batch endpoint already created. This example assumes the endpoint is named `imagenet-classifier-batch`. If you don't have one, follow the instructions at [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md).
-* You must have a compute cluster created on which to create the deployment. This example assumes the name of the compute is `cpu-cluster`. If you don't, follow the instructions at [Create compute](how-to-use-batch-endpoint.md#create-compute).
## Image classification with batch deployments
In this example, we are going to learn how to deploy a deep learning model that
First, let's create the endpoint that will host the model:
-# [Azure CLI](#tab/azure-cli)
+# [Azure CLI](#tab/cli)
Decide on the name of the endpoint:
ml_client.batch_endpoints.begin_create_or_update(endpoint)
### Registering the model
-Batch Endpoint can only deploy registered models so we need to register it. You can skip this step if the model you are trying to deploy is already registered.
+Model deployments can only deploy registered models, so we need to register the model first. You can skip this step if the model you are trying to deploy is already registered.
1. Downloading a copy of the model:
Batch Endpoint can only deploy registered models so we need to register it. You
:::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deploy-and-run.sh" ID="download_model" :::
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
```python import os
Batch Endpoint can only deploy registered models so we need to register it. You
az ml model create --name $MODEL_NAME --path "model" ```
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
```python model_name = 'imagenet-classifier'
One the scoring script is created, it's time to create a batch deployment for it
:::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deployment-by-file.yml" range="7-10":::
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
Let's get a reference to the environment:
One the scoring script is created, it's time to create a batch deployment for it
:::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deploy-and-run.sh" ID="create_batch_deployment_set_default" :::
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
To create a new deployment with the indicated environment and scoring script use the following code:
One the scoring script is created, it's time to create a batch deployment for it
az ml batch-endpoint update --name $ENDPOINT_NAME --set defaults.deployment_name=$DEPLOYMENT_NAME ```
- # [Azure Machine Learning SDK for Python](#tab/sdk)
+ # [Azure Machine Learning SDK for Python](#tab/python)
```python endpoint.defaults.deployment_name = deployment.name
For testing our endpoint, we are going to use a sample of 1000 images from the o
:::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deploy-and-run.sh" ID="download_sample_data" :::
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
```python !wget https://azuremlexampledata.blob.core.windows.net/data/imagenet-1000.zip
For testing our endpoint, we are going to use a sample of 1000 images from the o
:::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deploy-and-run.sh" ID="create_sample_data_asset" :::
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
```python data_path = "data"
For testing our endpoint, we are going to use a sample of 1000 images from the o
> [!NOTE] > The utility `jq` may not be installed on every installation. You can get instructions in [this link](https://stedolan.github.io/jq/download/).
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
```python input = Input(type=AssetTypes.URI_FOLDER, path=imagenet_sample.id)
For testing our endpoint, we are going to use a sample of 1000 images from the o
:::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deploy-and-run.sh" ID="show_job_in_studio" :::
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
```python ml_client.jobs.get(job.name)
For testing our endpoint, we are going to use a sample of 1000 images from the o
:::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deploy-and-run.sh" ID="download_scores" :::
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
```python ml_client.jobs.download(name=job.name, output_name='score', download_path='./')
On those cases, we may want to perform inference on the entire batch of data. Th
:::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deploy-and-run.sh" ID="create_batch_deployment_ht" :::
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
To create a new deployment with the indicated environment and scoring script use the following code:
machine-learning How To Import Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-import-data-assets.md
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
-> * [v2](how-to-import-data-assets.md)
+> * [v2 ](how-to-import-data-assets.md)
-In this article, learn how to import data into the Azure Machine Learning platform from external sources. A successful import automatically creates and registers an Azure Machine Learning data asset with the name provided during the import. An Azure Machine Learning data asset resembles a web browser bookmark (favorites). You don't need to remember long storage paths (URIs) that point to your most frequently used data. Instead, you can create a data asset, and then access that asset with a friendly name.
+In this article, learn how to import data into the Azure Machine Learning platform from external sources. A successful import automatically creates and registers an Azure Machine Learning data asset with the name provided during the import. An Azure Machine Learning data asset resembles a web browser bookmark (favorites). You don't need to remember long storage paths (URIs) that point to your most-frequently used data. Instead, you can create a data asset, and then access that asset with a friendly name.
A data import creates a cache of the source data, along with metadata, for faster and more reliable data access in Azure Machine Learning training jobs. The data cache avoids network and connection constraints. The cached data is versioned to support reproducibility (which provides versioning capabilities for data imported from SQL Server sources). Additionally, the cached data provides data lineage for auditability. A data import uses Azure Data Factory (ADF) pipelines behind the scenes, which means that users can avoid complex interactions with ADF. Azure Machine Learning also handles management of ADF compute resource pool size, compute resource provisioning, and tear-down to optimize data transfer by determining proper parallelization.
-The transferred data is partitioned and securely stored as parquet files in Azure storage, to enable faster processing during training. ADF compute costs only involve the time used for data transfers. Storage costs only involve the time needed to cache the data, because cached data is a copy of the data imported from an external source. That external source is hosted in Azure storage.
+The transferred data is partitioned and securely stored as parquet files in Azure storage. This enables faster processing during training. ADF compute costs only involve the time used for data transfers. Storage costs only involve the time needed to cache the data, because cached data is a copy of the data imported from an external source. That external source is hosted in Azure storage.
-The caching feature involves upfront compute and storage costs. However, it pays for itself, and can save money, because it reduces recurring training compute costs, compared to direct connections to external source data during training. It caches data as parquet files, which makes job training faster and more reliable against connection timeouts for larger data sets. Additionally, the caching feature leads to fewer reruns, and fewer training failures.
-
-Customers who want the "auto-deletion" of unused imported data assets can now choose to import data into "workspacemanageddatastore," also known as "workspacemanagedstore". Microsoft manages this datastore on behalf of the customer and provides the convenience of automatic data management on certain conditions like - last used time or created time. By default, all the data assets imported into the workspace-managed datastore have an auto-delete setting configured to "not used for 30 days". If a data asset isn't used for 30 days, it will automatically delete. Within that time, you can edit the "auto-delete" settings in the imported data asset. You can increase or decrease the duration (number of days), or you can change the "condition". As of now, created time and unused time are the two conditions supported. If you chose to work with a "managed datastore", you must only point the `path` on your data import to `azureml://datastores/workspacemanagedstore`, and Azure Machine Learning will create one for you. The managed datastore costs the same as a regular ADLS Gen2 datastore, which charges by the amount of data that is stored in it. However, the managed datastore offers the benefit of data management.
-
-> [!NOTE]
-> - There will be only one `workspacemanagedstore` per workspace that would be created
-> - The managed datastore backfills, or is automatic, when the first import job that refers to the managed datastore is submitted.
-> - Users cannot create a `workspacemanagedstore` using any datastore APIs or methods.
-> - In the import definition, users must refer to the managed datastore in this way: `path: azureml://datastores/manageddatastore`. The system automatically assigns a unique path for storage of the imported data. Unlike customer-owned datastores, or a workspace default blobstore, there is no need to provide the entire path where you want to import data.
-> - Currently, the path on the `workspacemanagedstore` can be accessed only by data import service, and `workspacemanagedstore` cannot be given as a destination in any other process or step
-> - The data path in the `workspacemanagedstore` can be accessed only by AzureML service
-> - To access data from the `workspacemanagedstore`, reference the data asset name and version, similar to any other data asset in your jobs or scripts, or processes submitted to AzureML. AzureML knows how to read data from managed datastore.
+The caching feature involves upfront compute and storage costs. However, it pays for itself, and can save money, because it reduces recurring training compute costs compared to direct connections to external source data during training. It caches data as parquet files, which makes job training faster and more reliable against connection timeouts for larger data sets. This leads to fewer reruns, and fewer training failures.
You can now import data from Snowflake, Amazon S3 and Azure SQL.
To create and work with data assets, you need:
* [Workspace connections created](how-to-connection.md) > [!NOTE]
-> For a successful data import, please verify that you have installed the latest Azure-ai-ml package (version 1.5.0 or later) for SDK, and the ml extension (version 2.15.1 or later).
+> For a successful data import, please verify that you have installed the latest azure-ai-ml package (version 1.5.0 or later) for SDK, and the ml extension (version 2.15.1 or later).
>
-> If you have an older SDK package or CLI extension, please remove the old one and install the new one with the code shown in the tab section. Follow these instructions for SDK and CLI:
+> If you have an older SDK package or CLI extension, please remove the old one and install the new one with the code shown in the tab section. Follow the instructions for SDK and CLI below:
### Code versions
Create a `YAML` file `<file-name>.yml`:
$schema: http://azureml/sdk-2-0/DataImport.json # Supported connections include: # Connection: azureml:<workspace_connection_name>
-# Supported "paths" include either on regular datastore or managed datastore as shown below:
-# path: azureml://datastores/<data_store_name>/paths/<my_path>/${{name}}
-# or path: azureml://datastores/workspacemanagedstore
+# Supported paths include:
+# Datastore: azureml://datastores/<data_store_name>/paths/<my_path>/${{name}}
type: mltable
from azure.ai.ml import MLClient
# Supported connections include: # Connection: azureml:<workspace_connection_name>
-# Supported "paths" include either on regular datastore or managed datastore as shown below:
+# Supported paths include:
# path: azureml://datastores/<data_store_name>/paths/<my_path>/${{name}}
-# or path: azureml://datastores/workspacemanagedstore
ml_client = MLClient.from_config()
Create a `YAML` file `<file-name>.yml`:
$schema: http://azureml/sdk-2-0/DataImport.json # Supported connections include: # Connection: azureml:<workspace_connection_name>
-# Supported "paths" include either on regular datastore or managed datastore as shown below:
+# Supported paths include:
# path: azureml://datastores/<data_store_name>/paths/<my_path>/${{name}}
-# or path: azureml://datastores/workspacemanagedstore
type: uri_folder
from azure.ai.ml import MLClient
# Supported connections include: # Connection: azureml:<workspace_connection_name>
-# Supported "paths" include either on regular datastore or managed datastore as shown below:
+# Supported paths include:
# path: azureml://datastores/<data_store_name>/paths/<my_path>/${{name}}
-# or path: azureml://datastores/workspacemanagedstore
ml_client = MLClient.from_config()
ml_client.data.import_data(data_import=data_import)
## Check the import status of external data sources
-The data import action is an asynchronous action. It can take a long time. After submission of an import data action via the CLI or SDK, the Azure Machine Learning service might need several minutes to connect to the external data source. Then the service would start the data import and handle data caching and registration. The times needed for a data import also depends on the size of the source data set.
+The data import action is an asynchronous action. It can take a long time. After submission of an import data action via the CLI or SDK, the Azure Machine Learning service might need several minutes to connect to the external data source. Then the service would start the data import and handle data caching and registration. The time needed for a data import also depends on the size of the source data set.
The next example returns the status of the submitted data import activity. The command or method uses the "data asset" name as the input to determine the status of the data materialization. # [Azure CLI](#tab/cli) + ```cli > az ml data list-materialization-status --name <name> ```
ml_client.data.show_materialization_status(name="<name>")
-## Additional Capabilities
-
-- [Import from external data sources on a schedule (preview)](reference-yaml-schedule.md)
-- [Edit auto-delete settings on imported data asset](how-to-manage-imported-data-assets.md)
-
## Next steps
- [Read data in a job](how-to-read-write-data-v2.md#read-data-in-a-job)
machine-learning How To Manage Imported Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-imported-data-assets.md
- Title: Manage imported data assets (preview)-
-description: Learn how to manage imported data assets also known as edit auto-deletion.
------- Previously updated : 04/30/2023---
-# Manage imported data assets (preview)
-
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
-> * [v2](how-to-import-data-assets.md)
-
-In this article, learn how to manage imported data assets from a life-cycle point of view. You learn how to modify or update the auto-delete settings on data assets imported into a managed datastore (`workspacemanagedstore`) that Microsoft manages for you.
-
-> [!NOTE]
-> The auto-delete settings (lifecycle management) capability is currently offered only on data assets imported into the managed datastore, also known as `workspacemanagedstore`.
--
-## Modifying auto delete settings
-
-You can change the auto-delete setting value or condition:
-# [Azure CLI](#tab/cli)
-
-```cli
-> az ml data update -n <my_imported_ds> -v <version_number> --set auto_delete_setting.value='45d'
-
-> az ml data update -n <my_imported_ds> -v <version_number> --set auto_delete_setting.condition='created_greater_than'
-
-```
-
-# [Python SDK](#tab/Python-SDK)
-```python
-from azure.ai.ml.entities import Data, AutoDeleteSetting  # AutoDeleteSetting import path assumed; adjust for your SDK version
-from azure.ai.ml.constants import AssetTypes
-
-# ml_client is an existing, authenticated MLClient for the workspace.
-name='<my_imported_ds>'
-version='<version_number>'
-type='mltable'
-auto_delete_setting = AutoDeleteSetting(
-    condition='created_greater_than', value='45d'
-)
-my_data=Data(name=name,version=version,type=type, auto_delete_setting=auto_delete_setting)
-
-ml_client.data.create_or_update(my_data)
-
-```
---
-## Deleting/removing auto delete settings
-
-You can remove a previously configured auto-delete setting.
-
-# [Azure CLI](#tab/cli)
-
-```cli
-> az ml data update -n <my_imported_ds> -v <version_number> --remove auto_delete_setting
--
-```
-
-# [Python SDK](#tab/Python-SDK)
-```python
-from azure.ai.ml.entities import Data
-from azure.ai.ml.constants import AssetTypes
-
-name='<my_imported_ds>'
-version='<version_number>'
-type='mltable'
-
-my_data=Data(name=name,version=version,type=type, auto_delete_setting=None)
-
-ml_client.data.create_or_update(my_data)
-
-```
---
-## Query on the configured auto delete settings
-
-You can view and list the data assets with certain conditions or with values configured in the "auto-delete" settings, as shown in this Azure CLI code sample:
-
-```cli
-> az ml data list --query '[?auto_delete_setting.\"condition\"==''created_greater_than'']'
-
-> az ml data list --query '[?auto_delete_setting.\"value\"==''30d'']'
-```
-
-## Next steps
-
-- [Read data in a job](how-to-read-write-data-v2.md#read-data-in-a-job)
-- [Working with tables in Azure Machine Learning](how-to-mltable.md)
-- [Access data from Azure cloud storage during interactive development](how-to-access-data-interactive.md)
machine-learning How To Mlflow Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-mlflow-batch.md
Title: "Using MLflow models in batch deployments"
+ Title: Deploy MLflow models in batch deployments
description: Learn how to deploy MLflow models in batch deployments
-# Use MLflow models in batch deployments
+# Deploy MLflow models in batch deployments
[!INCLUDE [cli v2](../../includes/machine-learning-dev-v2.md)]
-In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model to Azure Machine Learning for both batch inference using batch endpoints. Azure Machine Learning supports no-code deployment of models created and logged with MLflow. This means that you don't have to provide a scoring script or an environment.
-
-For no-code-deployment, Azure Machine Learning
+In this article, learn how to deploy [MLflow](https://www.mlflow.org) models to Azure Machine Learning for batch inference using batch endpoints. When deploying MLflow models to batch endpoints, Azure Machine Learning:
* Provides a MLflow base image/curated environment that contains the required dependencies to run an Azure Machine Learning Batch job. * Creates a batch job pipeline with a scoring script for you that can be used to process data using parallelization. > [!NOTE]
-> For more information about the supported file types in batch endpoints with MLflow, view [Considerations when deploying to batch inference](#considerations-when-deploying-to-batch-inference).
+> For more information about the supported input file types in model deployments with MLflow, view [Considerations when deploying to batch inference](#considerations-when-deploying-to-batch-inference).
## About this example
This example shows how you can deploy an MLflow model to a batch endpoint to per
The model has been trained using an `XGBoost` classifier and all the required preprocessing has been packaged as a `scikit-learn` pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
-The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to the `cli/endpoints/batch/deploy-models/heart-classifier-mlflow` if you are using the Azure CLI or `sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow` if you are using our SDK for Python.
+
+The files for this example are in:
```azurecli
-git clone https://github.com/Azure/azureml-examples --depth 1
-cd azureml-examples/cli/endpoints/batch/deploy-models/heart-classifier-mlflow
+cd endpoints/batch/deploy-models/heart-classifier-mlflow
``` ### Follow along in Jupyter Notebooks
You can follow along this sample in the following notebooks. In the cloned repos
## Prerequisites -
-* You must have a MLflow model. If your model is not in MLflow format and you want to use this feature, you can [convert your custom ML model to MLflow format](how-to-convert-custom-model-to-mlflow.md).
## Steps Follow these steps to deploy an MLflow model to a batch endpoint for running batch inference over new data:
-1. First, let's connect to Azure Machine Learning workspace where we are going to work on.
-
- # [Azure CLI](#tab/cli)
-
- ```azurecli
- az account set --subscription <subscription>
- az configure --defaults workspace=<workspace> group=<resource-group> location=<location>
- ```
-
- # [Python](#tab/sdk)
-
- The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which you'll perform deployment tasks.
-
- 1. Import the required libraries:
-
- ```python
- from azure.ai.ml import MLClient, Input
- from azure.ai.ml.entities import BatchEndpoint, BatchDeployment, Model, AmlCompute, Data, BatchRetrySettings
- from azure.ai.ml.constants import AssetTypes, BatchDeploymentOutputAction
- from azure.identity import DefaultAzureCredential
- ```
-
- 2. Configure workspace details and get a handle to the workspace:
-
- ```python
- subscription_id = "<subscription>"
- resource_group = "<resource-group>"
- workspace = "<workspace>"
-
- ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace)
- ```
-- 1. Batch Endpoint can only deploy registered models. In this case, we already have a local copy of the model in the repository, so we only need to publish the model to the registry in the workspace. You can skip this step if the model you are trying to deploy is already registered.
- # [Azure CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
- ```azurecli
- MODEL_NAME='heart-classifier'
- az ml model create --name $MODEL_NAME --type "mlflow_model" --path "model"
- ```
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/deploy-and-run.sh" ID="register_model" :::
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
- ```python
- model_name = 'heart-classifier'
- model_local_path = "heart-classifier-mlflow/model"
- model = ml_client.models.create_or_update(
- Model(name=model_name, path=model_local_path, type=AssetTypes.MLFLOW_MODEL)
- )
- ```
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb?name=register_model)]
1. Before moving forward, we need to make sure the batch deployments we are about to create can run on some infrastructure (compute). Batch deployments can run on any Azure Machine Learning compute that already exists in the workspace. That means that multiple batch deployments can share the same compute infrastructure. In this example, we are going to work on an Azure Machine Learning compute cluster called `cpu-cluster`. Let's verify that the compute exists in the workspace, or create it otherwise.
- # [Azure CLI](#tab/cli)
-
- Create a compute definition `YAML` like the following one:
-
- __cpu-cluster.yml__
-
- ```yaml
- $schema: https://azuremlschemas.azureedge.net/latest/amlCompute.schema.json
- name: cluster-cpu
- type: amlcompute
- size: STANDARD_DS3_v2
- min_instances: 0
- max_instances: 2
- idle_time_before_scale_down: 120
- ```
+ # [Azure CLI](#tab/cli)
- Create the compute using the following command:
+ Create a compute cluster as follows:
- ```azurecli
- az ml compute create -f cpu-cluster.yml
- ```
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/deploy-and-run.sh" ID="create_compute" :::
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
- To create a new compute cluster where to create the deployment, use the following script:
+ To create a new compute cluster on which to create the deployment, use the following script:
- ```python
- compute_name = "cpu-cluster"
- if not any(filter(lambda m : m.name == compute_name, ml_client.compute.list())):
- compute_cluster = AmlCompute(name=compute_name, description="amlcompute", min_instances=0, max_instances=2)
- ml_client.begin_create_or_update(compute_cluster)
- ```
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb?name=create_compute)]
1. Now it is time to create the batch endpoint and deployment. Let's start with the endpoint first. Endpoints only require a name and a description to be created. The name of the endpoint will end up in the URI associated with your endpoint. Because of that, __batch endpoint names need to be unique within an Azure region__. For example, there can be only one batch endpoint with the name `mybatchendpoint` in `westus2`.
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
In this case, let's place the name of the endpoint in a variable so we can easily reference it later.
- ```azurecli
- ENDPOINT_NAME="heart-classifier-batch"
- ```
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/deploy-and-run.sh" ID="name_endpoint" :::
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
In this case, let's place the name of the endpoint in a variable so we can easily reference it later.
- ```python
- endpoint_name="heart-classifier-batch"
- ```
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb?name=name_endpoint)]
1. Create the endpoint:
- # [Azure CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
- To create a new endpoint, create a `YAML` configuration like the following:
+ To create a new endpoint, create a `YAML` configuration like the following:
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/endpoint.yml" :::
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/endpoint.yml" :::
- Then, create the endpoint with the following command:
+ Then, create the endpoint with the following command:
- :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/deploy-and-run.sh" ID="create_batch_endpoint" :::
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/deploy-and-run.sh" ID="create_endpoint" :::
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
- To create a new endpoint, use the following script:
+ To create a new endpoint, use the following script:
- ```python
- endpoint = BatchEndpoint(
- name=endpoint_name,
- description="A heart condition classifier for batch inference",
- )
- ```
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb?name=configure_endpoint)]
- Then, create the endpoint with the following command:
+ Then, create the endpoint with the following command:
- ```python
- ml_client.batch_endpoints.begin_create_or_update(endpoint)
- ```
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb?name=create_endpoint)]
5. Now, let's create the deployment. MLflow models don't require you to indicate an environment or a scoring script when creating the deployment, as they're created for you. However, you can specify them if you want to customize how the deployment does inference.
- # [Azure CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
- To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
+ To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/deployment-simple/deployment.yml" :::
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/deployment-simple/deployment.yml" :::
- Then, create the deployment with the following command:
+ Then, create the deployment with the following command:
- :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/deploy-and-run.sh" ID="create_batch_deployment_set_default" :::
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/deploy-and-run.sh" ID="create_deployment" :::
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
- To create a new deployment under the created endpoint, first define the deployment:
+ To create a new deployment under the created endpoint, first define the deployment:
- ```python
- deployment = BatchDeployment(
- name="classifier-xgboost-mlflow",
- description="A heart condition classifier based on XGBoost",
- endpoint_name=endpoint.name,
- model=model,
- compute=compute_name,
- instance_count=2,
- max_concurrency_per_instance=2,
- mini_batch_size=2,
- output_action=BatchDeploymentOutputAction.APPEND_ROW,
- output_file_name="predictions.csv",
- retry_settings=BatchRetrySettings(max_retries=3, timeout=300),
- logging_level="info",
- )
- ```
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb?name=configure_deployment)]
- Then, create the deployment with the following command:
+ Then, create the deployment with the following command:
- ```python
- ml_client.batch_deployments.begin_create_or_update(deployment)
- ```
-
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb?name=create_deployment)]
+
+
- > [!NOTE]
- > Batch deployments only support deploying MLflow models with a `pyfunc` flavor. To use a different flavor, see [Customizing MLflow models deployments with a scoring script](#customizing-mlflow-models-deployments-with-a-scoring-script)..
+ > [!NOTE]
+ > Batch deployments only support deploying MLflow models with a `pyfunc` flavor. To use a different flavor, see [Customizing MLflow models deployments with a scoring script](#customizing-mlflow-models-deployments-with-a-scoring-script).
6. Although you can invoke a specific deployment inside of an endpoint, you will usually want to invoke the endpoint itself and let the endpoint decide which deployment to use. Such deployment is named the "default" deployment. This gives you the possibility of changing the default deployment and hence changing the model serving the deployment without changing the contract with the user invoking the endpoint. Use the following instruction to update the default deployment:
- # [Azure CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
- :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/deploy-and-run.sh" ID="update_default_deployment" :::
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/deploy-and-run.sh" ID="set_default_deployment" :::
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
- ```python
- endpoint = ml_client.batch_endpoints.get(endpoint.name)
- endpoint.defaults.deployment_name = deployment.name
- ml_client.batch_endpoints.begin_create_or_update(endpoint)
- ```
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb?name=set_default_deployment)]
7. At this point, our batch endpoint is ready to be used.
For testing our endpoint, we are going to use a sample of unlabeled data located
1. Let's create the data asset first. This data asset consists of a folder with multiple CSV files that we want to process in parallel using batch endpoints. You can skip this step if your data is already registered as a data asset or you want to use a different input type.
- # [Azure CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
- a. Create a data asset definition in `YAML`:
+ a. Create a data asset definition in `YAML`:
- __heart-dataset-unlabeled.yml__
+ __heart-dataset-unlabeled.yml__
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/heart-dataset-unlabeled.yml" :::
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/heart-dataset-unlabeled.yml" :::
- b. Create the data asset:
+ b. Create the data asset:
- :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/deploy-and-run.sh" ID="register_dataset" :::
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/deploy-and-run.sh" ID="create_data_asset" :::
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
- a. Create a data asset definition:
+ a. Create a data asset definition:
- ```python
- data_path = "data"
- dataset_name = "heart-dataset-unlabeled"
-
- heart_dataset_unlabeled = Data(
- path=data_path,
- type=AssetTypes.URI_FOLDER,
- description="An unlabeled dataset for heart classification",
- name=dataset_name,
- )
- ```
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb?name=configure_data_asset)]
- b. Create the data asset:
+ b. Create the data asset:
- ```python
- ml_client.data.create_or_update(heart_dataset_unlabeled)
- ```
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb?name=create_data_asset)]
- c. Refresh the object to reflect the changes:
+ c. Refresh the object to reflect the changes:
- ```python
- heart_dataset_unlabeled = ml_client.data.get(name=dataset_name, label="latest")
- ```
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb?name=get_data_asset)]
2. Now that the data is uploaded and ready to be used, let's invoke the endpoint:
- # [Azure CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
- :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/deploy-and-run.sh" ID="start_batch_scoring_job" :::
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/deploy-and-run.sh" ID="start_batch_scoring_job" :::
- > [!NOTE]
- > The utility `jq` may not be installed on every installation. You can get installation instructions in [this link](https://stedolan.github.io/jq/download/).
+ > [!NOTE]
+ > The utility `jq` may not be installed on every installation. You can get installation instructions in [this link](https://stedolan.github.io/jq/download/).
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
- ```python
- input = Input(type=AssetTypes.URI_FOLDER, path=heart_dataset_unlabeled.id)
- job = ml_client.batch_endpoints.invoke(
- endpoint_name=endpoint.name,
- input=input,
- )
- ```
-
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb?name=start_batch_scoring_job)]
+
+
- > [!TIP]
- > Notice how we are not indicating the deployment name in the invoke operation. That's because the endpoint automatically routes the job to the default deployment. Since our endpoint only has one deployment, then that one is the default one. You can target an specific deployment by indicating the argument/parameter `deployment_name`.
+ > [!TIP]
+ > Notice how we're not indicating the deployment name in the invoke operation. That's because the endpoint automatically routes the job to the default deployment. Since our endpoint only has one deployment, that one is the default. You can target a specific deployment by indicating the argument/parameter `deployment_name`.
3. A batch job is started as soon as the command returns. You can monitor the status of the job until it finishes:
- # [Azure CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
- :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/deploy-and-run.sh" ID="show_job_in_studio" :::
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/deploy-and-run.sh" ID="show_job_in_studio" :::
- # [Python](#tab/sdk)
+ # [Python](#tab/python)
- ```python
- ml_client.jobs.get(job.name)
- ```
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb?name=get_job)]
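For reference, the invoke-and-monitor flow in those cells corresponds roughly to this sketch, reusing the `endpoint` and `heart_dataset_unlabeled` objects from the previous steps:

```python
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

# Use the registered data asset as the job input.
input = Input(type=AssetTypes.URI_FOLDER, path=heart_dataset_unlabeled.id)

# Invoke the endpoint; the job runs on the default deployment.
job = ml_client.batch_endpoints.invoke(
    endpoint_name=endpoint.name,
    input=input,
)

# Check the job status; rerun until it reaches a terminal state.
print(ml_client.jobs.get(job.name).status)
```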
## Analyzing the outputs
Output predictions are generated in the `predictions.csv` file as indicated in the deployment configuration.
The file is structured as follows:

* There is one row for each data point that was sent to the model. For tabular data, this means that one row is generated for each row in the input files, and hence the number of rows in the generated file (`predictions.csv`) equals the sum of all the rows in all the processed files. For other data types, there is one row per processed file.
* Two columns are indicated:
- * The file name where the data was read from. In tabular data, use this field to know which prediction belongs to which input data. For any given file, predictions are returned in the same order they appear in the input file so you can rely on the row number to match the corresponding prediction.
- * The prediction associated with the input data. This value is returned "as-is" it was provided by the model's `predict().` function.
+
+ * The file name where the data was read from. In tabular data, use this field to know which prediction belongs to which input data. For any given file, predictions are returned in the same order they appear in the input file so you can rely on the row number to match the corresponding prediction.
+ * The prediction associated with the input data. This value is returned as-is, exactly as it was provided by the model's `predict()` function.
You can download the results of the job by using the job name:
To download the predictions, use the following command:
-# [Python](#tab/sdk)
+# [Python](#tab/python)
-```python
-ml_client.jobs.download(name=job.name, output_name='score', download_path='./')
-```
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb?name=download_outputs)]
Once the file is downloaded, you can open it using your favorite tool. The following example loads the predictions into a `pandas` DataFrame.
-```python
-import pandas as pd
-from ast import literal_eval
-
-with open('named-outputs/score/predictions.csv', 'r') as f:
- pd.DataFrame(literal_eval(f.read().replace('\n', ',')), columns=['file', 'prediction'])
-```
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb?name=read_outputs)]
> [!WARNING]
> The file `predictions.csv` may not be a regular CSV file and can't be read correctly using the `pandas.read_csv()` method.
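One way to read it is to parse the file as Python literals, which is the approach the sample uses; a sketch, assuming the output was downloaded to `named-outputs/score/`:

```python
import pandas as pd
from ast import literal_eval

# Parse the whole file as a sequence of (file, prediction) tuples.
with open("named-outputs/score/predictions.csv", "r") as f:
    predictions = pd.DataFrame(
        literal_eval(f.read().replace("\n", ",")),
        columns=["file", "prediction"],
    )

print(predictions.head())
```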
The following data types are supported for batch inference when deploying MLflow models:
| File extension | Type returned as model's input | Signature requirement |
| :- | :- | :- |
-| `.csv`, `.parquet` | `pd.DataFrame` | `ColSpec`. If not provided, columns typing is not enforced. |
+| `.csv`, `.parquet`, `.pqt` | `pd.DataFrame` | `ColSpec`. If not provided, columns typing is not enforced. |
| `.png`, `.jpg`, `.jpeg`, `.tiff`, `.bmp`, `.gif` | `np.ndarray` | `TensorSpec`. Input is reshaped to match tensors shape if available. If no signature is available, tensors of type `np.uint8` are inferred. For additional guidance read [Considerations for MLflow models processing images](how-to-image-processing-batch.md#considerations-for-mlflow-models-processing-images). |

> [!WARNING]
-> Be advised that any unsupported file that may be present in the input data will make the job to fail. You will see an error entry as follows: *"ERROR:azureml:Error processing input file: '/mnt/batch/tasks/.../a-given-file.parquet'. File type 'parquet' is not supported."*.
+> Be advised that any unsupported file present in the input data will make the job fail. You'll see an error entry as follows: *"ERROR:azureml:Error processing input file: '/mnt/batch/tasks/.../a-given-file.avro'. File type 'avro' is not supported."*.
> [!TIP]
> If you'd like to process a different file type, or execute inference in a different way than batch endpoints do by default, you can always create the deployment with a scoring script as explained in [Using MLflow models with a scoring script](#customizing-mlflow-models-deployments-with-a-scoring-script).
Use the following steps to deploy an MLflow model with a custom scoring script.
__deployment-custom/code/batch_driver.py__
- :::code language="python" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/deployment-custom/code/batch_driver.py" :::
+ :::code language="python" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/deployment-custom/code/batch_driver.py" :::
1. Let's create an environment where the scoring script can be executed. Since our model is an MLflow model, the conda requirements are also specified in the model package (for more details about MLflow models and the files included in them, see [The MLmodel format](concept-mlflow-models.md#the-mlmodel-format)). We're going to build the environment using the conda dependencies from the file. However, __we also need to include__ the package `azureml-core`, which is required for Batch Deployments.
- > [!TIP]
- > If your model is already registered in the model registry, you can download/copy the `conda.yml` file associated with your model by going to [Azure Machine Learning studio](https://ml.azure.com) > Models > Select your model from the list > Artifacts. Open the root folder in the navigation and select the `conda.yml` file listed. Click on Download or copy its content.
+ > [!TIP]
+ > If your model is already registered in the model registry, you can download/copy the `conda.yml` file associated with your model by going to [Azure Machine Learning studio](https://ml.azure.com) > Models > Select your model from the list > Artifacts. Open the root folder in the navigation and select the `conda.yml` file listed. Click on Download or copy its content.
- > [!IMPORTANT]
- > This example uses a conda environment specified at `/heart-classifier-mlflow/environment/conda.yaml`. This file was created by combining the original MLflow conda dependencies file and adding the package `azureml-core`. __You can't use the `conda.yml` file from the model directly__.
+ > [!IMPORTANT]
+ > This example uses a conda environment specified at `/heart-classifier-mlflow/environment/conda.yaml`. This file was created by combining the original MLflow conda dependencies file and adding the package `azureml-core`. __You can't use the `conda.yml` file from the model directly__.
- # [Azure CLI](#tab/cli)
-
- No extra step is required for the Azure Machine Learning CLI. The environment definition will be included in the deployment file.
+ # [Azure CLI](#tab/cli)
- # [Python](#tab/sdk)
+ The environment definition will be included in the deployment definition itself as an anonymous environment. You'll see it in the following lines in the deployment:
+
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/deployment-custom/deployment.yml" range="7-10":::
- Let's get a reference to the environment:
+ # [Python](#tab/python)
- ```python
- environment = Environment(
- name="batch-mlflow-xgboost",
- conda_file="deployment-custom/environment/conda.yaml",
- image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
- )
- ```
+ Let's get a reference to the environment:
-1. Let's create the deployment now:
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb?name=configure_environment_custom)]
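The referenced cell boils down to something like the following sketch; the conda file path and base image are the ones this example has used, so adjust them to your repository layout:

```python
from azure.ai.ml.entities import Environment

# Conda file that combines the model's MLflow dependencies with azureml-core.
environment = Environment(
    name="batch-mlflow-xgboost",
    conda_file="deployment-custom/environment/conda.yaml",
    image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
)
```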
- # [Azure CLI](#tab/cli)
+1. Configure the deployment:
+
+ # [Azure CLI](#tab/cli)
- To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
+ To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/deployment-custom/deployment.yml" :::
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/deployment-custom/deployment.yml" :::
- Then, create the deployment with the following command:
+ # [Python](#tab/python)
- :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/deploy-and-run.sh" ID="create_new_deployment_not_default" :::
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb?name=configure_deployment_custom)]
+
+1. Let's create the deployment now:
+
+ # [Azure CLI](#tab/cli)
- # [Python](#tab/sdk)
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/deploy-and-run.sh" ID="create_deployment_non_default" :::
- To create a new deployment under the created endpoint, use the following script:
+ # [Python](#tab/python)
- ```python
- deployment = BatchDeployment(
- name="classifier-xgboost-custom",
- description="A heart condition classifier based on XGBoost",
- endpoint_name=endpoint.name,
- model=model,
- environment=environment,
- code_configuration=CodeConfiguration(
- code="deployment-custom/code/",
- scoring_script="batch_driver.py",
- ),
- compute=compute_name,
- instance_count=2,
- max_concurrency_per_instance=2,
- mini_batch_size=2,
- output_action=BatchDeploymentOutputAction.APPEND_ROW,
- output_file_name="predictions.csv",
- retry_settings=BatchRetrySettings(max_retries=3, timeout=300),
- logging_level="info",
- )
- ml_client.batch_deployments.begin_create_or_update(deployment)
- ```
-
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb?name=create_deployment_custom)]
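For reference, the deployment those cells configure and create looks roughly like this sketch, where `model`, `environment`, and `compute_name` come from the earlier steps:

```python
from azure.ai.ml.entities import (
    BatchDeployment,
    BatchRetrySettings,
    CodeConfiguration,
)
from azure.ai.ml.constants import BatchDeploymentOutputAction

deployment = BatchDeployment(
    name="classifier-xgboost-custom",
    description="A heart condition classifier based on XGBoost",
    endpoint_name=endpoint.name,
    model=model,
    environment=environment,
    code_configuration=CodeConfiguration(
        code="deployment-custom/code/",
        scoring_script="batch_driver.py",
    ),
    compute=compute_name,
    instance_count=2,
    max_concurrency_per_instance=2,
    mini_batch_size=2,
    output_action=BatchDeploymentOutputAction.APPEND_ROW,
    output_file_name="predictions.csv",
    retry_settings=BatchRetrySettings(max_retries=3, timeout=300),
    logging_level="info",
)

ml_client.batch_deployments.begin_create_or_update(deployment)
```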
1. At this point, our batch endpoint is ready to be used.
+## Clean up resources
+
+# [Azure CLI](#tab/cli)
+
+Run the following code to delete the batch endpoint and all the underlying deployments. Batch scoring jobs won't be deleted.
++
+# [Python](#tab/python)
+
+Run the following code to delete the batch endpoint and all the underlying deployments. Batch scoring jobs won't be deleted.
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb?name=delete_endpoint)]
+++
## Next steps

* [Customize outputs in batch deployments](how-to-deploy-model-custom-output.md)
machine-learning How To Nlp Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-nlp-processing-batch.md
Title: "Text processing with batch deployments"
+ Title: "Text processing with batch endpoints"
description: Learn how to use batch deployments to process text and output results.
-# Text processing with batch deployments
+# Deploy language models in batch endpoints
[!INCLUDE [cli v2](../../includes/machine-learning-dev-v2.md)]
-Batch Endpoints can be used for processing tabular data that contain text. Those deployments are supported in both MLflow and custom models. In this tutorial we will learn how to deploy a model that can perform text summarization of long sequences of text using a model from HuggingFace.
+Batch Endpoints can be used to deploy expensive models, like language models, over text data. In this tutorial, you'll learn how to deploy a model from HuggingFace that can perform text summarization of long sequences of text.
## About this sample
The model we are going to work with was built using the popular library transformers.
* It is trained for summarization of text in English.
* We are going to use Torch as a backend.
-The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to the [`cli/endpoints/batch/deploy-models/huggingface-text-summarization`](https://github.com/azure/azureml-examples/tree/main/cli/endpoints/batch/deploy-models/huggingface-text-summarization) if you are using the Azure CLI or [`sdk/python/endpoints/batch/deploy-models/huggingface-text-summarization`](https://github.com/azure/azureml-examples/tree/main/sdk/python/endpoints/batch/deploy-models/huggingface-text-summarization) if you are using our SDK for Python.
-# [Azure CLI](#tab/cli)
+The files for this example are in:
```azurecli
-git clone https://github.com/Azure/azureml-examples --depth 1
-cd azureml-examples/cli/endpoints/batch/deploy-models/huggingface-text-summarization
-```
-
-# [Python](#tab/python)
-
-In a Jupyter notebook:
-
-```python
-!git clone https://github.com/Azure/azureml-examples --depth 1
-!cd azureml-examples/sdk/python/endpoints/batch/deploy-models/huggingface-text-summarization
+cd endpoints/batch/deploy-models/huggingface-text-summarization
```

### Follow along in Jupyter Notebooks

You can follow along with this sample in a Jupyter Notebook. In the cloned repository, open the notebook: [text-summarization-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/deploy-models/huggingface-text-summarization/text-summarization-batch.ipynb).

## Prerequisites
-### Connect to your workspace
-
-First, let's connect to Azure Machine Learning workspace where we're going to work on.
-
-# [Azure CLI](#tab/cli)
-
-```azurecli
-az account set --subscription <subscription>
-az configure --defaults workspace=<workspace> group=<resource-group> location=<location>
-```
-
-# [Python](#tab/python)
-
-The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which you'll perform deployment tasks.
-
-1. Import the required libraries:
-
-```python
-from azure.ai.ml import MLClient, Input
-from azure.ai.ml.entities import BatchEndpoint, BatchDeployment, Model, AmlCompute, Data, BatchRetrySettings
-from azure.ai.ml.constants import AssetTypes, BatchDeploymentOutputAction
-from azure.identity import DefaultAzureCredential
-```
-
-2. Configure workspace details and get a handle to the workspace:
-
-```python
-subscription_id = "<subscription>"
-resource_group = "<resource-group>"
-workspace = "<workspace>"
-
-ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace)
-```
### Registering the model
Let's create the deployment that will host the model:
> [!TIP]
> Although files are provided in mini-batches by the deployment, this scoring script processes one row at a time. This is a common pattern when dealing with expensive models (like transformers), as trying to load the entire batch and send it to the model at once may result in high memory pressure on the batch executor (OOM exceptions).
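Purely as an illustration of that row-by-row pattern (this is not the sample's actual `batch_driver.py`), a scoring script skeleton could look like the following; the `text` column name and the model folder layout are assumptions:

```python
import os

import pandas as pd
from transformers import pipeline


def init():
    global summarizer
    # AZUREML_MODEL_DIR points to the registered model folder at run time.
    # The "model" subfolder is an assumption; adjust to how the model was registered.
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
    summarizer = pipeline("summarization", model=model_path)


def run(mini_batch):
    results = []
    for file_path in mini_batch:
        data = pd.read_csv(file_path)
        # Summarize one row at a time to keep memory pressure on the executor low.
        for _, row in data.iterrows():
            summary = summarizer(row["text"], truncation=True)[0]["summary_text"]
            results.append({"file": os.path.basename(file_path), "summary": summary})
    return pd.DataFrame(results)
```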
-1. We need to indicate over which environment we are going to run the deployment. In our case, our model runs on `Torch` and it requires the libraries `transformers`, `accelerate`, and `optimum` from HuggingFace. Azure Machine Learning already has an environment with Torch and GPU support available. We are just going to add a couple of dependencies in a `conda.yml` file.
+1. We need to indicate over which environment we are going to run the deployment. In our case, our model runs on `Torch` and it requires the libraries `transformers`, `accelerate`, and `optimum` from HuggingFace. Azure Machine Learning already has an environment with Torch and GPU support available. We are just going to add a couple of dependencies in a `conda.yaml` file.
- __environment/conda.yml__
+ __environment/torch200-conda.yaml__
:::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/huggingface-text-summarization/environment/torch200-conda.yaml" :::
Let's create the deployment that will host the model:
```python
environment = Environment(
    name="torch200-transformers-gpu",
- conda_file="environment/torch200-conda.yml",
+ conda_file="environment/torch200-conda.yaml",
    image="mcr.microsoft.com/azureml/openmpi4.1.0-cuda11.8-cudnn8-ubuntu22.04:latest",
)
```

> [!IMPORTANT]
- > The environment `torch200-transformers-gpu` we've created requires a CUDA 11.8 compatible hardware device to run Torch 2.0 and Ubuntu 20.04. If your GPU device doesn't support this version of CUDA, you can check the alternative `torch113-conda.yml` conda environment (also available on the repository), which runs Torch 1.3 over Ubuntu 18.04 with CUDA 10.1. However, acceleration using the `optimum` and `accelerate` libraries won't be supported on this configuration.
+ > The environment `torch200-transformers-gpu` we've created requires a CUDA 11.8 compatible hardware device to run Torch 2.0 and Ubuntu 20.04. If your GPU device doesn't support this version of CUDA, you can check the alternative `torch113-conda.yaml` conda environment (also available on the repository), which runs Torch 1.3 over Ubuntu 18.04 with CUDA 10.1. However, acceleration using the `optimum` and `accelerate` libraries won't be supported on this configuration.
1. Each deployment runs on compute clusters. They support both [Azure Machine Learning Compute clusters (AmlCompute)](./how-to-create-attach-compute-cluster.md) and [Kubernetes clusters](./how-to-attach-kubernetes-anywhere.md). In this example, our model can benefit from GPU acceleration, which is why we will use a GPU cluster.
machine-learning How To Safely Rollout Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-safely-rollout-online-endpoints.md
In this article, you'll learn how to deploy a new version of a machine learning model in production without causing any disruption. You'll use a blue-green deployment strategy (also known as a safe rollout strategy) to introduce a new version of a web service to production. This strategy will allow you to roll out your new version of the web service to a small subset of users or requests before rolling it out completely.
-This article assumes you're using online endpoints, that is, endpoints that are used for online (real-time) inferencing. There are two types of online endpoints: **managed online endpoints** and **Kubernetes online endpoints**. For more information on endpoints and the differences between managed online endpoints and Kubernetes online endpoints, see [What are Azure Machine Learning endpoints?](concept-endpoints.md#managed-online-endpoints-vs-kubernetes-online-endpoints).
+This article assumes you're using online endpoints, that is, endpoints that are used for online (real-time) inferencing. There are two types of online endpoints: **managed online endpoints** and **Kubernetes online endpoints**. For more information on endpoints and the differences between managed online endpoints and Kubernetes online endpoints, see [What are Azure Machine Learning endpoints?](concept-endpoints-online.md#managed-online-endpoints-vs-kubernetes-online-endpoints).
The main example in this article uses managed online endpoints for deployment. To use Kubernetes endpoints instead, see the notes in this document that are inline with the managed online endpoint discussion.
machine-learning How To Secure Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-batch-endpoint.md
# Network isolation in batch endpoints
-When deploying a machine learning model to a batch endpoint, you can secure their communication using private networks. This article explains the requirements to use batch endpoint in an environment secured by private networks.
+You can secure batch endpoint communication using private networks. This article explains the requirements to use batch endpoints in an environment secured by private networks.
## Securing batch endpoints
Consider the following limitations when working on batch endpoints deployed rega
- If you change the networking configuration of the workspace from public to private, or from private to public, such a change doesn't affect the existing batch endpoints' networking configuration. Batch endpoints rely on the configuration of the workspace at the time of creation. You can recreate your endpoints if you want them to reflect changes you made in the workspace.
-- When working on a private link-enabled workspace, batch endpoints can be created and managed using Azure Machine Learning studio. However, they can't be invoked from the UI in studio. Use the Azure Machine Learning CLI v2 instead for job creation. For more details about how to use it, see [Run batch endpoint to start a batch scoring job](how-to-use-batch-endpoint.md#run-batch-endpoints-and-access-results).
+- When working on a private link-enabled workspace, batch endpoints can be created and managed using Azure Machine Learning studio. However, they can't be invoked from the UI in studio. Use the Azure Machine Learning CLI v2 instead for job creation. For more details about how to use it, see [Run batch endpoint to start a batch scoring job](how-to-use-batch-endpoint.md#run-endpoint-and-configure-inputs-and-outputs).
## Recommended read
machine-learning How To Secure Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-online-endpoint.md
When deploying a machine learning model to a managed online endpoint, you can secure communication with the online endpoint by using [private endpoints](../private-link/private-endpoint-overview.md).
-You can secure the inbound scoring requests from clients to an _online endpoint_. You can also secure the outbound communications between a _deployment_ and the Azure resources it uses. Security for inbound and outbound communication are configured separately. For more information on endpoints and deployments, see [What are endpoints and deployments](concept-endpoints.md#what-are-endpoints-and-deployments).
+You can secure the inbound scoring requests from clients to an _online endpoint_. You can also secure the outbound communications between a _deployment_ and the Azure resources it uses. Security for inbound and outbound communication are configured separately. For more information on endpoints and deployments, see [What are endpoints and deployments](concept-endpoints-online.md).
The following diagram shows how communications flow through private endpoints to the managed online endpoint. Incoming scoring requests from clients are received through the workspace private endpoint from your virtual network. Outbound communication with services is handled through private endpoints to those service instances from the deployment:
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-workspace-vnet.md
Azure Machine Learning supports storage accounts configured to use either a priv
* **Blob**
* **File**
- * **Queue** - Only needed if you plan to use [Batch endpoints](concept-endpoints.md#what-are-batch-endpoints) or the [ParallelRunStep](./tutorial-pipeline-batch-scoring-classification.md) in an Azure Machine Learning pipeline.
- * **Table** - Only needed if you plan to use [Batch endpoints](concept-endpoints.md#what-are-batch-endpoints) or the [ParallelRunStep](./tutorial-pipeline-batch-scoring-classification.md) in an Azure Machine Learning pipeline.
+ * **Queue** - Only needed if you plan to use [Batch endpoints](concept-endpoints-batch.md) or the [ParallelRunStep](./tutorial-pipeline-batch-scoring-classification.md) in an Azure Machine Learning pipeline.
+ * **Table** - Only needed if you plan to use [Batch endpoints](concept-endpoints-batch.md) or the [ParallelRunStep](./tutorial-pipeline-batch-scoring-classification.md) in an Azure Machine Learning pipeline.
:::image type="content" source="./media/how-to-enable-studio-virtual-network/configure-storage-private-endpoint.png" alt-text="Screenshot showing private endpoint configuration page with blob and file options":::
machine-learning How To Use Batch Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-azure-data-factory.md
You can use a service principal or a [managed identity](../active-directory/mana
## About the pipeline
-We are going to create a pipeline in Azure Data Factory that can invoke a given batch endpoint over some data. The pipeline will communicate with Azure Machine Learning batch endpoints using REST. To know more about how to use the REST API of batch endpoints read [Deploy models with REST for batch scoring](how-to-deploy-batch-with-rest.md).
+We are going to create a pipeline in Azure Data Factory that can invoke a given batch endpoint over some data. The pipeline will communicate with Azure Machine Learning batch endpoints using REST. To learn more about how to use the REST API of batch endpoints, read [Create jobs and input data for batch endpoints](how-to-access-data-batch-endpoints-jobs.md?tabs=rest).
The pipeline will look as follows:
machine-learning How To Use Batch Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoints.md
+
+ Title: 'Use batch endpoints and deployments'
+
+description: Learn how to use batch endpoints to operationalize long running machine learning jobs under a stable API.
+++++++
+reviewer: msakande
Last updated : 05/01/2023+++
+# Use batch endpoints and deployments
+
+Use Azure Machine Learning batch endpoints to operationalize your machine learning workloads in a repeatable and scalable way. Batch endpoints provide a unified interface to invoke and manage long running machine learning jobs.
+
+In this article, you'll learn how to work with batch endpoints.
+
+## Prerequisites
+++
+## Create a batch endpoint
+
+A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch inference job. A batch deployment is a set of compute resources hosting the model or pipeline (preview) that does the actual inferencing. One batch endpoint can have multiple batch deployments.
+
+### Steps
+
+1. Provide a name for the endpoint. The endpoint name appears in the URI associated with your endpoint; therefore, __batch endpoint names need to be unique within an Azure region__. For example, there can be only one batch endpoint with the name `mybatchendpoint` in `westus2`.
+
+ # [Azure CLI](#tab/cli)
+
+ In this case, let's place the name of the endpoint in a variable so we can easily reference it later.
+
+ ```azurecli
+ ENDPOINT_NAME="mnist-batch"
+ ```
+
+ # [Python](#tab/python)
+
+ In this case, let's place the name of the endpoint in a variable so we can easily reference it later.
+
+ ```python
+ endpoint_name="mnist-batch"
+ ```
+
+ # [Studio](#tab/studio)
+
+ *You'll configure the name of the endpoint later in the creation wizard.*
+
+
+1. Configure your batch endpoint
+
+ # [Azure CLI](#tab/cli)
+
+ The following YAML file defines a batch endpoint. You can include the YAML file in the CLI command for [batch endpoint creation](#create-a-batch-endpoint).
+
+ __endpoint.yml__
+
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/endpoint.yml":::
+
+ The following table describes the key properties of the endpoint. For the full batch endpoint YAML schema, see [CLI (v2) batch endpoint YAML schema](./reference-yaml-endpoint-batch.md).
+
+ | Key | Description |
+    | --- | ----------- |
+ | `name` | The name of the batch endpoint. Needs to be unique at the Azure region level.|
+ | `description` | The description of the batch endpoint. This property is optional. |
+ | `auth_mode` | The authentication method for the batch endpoint. Currently only Azure Active Directory token-based authentication (`aad_token`) is supported. |
+
+ # [Python](#tab/python)
+
+ ```python
+ endpoint = BatchEndpoint(
+ name=endpoint_name,
+ description="A batch endpoint for scoring images from the MNIST dataset.",
+ )
+ ```
+
+ | Key | Description |
+    | --- | ----------- |
+ | `name` | The name of the batch endpoint. Needs to be unique at the Azure region level.|
+ | `description` | The description of the batch endpoint. This property is optional. |
+
+ # [Studio](#tab/studio)
+
+ *You'll create the endpoint in the same step you create the deployment.*
+
+
+1. Create the endpoint:
+
+ # [Azure CLI](#tab/cli)
+
+    Run the following code to create the batch endpoint.
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="create_batch_endpoint" :::
+
+ # [Python](#tab/python)
+
+ ```python
+ ml_client.batch_endpoints.begin_create_or_update(endpoint)
+ ```
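+    `begin_create_or_update` returns a long-running operation poller. If you want your script to block until the endpoint is provisioned, you can wait on the poller, as in this small sketch (assuming the standard poller behavior of the `azure-ai-ml` SDK):
+
+    ```python
+    # Start the creation and wait until the endpoint is provisioned.
+    poller = ml_client.batch_endpoints.begin_create_or_update(endpoint)
+    endpoint = poller.result()
+    ```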
+ # [Studio](#tab/studio)
+
+ *You'll create the endpoint at the same time that you create the deployment later.*
+
+## Create a batch deployment
+
+A deployment is a set of resources and computes required to implement the functionality the endpoint provides. There are two types of deployments depending on the asset you want to deploy:
+
+* [Model deployment](concept-endpoints-batch.md#model-deployments): Use this to operationalize machine learning model inference routines. See [How to deploy a model in a batch endpoint](how-to-use-batch-model-deployments.md) for a guide to deploy models in batch endpoints.
+* [Pipeline component deployment (preview)](concept-endpoints-batch.md#pipeline-component-deployment-preview): Use this to operationalize complex inference pipelines under a stable URI. See [How to deploy a pipeline component in a batch endpoint (preview)](how-to-use-batch-pipeline-deployments.md) for a guide to deploy pipeline components.
++
+## Create jobs from batch endpoints
+
+When you invoke a batch endpoint, it triggers a batch scoring job. The invoke response returns a job `name` that can be used to track the batch scoring progress.
+
+# [Azure CLI](#tab/cli)
+
+
+# [Python](#tab/python)
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint_name,
+ inputs=Input(path="https://azuremlexampledata.blob.core.windows.net/data/mnist/sample/", type=AssetTypes.URI_FOLDER)
+)
+```
+
+# [Studio](#tab/studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+
+1. Select the tab __Batch endpoints__.
+
+1. Select the batch endpoint you just created.
+
+1. Select __Create job__.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
+
+1. On __Deployment__, select the deployment you want to execute.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/job-setting-batch-scoring.png" alt-text="Screenshot of using the deployment to submit a batch job.":::
+
+1. Select __Next__.
+
+1. On __Select data source__, select the data input you want to use. For this example, select __Datastore__ and in the section __Path__ enter the full URL `https://azuremlexampledata.blob.core.windows.net/data/mnist/sample/`. See [Accessing data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md) for details.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/select-datastore-job.png" alt-text="Screenshot of selecting datastore as an input option.":::
+
+1. Start the job.
+++
+Batch endpoints support reading files or folders from different locations. To learn more about the supported types and how to specify them read [Accessing data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md).
+
+> [!TIP]
+> Local data folders/files can be used when executing batch endpoints from the Azure Machine Learning CLI or Azure Machine Learning SDK for Python. However, that operation will result in the local data being uploaded to the default Azure Machine Learning data store of the workspace you're working on.
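+For illustration, here's a sketch of a few ways you might construct the job input with the SDK; the datastore path and local folder are placeholders, not values from this article:
+
+```python
+from azure.ai.ml import Input
+from azure.ai.ml.constants import AssetTypes
+
+# A folder in a publicly readable blob container (the sample data used in this article).
+web_input = Input(
+    type=AssetTypes.URI_FOLDER,
+    path="https://azuremlexampledata.blob.core.windows.net/data/mnist/sample/",
+)
+
+# A folder in a registered datastore (placeholder datastore name and path).
+datastore_input = Input(
+    type=AssetTypes.URI_FOLDER,
+    path="azureml://datastores/workspaceblobstore/paths/mnist/sample/",
+)
+
+# A local folder; on invoke, it's uploaded to the workspace's default data store.
+local_input = Input(type=AssetTypes.URI_FOLDER, path="./data")
+
+job = ml_client.batch_endpoints.invoke(endpoint_name=endpoint_name, inputs=web_input)
+```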
+
+> [!IMPORTANT]
+> __Deprecation notice__: Datasets of type `FileDataset` (V1) are deprecated and will be retired in the future. Existing batch endpoints relying on this functionality will continue to work but batch endpoints created with GA CLIv2 (2.4.0 and newer) or GA REST API (2022-05-01 and newer) will not support V1 dataset.
++
+## Accessing outputs from batch jobs
+
+When you invoke a batch endpoint, it triggers a batch scoring job. The invoke response returns a job `name` that can be used to track the batch scoring progress. When the job is finished, you can access any output the endpoint provides. Each output has a name that allows you to access it.
+
+For instance, the following example downloads the output __score__ from the job. All model deployments have an output with that name:
+
+# [Azure CLI](#tab/cli)
++
+# [Python](#tab/python)
+
+```python
+ml_client.jobs.download(name=job.name, output_name='score', download_path='./')
+```
+
+# [Studio](#tab/studio)
+
+1. In the graph of the job, select the `batchscoring` step.
+
+1. Select the __Outputs + logs__ tab and then select **Show data outputs**.
+
+1. From __Data outputs__, select the icon to open __Storage Explorer__.
+
+ :::image type="content" source="media/how-to-use-batch-endpoint/view-data-outputs.png" alt-text="Studio screenshot showing view data outputs location." lightbox="media/how-to-use-batch-endpoint/view-data-outputs.png":::
+
+ The scoring results in Storage Explorer are similar to the following sample page:
+
+ :::image type="content" source="media/how-to-use-batch-endpoint/scoring-view.png" alt-text="Screenshot of the scoring output." lightbox="media/how-to-use-batch-endpoint/scoring-view.png":::
+++
+## Manage multiple deployments
+
+Batch endpoints can handle multiple deployments under the same endpoint, allowing you to change the implementation of the endpoint without changing the URL your consumers use to invoke it.
+
+You can add, remove, and update deployments without affecting the endpoint itself.
+
+### Add non-default deployments
+
+To add a new deployment to an existing endpoint, use the following code:
+
+# [Azure CLI](#tab/cli)
++
+# [Python](#tab/python)
+
+Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
+
+```python
+ml_client.batch_deployments.begin_create_or_update(deployment)
+```
+
+# [Studio](#tab/studio)
+
+In the wizard, select __Create__ to start the deployment process.
++++
+Azure Machine Learning will add a new deployment to the endpoint but won't set it as default. Before you switch traffic to this deployment, you can test it to confirm that the results are what you expect.
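+For example, you could send a test job directly to the new deployment before promoting it, by naming the deployment explicitly in the invoke call. This is a sketch; the input is constructed the same way as earlier in this article:
+
+```python
+# Target the newly added, non-default deployment explicitly.
+job = ml_client.batch_endpoints.invoke(
+    endpoint_name=endpoint_name,
+    deployment_name=deployment.name,
+    inputs=Input(
+        type=AssetTypes.URI_FOLDER,
+        path="https://azuremlexampledata.blob.core.windows.net/data/mnist/sample/",
+    ),
+)
+```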
+
+### Change the default deployment
+
+Batch endpoints can have one deployment marked as __default__. Changing the default deployment gives you the possibility of changing the model or pipeline (preview) serving the endpoint without changing the contract with the user. Use the following instruction to update the default deployment:
+
+# [Azure CLI](#tab/cli)
++
+# [Python](#tab/python)
+
+```python
+endpoint = ml_client.batch_endpoints.get(endpoint_name)
+endpoint.defaults.deployment_name = deployment.name
+ml_client.batch_endpoints.begin_create_or_update(endpoint)
+```
+
+# [Studio](#tab/studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+
+1. Select the tab __Batch endpoints__.
+
+1. Select the batch endpoint you want to configure.
+
+1. Select __Update default deployment__.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/update-default-deployment.png" alt-text="Screenshot of updating default deployment.":::
+
+1. On __Select default deployment__, select the name of the deployment you want to be the default one.
+
+1. Select __Update__.
+
+1. The selected deployment is now the default one.
+++
+### Delete a deployment
+
+You can delete a given deployment as long as it's not the default one. Deleting a deployment doesn't delete the jobs or outputs it generated.
+
+# [Azure CLI](#tab/cli)
+++
+# [Python](#tab/python)
+
+```python
+ml_client.batch_deployments.begin_delete(name=deployment.name, endpoint_name=endpoint.name)
+```
+
+# [Studio](#tab/studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+
+1. Select the tab __Batch endpoints__.
+
+1. Select the batch endpoint where the deployment is located.
+
+1. On the batch deployment you want to delete, select __Delete__.
+
+1. Notice that deleting the deployment won't affect the compute cluster where the deployment(s) run.
+++
+## Delete an endpoint
+
+Deleting an endpoint will delete all the deployments under it. However, this deletion won't remove any previously executed jobs and their outputs from the workspace.
+
+# [Azure CLI](#tab/cli)
++
+# [Python](#tab/python)
+
+```python
+ml_client.batch_endpoints.begin_delete(name=endpoint.name)
+```
+
+# [Studio](#tab/studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+
+1. Select the tab __Batch endpoints__.
+
+1. Select the batch endpoint you want to delete.
+
+1. Select __Delete__.
+
+Now, the endpoint, along with all its deployments, will be deleted. Notice that this deletion won't affect the compute cluster where the deployment(s) run.
+++
+## Next steps
+
+- [Deploy models with batch endpoints](how-to-use-batch-model-deployments.md)
+- [Deploy pipelines with batch endpoints (preview)](how-to-use-batch-pipeline-deployments.md)
+- [Deploy MLFlow models in batch deployments](how-to-mlflow-batch.md)
+- [Create jobs and input data to batch endpoints](how-to-access-data-batch-endpoints-jobs.md)
+- [Network isolation for Batch Endpoints](how-to-secure-batch-endpoint.md)
++
machine-learning How To Use Batch Model Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-model-deployments.md
+
+ Title: 'Deploy models for scoring in batch endpoints'
+
+description: In this article, learn how to create a batch endpoint to continuously batch score large data.
+++++++ Last updated : 11/04/2022+
+#Customer intent: As an ML engineer or data scientist, I want to create an endpoint to host my models for batch scoring, so that I can use the same endpoint continuously for different large datasets on-demand or on-schedule.
++
+# Deploy models for scoring in batch endpoints
++
+Batch endpoints provide a convenient way to deploy models to run inference over large volumes of data. They simplify the process of hosting your models for batch scoring, so you can focus on machine learning, not infrastructure. We call this type of deployment a *model deployment*.
+
+Use batch endpoints to deploy models when:
+
+> [!div class="checklist"]
+> * You have expensive models that require a longer time to run inference.
+> * You need to perform inference over large amounts of data, distributed in multiple files.
+> * You don't have low latency requirements.
+> * You can take advantage of parallelization.
+
+In this article, you'll learn how to use batch endpoints to deploy a machine learning model to perform inference.
+
+## About this example
+
+In this example, we're going to deploy a model to solve the classic MNIST ("Modified National Institute of Standards and Technology") digit recognition problem to perform batch inferencing over large amounts of data (image files). In the first section of this tutorial, we're going to create a batch deployment with a model created using Torch. This deployment will become the default one in the endpoint. In the second half, [we're going to see how we can create a second deployment](#adding-deployments-to-an-endpoint) using a model created with TensorFlow (Keras), test it out, and then switch the endpoint to start using the new deployment as default.
++
+The files for this example are in:
+
+```azurecli
+cd endpoints/batch/deploy-models/mnist-classifier
+```
+
+### Follow along in Jupyter Notebooks
+
+You can follow along with this sample in the following notebook. In the cloned repository, open the notebook: [mnist-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb).
+
+## Prerequisites
++
+### Create compute
+
+Batch endpoints run on compute clusters. They support both [Azure Machine Learning Compute clusters (AmlCompute)](./how-to-create-attach-compute-cluster.md) and [Kubernetes clusters](./how-to-attach-kubernetes-anywhere.md). Clusters are a shared resource, so one cluster can host one or many batch deployments (along with other workloads if desired).
+
+This article uses a compute cluster named `batch-cluster`. Adjust the name as needed and reference your compute using `azureml:<your-compute-name>`, or create one as shown.
+
+# [Azure CLI](#tab/cli)
++
+# [Python](#tab/python)
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=create_compute)]
+
+# [Studio](#tab/studio)
+
+*Create a compute cluster as explained in the following tutorial [Create an Azure Machine Learning compute cluster](./how-to-create-attach-compute-cluster.md?tabs=studio).*
+++
+> [!NOTE]
+> You are not charged for compute at this point as the cluster will remain at 0 nodes until a batch endpoint is invoked and a batch scoring job is submitted. Learn more about [manage and optimize cost for AmlCompute](./how-to-manage-optimize-cost.md#use-azure-machine-learning-compute-cluster-amlcompute).
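+If your workspace doesn't already have such a cluster, creating one with the SDK might look roughly like the following sketch (the VM size and scale limits are illustrative assumptions, not requirements):
+
+```python
+from azure.ai.ml.entities import AmlCompute
+
+compute_name = "batch-cluster"
+
+# Create the cluster only if it doesn't already exist.
+if compute_name not in [c.name for c in ml_client.compute.list()]:
+    compute_cluster = AmlCompute(
+        name=compute_name,
+        size="Standard_DS3_v2",
+        min_instances=0,
+        max_instances=2,
+    )
+    ml_client.compute.begin_create_or_update(compute_cluster).result()
+```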
++
+## Create a batch endpoint
+
+A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch scoring job. A batch scoring job is a job that scores multiple inputs (for more, see [What are batch endpoints?](concept-endpoints-batch.md)). A batch deployment is a set of compute resources hosting the model that does the actual batch scoring. One batch endpoint can have multiple batch deployments.
+
+> [!TIP]
+> One of the batch deployments will serve as the default deployment for the endpoint. The default deployment will be used to do the actual batch scoring when the endpoint is invoked. Learn more about [batch endpoints and batch deployment](concept-endpoints-batch.md).
+
+### Steps
+
+1. Decide on the name of the endpoint. The name of the endpoint will end up in the URI associated with your endpoint. Because of that, __batch endpoint names need to be unique within an Azure region__. For example, there can be only one batch endpoint with the name `mybatchendpoint` in `westus2`.
+
+ # [Azure CLI](#tab/cli)
+
+ In this case, let's place the name of the endpoint in a variable so we can easily reference it later.
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="name_endpoint" :::
+
+ # [Python](#tab/python)
+
+ In this case, let's place the name of the endpoint in a variable so we can easily reference it later.
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=name_endpoint)]
+
+ # [Studio](#tab/studio)
+
+ *You'll configure the name of the endpoint later in the creation wizard.*
+
+
+1. Configure your batch endpoint
+
+ # [Azure CLI](#tab/cli)
+
+ The following YAML file defines a batch endpoint, which you can include in the CLI command for [batch endpoint creation](#create-a-batch-endpoint).
+
+ __endpoint.yml__
+
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/mnist-classifier/endpoint.yml":::
+
+ The following table describes the key properties of the endpoint. For the full batch endpoint YAML schema, see [CLI (v2) batch endpoint YAML schema](./reference-yaml-endpoint-batch.md).
+
+ | Key | Description |
+    | --- | ----------- |
+ | `name` | The name of the batch endpoint. Needs to be unique at the Azure region level.|
+ | `description` | The description of the batch endpoint. This property is optional. |
+    | `tags` | The tags to include in the endpoint. This property is optional. |
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=configure_endpoint)]
+
+ | Key | Description |
+    | --- | ----------- |
+ | `name` | The name of the batch endpoint. Needs to be unique at the Azure region level.|
+ | `description` | The description of the batch endpoint. This property is optional. |
+ | `tags` | The tags to include in the endpoint. This property is optional. |
+
+ # [Studio](#tab/studio)
+
+ *You'll create the endpoint in the same step you create the deployment.*
+
+
+1. Create the endpoint:
+
+ # [Azure CLI](#tab/cli)
+
+    Run the following code to create the batch endpoint.
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="create_endpoint" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=create_endpoint)]
+
+ # [Studio](#tab/studio)
+
+ *You'll create the endpoint in the same step you are creating the deployment later.*
+
+## Create a batch deployment
+
+A model deployment is a set of resources required for hosting the model that does the actual inferencing. To create a batch model deployment, you need all the following items:
+
+* A registered model in the workspace.
+* The code to score the model.
+* The environment with the model's dependencies installed.
+* The pre-created compute and resource settings.
+
+1. Let's start by registering the model we want to deploy. Batch Deployments can only deploy models registered in the workspace. You can skip this step if the model you're trying to deploy is already registered. In this case, we're registering a Torch model for the popular digit recognition problem (MNIST).
+
+ > [!TIP]
+    > Models are associated with the deployment rather than with the endpoint. This means that a single endpoint can serve different models, or different versions of the same model, as long as they're deployed in different deployments.
+
+
+ # [Azure CLI](#tab/cli)
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="register_model" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=register_model)]
+
+ # [Studio](#tab/studio)
+
+ 1. Navigate to the __Models__ tab on the side menu.
+
+ 1. Select __Register__ > __From local files__.
+
+ 1. In the wizard, leave the option *Model type* as __Unspecified type__.
+
+ 1. Select __Browse__ > __Browse folder__ > Select the folder `deployment-torch/model` > __Next__.
+
+ 1. Configure the name of the model: `mnist-classifier-torch`. You can leave the rest of the fields as they are.
+
+ 1. Select __Register__.
+
+1. Now it's time to create a scoring script. Batch deployments require a scoring script that indicates how a given model should be executed and how input data must be processed. Batch Endpoints support scripts created in Python. In this case, we're deploying a model that reads image files representing digits and outputs the corresponding digit. The scoring script is as follows:
+
+ > [!NOTE]
+ > For MLflow models, Azure Machine Learning automatically generates the scoring script, so you're not required to provide one. If your model is an MLflow model, you can skip this step. For more information about how batch endpoints work with MLflow models, see the dedicated tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+
+ > [!WARNING]
+ > If you're deploying an Automated ML model under a batch endpoint, notice that the scoring script that Automated ML provides only works for online endpoints and is not designed for batch execution. Please see [Author scoring scripts for batch deployments](how-to-batch-scoring-script.md) to learn how to create one depending on what your model does.
+
+ __deployment-torch/code/batch_driver.py__
+
+ :::code language="python" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-torch/code/batch_driver.py" :::
+
+1. Create an environment where your batch deployment will run. Such an environment needs to include the packages `azureml-core` and `azureml-dataset-runtime[fuse]`, which are required by batch endpoints, plus any dependency your code requires for running. In this case, the dependencies have been captured in a `conda.yaml`:
+
+ __deployment-torch/environment/conda.yaml__
+
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-torch/environment/conda.yaml":::
+
+ > [!IMPORTANT]
+ > The packages `azureml-core` and `azureml-dataset-runtime[fuse]` are required by batch deployments and should be included in the environment dependencies.
+
+ Indicate the environment as follows:
+
+ # [Azure CLI](#tab/cli)
+
+    The environment definition will be included in the deployment definition itself as an anonymous environment. You'll see it in the following lines in the deployment:
+
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-torch/deployment.yml" range="12-15":::
+
+ # [Python](#tab/python)
+
+ Let's get a reference to the environment:
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=configure_environment)]
+
+ # [Studio](#tab/studio)
+
+ On [Azure Machine Learning studio portal](https://ml.azure.com), follow these steps:
+
+ 1. Navigate to the __Environments__ tab on the side menu.
+
+ 1. Select the tab __Custom environments__ > __Create__.
+
+ 1. Enter the name of the environment, in this case `torch-batch-env`.
+
+ 1. On __Select environment type__ select __Use existing docker image with conda__.
+
+ 1. On __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04`.
+
+    1. In the __Customize__ section, copy the content of the file `deployment-torch/environment/conda.yaml` included in the repository into the portal.
+
+    1. Select __Next__ and then __Create__.
+
+ 1. The environment is ready to be used.
+
+
+
+ > [!WARNING]
+ > Curated environments are not supported in batch deployments. You will need to indicate your own environment. You can always use the base image of a curated environment as yours to simplify the process.
+
+1. Create a deployment definition
+
+ # [Azure CLI](#tab/cli)
+
+ __deployment-torch/deployment.yml__
+
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-torch/deployment.yml":::
+
+ For the full batch deployment YAML schema, see [CLI (v2) batch deployment YAML schema](./reference-yaml-deployment-batch.md).
+
+ | Key | Description |
+    | --- | ----------- |
+ | `name` | The name of the deployment. |
+ | `endpoint_name` | The name of the endpoint to create the deployment under. |
+ | `model` | The model to be used for batch scoring. The example defines a model inline using `path`. Model files will be automatically uploaded and registered with an autogenerated name and version. Follow the [Model schema](./reference-yaml-model.md#yaml-syntax) for more options. As a best practice for production scenarios, you should create the model separately and reference it here. To reference an existing model, use the `azureml:<model-name>:<model-version>` syntax. |
+ | `code_configuration.code` | The local directory that contains all the Python source code to score the model. |
+ | `code_configuration.scoring_script` | The Python file in the above directory. This file must have an `init()` function and a `run()` function. Use the `init()` function for any costly or common preparation (for example, load the model in memory). `init()` will be called only once at beginning of process. Use `run(mini_batch)` to score each entry; the value of `mini_batch` is a list of file paths. The `run()` function should return a pandas DataFrame or an array. Each returned element indicates one successful run of input element in the `mini_batch`. For more information on how to author scoring script, see [Understanding the scoring script](how-to-batch-scoring-script.md#understanding-the-scoring-script). |
+ | `environment` | The environment to score the model. The example defines an environment inline using `conda_file` and `image`. The `conda_file` dependencies will be installed on top of the `image`. The environment will be automatically registered with an autogenerated name and version. Follow the [Environment schema](./reference-yaml-environment.md#yaml-syntax) for more options. As a best practice for production scenarios, you should create the environment separately and reference it here. To reference an existing environment, use the `azureml:<environment-name>:<environment-version>` syntax. |
+ | `compute` | The compute to run batch scoring. The example uses the `batch-cluster` created at the beginning and references it using `azureml:<compute-name>` syntax. |
+ | `resources.instance_count` | The number of instances to be used for each batch scoring job. |
+ | `settings.max_concurrency_per_instance` | [Optional] The maximum number of parallel `scoring_script` runs per instance. |
+ | `settings.mini_batch_size` | [Optional] The number of files the `scoring_script` can process in one `run()` call. |
+ | `settings.output_action` | [Optional] How the output should be organized in the output file. `append_row` will merge all `run()` returned output results into one single file named `output_file_name`. `summary_only` won't merge the output results and only calculate `error_threshold`. |
+ | `settings.output_file_name` | [Optional] The name of the batch scoring output file for `append_row` `output_action`. |
+ | `settings.retry_settings.max_retries` | [Optional] The number of max tries for a failed `scoring_script` `run()`. |
+ | `settings.retry_settings.timeout` | [Optional] The timeout in seconds for a `scoring_script` `run()` for scoring a mini batch. |
+ | `settings.error_threshold` | [Optional] The number of input file scoring failures that should be ignored. If the error count for the entire input goes above this value, the batch scoring job will be terminated. The example uses `-1`, which indicates that any number of failures is allowed without terminating the batch scoring job. |
+ | `settings.logging_level` | [Optional] Log verbosity. Values in increasing verbosity are: WARNING, INFO, and DEBUG. |
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=configure_deployment)]
+
+    This class allows the user to configure the following key aspects:
+
+ | Key | Description |
+    | --- | ----------- |
+ | `name` | Name of the deployment. |
+ | `endpoint_name` | Name of the endpoint to create the deployment under. |
+ | `model` | The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification. |
+ | `environment` | The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification (optional for MLflow models). |
+ | `code_configuration` | The configuration about how to run inference for the model (optional for MLflow models). |
+ | `code_configuration.code` | Path to the source code directory for scoring the model |
+ | `code_configuration.scoring_script` | Relative path to the scoring file in the source code directory |
+ | `compute` | Name of the compute target to execute the batch scoring jobs on |
+ | `instance_count` | The number of nodes to use for each batch scoring job. |
+ | `settings` | The model deployment inference configuration |
+ | `settings.max_concurrency_per_instance` | The maximum number of parallel `scoring_script` runs per instance. |
+ | `settings.mini_batch_size` | The number of files the `code_configuration.scoring_script` can process in one `run()` call. |
+ | `settings.retry_settings` | Retry settings for scoring each mini batch. |
+ | `settings.retry_settings.max_retries` | The maximum number of retries for a failed or timed-out mini batch (default is 3). |
+ | `settings.retry_settings.timeout` | The timeout in seconds for scoring a mini batch (default is 30). |
+ | `settings.output_action` | Indicates how the output should be organized in the output file. Allowed values are `append_row` or `summary_only`. Default is `append_row` |
+ | `settings.logging_level` | The log verbosity level. Allowed values are `warning`, `info`, `debug`. Default is `info`. |
+ | `environment_variables` | Dictionary of environment variable name-value pairs to set for each batch scoring job. |
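+
+ For reference, a minimal sketch of such a configuration with the `azure-ai-ml` SDK's `BatchDeployment` class is shown below. The deployment name and the `model`, `env`, and `endpoint_name` variables are placeholders for the assets created earlier; the notebook above may use a newer class that nests these values under a `settings` object.
+
+ ```python
+ from azure.ai.ml.entities import BatchDeployment, BatchRetrySettings, CodeConfiguration
+ from azure.ai.ml.constants import BatchDeploymentOutputAction
+
+ deployment = BatchDeployment(
+     name="mnist-torch-dpl",  # placeholder deployment name
+     description="A deployment that scores the MNIST dataset.",
+     endpoint_name=endpoint_name,
+     model=model,  # for example, ml_client.models.get(name="mnist", label="latest")
+     environment=env,  # the environment created for this deployment
+     code_configuration=CodeConfiguration(
+         code="deployment-torch/code/",
+         scoring_script="batch_driver.py",
+     ),
+     compute="batch-cluster",
+     instance_count=2,
+     max_concurrency_per_instance=2,
+     mini_batch_size=10,
+     output_action=BatchDeploymentOutputAction.APPEND_ROW,
+     output_file_name="predictions.csv",
+     retry_settings=BatchRetrySettings(max_retries=3, timeout=30),
+     logging_level="info",
+ )
+ ```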
+
+ # [Studio](#tab/studio)
+
+ On [Azure Machine Learning studio portal](https://ml.azure.com), follow these steps:
+
+ 1. Navigate to the __Endpoints__ tab on the side menu.
+
+ 1. Select the tab __Batch endpoints__ > __Create__.
+
+ 1. Give the endpoint a name, in this case `mnist-batch`. You can configure the rest of the fields or leave them blank.
+
+ 1. Select __Next__.
+
+ 1. On the model list, select the model `mnist` and select __Next__.
+
+ 1. On the deployment configuration page, give the deployment a name.
+
+ 1. On __Output action__, ensure __Append row__ is selected.
+
+ 1. On __Output file name__, ensure the batch scoring output file is the one you need. Default is `predictions.csv`.
+
+ 1. On __Mini batch size__, adjust the number of files that will be included in each mini-batch. This value controls the amount of data your scoring script receives in each batch.
+
+ 1. On __Scoring timeout (seconds)__, ensure you're giving enough time for your deployment to score a given batch of files. If you increase the number of files, you usually have to increase the timeout value too. More expensive models (like those based on deep learning) may require high values in this field.
+
+ 1. On __Max concurrency per instance__, configure the number of executors you want to have per each compute instance you get in the deployment. A higher number here guarantees a higher degree of parallelization, but it also increases the memory pressure on the compute instance. Tune this value together with __Mini batch size__.
+
+ 1. Once done, select __Next__.
+
+ 1. On environment, go to __Select scoring file and dependencies__ and select __Browse__.
+
+ 1. Select the scoring script file `deployment-torch/code/batch_driver.py`.
+
+ 1. On the section __Choose an environment__, select the environment you created in a previous step.
+
+ 1. Select __Next__.
+
+ 1. On the section __Compute__, select the compute cluster you created in a previous step.
+
+ > [!WARNING]
+ > Azure Kubernetes clusters are supported in batch deployments, but only when created using the Azure Machine Learning CLI or Python SDK.
+
+ 1. On __Instance count__, enter the number of compute instances you want for the deployment. In this case, we'll use 2.
+
+ 1. Select __Next__.
+
+1. Create the deployment:
+
+ # [Azure CLI](#tab/cli)
+
+ Run the following code to create a batch deployment under the batch endpoint and set it as the default deployment.
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="create_deployment" :::
+
+ > [!TIP]
+ > The `--set-default` parameter sets the newly created deployment as the default deployment of the endpoint. It's a convenient way to create a new default deployment of the endpoint, especially for the first deployment creation. As a best practice for production scenarios, you may want to create a new deployment without setting it as default, verify it, and update the default deployment later. For more information, see the [Deploy a new model](#adding-deployments-to-an-endpoint) section.
+
+ # [Python](#tab/python)
+
+ Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=create_deployment)]
+
+ Once the deployment is completed, we need to ensure the new deployment is the default deployment in the endpoint:
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=set_default_deployment)]
+
+ # [Studio](#tab/studio)
+
+ In the wizard, select __Create__ to start the deployment process.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/review-batch-wizard.png" alt-text="Screenshot of batch endpoints/deployment review screen.":::
+
+
+
+1. Check batch endpoint and deployment details.
+
+ # [Azure CLI](#tab/cli)
+
+ Use `show` to check endpoint and deployment details. To check a batch deployment, run the following code:
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="query_deployment" :::
+
+ # [Python](#tab/python)
+
+ To check a batch deployment, run the following code:
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=query_deployment)]
+
+ # [Studio](#tab/studio)
+
+ 1. Navigate to the __Endpoints__ tab on the side menu.
+
+ 1. Select the tab __Batch endpoints__.
+
+ 1. Select the batch endpoint you want to get details from.
+
+ 1. In the endpoint page, you'll see all the details of the endpoint along with all the deployments available.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/batch-endpoint-details.png" alt-text="Screenshot of the check batch endpoints and deployment details.":::
+
+## Run batch endpoints and access results
+
+Invoking a batch endpoint triggers a batch scoring job. A job `name` will be returned from the invoke response and can be used to track the batch scoring progress.
+
+When you run models for scoring in batch endpoints, you need to indicate the input data path where the endpoint should look for the data you want to score. The following example shows how to start a new job over sample data of the MNIST dataset stored in an Azure Storage account:
+
+> [!NOTE]
+> __How does parallelization work?__:
+>
+> Batch deployments distribute work at the file level, which means that a folder containing 100 files with a mini-batch size of 10 files generates 10 mini-batches of 10 files each. Notice that this happens regardless of the size of the files involved. If your files are too big to be processed in large mini-batches, we suggest either splitting the files into smaller ones to achieve a higher level of parallelism, or decreasing the number of files per mini-batch. At this moment, batch deployments can't account for skews in the file size distribution.
+
+# [Azure CLI](#tab/cli)
+
+
+# [Python](#tab/python)
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=start_batch_scoring_job)]
+
+# [Studio](#tab/studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+
+1. Select the tab __Batch endpoints__.
+
+1. Select the batch endpoint you just created.
+
+1. Select __Create job__.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
+
+1. On __Deployment__, select the deployment you want to execute.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/job-setting-batch-scoring.png" alt-text="Screenshot of using the deployment to submit a batch job.":::
+
+1. Select __Next__.
+
+1. On __Select data source__, select the data input you want to use. For this example, select __Datastore__ and, in the __Path__ section, enter the full URL `https://azuremlexampledata.blob.core.windows.net/dat`.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/select-datastore-job.png" alt-text="Screenshot of selecting datastore as an input option.":::
+
+1. Start the job.
+++
+Batch endpoints support reading files or folders that are located in different locations. To learn more about the supported types and how to specify them, read [Accessing data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md).
+
+> [!TIP]
+> Local data folders/files can be used when executing batch endpoints from the Azure Machine Learning CLI or Azure Machine Learning SDK for Python. However, that operation results in the local data being uploaded to the default Azure Machine Learning data store of the workspace you're working on.
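+
+As an illustration only, a minimal sketch of invoking the endpoint with a local folder using the `azure-ai-ml` SDK (the folder path and the `endpoint_name` variable are placeholders):
+
+```python
+from azure.ai.ml import Input
+from azure.ai.ml.constants import AssetTypes
+
+# The local folder is uploaded to the workspace's default data store before the job starts.
+input = Input(type=AssetTypes.URI_FOLDER, path="./sample-data")  # placeholder path
+job = ml_client.batch_endpoints.invoke(endpoint_name=endpoint_name, input=input)
+```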
+
+> [!IMPORTANT]
+> __Deprecation notice__: Datasets of type `FileDataset` (V1) are deprecated and will be retired in the future. Existing batch endpoints relying on this functionality will continue to work, but batch endpoints created with GA CLIv2 (2.4.0 and newer) or GA REST API (2022-05-01 and newer) won't support V1 datasets.
+
+### Monitor batch job execution progress
+
+Batch scoring jobs usually take some time to process the entire set of inputs.
+
+# [Azure CLI](#tab/cli)
+
+The following code checks the job status and outputs a link to the Azure Machine Learning studio for further details.
++
+# [Python](#tab/python)
+
+The following code checks the job status and outputs a link to the Azure Machine Learning studio for further details.
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=get_job)]
+
+# [Studio](#tab/studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+
+1. Select the tab __Batch endpoints__.
+
+1. Select the batch endpoint you want to monitor.
+
+1. Select the tab __Jobs__.
+
+ :::image type="content" source="media/how-to-use-batch-endpoints-studio/summary-jobs.png" alt-text="Screenshot of summary of jobs submitted to a batch endpoint.":::
+
+1. You'll see a list of the jobs created for the selected endpoint.
+
+1. Select the last job that is running.
+
+1. You'll be redirected to the job monitoring page.
+++
+### Check batch scoring results
+
+The job outputs are stored in cloud storage, either in the workspace's default blob storage or in the storage you specified. See [Configure the output location](#configure-the-output-location) to learn how to change the defaults. Use the following steps to view the scoring results in Azure Storage Explorer when the job is completed:
+
+1. Run the following code to open the batch scoring job in Azure Machine Learning studio. The job studio link is also included in the response of `invoke`, as the value of `interactionEndpoints.Studio.endpoint`.
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="show_job_in_studio" :::
+
+1. In the graph of the job, select the `batchscoring` step.
+
+1. Select the __Outputs + logs__ tab and then select **Show data outputs**.
+
+1. From __Data outputs__, select the icon to open __Storage Explorer__.
+
+ :::image type="content" source="media/how-to-use-batch-endpoint/view-data-outputs.png" alt-text="Studio screenshot showing view data outputs location." lightbox="media/how-to-use-batch-endpoint/view-data-outputs.png":::
+
+ The scoring results in Storage Explorer are similar to the following sample page:
+
+ :::image type="content" source="media/how-to-use-batch-endpoint/scoring-view.png" alt-text="Screenshot of the scoring output." lightbox="media/how-to-use-batch-endpoint/scoring-view.png":::
+
+### Configure the output location
+
+By default, the batch scoring results are stored in the workspace's default blob store, in a folder named after the job name (a system-generated GUID). You can configure where to store the scoring outputs when you invoke the batch endpoint.
+
+# [Azure CLI](#tab/cli)
+
+Use `--output-path` to configure any folder in an Azure Machine Learning registered datastore. The syntax for the `--output-path` is the same as `--input` when you're specifying a folder, that is, `azureml://datastores/<datastore-name>/paths/<path-on-datastore>/`. Use `--set output_file_name=<your-file-name>` to configure a new output file name.
++
+# [Python](#tab/python)
+
+Use `params_override` to configure any folder in an Azure Machine Learning registered data store. Only registered data stores are supported as output paths. In this example, we'll use the default data store:
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=get_data_store)]
+
+Once you've identified the data store you want to use, configure the output as follows:
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=start_batch_scoring_job_set_output)]
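+
+If you're working outside the notebook, the idea can be sketched as follows. The override keys mirror the `params_override` pattern described above and are assumptions; verify them against your SDK version.
+
+```python
+default_ds = ml_client.datastores.get_default()
+
+job = ml_client.batch_endpoints.invoke(
+    endpoint_name=endpoint_name,
+    input=input,
+    params_override=[
+        {"output_dataset.datastore_id": f"azureml:{default_ds.id}"},  # assumed key name
+        {"output_dataset.path": f"/{endpoint_name}"},                 # assumed key name
+        {"output_file_name": "predictions.csv"},
+    ],
+)
+```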
+
+# [Studio](#tab/studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+
+1. Select the tab __Batch endpoints__.
+
+1. Select the batch endpoint you just created.
+
+1. Select __Create job__.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
+
+1. On __Deployment__, select the deployment you want to execute.
+
+1. Select __Next__.
+
+1. Check the option __Override deployment settings__.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/overwrite-setting.png" alt-text="Screenshot of the overwrite setting when starting a batch job.":::
+
+1. You can now configure __Output file name__ and some extra properties of the deployment execution. Only this execution will be affected.
+
+1. On __Select data source__, select the data input you want to use.
+
+1. On __Configure output location__, check the option __Enable output configuration__.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/configure-output-location.png" alt-text="Screenshot of optionally configuring output location.":::
+
+1. Configure the __Blob datastore__ where the outputs should be placed.
+++
+> [!WARNING]
+> You must use a unique output location. If the output file exists, the batch scoring job will fail.
+
+> [!IMPORTANT]
+> Unlike inputs, only Azure Machine Learning data stores running on blob storage accounts are supported for outputs.
+
+## Overwrite deployment configuration per each job
+
+Some settings can be overwritten at invocation time to make the best use of the compute resources and to improve performance. The following settings can be configured on a per-job basis:
+
+* Use __instance count__ to overwrite the number of instances to request from the compute cluster. For example, for a larger volume of data inputs, you may want to use more instances to speed up the end-to-end batch scoring.
+* Use __mini-batch size__ to overwrite the number of files to include in each mini-batch. The number of mini-batches is determined by the total number of input files and the `mini_batch_size` value. A smaller `mini_batch_size` generates more mini-batches. Mini-batches can be run in parallel, but there might be extra scheduling and invocation overhead.
+* Other settings, including __max retries__, __timeout__, and __error threshold__, can also be overwritten. These settings might impact the end-to-end batch scoring time for different workloads.
+
+# [Azure CLI](#tab/cli)
++
+# [Python](#tab/python)
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=start_batch_scoring_job_overwrite)]
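+
+As a purely hypothetical sketch of the idea (the override key names are assumptions; check your SDK version for the exact names):
+
+```python
+job = ml_client.batch_endpoints.invoke(
+    endpoint_name=endpoint_name,
+    input=input,
+    params_override=[
+        {"mini_batch_size": "20"},        # assumed key name
+        {"compute.instance_count": "5"},  # assumed key name
+    ],
+)
+```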
+
+# [Studio](#tab/studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+
+1. Select the tab __Batch endpoints__.
+
+1. Select the batch endpoint you just created.
+
+1. Select __Create job__.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
+
+1. On __Deployment__, select the deployment you want to execute.
+
+1. Select __Next__.
+
+1. Check the option __Override deployment settings__.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/overwrite-setting.png" alt-text="Screenshot of the overwrite setting when starting a batch job.":::
+
+1. Configure the job parameters. Only the current job execution will be affected by this configuration.
+++
+## Adding deployments to an endpoint
+
+Once you have a batch endpoint with a deployment, you can continue to refine your model and add new deployments. Batch endpoints will continue serving the default deployment while you develop and deploy new models under the same endpoint. Deployments don't affect one another.
+
+In this example, you'll learn how to add a second deployment __that solves the same MNIST problem but using a model built with Keras and TensorFlow__.
+
+### Adding a second deployment
+
+1. Create an environment where your batch deployment will run. Include in the environment any dependencies that your code requires to run. You'll also need to add the `azureml-core` library, as it's required for batch deployments to work. The following environment definition has the required libraries to run a model with TensorFlow.
+
+ # [Azure CLI](#tab/cli)
+
+ The environment definition will be included in the deployment definition itself as an anonymous environment. You'll see it in the following lines of the deployment definition:
+
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-keras/deployment.yml" range="12-15":::
+
+ # [Python](#tab/python)
+
+ Let's get a reference to the environment:
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=configure_environment_non_default)]
+
+ # [Studio](#tab/studio)
+
+ 1. Navigate to the __Environments__ tab on the side menu.
+
+ 1. Select the tab __Custom environments__ > __Create__.
+
+ 1. Enter the name of the environment, in this case `keras-batch-env`.
+
+ 1. On __Select environment type__ select __Use existing docker image with conda__.
+
+ 1. On __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.0`.
+
+ 1. In the __Customize__ section, copy the content of the file `deployment-keras/environment/conda.yaml` included in the repository into the portal.
+
+ 1. Select __Next__ and then __Create__.
+
+ 1. The environment is ready to be used.
+
+
+
+ The conda file used looks as follows:
+
+ __deployment-keras/environment/conda.yaml__
+
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-keras/environment/conda.yaml":::
+
+1. Create a scoring script for the model:
+
+ __deployment-keras/code/batch_driver.py__
+
+ :::code language="python" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-keras/code/batch_driver.py" :::
+
+1. Create a deployment definition:
+
+ # [Azure CLI](#tab/cli)
+
+ __deployment-keras/deployment.yml__
+
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-keras/deployment.yml":::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=configure_deployment_non_default)]
+
+ # [Studio](#tab/studio)
+
+ 1. Navigate to the __Endpoints__ tab on the side menu.
+
+ 1. Select the tab __Batch endpoints__.
+
+ 1. Select the existing batch endpoint where you want to add the deployment.
+
+ 1. Select __Add deployment__.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/add-deployment-option.png" alt-text="Screenshot of add new deployment option.":::
+
+ 1. On the model list, select the model `mnist` and select __Next__.
+
+ 1. On the deployment configuration page, give the deployment a name.
+
+ 1. On __Output action__, ensure __Append row__ is selected.
+
+ 1. On __Output file name__, ensure the batch scoring output file is the one you need. Default is `predictions.csv`.
+
+ 1. On __Mini batch size__, adjust the number of files that will be included in each mini-batch. This value controls the amount of data your scoring script receives in each batch.
+
+ 1. On __Scoring timeout (seconds)__, ensure you're giving enough time for your deployment to score a given batch of files. If you increase the number of files, you usually have to increase the timeout value too. More expensive models (like those based on deep learning) may require high values in this field.
+
+ 1. On __Max concurrency per instance__, configure the number of executors you want to have per each compute instance you get in the deployment. A higher number here guarantees a higher degree of parallelization, but it also increases the memory pressure on the compute instance. Tune this value together with __Mini batch size__.
+
+ 1. Once done, select __Next__.
+
+ 1. On environment, go to __Select scoring file and dependencies__ and select __Browse__.
+
+ 1. Select the scoring script file `deployment-keras/code/batch_driver.py`.
+
+ 1. On the section __Choose an environment__, select the environment you created in a previous step.
+
+ 1. Select __Next__.
+
+ 1. On the section __Compute__, select the compute cluster you created in a previous step.
+
+ 1. On __Instance count__, enter the number of compute instances you want for the deployment. In this case, we'll use 2.
+
+ 1. Select __Next__.
+
+1. Create the deployment:
+
+ # [Azure CLI](#tab/cli)
+
+ Run the following code to create a batch deployment under the batch endpoint.
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="create_deployment_non_default" :::
+
+ > [!TIP]
+ > The `--set-default` parameter is missing in this case. As a best practice for production scenarios, you may want to create a new deployment without setting it as default, verify it, and update the default deployment later.
+
+ # [Python](#tab/python)
+
+ Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=create_deployment_non_default)]
+
+ # [Studio](#tab/studio)
+
+ In the wizard, select __Create__ to start the deployment process.
++
+### Test a non-default batch deployment
+
+To test the new non-default deployment, you'll need to know the name of the deployment you want to run.
+
+# [Azure CLI](#tab/cli)
++
+Notice `--deployment-name` is used to specify the deployment we want to execute. This parameter allows you to `invoke` a non-default deployment, and it will not update the default deployment of the batch endpoint.
+
+# [Python](#tab/python)
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=test_deployment_non_default)]
+
+Notice `deployment_name` is used to specify the deployment we want to execute. This parameter allows you to `invoke` a non-default deployment, and it will not update the default deployment of the batch endpoint.
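+
+For reference, a minimal sketch of such an invocation with the `azure-ai-ml` SDK (the `input` and `endpoint_name` variables are assumed to be defined as in the previous sections):
+
+```python
+job = ml_client.batch_endpoints.invoke(
+    endpoint_name=endpoint_name,
+    deployment_name="mnist-keras",  # the non-default deployment to run
+    input=input,
+)
+```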
+
+# [Studio](#tab/studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+
+1. Select the tab __Batch endpoints__.
+
+1. Select the batch endpoint you just created.
+
+1. Select __Create job__.
+
+1. On __Deployment__, select the deployment you want to execute. In this case, `mnist-keras`.
+
+1. Complete the job creation wizard to get the job started.
+++
+### Update the default batch deployment
+
+Although you can invoke a specific deployment inside an endpoint, you'll usually want to invoke the endpoint itself and let it decide which deployment to use. That deployment is called the "default" deployment. Being able to change the default deployment lets you change the model serving the endpoint without changing the contract with the user invoking it. Use the following instructions to update the default deployment:
+
+# [Azure CLI](#tab/cli)
++
+# [Python](#tab/python)
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=update_default_deployment)]
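+
+The same operation can be sketched with plain SDK calls (`mnist-keras` is used here as an example deployment name):
+
+```python
+endpoint = ml_client.batch_endpoints.get(endpoint_name)
+endpoint.defaults.deployment_name = "mnist-keras"
+ml_client.batch_endpoints.begin_create_or_update(endpoint).result()
+```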
+
+# [Studio](#tab/studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+
+1. Select the tab __Batch endpoints__.
+
+1. Select the batch endpoint you want to configure.
+
+1. Select __Update default deployment__.
+
+ :::image type="content" source="./media/how-to-use-batch-endpoints-studio/update-default-deployment.png" alt-text="Screenshot of updating default deployment.":::
+
+1. On __Select default deployment__, select the name of the deployment you want to be the default one.
+
+1. Select __Update__.
+
+1. The selected deployment is now the default one.
+++
+## Delete the batch endpoint and the deployment
+
+# [Azure CLI](#tab/cli)
+
+If you aren't going to use the old batch deployment, you should delete it by running the following code. `--yes` is used to confirm the deletion.
++
+Run the following code to delete the batch endpoint and all the underlying deployments. Batch scoring jobs won't be deleted.
++
+# [Python](#tab/python)
+
+If you aren't going to use the old batch deployment, you should delete it by running the following code.
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=delete_deployment)]
+
+Run the following code to delete the batch endpoint and all the underlying deployments. Batch scoring jobs won't be deleted.
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb?name=delete_endpoint)]
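+
+Outside the notebook, these operations can be sketched as follows (the `deployment_name` and `endpoint_name` variables are placeholders from the previous sections):
+
+```python
+# Delete a single deployment under the endpoint
+ml_client.batch_deployments.begin_delete(
+    name=deployment_name, endpoint_name=endpoint_name
+).result()
+
+# Delete the endpoint and all its deployments; batch scoring jobs aren't deleted
+ml_client.batch_endpoints.begin_delete(name=endpoint_name).result()
+```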
+
+# [Studio](#tab/studio)
+
+1. Navigate to the __Endpoints__ tab on the side menu.
+
+1. Select the tab __Batch endpoints__.
+
+1. Select the batch endpoint you want to delete.
+
+1. Select __Delete__.
+
+1. The endpoint, along with all its deployments, will be deleted.
+
+1. Notice that this won't affect the compute cluster where the deployment(s) run.
+++
+## Next steps
+
+* [Accessing data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md).
+* [Authentication on batch endpoints](how-to-authenticate-batch-endpoint.md).
+* [Network isolation in batch endpoints](how-to-secure-batch-endpoint.md).
+* [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md).
machine-learning How To Use Batch Pipeline Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-pipeline-deployments.md
+
+ Title: "Deploy pipelines with batch endpoints (preview)"
+
+description: Learn how to create a batch deployment for a pipeline component and invoke it.
++++++ Last updated : 04/21/2023
+reviewer: msakande
++++
+# How to deploy pipelines with batch endpoints (preview)
++
+You can deploy pipeline components under a batch endpoint, providing a convenient way to operationalize them in Azure Machine Learning. In this article, you'll learn how to create a batch deployment that contains a simple pipeline. You'll learn to:
+
+> [!div class="checklist"]
+> * Create and register a pipeline component
+> * Create a batch endpoint and deploy a pipeline component
+> * Test the deployment
++
+## About this example
+
+In this example, we're going to deploy a pipeline component consisting of a simple command job that prints "hello world!". This component requires no inputs or outputs and is the simplest pipeline deployment scenario.
++
+The files for this example are in:
+
+```azurecli
+cd endpoints/batch/deploy-pipelines/hello-batch
+```
+
+### Follow along in Jupyter notebooks
+
+You can follow along with the Python SDK version of this example by opening the [sdk-deploy-and-test.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/deploy-pipelines/hello-batch/sdk-deploy-and-test.ipynb) notebook in the cloned repository.
+
+## Prerequisites
++
+## Create the pipeline component
+
+Batch endpoints can deploy either models or pipeline components. Pipeline components are reusable, and you can streamline your MLOps practice by using [shared registries](concept-machine-learning-registries-mlops.md) to move these components from one workspace to another.
+
+The pipeline component in this example contains one single step that only prints a "hello world" message in the logs. It doesn't require any inputs or outputs.
+
+The `hello-component/hello.yml` file contains the configuration for the pipeline component:
+
+__hello-component/hello.yml__
++
+Register the component:
+
+# [Azure CLI](#tab/cli)
++
+# [Python](#tab/python)
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/hello-batch/sdk-deploy-and-test.ipynb?name=register_component)]
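+
+If you aren't following along in the notebook, a minimal sketch of this step with the `azure-ai-ml` SDK:
+
+```python
+from azure.ai.ml import load_component
+
+# Load the pipeline component from its YAML definition and register it in the workspace
+hello_batch = load_component(source="hello-component/hello.yml")
+ml_client.components.create_or_update(hello_batch)
+```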
+++
+## Create a batch endpoint
+
+1. Provide a name for the endpoint. A batch endpoint's name needs to be unique in each region since the name is used to construct the invocation URI. To ensure uniqueness, append any trailing characters to the name specified in the following code.
+
+ # [Azure CLI](#tab/cli)
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/hello-batch/deploy-and-run.sh" ID="name_endpoint" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/hello-batch/sdk-deploy-and-test.ipynb?name=name_endpoint)]
+
+1. Configure the endpoint:
+
+ # [Azure CLI](#tab/cli)
+
+ The `endpoint.yml` file contains the endpoint's configuration.
+
+ __endpoint.yml__
+
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/hello-batch/endpoint.yml" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/hello-batch/sdk-deploy-and-test.ipynb?name=configure_endpoint)]
+
+1. Create the endpoint:
+
+ # [Azure CLI](#tab/cli)
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/hello-batch/deploy-and-run.sh" ID="create_endpoint" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/hello-batch/sdk-deploy-and-test.ipynb?name=create_endpoint)]
+
+1. Query the endpoint URI:
+
+ # [Azure CLI](#tab/cli)
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/hello-batch/deploy-and-run.sh" ID="query_endpoint" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/hello-batch/sdk-deploy-and-test.ipynb?name=query_endpoint)]
+
+## Deploy the pipeline component
+
+To deploy the pipeline component, we have to create a batch deployment. A deployment is a set of resources required for hosting the asset that does the actual work.
+
+1. Create a compute cluster. Batch endpoints and deployments run on compute clusters. They can run on any Azure Machine Learning compute cluster that already exists in the workspace. Therefore, multiple batch deployments can share the same compute infrastructure. In this example, we'll work on an Azure Machine Learning compute cluster called `batch-cluster`. Let's verify that the compute exists in the workspace, or create it if it doesn't.
+
+ # [Azure CLI](#tab/cli)
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/hello-batch/deploy-and-run.sh" ID="create_compute" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/hello-batch/sdk-deploy-and-test.ipynb?name=create_compute)]
+
+1. Configure the deployment:
+
+ # [Azure CLI](#tab/cli)
+
+ The `deployment.yml` file contains the deployment's configuration.
+
+ __deployment.yml__
+
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/hello-batch/deployment.yml" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/hello-batch/sdk-deploy-and-test.ipynb?name=configure_deployment)]
+
+1. Create the deployment:
+
+ # [Azure CLI](#tab/cli)
+
+ Run the following code to create a batch deployment under the batch endpoint and set it as the default deployment.
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/hello-batch/deploy-and-run.sh" ID="create_deployment" :::
+
+ > [!TIP]
+ > Notice the use of the `--set-default` flag to indicate that this new deployment is now the default.
+
+ # [Python](#tab/python)
+
+ This command will start the deployment creation and return a confirmation response while the deployment creation continues.
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/hello-batch/sdk-deploy-and-test.ipynb?name=create_deployment)]
+
+ Once created, let's configure this new deployment as the default one:
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/hello-batch/sdk-deploy-and-test.ipynb?name=update_default_deployment)]
+
+1. Your deployment is ready for use.
+
+## Test the deployment
+
+Once the deployment is created, it's ready to receive jobs. You can invoke the default deployment as follows:
+
+# [Azure CLI](#tab/cli)
++
+# [Python](#tab/python)
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/hello-batch/sdk-deploy-and-test.ipynb?name=invoke_deployment_inline)]
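+
+For reference, a minimal sketch of this call (since the pipeline takes no inputs, only the endpoint name is needed):
+
+```python
+job = ml_client.batch_endpoints.invoke(endpoint_name=endpoint_name)
+```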
+++
+> [!TIP]
+> In this example, the pipeline doesn't have inputs or outputs. However, they can be indicated at invocation time if any. To learn more about how to indicate inputs and outputs, see [Create jobs and input data for batch endpoints](how-to-access-data-batch-endpoints-jobs.md).
+
+You can monitor the progress of the job and stream the logs using:
+
+# [Azure CLI](#tab/cli)
++
+# [Python](#tab/python)
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/hello-batch/sdk-deploy-and-test.ipynb?name=get_job)]
+
+To wait for the job to finish, run the following code:
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/hello-batch/sdk-deploy-and-test.ipynb?name=stream_job_logs)]
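+
+These cells map to the following SDK calls, shown here as a minimal sketch:
+
+```python
+# Check the job's current status
+ml_client.jobs.get(name=job.name)
+
+# Stream the logs and wait until the pipeline job finishes
+ml_client.jobs.stream(name=job.name)
+```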
++
+## Clean up resources
+
+Once you're done, delete the associated resources from the workspace:
+
+# [Azure CLI](#tab/cli)
+
+Run the following code to delete the batch endpoint and its underlying deployment. `--yes` is used to confirm the deletion.
++
+# [Python](#tab/python)
+
+Delete the endpoint:
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/hello-batch/sdk-deploy-and-test.ipynb?name=delete_endpoint)]
++
+(Optional) Delete compute, unless you plan to reuse your compute cluster with later deployments.
+
+# [Azure CLI](#tab/cli)
+
+```azurecli
+az ml compute delete -n batch-cluster
+```
+
+# [Python](#tab/python)
+
+```python
+ml_client.compute.begin_delete(name="batch-cluster")
+```
++
+## Next steps
+
+- [How to deploy a training pipeline with batch endpoints (preview)](how-to-use-batch-training-pipeline.md)
+- [How to deploy a pipeline to perform batch scoring with preprocessing (preview)](how-to-use-batch-scoring-pipeline.md)
+- [Create batch endpoints from pipeline jobs (preview)](how-to-use-batch-pipeline-from-job.md)
+- [Access data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md)
+- [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md)
machine-learning How To Use Batch Pipeline From Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-pipeline-from-job.md
+
+ Title: How to deploy existing pipeline jobs to a batch endpoint (preview)
+
+description: Learn how to create pipeline component deployment for Batch Endpoints
+++++
+reviewer: msakande
++ Last updated : 05/12/2023+++
+# Deploy existing pipeline jobs to batch endpoints (preview)
++
+Batch endpoints allow you to deploy pipeline components, providing a convenient way to operationalize pipelines in Azure Machine Learning. If you already have a pipeline job that runs successfully, Azure Machine Learning can accept that job as input to your batch endpoint and create the pipeline component automatically for you. In this article, you'll learn how to use your existing pipeline job as input for batch deployment.
+
+You'll learn to:
+
+> [!div class="checklist"]
+> * Run and create the pipeline job that you want to deploy
+> * Create a batch deployment from the existing job
+> * Test the deployment
++
+## About this example
+
+In this example, we're going to deploy a pipeline consisting of a simple command job that prints "hello world!". Instead of registering the pipeline component before deployment, we indicate an existing pipeline job to use for deployment. Azure Machine Learning will then create the pipeline component automatically and deploy it as a batch endpoint pipeline component deployment.
++
+The files for this example are in:
+
+```azurecli
+cd endpoints/batch/deploy-pipelines/hello-batch
+```
+
+## Prerequisites
++
+## Run the pipeline job you want to deploy
+
+In this section, we begin by running a pipeline job:
+
+# [Azure CLI](#tab/cli)
+
+The following `pipeline-job.yml` file contains the configuration for the pipeline job:
+
+__pipeline-job.yml__
++
+# [Python](#tab/python)
+
+Load the pipeline component and instantiate it:
+
+```python
+hello_batch = load_component(source="hello-component/hello.yml")
+pipeline_job = hello_batch()
+```
+
+Now, configure some run settings to run the test. This article assumes you have a compute cluster named `batch-cluster`. You can replace it with the name of your own cluster.
+
+```python
+pipeline_job.settings.default_compute = "batch-cluster"
+pipeline_job.settings.default_datastore = "workspaceblobstore"
+```
+++
+Create the pipeline job:
+
+# [Azure CLI](#tab/cli)
++
+# [Python](#tab/python)
+
+```python
+pipeline_job_run = ml_client.jobs.create_or_update(
+ pipeline_job, experiment_name="hello-batch-pipeline"
+)
+pipeline_job_run
+```
+++
+## Create a batch endpoint
+
+Before we deploy the pipeline job, we need to create a batch endpoint to host the deployment.
+
+1. Provide a name for the endpoint. A batch endpoint's name needs to be unique in each region since the name is used to construct the invocation URI. To ensure uniqueness, append any trailing characters to the name specified in the following code.
+
+ # [Azure CLI](#tab/cli)
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/hello-batch/deploy-and-run.sh" ID="name_endpoint" :::
+
+ # [Python](#tab/python)
+
+ ```python
+ endpoint_name="hello-batch"
+ ```
+
+1. Configure the endpoint:
+
+ # [Azure CLI](#tab/cli)
+
+ The `endpoint.yml` file contains the endpoint's configuration.
+
+ __endpoint.yml__
+
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/hello-batch/endpoint.yml" :::
+
+ # [Python](#tab/python)
+
+ ```python
+ endpoint = BatchEndpoint(
+ name=endpoint_name,
+ description="A hello world endpoint for component deployments",
+ )
+ ```
+
+1. Create the endpoint:
+
+ # [Azure CLI](#tab/cli)
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/hello-batch/deploy-and-run.sh" ID="create_endpoint" :::
+
+ # [Python](#tab/python)
+
+ ```python
+ ml_client.batch_endpoints.begin_create_or_update(endpoint).result()
+ ```
+
+1. Query the endpoint URI:
+
+ # [Azure CLI](#tab/cli)
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/hello-batch/deploy-and-run.sh" ID="query_endpoint" :::
+
+ # [Python](#tab/python)
+
+ ```python
+ endpoint = ml_client.batch_endpoints.get(name=endpoint_name)
+ print(endpoint)
+ ```
+
+## Deploy the pipeline job
+
+To deploy the pipeline component, we have to create a batch deployment from the existing job.
+
+1. We need to tell Azure Machine Learning the name of the job that we want to deploy. In our case, that job is indicated in the following variable:
+
+ # [Azure CLI](#tab/cli)
+
+ ```azurecli
+ echo $JOB_NAME
+ ```
+
+ # [Python](#tab/python)
+
+ ```python
+ print(pipeline_job_run.name)
+ ```
+
+1. Configure the deployment.
+
+ # [Azure CLI](#tab/cli)
+
+ The `deployment-from-job.yml` file contains the deployment's configuration. Notice how we use the key `job_definition` instead of `component` to indicate that this deployment is created from a pipeline job:
+
+ __deployment-from-job.yml__
+
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/hello-batch/deployment-from-job.yml" :::
+
+ # [Python](#tab/python)
+
+ Notice now how we use the property `job_definition` instead of `component`:
+
+ ```python
+ deployment = BatchPipelineComponentDeployment(
+ name="hello-batch-from-job,
+ description="A hello world deployment with a single step. This deployment is created from a pipeline job.",
+ endpoint_name=endpoint.name,
+ job_definition=pipeline_job_run,
+ settings={
+ "default_comput": "batch-cluster",
+ "continue_on_step_failure": False
+ }
+ )
+ ```
+
+
+
+ > [!TIP]
+ > This configuration assumes you have a compute cluster named `batch-cluster`. You can replace this value with the name of your cluster.
+
+1. Create the deployment:
+
+ # [Azure CLI](#tab/cli)
+
+ Run the following code to create a batch deployment under the batch endpoint and set it as the default deployment.
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/hello-batch/deploy-and-run.sh" ID="create_deployment_from_job" :::
+
+ > [!TIP]
+ > Notice the use of `--set job_definition=azureml:$JOB_NAME`. Since job names are unique, `--set` is used here to change the name of the job to the one that was generated when you ran it in your workspace.
+
+ # [Python](#tab/python)
+
+ This command starts the deployment creation and returns a confirmation response while the deployment creation continues.
+
+ ```python
+ ml_client.batch_deployments.begin_create_or_update(deployment).result()
+ ```
+
+ Once created, let's configure this new deployment as the default one:
+
+ ```python
+ endpoint = ml_client.batch_endpoints.get(endpoint.name)
+ endpoint.defaults.deployment_name = deployment.name
+ ml_client.batch_endpoints.begin_create_or_update(endpoint).result()
+ ```
+
+1. Your deployment is ready for use.
+
+### Test the deployment
+
+Once the deployment is created, it's ready to receive jobs. You can invoke the default deployment as follows:
+
+# [Azure CLI](#tab/cli)
++
+# [Python](#tab/python)
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+)
+```
+++
+You can monitor the progress of the job and stream the logs using:
+
+# [Azure CLI](#tab/cli)
++
+# [Python](#tab/python)
+
+```python
+ml_client.jobs.get(name=job.name)
+```
+
+To wait for the job to finish, run the following code:
+
+```python
+ml_client.jobs.stream(name=job.name)
+```
++
+## Clean up resources
+
+Once you're done, delete the associated resources from the workspace:
+
+# [Azure CLI](#tab/cli)
+
+Run the following code to delete the batch endpoint and its underlying deployment. `--yes` is used to confirm the deletion.
++
+# [Python](#tab/python)
+
+Delete the endpoint:
+
+```python
+ml_client.batch_endpoints.begin_delete(endpoint.name).result()
+```
+++
+## Next steps
+
+- [How to deploy a training pipeline with batch endpoints (preview)](how-to-use-batch-training-pipeline.md)
+- [How to deploy a pipeline to perform batch scoring with preprocessing (preview)](how-to-use-batch-scoring-pipeline.md)
+- [Access data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md)
+- [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md)
machine-learning How To Use Batch Scoring Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-scoring-pipeline.md
+
+ Title: "Operationalize a scoring pipeline on batch endpoints (preview)"
+
+description: Learn how to operationalize a pipeline that performs batch scoring with preprocessing.
++++++ Last updated : 04/21/2023
+reviewer: msakande
++++
+# How to deploy a pipeline to perform batch scoring with preprocessing (preview)
++
+In this article, you'll learn how to deploy an inference (or scoring) pipeline under a batch endpoint. The pipeline performs scoring over a registered model while also reusing a preprocessing component from when the model was trained. Reusing the same preprocessing component ensures that the same preprocessing is applied during scoring.
+
+You'll learn to:
+
+> [!div class="checklist"]
+> * Create a pipeline that reuses existing components from the workspace
+> * Deploy the pipeline to an endpoint
+> * Consume predictions generated by the pipeline
++
+## About this example
+
+This example shows you how to reuse preprocessing code and the parameters learned during preprocessing before you use your model for inferencing. By reusing the preprocessing code and learned parameters, we can ensure that the same transformations (such as normalization and feature encoding) that were applied to the input data during training are also applied during inferencing. The model used for inference will perform predictions on tabular data from the [UCI Heart Disease Data Set](https://archive.ics.uci.edu/ml/datasets/Heart+Disease).
+
+A visualization of the pipeline is as follows:
+++
+The files for this example are in:
+
+```azurecli
+cd endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing
+```
+
+### Follow along in Jupyter notebooks
+
+You can follow along with the Python SDK version of this example by opening the [sdk-deploy-and-test.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb) notebook in the cloned repository.
++
+## Prerequisites
+++
+## Create the inference pipeline
+
+In this section, we'll create all the assets required for our inference pipeline. We'll begin by creating an environment that includes necessary libraries for the pipeline's components. Next, we'll create a compute cluster on which the batch deployment will run. Afterwards, we'll register the components, models, and transformations we need to build our inference pipeline. Finally, we'll build and test the pipeline.
+
+### Create the environment
+
+The components in this example will use an environment with the `XGBoost` and `scikit-learn` libraries. The `environment/conda.yml` file contains the environment's configuration:
+
+__environment/conda.yml__
++
+Create the environment as follows:
+
+1. Define the environment:
+
+ # [Azure CLI](#tab/cli)
+
+ __environment/xgboost-sklearn-py38.yml__
+
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/environment/xgboost-sklearn-py38.yml" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb?name=configure_environment)]
+
+1. Create the environment:
+
+ # [Azure CLI](#tab/cli)
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/deploy-and-run.sh" ID="create_environment" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb?name=create_environment)]
+
+### Create a compute cluster
+
+Batch endpoints and deployments run on compute clusters. They can run on any Azure Machine Learning compute cluster that already exists in the workspace. Therefore, multiple batch deployments can share the same compute infrastructure. In this example, we'll work on an Azure Machine Learning compute cluster called `batch-cluster`. Let's verify that the compute exists in the workspace, or create it if it doesn't.
+
+# [Azure CLI](#tab/cli)
++
+# [Python](#tab/python)
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb?name=create_compute)]
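+
+If you're working outside the notebook, a minimal sketch of this check with the `azure-ai-ml` SDK (the VM size is a hypothetical choice):
+
+```python
+from azure.ai.ml.entities import AmlCompute
+
+compute_name = "batch-cluster"
+if not any(c.name == compute_name for c in ml_client.compute.list()):
+    # Create the cluster only if it doesn't already exist in the workspace
+    compute_cluster = AmlCompute(
+        name=compute_name,
+        size="STANDARD_DS3_v2",  # hypothetical VM size
+        min_instances=0,
+        max_instances=5,
+    )
+    ml_client.compute.begin_create_or_update(compute_cluster).result()
+```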
+++
+### Register components and models
+
+We're going to register components, models, and transformations that we need to build our inference pipeline. We can reuse some of these assets for training routines.
+
+> [!TIP]
+> In this tutorial, we'll reuse the model and the preprocessing component from an earlier training pipeline. You can see how they were created by following the example [How to deploy a training pipeline with batch endpoints](how-to-use-batch-training-pipeline.md).
+
+1. Register the model to use for prediction:
+
+ # [Azure CLI](#tab/cli)
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/deploy-and-run.sh" ID="register_model" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb?name=register_model)]
+
+
+
+1. The registered model wasn't trained directly on input data. Instead, the input data was preprocessed (or transformed) before training, using a prepare component. We'll also need to register this component. Register the prepare component:
+
+ # [Azure CLI](#tab/cli)
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/deploy-and-run.sh" ID="register_preprocessing_component" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb?name=register_preprocessing_component)]
+
+
+
+ > [!TIP]
+ > After registering the prepare component, you can now reference it from the workspace. For example, `azureml:uci_heart_prepare@latest` will get the last version of the prepare component.
+
+1. As part of the data transformations in the prepare component, the input data was normalized to center the predictors and limit their values in the range of [-1, 1]. The transformation parameters were captured in a scikit-learn transformation that we can also register to apply later when we have new data. Register the transformation as follows:
+
+ # [Azure CLI](#tab/cli)
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/deploy-and-run.sh" ID="register_transformation" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb?name=register_transformation)]
+
+1. We'll perform inferencing for the registered model, using another component named `score` that computes the predictions for a given model. We'll reference the component directly from its definition.
+ > [!TIP]
+ > Best practice would be to register the component and reference it from the pipeline. However, in this example, we're going to reference the component directly from its definition to help you see which components are reused from the training pipeline and which ones are new.
++
+### Build the pipeline
+
+Now it's time to bind all the elements together. The inference pipeline we'll deploy has two components (steps):
+
+- `preprocess_job`: This step reads the input data and returns the prepared data and the applied transformations. The step receives two inputs:
+ - `data`: a folder containing the input data to score
+ - `transformations`: (optional) Path to the transformations that will be applied, if available. When provided, the transformations are read from the model that is indicated at the path. However, if the path isn't provided, then the transformations will be learned from the input data. For inferencing, though, you can't learn the transformation parameters (in this example, the normalization coefficients) from the input data because you need to use the same parameter values that were learned during training. Since this input is optional, the `preprocess_job` component can be used during training and scoring.
+- `score_job`: This step will perform inferencing on the transformed data, using the input model. Notice that the component uses an MLflow model to perform inference. Finally, the scores are written back in the same format as they were read.
+
+# [Azure CLI](#tab/cli)
+
+The pipeline configuration is defined in the `pipeline.yml` file:
+
+__pipeline.yml__
++
+# [Python](#tab/python)
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb?name=load_component)]
+
+Let's build the pipeline in a function:
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb?name=configure_pipeline)]
+++
+A visualization of the pipeline is as follows:
++
+### Test the pipeline
+
+Let's test the pipeline with some sample data. To do that, we'll create a job using the pipeline and the `batch-cluster` compute cluster created previously.
+
+# [Azure CLI](#tab/cli)
+
+The following `pipeline-job.yml` file contains the configuration for the pipeline job:
+
+__pipeline-job.yml__
++
+# [Python](#tab/python)
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb?name=configure_pipeline_job)]
+
+Now, we'll configure some run settings to run the test:
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb?name=configure_pipeline_job_defaults)]
+++
+Create the test job:
+
+# [Azure CLI](#tab/cli)
++
+# [Python](#tab/python)
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb?name=test_pipeline)]
+++
+## Create a batch endpoint
+
+1. Provide a name for the endpoint. A batch endpoint's name needs to be unique in each region since the name is used to construct the invocation URI. To ensure uniqueness, append any trailing characters to the name specified in the following code.
+
+ # [Azure CLI](#tab/cli)
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/deploy-and-run.sh" ID="name_endpoint" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb?name=name_endpoint)]
+
+1. Configure the endpoint:
+
+ # [Azure CLI](#tab/cli)
+
+ The `endpoint.yml` file contains the endpoint's configuration.
+
+ __endpoint.yml__
+
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/endpoint.yml" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb?name=configure_endpoint)]
+
+1. Create the endpoint:
+
+ # [Azure CLI](#tab/cli)
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/deploy-and-run.sh" ID="create_endpoint" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb?name=create_endpoint)]
+
+1. Query the endpoint URI:
+
+ # [Azure CLI](#tab/cli)
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/deploy-and-run.sh" ID="query_endpoint" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb?name=query_endpoint)]
+
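+For reference, a consolidated Python sketch of the steps in this list follows; the endpoint name suffix and description are assumptions.
+
+```python
+from azure.ai.ml.entities import BatchEndpoint
+
+# The endpoint name must be unique within the Azure region
+endpoint = BatchEndpoint(
+    name="uci-classifier-score-batch-hri78",
+    description="Batch scoring with preprocessing",
+)
+ml_client.batch_endpoints.begin_create_or_update(endpoint).result()
+```
+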
+## Deploy the pipeline component
+
+To deploy the pipeline component, we have to create a batch deployment. A deployment is a set of resources required for hosting the asset that does the actual work.
+
+1. Configure the deployment
+
+ # [Azure CLI](#tab/cli)
+
+ The `deployment.yml` file contains the deployment's configuration.
+
+ __deployment.yml__
+
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/deployment.yml" :::
+
+ # [Python](#tab/python)
+
+ Our pipeline is defined in a function. To transform it to a component, you'll use the `build()` method. Pipeline components are reusable compute graphs that can be included in batch deployments or used to compose more complex pipelines.
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb?name=build_pipeline)]
+
+ Now we can define the deployment:
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb?name=configure_deployment)]
+
+1. Create the deployment
+
+ # [Azure CLI](#tab/cli)
+
+ Run the following code to create a batch deployment under the batch endpoint and set it as the default deployment.
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/deploy-and-run.sh" ID="create_deployment" :::
+
+ > [!TIP]
+ > Notice the use of the `--set-default` flag to indicate that this new deployment is now the default.
+
+ # [Python](#tab/python)
+
+    This command starts the deployment creation and returns a confirmation response while the creation continues.
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb?name=create_deployment)]
+
+ Once created, let's configure this new deployment as the default one:
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb?name=update_default_deployment)]
+
+1. Your deployment is ready for use.
+
+## Test the deployment
+
+Once the deployment is created, it's ready to receive jobs. Follow these steps to test it:
+
+1. Our deployment requires that we indicate one data input and one literal input.
+
+ # [Azure CLI](#tab/cli)
+
+ The `inputs.yml` file contains the definition for the input data asset:
+
+ __inputs.yml__
+
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/inputs.yml" :::
+
+ # [Python](#tab/python)
+
+ The input data asset definition:
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb?name=configure_inputs)]
+
+
+
+ > [!TIP]
+ > To learn more about how to indicate inputs, see [Create jobs and input data for batch endpoints](how-to-access-data-batch-endpoints-jobs.md).
+
+1. You can invoke the default deployment as follows:
+
+ # [Azure CLI](#tab/cli)
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/deploy-and-run.sh" ID="invoke_deployment_file" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb?name=invoke_deployment)]
+
+1. You can monitor the progress of the job and stream the logs using:
+
+ # [Azure CLI](#tab/cli)
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/deploy-and-run.sh" ID="stream_job_logs" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb?name=get_job)]
+
+ To wait for the job to finish, run the following code:
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb?name=stream_job_logs)]
+
+### Access job output
+
+Once the job is completed, we can access its output. This job contains only one output named `scores`:
+
+# [Azure CLI](#tab/cli)
+
+You can download the associated results using `az ml job download`.
++
+# [Python](#tab/python)
+
+Download the result:
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb?name=download_outputs)]
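+
+For example, a possible way to download the `scores` output, assuming `pipeline_job` holds the completed job:
+
+```python
+# Download the job's named output "scores" to the current directory
+ml_client.jobs.download(name=pipeline_job.name, output_name="scores", download_path=".")
+```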
+++
+Read the scored data:
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb?name=read_outputs)]
+
+The output looks as follows:
+
+| age | sex | ... | thal | prediction |
+| --- | --- | --- | --- | --- |
+| 0.9338 | 1 | ... | 2 | 0 |
+| 1.3782 | 1 | ... | 3 | 1 |
+| 1.3782 | 1 | ... | 4 | 0 |
+| -1.954 | 1 | ... | 3 | 0 |
+
+The output contains the predictions plus the preprocessed data that was provided to the *score* component. For example, the column `age` has been normalized, and the column `thal` contains the original encoding values. In practice, you probably want to output only the prediction and then concatenate it with the original values. That task is left to the reader.
+
+## Clean up resources
+
+Once you're done, delete the associated resources from the workspace:
+
+# [Azure CLI](#tab/cli)
+
+Run the following code to delete the batch endpoint and its underlying deployment. `--yes` is used to confirm the deletion.
++
+# [Python](#tab/python)
+
+Delete the endpoint:
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/sdk-deploy-and-test.ipynb?name=delete_endpoint)]
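+
+A minimal sketch of this call, assuming `endpoint_name` holds your endpoint's name and you want to wait for the operation to finish:
+
+```python
+ml_client.batch_endpoints.begin_delete(name=endpoint_name).result()
+```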
+++
+(Optional) Delete compute, unless you plan to reuse your compute cluster with later deployments.
+
+# [Azure CLI](#tab/cli)
+
+```azurecli
+az ml compute delete -n batch-cluster
+```
+
+# [Python](#tab/python)
+
+```python
+ml_client.compute.begin_delete(name="batch-cluster")
+```
++
+## Next steps
+
+- [Create batch endpoints from pipeline jobs (preview)](how-to-use-batch-pipeline-from-job.md)
+- [Accessing data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md)
+- [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md)
machine-learning How To Use Batch Training Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-training-pipeline.md
+
+ Title: "Operationalize a training pipeline on batch endpoints (preview)"
+
+description: Learn how to deploy a training pipeline under a batch endpoint.
++++++ Last updated : 04/21/2023
+reviewer: msakande
++++
+# How to operationalize a training pipeline with batch endpoints (preview)
++
+In this article, you'll learn how to operationalize a training pipeline under a batch endpoint. The pipeline uses multiple components (or steps) that include model training, data preprocessing, and model evaluation.
+
+You'll learn to:
+
+> [!div class="checklist"]
+> * Create and test a training pipeline
+> * Deploy the pipeline to a batch endpoint
+> * Modify the pipeline and create a new deployment in the same endpoint
+> * Test the new deployment and set it as the default deployment
++
+## About this example
+
+This example deploys a training pipeline that takes input training data (labeled) and produces a predictive model, along with the evaluation results and the transformations applied during preprocessing. The pipeline will use tabular data from the [UCI Heart Disease Data Set](https://archive.ics.uci.edu/ml/datasets/Heart+Disease) to train an XGBoost model. We use a data preprocessing component to preprocess the data before it is sent to the training component to fit and evaluate the model.
+
+A visualization of the pipeline is as follows:
+++
+The files for this example are in:
+
+```azurecli
+cd endpoints/batch/deploy-pipelines/training-with-components
+```
+
+### Follow along in Jupyter notebooks
+
+You can follow along with the Python SDK version of this example by opening the [sdk-deploy-and-test.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb) notebook in the cloned repository.
+
+## Prerequisites
++
+## Create the training pipeline component
+
+In this section, we'll create all the assets required for our training pipeline. We'll begin by creating an environment that includes the libraries necessary to train the model. We'll then create a compute cluster on which the batch deployment will run, and finally, we'll register the input data as a data asset.
+
+### Create the environment
+
+The components in this example will use an environment with the `XGBoost` and `scikit-learn` libraries. The `environment/conda.yml` file contains the environment's configuration:
+
+__environment/conda.yml__
++
+Create the environment as follows (a consolidated Python sketch appears after these steps):
+
+1. Define the environment:
+
+ # [Azure CLI](#tab/cli)
+
+ __environment/xgboost-sklearn-py38.yml__
+
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/training-with-components/environment/xgboost-sklearn-py38.yml" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=configure_environment)]
+
+1. Create the environment:
+
+ # [Azure CLI](#tab/cli)
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/training-with-components/deploy-and-run.sh" ID="create_environment" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=create_environment)]
+
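+For reference, a rough Python equivalent of these two steps looks like the following sketch; the Docker base image is an assumption.
+
+```python
+from azure.ai.ml.entities import Environment
+
+environment = Environment(
+    name="xgboost-sklearn-py38",
+    conda_file="environment/conda.yml",
+    image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
+)
+ml_client.environments.create_or_update(environment)
+```
+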
+### Create a compute cluster
+
+Batch endpoints and deployments run on compute clusters. They can run on any Azure Machine Learning compute cluster that already exists in the workspace. Therefore, multiple batch deployments can share the same compute infrastructure. In this example, we'll work on an Azure Machine Learning compute cluster called `batch-cluster`. Let's verify that the compute exists in the workspace, or create it if it doesn't.
+
+# [Azure CLI](#tab/cli)
++
+# [Python](#tab/python)
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=create_compute)]
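+
+A minimal sketch of creating such a cluster with the SDK; the VM size and node limits are assumptions:
+
+```python
+from azure.ai.ml.entities import AmlCompute
+
+compute = AmlCompute(
+    name="batch-cluster",
+    size="STANDARD_DS3_v2",  # illustrative VM size
+    min_instances=0,
+    max_instances=5,
+)
+ml_client.compute.begin_create_or_update(compute).result()
+```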
++
+### Register the training data as a data asset
+
+Our training data is represented in CSV files. To mimic a more production-level workload, we're going to register the training data in the `heart.csv` file as a data asset in the workspace. This data asset will later be indicated as an input to the endpoint.
+
+# [Azure CLI](#tab/cli)
++
+# [Python](#tab/python)
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=configure_data_asset)]
+
+Create the data asset:
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=create_data_asset)]
+
+Let's get a reference to the new data asset:
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=get_data_asset)]
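+
+For reference, registering and retrieving the data asset might look like the following sketch; the asset name and local path are assumptions:
+
+```python
+from azure.ai.ml.constants import AssetTypes
+from azure.ai.ml.entities import Data
+
+heart_data = Data(
+    name="heart-classifier-train",
+    path="data/",  # local folder containing heart.csv
+    type=AssetTypes.URI_FOLDER,
+    description="UCI Heart Disease Data Set",
+)
+ml_client.data.create_or_update(heart_data)
+
+# Get a reference to the latest version of the registered asset
+heart_dataset_train = ml_client.data.get(name="heart-classifier-train", label="latest")
+```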
+++
+### Create the pipeline
+
+The pipeline we want to operationalize takes one input, the training data, and produces three outputs: the trained model, the evaluation results, and the data transformations applied as preprocessing. The pipeline consists of two components:
+
+- `preprocess_job`: This step reads the input data and returns the prepared data and the applied transformations. The step receives three inputs:
+ - `data`: a folder containing the input data to transform and score
+ - `transformations`: (optional) Path to the transformations that will be applied, if available. If the path isn't provided, then the transformations will be learned from the input data. Since the `transformations` input is optional, the `preprocess_job` component can be used during training and scoring.
+ - `categorical_encoding`: the encoding strategy for the categorical features (`ordinal` or `onehot`).
+- `train_job`: This step will train an XGBoost model based on the prepared data and return the evaluation results and the trained model. The step receives three inputs:
+ - `data`: the preprocessed data.
+ - `target_column`: the column that we want to predict.
+ - `eval_size`: indicates the proportion of the input data used for evaluation.
+
+# [Azure CLI](#tab/cli)
+
+The pipeline configuration is defined in the `deployment-ordinal/pipeline.yml` file:
+
+__deployment-ordinal/pipeline.yml__
++
+> [!NOTE]
+> In the `pipeline.yml` file, the `transformations` input is missing from the `preprocess_job`; therefore, the script will learn the transformation parameters from the input data.
+
+# [Python](#tab/python)
+
+The configurations for the pipeline components are in the `prepare.yml` and `train_xgb.yml` files. Load the components:
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=load_component)]
+
+Construct the pipeline:
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=configure_pipeline)]
+
+> [!NOTE]
+> In the pipeline, the `transformations` input is missing; therefore, the script will learn the parameters from the input data.
+++
+A visualization of the pipeline is as follows:
++
+### Test the pipeline
+
+Let's test the pipeline with some sample data. To do that, we'll create a job using the pipeline and the `batch-cluster` compute cluster created previously.
+
+# [Azure CLI](#tab/cli)
+
+The following `pipeline-job.yml` file contains the configuration for the pipeline job:
+
+__deployment-ordinal/pipeline-job.yml__
+++
+# [Python](#tab/python)
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=configure_pipeline_job)]
+
+Now, we'll configure some run settings to run the test:
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=configure_pipeline_job_defaults)]
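+
+These settings typically include the default compute. A minimal sketch, assuming `pipeline_job` is the object returned by the pipeline function:
+
+```python
+# Run the test on the compute cluster created earlier
+pipeline_job.settings.default_compute = "batch-cluster"
+pipeline_job.settings.continue_on_step_failure = False
+```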
+++
+Create the test job:
+
+# [Azure CLI](#tab/cli)
++
+# [Python](#tab/python)
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=test_pipeline)]
+++
+## Create a batch endpoint
+
+1. Provide a name for the endpoint. A batch endpoint's name needs to be unique in each region since the name is used to construct the invocation URI. To ensure uniqueness, append a unique suffix to the name specified in the following code.
+
+ # [Azure CLI](#tab/cli)
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/training-with-components/deploy-and-run.sh" ID="name_endpoint" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=name_endpoint)]
+
+1. Configure the endpoint:
+
+ # [Azure CLI](#tab/cli)
+
+ The `endpoint.yml` file contains the endpoint's configuration.
+
+ __endpoint.yml__
+
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/training-with-components/endpoint.yml" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=configure_endpoint)]
+
+1. Create the endpoint:
+
+ # [Azure CLI](#tab/cli)
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/training-with-components/deploy-and-run.sh" ID="create_endpoint" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=create_endpoint)]
+
+1. Query the endpoint URI:
+
+ # [Azure CLI](#tab/cli)
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/training-with-components/deploy-and-run.sh" ID="query_endpoint" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=query_endpoint)]
+
+## Deploy the pipeline component
+
+To deploy the pipeline component, we have to create a batch deployment. A deployment is a set of resources required for hosting the asset that does the actual work.
+
+1. Configure the deployment:
+
+ # [Azure CLI](#tab/cli)
+
+ The `deployment-ordinal/deployment.yml` file contains the deployment's configuration.
+
+ __deployment-ordinal/deployment.yml__
+
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/training-with-components/deployment-ordinal/deployment.yml" :::
+
+ # [Python](#tab/python)
+
+ Our pipeline is defined in a function. To transform it to a component, you'll use the `build()` method. Pipeline components are reusable compute graphs that can be included in batch deployments or used to compose more complex pipelines.
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=build_pipeline_component)]
+
+ Now we can define the deployment:
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=configure_deployment)]
+
+1. Create the deployment:
+
+ # [Azure CLI](#tab/cli)
+
+ Run the following code to create a batch deployment under the batch endpoint and set it as the default deployment.
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/training-with-components/deploy-and-run.sh" ID="create_deployment" :::
+
+ > [!TIP]
+ > Notice the use of the `--set-default` flag to indicate that this new deployment is now the default.
+
+ # [Python](#tab/python)
+
+    This command starts the deployment creation and returns a confirmation response while the creation continues.
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=create_deployment)]
+
+ Once created, let's configure this new deployment as the default one:
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=update_default_deployment)]
+
+1. Your deployment is ready for use.
+
+## Test the deployment
+
+Once the deployment is created, it's ready to receive jobs. Follow these steps to test it:
+
+1. Our deployment requires that we indicate one data input.
+
+ # [Azure CLI](#tab/cli)
+
+ The `inputs.yml` file contains the definition for the input data asset:
+
+ __inputs.yml__
+
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/training-with-components/inputs.yml" :::
+
+ # [Python](#tab/python)
+
+ Define the input data asset:
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=configure_inputs)]
+
+
+
+ > [!TIP]
+ > To learn more about how to indicate inputs, see [Create jobs and input data for batch endpoints](how-to-access-data-batch-endpoints-jobs.md).
+
+1. You can invoke the default deployment as follows:
+
+ # [Azure CLI](#tab/cli)
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/training-with-components/deploy-and-run.sh" ID="invoke_deployment_file" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=invoke_deployment)]
+
+1. You can monitor the progress of the job and stream the logs using:
+
+ # [Azure CLI](#tab/cli)
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/training-with-components/deploy-and-run.sh" ID="stream_job_logs" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=get_job)]
+
+ To wait for the job to finish, run the following code:
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=stream_job_logs)]
+
+It's worth mentioning that only the pipeline's inputs are published as inputs in the batch endpoint. For instance, `categorical_encoding` is an input of a step of the pipeline, but not an input in the pipeline itself. Use this fact to control which inputs you want to expose to your clients and which ones you want to hide.
+
+### Access job outputs
+
+Once the job is completed, we can access some of its outputs. This pipeline produces the following outputs for its components:
+- `preprocess_job`: output is `transformations_output`
+- `train_job`: outputs are `model` and `evaluation_results`
+
+You can download the associated results using:
+
+# [Azure CLI](#tab/cli)
++
+# [Python](#tab/python)
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=download_outputs)]
+++
+## Create a new deployment in the endpoint
+
+Endpoints can host multiple deployments at once, while keeping only one deployment as the default. Therefore, you can iterate over your different models, deploy them to your endpoint, test them, and finally switch the default deployment to the one that works best for you.
+
+Let's change the way preprocessing is done in the pipeline to see if we get a model that performs better.
+
+### Change a parameter in the pipeline's preprocessing component
+
+The preprocessing component has an input called `categorical_encoding`, which can have values `ordinal` or `onehot`. These values correspond to two different ways of encoding categorical features.
+
+- `ordinal`: Encodes the feature values with numeric values (ordinal) from `[1:n]`, where `n` is the number of categories in the feature. Ordinal encoding implies that there's a natural rank order among the feature categories.
+- `onehot`: Doesn't imply a natural rank ordered relationship but introduces a dimensionality problem if the number of categories is large.
+
+The default deployment we created previously uses `ordinal` encoding. Let's now change the categorical encoding to `onehot` and see how the model performs.
+
+> [!TIP]
+> Alternatively, we could have exposed the `categorical_encoding` input to clients as an input to the pipeline job itself. However, we chose to change the parameter value in the preprocessing step so that we can hide and control the parameter inside the deployment and take advantage of having multiple deployments under the same endpoint.
+
+1. Modify the pipeline. It looks as follows:
+
+ # [Azure CLI](#tab/cli)
+
+ The pipeline configuration is defined in the `deployment-onehot/pipeline.yml` file:
+
+ __deployment-onehot/pipeline.yml__
+
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/training-with-components/deployment-onehot/pipeline.yml" highlight="29" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=configure_nondefault_pipeline)]
+
+1. Configure the deployment:
+
+ # [Azure CLI](#tab/cli)
+
+ The `deployment-onehot/deployment.yml` file contains the deployment's configuration.
+
+ __deployment-onehot/deployment.yml__
+
+ :::code language="yaml" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/training-with-components/deployment-onehot/deployment.yml" :::
+
+ # [Python](#tab/python)
+
+ Our pipeline is defined in a function. To transform it to a component, you'll use the `build()` method. Pipeline components are reusable compute graphs that can be included in batch deployments or used to compose more complex pipelines.
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=build_nondefault_pipeline)]
+
+ Now we can define the deployment:
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=configure_nondefault_deployment)]
+
+1. Create the deployment:
+
+ # [Azure CLI](#tab/cli)
+
+    Run the following code to create a batch deployment under the batch endpoint. This time, the deployment isn't set as the default one.
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/training-with-components/deploy-and-run.sh" ID="create_nondefault_deployment" :::
+
+ Your deployment is ready for use.
+
+ # [Python](#tab/python)
+
+    This command starts the deployment creation and returns a confirmation response while the creation continues.
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=create_nondefault_deployment)]
+
+1. Your deployment is ready for use.
+
+### Test a nondefault deployment
+
+Once the deployment is created, it's ready to receive jobs. We can test it in the same way we did before, but now we'll invoke a specific deployment:
+
+1. Invoke the deployment as follows, specifying the deployment parameter to trigger the specific deployment `uci-classifier-train-onehot`:
+
+ # [Azure CLI](#tab/cli)
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/training-with-components/deploy-and-run.sh" ID="invoke_nondefault_deployment_file" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=invoke_nondefault_deployment)]
+
+1. You can monitor the progress of the job and stream the logs using:
+
+ # [Azure CLI](#tab/cli)
+
+ :::code language="azurecli" source="~/azureml-examples-batch-pup/cli/endpoints/batch/deploy-pipelines/training-with-components/deploy-and-run.sh" ID="stream_job_logs" :::
+
+ # [Python](#tab/python)
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=get_nondefault_job)]
+
+ To wait for the job to finish, run the following code:
+
+ [!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=stream_nondefault_job_logs)]
++
+### Configure the new deployment as the default one
+
+Once we're satisfied with the performance of the new deployment, we can set this new one as the default:
+
+# [Azure CLI](#tab/cli)
+
+
+# [Python](#tab/python)
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=update_default_deployment)]
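+
+A possible way to do this with the SDK, assuming `endpoint_name` holds the endpoint's name:
+
+```python
+# Point the endpoint's default at the new deployment and update the endpoint
+endpoint = ml_client.batch_endpoints.get(endpoint_name)
+endpoint.defaults.deployment_name = "uci-classifier-train-onehot"
+ml_client.batch_endpoints.begin_create_or_update(endpoint).result()
+```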
++
+### Delete the old deployment
+
+Once you're done, you can delete the old deployment if you don't need it anymore:
+
+# [Azure CLI](#tab/cli)
++
+# [Python](#tab/python)
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=delete_deployment)]
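+
+A minimal sketch, replacing the placeholder with the name of the deployment you no longer need:
+
+```python
+ml_client.batch_deployments.begin_delete(
+    name="<old-deployment-name>",
+    endpoint_name=endpoint_name,
+).result()
+```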
++
+## Clean up resources
+
+Once you're done, delete the associated resources from the workspace:
+
+# [Azure CLI](#tab/cli)
+
+Run the following code to delete the batch endpoint and its underlying deployment. `--yes` is used to confirm the deletion.
++
+# [Python](#tab/python)
+
+Delete the endpoint:
+
+[!notebook-python[] (~/azureml-examples-batch-pup/sdk/python/endpoints/batch/deploy-pipelines/training-with-components/sdk-deploy-and-test.ipynb?name=delete_endpoint)]
++
+(Optional) Delete compute, unless you plan to reuse your compute cluster with later deployments.
+
+# [Azure CLI](#tab/cli)
+
+```azurecli
+az ml compute delete -n batch-cluster
+```
+
+# [Python](#tab/python)
+
+```python
+ml_client.compute.begin_delete(name="batch-cluster")
+```
++
+## Next steps
+
+- [How to deploy a pipeline to perform batch scoring with preprocessing (preview)](how-to-use-batch-scoring-pipeline.md)
+- [Create batch endpoints from pipeline jobs (preview)](how-to-use-batch-pipeline-from-job.md)
+- [Accessing data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md)
+- [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md)
machine-learning How To Use Event Grid Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-event-grid-batch.md
The workflow will work in the following way:
* This example assumes that you have a model correctly deployed as a batch endpoint. Particularly, we are using the *heart condition classifier* created in the tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md). * This example assumes that your batch deployment runs in a compute cluster called `cpu-cluster`.
-* The Logic App we are creating will communicate with Azure Machine Learning batch endpoints using REST. To know more about how to use the REST API of batch endpoints read [Deploy models with REST for batch scoring](how-to-deploy-batch-with-rest.md).
+* The Logic App we are creating will communicate with Azure Machine Learning batch endpoints using REST. To know more about how to use the REST API of batch endpoints read [Create jobs and input data for batch endpoints](how-to-access-data-batch-endpoints-jobs.md?tabs=rest).
## Authenticating against batch endpoints
machine-learning How To Use Pipeline Component https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-pipeline-component.md
After submitted pipeline job, you can go to pipeline job detail page to change p
- [pipeline_with_train_eval_pipeline_component](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/pipelines/1j_pipeline_with_pipeline_component/pipeline_with_train_eval_pipeline_component/pipeline_with_train_eval_pipeline_component.ipynb) ## Next steps+ - [YAML reference for pipeline component](reference-yaml-component-pipeline.md) - [Track an experiment](how-to-log-view-metrics.md) - [Deploy a trained model](how-to-deploy-managed-online-endpoints.md)
+- [Deploy a pipeline with batch endpoints (preview)](how-to-use-batch-pipeline-deployments.md)
machine-learning How To Use Pipeline Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-pipeline-ui.md
In this article, you learned the key features in how to create, explore, and deb
+ [How to train a model in the designer](tutorial-designer-automobile-price-train-score.md) + [How to deploy model to real-time endpoint in the designer](tutorial-designer-automobile-price-deploy.md) + [What is machine learning component](concept-component.md)++ [Deploy a pipeline with batch endpoints (preview)](how-to-use-batch-pipeline-deployments.md)
machine-learning Migrate To V2 Deploy Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-deploy-endpoints.md
For more information on registering models, see [Register a model from a local f
ml_client.begin_create_or_update(endpoint) ```
-For more information on concepts for endpoints and deployments, see [What are online endpoints?](concept-endpoints.md#what-are-online-endpoints)
+For more information on concepts for endpoints and deployments, see [What are online endpoints?](concept-endpoints-online.md)
## Submit a request
machine-learning Migrate To V2 Deploy Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-deploy-pipelines.md
+
+ Title: Upgrade pipeline endpoints to SDK v2
+
+description: Upgrade pipeline endpoints from v1 to v2 of Azure Machine Learning SDK
++++++ Last updated : 05/01/2023++
+monikerRange: 'azureml-api-1 || azureml-api-2'
++
+# Upgrade pipeline endpoints to SDK v2
+
+Once you have a pipeline up and running, you can publish it so that it runs with different inputs. This functionality was known as __Published Pipelines__.
+
+## What has changed?
+
+[Batch Endpoint](concept-endpoints-batch.md) provides a similar, yet more powerful, way to handle multiple assets running under a durable API, which is why the Published Pipelines functionality was moved to [Pipeline component deployments in batch endpoints (preview)](concept-endpoints-batch.md#pipeline-component-deployment-preview).
+
+[Batch endpoints](concept-endpoints-batch.md) decouple the interface (endpoint) from the actual implementation (deployment) and allow the user to decide which deployment serves the default implementation of the endpoint. [Pipeline component deployments in batch endpoints](concept-endpoints-batch.md#pipeline-component-deployment-preview) allow users to deploy pipeline components instead of pipelines, which makes better use of reusable assets for organizations looking to streamline their MLOps practice.
+
+The following table shows a comparison of each of the concepts:
+
+| Concept | SDK v1 | SDK v2 |
+|||--|
+| Pipeline's REST endpoint for invocation | Pipeline endpoint | Batch endpoint |
+| Pipeline's specific version under the endpoint | Published pipeline | Pipeline component deployment |
+| Pipeline's arguments on invocation | Pipeline parameter | Job inputs |
+| Job generated from a published pipeline | Pipeline job | Batch job |
+
+To learn how to create your first pipeline component deployment see [How to deploy pipelines in Batch Endpoints](how-to-use-batch-pipeline-deployments.md).
++
+## Moving to batch endpoints
+
+Use the following guidelines to learn how to move from SDK v1 to SDK v2 using the concepts in Batch Endpoints.
+
+### Publish a pipeline
+
+Compare how publishing a pipeline has changed from v1 to v2:
+
+# [SDK v1](#tab/v1)
+
+1. First, we need to get the pipeline we want to publish:
+
+ ```python
+ pipeline1 = Pipeline(workspace=ws, steps=[step1, step2])
+ ```
+
+1. We can publish the pipeline as follows:
+
+ ```python
+ from azureml.pipeline.core import PipelineEndpoint
+
+ endpoint_name = "PipelineEndpointTest"
+ pipeline_endpoint = PipelineEndpoint.publish(
+ workspace=ws,
+ name=endpoint_name,
+ pipeline=pipeline,
+ description="A hello world endpoint for component deployments"
+ )
+ ```
+
+# [SDK v2](#tab/v2)
+
+1. First, we need to get the pipeline we want to publish. However, batch endpoints deploy pipeline components rather than pipelines, so we need to convert the pipeline into a component.
+
+ ```python
+ @pipeline()
+ def pipeline(input_data: Input(type=AssetTypes.URI_FOLDER)):
+ (...)
+
+ return {
+ (..)
+ }
+
+ pipeline_component = pipeline.pipeline_builder.build()
+ ```
+
+1. As a best practice, we recommend registering pipeline components so that you can manage their versions in a centralized way, either in the workspace or in the shared registries.
+
+ ```python
+    ml_client.components.create_or_update(pipeline_component)
+ ```
+
+1. Then, we need to create the endpoint that will host all the pipeline deployments:
+
+ ```python
+ endpoint_name = "PipelineEndpointTest"
+ endpoint = BatchEndpoint(
+ name=endpoint_name,
+ description="A hello world endpoint for component deployments",
+ )
+
+ ml_client.batch_endpoints.begin_create_or_update(endpoint)
+ ```
+
+1. Create a deployment for the pipeline component:
+
+ ```python
+ deployment_name = "hello-batch-dpl"
+ deployment = BatchPipelineComponentDeployment(
+ name=deployment_name,
+ description="A hello world deployment with a single step.",
+ endpoint_name=endpoint.name,
+ component=pipeline_component
+ )
+
+ ml_client.batch_deployments.begin_create_or_update(deployment)
+ ```
++
+### Submit a job to a pipeline endpoint
+
+# [SDK v1](#tab/v1)
+
+To call the default version of the pipeline, you can use:
+
+```python
+pipeline_endpoint = PipelineEndpoint.get(workspace=ws, name="PipelineEndpointTest")
+run_id = pipeline_endpoint.submit("PipelineEndpointExperiment")
+```
+
+# [SDK v2](#tab/v2)
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ endpoint_name=batch_endpoint,
+)
+```
++
+You can also submit a job to a specific version:
+
+# [SDK v1](#tab/v1)
+
+```python
+run_id = pipeline_endpoint.submit(endpoint_name, pipeline_version="0")
+```
+
+# [SDK v2](#tab/v2)
+
+In batch endpoints, deployments aren't versioned. However, you can deploy multiple pipeline component versions under the same endpoint. In this sense, each pipeline version in v1 corresponds to a different pipeline component version and its corresponding deployment under the endpoint.
+
+Then, you can invoke a specific deployment under the endpoint if that deployment runs the version you're interested in.
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint_name,
+ deployment_name=deployment_name,
+)
+```
++
+### Get all pipelines deployed
+
+# [SDK v1](#tab/v1)
+
+```python
+all_pipelines = PublishedPipeline.get_all(ws)
+```
+
+# [SDK v2](#tab/v2)
+
+The following code lists all the endpoints existing in the workspace:
+
+```python
+all_endpoints = ml_client.batch_endpoints.list()
+```
+
+However, keep in mind that batch endpoints can host deployments [operationalizing either pipelines or models](concept-endpoints-batch.md#batch-deployments). If you want to get a list of all the deployments that host pipelines, you can do as follows:
+
+```python
+all_deployments = []
+
+for endpoint in all_endpoints:
+ all_deployments.extend(ml_client.batch_deployments.list(endpoint_name=endpoint.name))
+
+all_pipeline_deployments = filter(lambda x: isinstance(x, BatchPipelineComponentDeployment), all_deployments)
+```
++
+## Using the REST API
+
+You can create jobs from the endpoints by using the REST API of the invocation URL. See the following examples to see how invocation has changed from v1 to v2.
+
+# [SDK v1](#tab/v1)
+
+```python
+pipeline_endpoint = PipelineEndpoint.get(workspace=ws, name=endpoint_name)
+rest_endpoint = pipeline_endpoint.endpoint
+
+response = requests.post(
+ rest_endpoint,
+ headers=aad_token,
+ json={
+ "ExperimentName": "PipelineEndpointExperiment",
+ "RunSource": "API",
+ "ParameterAssignments": {"argument1": "united", "argument2":45}
+ }
+)
+```
+
+# [SDK v2](#tab/v2)
+
+Batch endpoints support multiple input types. The following example shows how to indicate two different inputs of type `string` and `numeric`:
+
+```python
+batch_endpoint = ml_client.batch_endpoints.get(endpoint_name)
+rest_endpoint = batch_endpoint.invocation_url
+
+response = requests.post(
+ rest_endpoint,
+ headers=aad_token,
+ json={
+ "properties": {
+ "InputData": {
+ "argument1": {
+ "JobInputType": "Literal",
+ "Value": "united"
+ },
+ "argument2": {
+ "JobInputType": "Literal",
+ "Value": 45
+ }
+ }
+ }
+ }
+)
+```
+
+To learn how to indicate inputs and outputs in batch endpoints, and to see all the supported types, see [Create jobs and input data for batch endpoints](how-to-access-data-batch-endpoints-jobs.md).
+++
+## Next steps
+
+- [How to deploy pipelines in Batch Endpoints](how-to-use-batch-pipeline-deployments.md)
+- [How to operationalize a training routine in batch endpoints](how-to-use-batch-training-pipeline.md)
+- [How to operationalize a scoring routine in batch endpoints](how-to-use-batch-scoring-pipeline.md)
machine-learning Migrate To V2 Execution Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-pipeline.md
This article gives a comparison of scenario(s) in SDK v1 and SDK v2. In the foll
|`r_script_step`| `command` job|`command` component| |`synapse_spark_step`| coming soon|coming soon|
+## Published pipelines
+
+Once you have a pipeline up and running, you can publish it so that it runs with different inputs. This functionality was known as __Published Pipelines__. [Batch Endpoint](concept-endpoints-batch.md) provides a similar, yet more powerful, way to handle multiple assets running under a durable API, which is why the Published Pipelines functionality was moved to [Pipeline component deployments in batch endpoints (preview)](concept-endpoints-batch.md#pipeline-component-deployment-preview).
+
+[Batch endpoints](concept-endpoints-batch.md) decouple the interface (endpoint) from the actual implementation (deployment) and allow the user to decide which deployment serves the default implementation of the endpoint. [Pipeline component deployments in batch endpoints (preview)](concept-endpoints-batch.md#pipeline-component-deployment-preview) allow users to deploy pipeline components instead of pipelines, which makes better use of reusable assets for organizations looking to streamline their MLOps practice.
+
+The following table shows a comparison of each of the concepts:
+
+| Concept | SDK v1 | SDK v2 |
+| --- | --- | --- |
+| Pipeline's REST endpoint for invocation | Pipeline endpoint | Batch endpoint |
+| Pipeline's specific version under the endpoint | Published pipeline | Pipeline component deployment |
+| Pipeline's arguments on invocation | Pipeline parameter | Job inputs |
+| Job generated from a published pipeline | Pipeline job | Batch job |
+
+See [Upgrade pipeline endpoints to SDK v2](migrate-to-v2-deploy-pipelines.md) for specific guidance about how to migrate to batch endpoints.
+ ## Related documents For more information, see the documentation here:
machine-learning Migrate To V2 Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-managed-online-endpoints.md
monikerRange: 'azureml-api-1 || azureml-api-2'
# Upgrade steps for Azure Container Instances web services to managed online endpoints
-[Managed online endpoints](concept-endpoints.md#what-are-online-endpoints) help to deploy your ML models in a turnkey manner. Managed online endpoints work with powerful CPU and GPU machines in Azure in a scalable, fully managed way. Managed online endpoints take care of serving, scaling, securing, and monitoring your models, freeing you from the overhead of setting up and managing the underlying infrastructure. Details can be found on [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md).
+[Managed online endpoints](concept-endpoints-online.md) help to deploy your ML models in a turnkey manner. Managed online endpoints work with powerful CPU and GPU machines in Azure in a scalable, fully managed way. Managed online endpoints take care of serving, scaling, securing, and monitoring your models, freeing you from the overhead of setting up and managing the underlying infrastructure. Details can be found on [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md).
You can deploy directly to the new compute target with your previous models and environments, or use the [scripts](https://aka.ms/moeonboard) provided by us to export the current services and then deploy to the new compute without affecting your existing services. If you regularly create and delete Azure Container Instances (ACI) web services, we strongly recommend the deploying directly and not using the scripts.
machine-learning Reference Yaml Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-schedule.md
-+ Last updated 08/15/2022
# CLI (v2) schedule YAML schema The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/schedule.schema.json.
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| | - | -- | -- | | `$schema` | string | The YAML schema. | | | `name` | string | **Required.** Name of the schedule. | |
-| `version` | string | Version of the schedule. If omitted, Azure Machine Learning autogenerates a version. | |
+| `version` | string | Version of the schedule. If omitted, Azure Machine Learning will autogenerate a version. | |
| `description` | string | Description of the schedule. | | | `tags` | object | Dictionary of tags for the schedule. | | | `trigger` | object | The trigger configuration to define rule when to trigger job. **One of `RecurrenceTrigger` or `CronTrigger` is required.** | |
-| `create_job` | object or string | **Required.** The definition of the job that triggered by a schedule. **One of `string` or `JobDefinition` is required.**| |
+| `create_job` | object or string | **Required.** The definition of the job that will be triggered by a schedule. **One of `string` or `JobDefinition` is required.**| |
### Trigger configuration
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `type` | string | **Required.** Specifies the schedule type. |recurrence| |`frequency`| string | **Required.** Specifies the unit of time that describes how often the schedule fires.|`minute`, `hour`, `day`, `week`, `month`| |`interval`| integer | **Required.** Specifies the interval at which the schedule fires.| |
-|`start_time`| string |Describes the start date and time with timezone. If start_time is omitted, the first job will run instantly, and the future jobs trigger based on the schedule, saying start_time will match the job created time. If the start time is in the past, the first job will run at the next calculated run time.|
-|`end_time`| string |Describes the end date and time with timezone. If end_time is omitted, the schedule runs until it's explicitly disabled.|
+|`start_time`| string |Describes the start date and time with timezone. If start_time is omitted, the first job will run instantly and the future jobs will be triggered based on the schedule, saying start_time will be equal to the job created time. If the start time is in the past, the first job will run at the next calculated run time.|
+|`end_time`| string |Describes the end date and time with timezone. If end_time is omitted, the schedule will continue to run until it's explicitly disabled.|
|`timezone`| string |Specifies the time zone of the recurrence. If omitted, by default is UTC. |See [appendix for timezone values](#timezone)|
-|`pattern`|object|Specifies the pattern of the recurrence. If pattern is omitted, the job(s) is triggered according to the logic of start_time, frequency and interval.| |
+|`pattern`|object|Specifies the pattern of the recurrence. If pattern is omitted, the job(s) will be triggered according to the logic of start_time, frequency and interval.| |
#### Recurrence schedule
Recurrence schedule defines the recurrence pattern, containing `hours`, `minutes
| | - | -- | -- | | `type` | string | **Required.** Specifies the schedule type. |cron| | `expression` | string | **Required.** Specifies the cron expression to define how to trigger jobs. expression uses standard crontab expression to express a recurring schedule. A single expression is composed of five space-delimited fields:`MINUTES HOURS DAYS MONTHS DAYS-OF-WEEK`||
-|`start_time`| string |Describes the start date and time with timezone. If start_time is omitted, the first job will run instantly and the future jobs trigger based on the schedule, saying start_time will match the job created time. If the start time is in the past, the first job will run at the next calculated run time.|
-|`end_time`| string |Describes the end date and time with timezone. If end_time is omitted, the schedule continues to run until it's explicitly disabled.|
+|`start_time`| string |Describes the start date and time with timezone. If start_time is omitted, the first job will run instantly and the future jobs will be triggered based on the schedule, saying start_time will be equal to the job created time. If the start time is in the past, the first job will run at the next calculated run time.|
+|`end_time`| string |Describes the end date and time with timezone. If end_time is omitted, the schedule will continue to run until it's explicitly disabled.|
|`timezone`| string |Specifies the time zone of the recurrence. If omitted, by default is UTC. |See [appendix for timezone values](#timezone)| ### Job definition
Customer can directly use `create_job: azureml:<job_name>` or can use the follow
| | - | -- | -- | |`type`| string | **Required.** Specifies the job type. Only pipeline job is supported.|`pipeline`| |`job`| string | **Required.** Define how to reference a job, it can be `azureml:<job_name>` or a local pipeline job yaml such as `file:hello-pipeline.yml`.| |
-| `experiment_name` | string | Experiment name to organize the job under. The run record of each job will be organized under the corresponding experiment in the studio's "Experiments" tab. If omitted, it uses schedule name as default value. | |
+| `experiment_name` | string | Experiment name to organize the job under. Each job's run record will be organized under the corresponding experiment in the studio's "Experiments" tab. If omitted, we'll take schedule name as default value. | |
|`inputs`| object | Dictionary of inputs to the job. The key is a name for the input within the context of the job and the value is the input value.| | |`outputs`|object | Dictionary of output configurations of the job. The key is a name for the output within the context of the job and the value is the output configuration.| | | `settings` | object | Default settings for the pipeline job. See [Attributes of the `settings` key](#attributes-of-the-settings-key) for the set of configurable properties. | |
Customer can directly use `create_job: azureml:<job_name>` or can use the follow
| Key | Type | Description | Default value | | | - | -- | - |
-| `default_datastore` | string | Name of the datastore to use as the default datastore for the pipeline job. This value must be a reference to an existing datastore in the workspace using the `azureml:<datastore-name>` syntax. Any outputs defined in the `outputs` property of the parent pipeline job or child step jobs are stored in this datastore. If omitted, outputs are stored in the workspace blob datastore. | |
-| `default_compute` | string | Name of the compute target to use as the default compute for all steps in the pipeline. If compute is defined at the step level, it overrides this default compute for that specific step. This value must be a reference to an existing compute in the workspace using the `azureml:<compute-name>` syntax. | |
-| `continue_on_step_failure` | boolean | Whether the execution of steps in the pipeline should continue if one step fails. The default value is `False`, which means that if one step fails, the pipeline execution is stopped, canceling any running steps. | `False` |
+| `default_datastore` | string | Name of the datastore to use as the default datastore for the pipeline job. This value must be a reference to an existing datastore in the workspace using the `azureml:<datastore-name>` syntax. Any outputs defined in the `outputs` property of the parent pipeline job or child step jobs will be stored in this datastore. If omitted, outputs will be stored in the workspace blob datastore. | |
+| `default_compute` | string | Name of the compute target to use as the default compute for all steps in the pipeline. If compute is defined at the step level, it will override this default compute for that specific step. This value must be a reference to an existing compute in the workspace using the `azureml:<compute-name>` syntax. | |
+| `continue_on_step_failure` | boolean | Whether the execution of steps in the pipeline should continue if one step fails. The default value is `False`, which means that if one step fails, the pipeline execution will be stopped, canceling any running steps. | `False` |
### Job inputs | Key | Type | Description | Allowed values | Default value | | | - | -- | -- | - | | `type` | string | The type of job input. Specify `uri_file` for input data that points to a single file source, or `uri_folder` for input data that points to a folder source. | `uri_file`, `uri_folder` | `uri_folder` |
-| `path` | string | The path to the data to use as input, specified in a few ways: <br><br> - A local path to the data source file or folder, for example, `path: ./iris.csv`. The data uploads during job submission. <br><br> - A URI of a cloud path to the file or folder to use as the input. Supported URI types are `azureml`, `https`, `wasbs`, `abfss`, `adl`. For more information on how to use the `azureml://` URI format, see [Core yaml syntax](reference-yaml-core-syntax.md). <br><br> - An existing registered Azure Machine Learning data asset to use as the input. To reference a registered data asset, use the `azureml:<data_name>:<data_version>` syntax or `azureml:<data_name>@latest` (to reference the latest version of that data asset), for example, `path: azureml:cifar10-data:1` or `path: azureml:cifar10-data@latest`. | | |
-| `mode` | string | Mode of how the data should be delivered to the compute target. <br><br> For read-only mount (`ro_mount`), the data is consumed as a mount path. A folder mounts as a folder and a file mounts as a file. Azure Machine Learning resolves the input to the mount path. <br><br> In the `download` mode, the data downloads to the compute target. Azure Machine Learning resolves the input to the downloaded path. <br><br> If you only want the URL of the storage location of the data artifact(s), instead of mounting or downloading the data itself, you can use the `direct` mode. This passes in the URL of the storage location as the job input. In this case, you're fully responsible for handling credentials to access the storage. | `ro_mount`, `download`, `direct` | `ro_mount` |
+| `path` | string | The path to the data to use as input. This can be specified in a few ways: <br><br> - A local path to the data source file or folder, for example, `path: ./iris.csv`. The data will get uploaded during job submission. <br><br> - A URI of a cloud path to the file or folder to use as the input. Supported URI types are `azureml`, `https`, `wasbs`, `abfss`, `adl`. For more information on how to use the `azureml://` URI format, see [Core yaml syntax](reference-yaml-core-syntax.md). <br><br> - An existing registered Azure Machine Learning data asset to use as the input. To reference a registered data asset, use the `azureml:<data_name>:<data_version>` syntax or `azureml:<data_name>@latest` (to reference the latest version of that data asset), for example, `path: azureml:cifar10-data:1` or `path: azureml:cifar10-data@latest`. | | |
+| `mode` | string | Mode of how the data should be delivered to the compute target. <br><br> For read-only mount (`ro_mount`), the data will be consumed as a mount path. A folder will be mounted as a folder and a file will be mounted as a file. Azure Machine Learning will resolve the input to the mount path. <br><br> For `download` mode the data will be downloaded to the compute target. Azure Machine Learning will resolve the input to the downloaded path. <br><br> If you only want the URL of the storage location of the data artifact(s) rather than mounting or downloading the data itself, you can use the `direct` mode. This will pass in the URL of the storage location as the job input. In this case, you're fully responsible for handling credentials to access the storage. | `ro_mount`, `download`, `direct` | `ro_mount` |
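As a quick illustration of how `type`, `path`, and `mode` combine, here's a hedged sketch of an `inputs` block. The input names (`raw_data`, `config_file`) are made up; the data asset and local file reuse the placeholders from the table above.

```yml
inputs:
  raw_data:
    type: uri_folder
    # Registered data asset, pinned to version 1
    path: azureml:cifar10-data:1
    # Consume the folder as a read-only mount (the default)
    mode: ro_mount
  config_file:
    type: uri_file
    # Local file, uploaded when the job is submitted
    path: ./iris.csv
    mode: download
```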
### Job outputs

| Key | Type | Description | Allowed values | Default value |
| --- | ---- | ----------- | -------------- | ------------- |
-| `type` | string | The type of job output. For the default `uri_folder` type, the output corresponds to a folder. | `uri_folder` | `uri_folder` |
-| `path` | string | The path to the data to use as input, specified in a few ways: <br><br> - A local path to the data source file or folder, for example, `path: ./iris.csv`. The data uploads during job submission. <br><br> - A URI of a cloud path to the file or folder to use as the input. Supported URI types are `azureml`, `https`, `wasbs`, `abfss`, `adl`. For more information on how to use the `azureml://` URI format, see [Core yaml syntax](reference-yaml-core-syntax.md). <br><br> - An existing registered Azure Machine Learning data asset to use as the input. To reference a registered data asset, use the `azureml:<data_name>:<data_version>` syntax or `azureml:<data_name>@latest` (to reference the latest version of that data asset), for example, `path: azureml:cifar10-data:1` or `path: azureml:cifar10-data@latest`. | | |
-| `mode` | string | Mode of how output file(s) are delivered to the destination storage. For read-write mount mode (`rw_mount`) the output directory is a mounted directory. In the upload mode, the written file(s) upload at the end of the job. | `rw_mount`, `upload` | `rw_mount` |
+| `type` | string | The type of job output. For the default `uri_folder` type, the output will correspond to a folder. | `uri_folder` | `uri_folder` |
+| `path` | string | The path to the data to use as input. This can be specified in a few ways: <br><br> - A local path to the data source file or folder, for example, `path: ./iris.csv`. The data will get uploaded during job submission. <br><br> - A URI of a cloud path to the file or folder to use as the input. Supported URI types are `azureml`, `https`, `wasbs`, `abfss`, `adl`. For more information on how to use the `azureml://` URI format, see [Core yaml syntax](reference-yaml-core-syntax.md). <br><br> - An existing registered Azure Machine Learning data asset to use as the input. To reference a registered data asset, use the `azureml:<data_name>:<data_version>` syntax or `azureml:<data_name>@latest` (to reference the latest version of that data asset), for example, `path: azureml:cifar10-data:1` or `path: azureml:cifar10-data@latest`. | | |
+| `mode` | string | Mode of how output file(s) will get delivered to the destination storage. For read-write mount mode (`rw_mount`) the output directory will be a mounted directory. For upload mode the file(s) written will get uploaded at the end of the job. | `rw_mount`, `upload` | `rw_mount` |
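For completeness, a small hypothetical `outputs` block combining the two modes; the output names and the datastore path are placeholders.

```yml
outputs:
  model_dir:
    type: uri_folder
    # Writes stream to the destination as the job runs
    mode: rw_mount
  scored_data:
    type: uri_folder
    # Files are written locally, then uploaded when the job finishes
    mode: upload
    path: azureml://datastores/workspaceblobstore/paths/scored-output
```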
-### Import data definition (preview)
-
+## Remarks
-Customer can directly use `import_data: ./<data_import>.yaml` or can use the following properties to define the data import definition.
+The `az ml schedule` command can be used for managing Azure Machine Learning schedules.
-| Key | Type | Description | Allowed values |
-| | - | -- | -- |
-|`type`| string | **Required.** Specifies the data asset type that you want to import the data as. It can be mltable when importing from a Database source, or uri_folder when importing from a FileSource.|`mltable`, `uri_folder`|
-| `name` | string | **Required.** Data asset name to register the imported data under. | |
-| `path` | string | **Required.** The path to the datastore that takes in the imported data, specified in one of two ways: <br><br> - **Required.** A URI of datastore path. Only supported URI type is `azureml`. For more information on how to use the `azureml://` URI format, see [Core yaml syntax](reference-yaml-core-syntax.md). To avoid an over-write, a unique path for each import is recommended. To do this, parameterize the path as shown in this example - `azureml://datastores/<datastore_name>/paths/<source_name>/${{name}}`. The "datastore_name" in the example can be a datastore that you have created or can be workspaceblobstore. Alternately a "managed datastore" can be selected by referencing as shown: `azureml://datastores/workspacemanagedstore`, where the system automatically assigns a unique path. | Azure Machine Learning://<>|
-| `source` | object | External source details of the imported data source. See [Attributes of the `source`](#attributes-of-source-preview) for the set of source properties. | |
+## Examples
-### Attributes of `source` (preview)
+Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/schedules). A couple are shown below.
-| Key | Type | Description | Allowed values | Default value |
-| | - | -- | -- | - |
-| `type` | string | The type of external source from where you intend to import data from. Only the following types are allowed at the moment - `Database` or `FileSystem`| `Database`, `FileSystem` | |
-| `query` | string | Define this value only when the `type` defined above is `database` The query in the external source of type `Database` that defines or filters data that needs to be imported.| | |
-| `path` | string | Define this only when the `type` defined above is `FileSystem` The path of the folder in the external source of type `FileSystem` where the file(s) or data that needs to be imported resides.| | |
-| `connection` | string | **Required.** The connection property for the external source referenced in the format of `azureml:<connection_name>` | | |
+## YAML: Schedule with recurrence pattern
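The repository linked above has the full sample; as a rough sketch (the job file path `./example-pipeline-job.yml` is a placeholder, and this assumes the job definition is referenced through `create_job`), a job schedule with a recurrence trigger might look like this:

```yml
$schema: https://azuremlschemas.azureedge.net/latest/schedule.schema.json
name: simple_recurrence_job_schedule
display_name: Simple recurrence job schedule
description: a simple hourly recurrence job schedule

trigger:
  type: recurrence
  frequency: day # can be minute, hour, day, week, month
  interval: 1 # every day
  schedule:
    hours: [4, 5, 10, 11, 12]
    minutes: [0, 30]
  start_time: "2022-07-10T10:00:00" # optional - defaults to schedule creation time
  time_zone: "Pacific Standard Time" # optional - defaults to UTC

create_job: ./example-pipeline-job.yml
```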
-## Remarks
-The `az ml schedule` command can be used for managing Azure Machine Learning models.
+## YAML: Schedule with cron expression
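A similarly hedged sketch with a cron trigger, under the same assumptions (placeholder job file, `create_job` reference):

```yml
$schema: https://azuremlschemas.azureedge.net/latest/schedule.schema.json
name: simple_cron_job_schedule
display_name: Simple cron job schedule
description: a simple hourly cron job schedule

trigger:
  type: cron
  expression: "0 * * * *" # at minute 0 of every hour
  start_time: "2022-07-10T10:00:00" # optional - defaults to schedule creation time
  time_zone: "Pacific Standard Time" # optional - defaults to UTC

create_job: ./example-pipeline-job.yml
```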
-## Examples
-Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/schedules). A couple are shown below.
-## YAML: Schedule for a job with recurrence pattern
---
-## YAML: Schedule for a job with cron expression
---
-## YAML: Schedule for data import with recurrence pattern (preview)
-```yml
-$schema: https://azuremlschemas.azureedge.net/latest/schedule.schema.json
-name: simple_recurrence_import_schedule
-display_name: Simple recurrence import schedule
-description: a simple hourly recurrence import schedule
-
-trigger:
- type: recurrence
- frequency: day #can be minute, hour, day, week, month
- interval: 1 #every day
- schedule:
- hours: [4,5,10,11,12]
- minutes: [0,30]
- start_time: "2022-07-10T10:00:00" # optional - default will be schedule creation time
- time_zone: "Pacific Standard Time" # optional - default will be UTC
-
-import_data: ./my-snowflake-import-data.yaml
-
-```
-## YAML: Schedule for data import definition inline with recurrence pattern on managed datastore (preview)
-```yml
-$schema: https://azuremlschemas.azureedge.net/latest/schedule.schema.json
-name: inline_recurrence_import_schedule
-display_name: Inline recurrence import schedule
-description: an inline hourly recurrence import schedule
-
-trigger:
- type: recurrence
- frequency: day #can be minute, hour, day, week, month
- interval: 1 #every day
- schedule:
- hours: [4,5,10,11,12]
- minutes: [0,30]
- start_time: "2022-07-10T10:00:00" # optional - default will be schedule creation time
- time_zone: "Pacific Standard Time" # optional - default will be UTC
-
-import_data:
- type: mltable
- name: my_snowflake_ds
- path: azureml://datastores/workspacemanagedstore
- source:
- type: database
- query: select * from TPCH_SF1.REGION
- connection: azureml:my_snowflake_connection
-
-```
-
-## YAML: Schedule for data import with cron expression (preview)
-```yml
-$schema: https://azuremlschemas.azureedge.net/latest/schedule.schema.json
-name: simple_cron_import_schedule
-display_name: Simple cron import schedule
-description: a simple hourly cron import schedule
-
-trigger:
- type: cron
- expression: "0 * * * *"
- start_time: "2022-07-10T10:00:00" # optional - default will be schedule creation time
- time_zone: "Pacific Standard Time" # optional - default will be UTC
-
-import_data: ./my-snowflake-import-data.yaml
-```
-## YAML: Schedule for data import definition inline with cron expression (preview)
-```yml
-$schema: https://azuremlschemas.azureedge.net/latest/schedule.schema.json
-name: inline_cron_import_schedule
-display_name: Inline cron import schedule
-description: an inline hourly cron import schedule
-
-trigger:
- type: cron
- expression: "0 * * * *"
- start_time: "2022-07-10T10:00:00" # optional - default will be schedule creation time
- time_zone: "Pacific Standard Time" # optional - default will be UTC
-
-import_data:
- type: mltable
- name: my_snowflake_ds
- path: azureml://datastores/workspaceblobstore/paths/snowflake/${{name}}
- source:
- type: database
- query: select * from TPCH_SF1.REGION
- connection: azureml:my_snowflake_connection
-```
## Appendix

### Timezone
-The current schedule supports the timezones in this table. The key can be used directly in the Python SDK, while the value can be used in the YAML job. The table is organized by UTC(Coordinated Universal Time).
+The current schedule supports the following timezones. The key can be used directly in the Python SDK, while the value can be used in the YAML job. The table is organized by UTC (Coordinated Universal Time).
| UTC | Key | Value |
| --- | --- | ----- |
The current schedule supports the timezones in this table. The key can be used d
| UTC +13:00 | TONGA_STANDARD_TIME | "Tonga Standard Time" |
| UTC +13:00 | SAMOA_STANDARD_TIME | "Samoa Standard Time" |
| UTC +14:00 | LINE_ISLANDS_STANDARD_TIME | "Line Islands Standard Time" |
+
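To tie the table back to the schedule YAML: the **Value** column is the string that goes into a trigger's `time_zone` field, quoted exactly as listed. A tiny hedged fragment (the cron expression is arbitrary):

```yml
trigger:
  type: cron
  expression: "0 9 * * 1" # 9:00 AM every Monday
  # Use the "Value" string from the table, not the SDK key
  time_zone: "Line Islands Standard Time"
```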
machine-learning Tutorial Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-deploy-model.md
A **deployment** is a set of resources required for hosting the model that does
A single endpoint can contain multiple deployments. Endpoints and deployments are independent Azure Resource Manager resources that appear in the Azure portal.
-Azure Machine Learning allows you to implement [online endpoints](concept-endpoints.md#what-are-online-endpoints) for real-time inferencing on client data, and [batch endpoints](concept-endpoints.md#what-are-batch-endpoints) for inferencing on large volumes of data over a period of time.
+Azure Machine Learning allows you to implement [online endpoints](concept-endpoints-online.md) for real-time inferencing on client data, and [batch endpoints](concept-endpoints-batch.md) for inferencing on large volumes of data over a period of time.
In this tutorial, we'll walk you through the steps of implementing a _managed online endpoint_. Managed online endpoints work with powerful CPU and GPU machines in Azure in a scalable, fully managed way that frees you from the overhead of setting up and managing the underlying deployment infrastructure.
network-watcher Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics.md
One of the following [Azure built-in roles](../role-based-access-control/built-i
| | |
| Resource Manager | Owner |
| | Contributor |
-| | Reader |
| | Network Contributor |

If none of the preceding built-in roles are assigned to your account, assign a [custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json) to your account. The custom role should support the following actions at the subscription level:
purview How To Use Workflow Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-use-workflow-connectors.md
Title: Workflow connectors
-description: This article describes how to use connectors in Purview workflows
+ Title: Workflow connectors and actions
+description: This article describes how to use connectors and actions in Microsoft Purview workflows
Previously updated : 02/22/2023 Last updated : 05/15/2023
-# Workflow connectors
+# Workflow connectors and actions
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] You can use [workflows](concept-workflow.md) to automate some business processes through Microsoft Purview. A Connector in a workflow provides a way to connect to different systems and leverage a set of prebuilt actions and triggers.
-## Current workflow connectors
+## Current workflow connectors and actions
Currently the following connectors are available for a workflow in Microsoft Purview:
Currently the following connectors are available for a workflow in Microsoft Pur
|Grant access |Create an access policy to grant access to the requested user. |None | <br> - Renamable: Yes <br> - Deletable: Yes <br> - Multiple per workflow |Data access request |
|Http |Integrate with external applications through http or https call. <br> For more information, see [Workflows HTTP connector](how-to-use-workflow-http-connector.md) | <br> - Host <br> - Method <br> - Path <br> - Headers <br> - Queries <br> - Body <br> - Authentication | <br> - Renamable: Yes <br> - Deletable: Yes <br> - Settings: Secured Input and Secure outputs (Enabled by default) <br> - Multiple per workflow |All workflows templates |
|Import glossary terms |Import one or more glossary terms |None | <br> - Renamable: Yes <br> - Deletable: No <br> - Multiple per workflow |Import terms |
+|Parse JSON |Parse an incoming JSON to extract parameters |- Content <br> - Schema <br> | <br> - Renamable: Yes <br> - Deletable: No <br> - Multiple per workflow |All workflows templates |
|Send email notification |Send email notification to one or more recipients | <br> - Subject <br> - Message body <br> - Recipient | <br> - Renamable: Yes <br> - Deletable: Yes <br> - Settings: Secured Input and Secure outputs (Enabled by default) <br> - Multiple per workflow |All workflows templates |
|Start and wait for an approval |Generates approval requests and assigns them to individual users or Microsoft Azure Active Directory groups. The Microsoft Purview workflow approval connector currently supports two approval types: <br> - First to Respond - The first approver's outcome (Approve/Reject) is considered final. <br> - Everyone must approve - Everyone identified as an approver must approve the request for it to be considered approved. If one approver rejects the request, regardless of other approvers, the request is rejected. <br> - Reminder settings - You can set reminders to periodically remind the approver until they approve or reject. <br> - Expiry settings - You can set an expiration or deadline for the approval activity. You can also set who needs to be notified (user/AAD group) after the expiry. | <br> - Approval Type <br> - Title <br> - Assigned To | <br> - Renamable: Yes <br> - Deletable: Yes <br> - Multiple per workflow |All workflows templates |
|Update glossary term |Update an existing glossary term |None | <br> - Renamable: Yes <br> - Deletable: Yes <br> - Multiple per workflow |Update glossary term |
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
To create policies that cover all data sources inside a resource group or Azure
## Extract lineage (preview) <a id="lineagepreview"></a>
+>[!NOTE]
+>Lineage is not currently supported when using a self-hosted integration runtime and a private endpoint. You need to enable Azure services to access the server under the network settings for your Azure SQL Database.
+
Microsoft Purview supports lineage from Azure SQL Database. When you're setting up a scan, you turn on the **Lineage extraction** toggle to extract lineage.

### Prerequisites for setting up a scan with lineage extraction

1. Follow the steps in the [Configure authentication for a scan](#configure-authentication-for-a-scan) section of this article to authorize Microsoft Purview to scan your SQL database.
-2. Sign in to Azure SQL Database with your Azure AD account, and assign `db_owner` permissions to the Microsoft Purview managed identity.
+1. Sign in to Azure SQL Database with your Azure AD account, and assign `db_owner` permissions to the Microsoft Purview managed identity.
Use the following example SQL syntax to create a user and grant permission. Replace `<purview-account>` with your account name.
Microsoft Purview supports lineage from Azure SQL Database. When you're setting
   EXEC sp_addrolemember 'db_owner', <purview-account>
   GO
   ```
-3. Run the following command on your SQL database to create a master key:
+1. Run the following command on your SQL database to create a master key:
   ```sql
   Create master key
   Go
   ```
+1. Ensure that **Allow Azure services and resources to access this server** is enabled under networking/firewall for your Azure SQL resource.
### Create a scan with lineage extraction turned on
purview Register Scan Synapse Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-synapse-workspace.md
Previously updated : 09/06/2022 Last updated : 05/15/2023
GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::[scoped_credential] TO [PurviewA
1. Select **Save**. > [!IMPORTANT]
-> Currently, if you cannot enable **Allow Azure services and resources to access this workspace** on your Azure Synapse workspaces, when set up scan on Microsoft Purview governance portal, you will hit serverless DB enumeration failure. In this case, to scan serverless DBs, you can use [Microsoft Purview REST API - Scans - Create Or Update](/rest/api/purview/scanningdataplane/scans/create-or-update/) to set up scan. Refer to [this example](#set-up-scan-using-api).
+> Currently, if you cannot enable **Allow Azure services and resources to access this workspace** on your Azure Synapse workspaces, you will hit a serverless DB enumeration failure when setting up a scan in the Microsoft Purview governance portal. In this case, you can choose the "Enter manually" option to specify the database names that you want to scan, and proceed. Learn more in [Create and run scan](#create-and-run-scan).
### Create and run scan
To create and run a new scan, do the following:
1. Select **View details**, and then select **New scan**. Alternatively, you can select the **Scan quick action** icon on the source tile. 1. On the **Scan** details pane, in the **Name** box, enter a name for the scan.
-1. In the **Type** dropdown list, select the types of resources that you want to scan within this source. **SQL Database** is the only type we currently support within an Azure Synapse workspace.
-
+
+1. In the **Credential** dropdown list, select the credential to connect to the resources within your data source.
+
+1. For **Database selection method**, choose **From Synapse workspace** or **Enter manually**. By default, Microsoft Purview tries to enumerate the databases under the workspace, and you can select the ones you want to scan. If you hit an error where Microsoft Purview fails to load the serverless databases, you can choose **Enter manually** to specify the type of database (dedicated or serverless) and the corresponding database name.
+ :::image type="content" source="media/register-scan-synapse-workspace/synapse-scan-setup.png" alt-text="Screenshot of the details pane for the Azure Synapse source scan.":::
-1. In the **Credential** dropdown list, select the credential to connect to the resources within your data source.
-
-1. Within each type, you can select to scan either all the resources or a subset of them by name.
+ Option of "Enter manually":
+
+ :::image type="content" source="media/register-scan-synapse-workspace/synapse-scan-setup-enter-manually.png" alt-text="Screenshot of the section of manually enter database names when setting up scan.":::
+
+1. Select **Test connection** to validate the settings. If there's an error, hover over **Connection status** on the report page to see the details.
1. Select **Continue** to proceed.
To create and run a new scan, do the following:
### Set up scan using API
-Here's an example of creating scan for serverless DB using API. Replace the `{place_holder}` and `enum_option_1 | enum_option_2 (note)` value with your actual settings.
+Here's an example of creating a scan for a serverless DB by using the API. Replace the `{place_holder}` and `enum_option_1 | enum_option_2 (note)` values with your actual settings. Learn more in [Microsoft Purview REST API - Scans - Create Or Update](/rest/api/purview/scanningdataplane/scans/create-or-update/).
```http
PUT https://{purview_account_name}.purview.azure.com/scan/datasources/<data_source_name>/scans/{scan_name}?api-version=2022-02-01-preview
purview Tutorial Atlas 2 2 Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-atlas-2-2-apis.md
Sample JSON:
### Delete a business metadata attribute from an entity + You can send a `DELETE` request to the following endpoint: ```
Sample JSON:
### Delete a business metadata type definition
+>[!NOTE]
+>You can only delete a business metadata type definition if it has no references, that is, if it hasn't been assigned to any assets in the catalog.
+ You can send a `DELETE` request to the following endpoint: ```
sap Compliance Bcdr Reliabilty https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/compliance-bcdr-reliabilty.md
+
+ Title: Resiliency in Azure Center for SAP Solutions
+description: Find out about reliability in Azure Center for SAP Solutions
++++++ Last updated : 05/15/2023++
+# What is reliability in *Azure Center for SAP Solutions*?
+This article describes reliability support in Azure Center for SAP Solutions, and covers both regional resiliency with availability zones and cross-region resiliency with customer enabled disaster recovery. For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/well-architected/resiliency/overview).
+
+Azure Center for SAP solutions is an end-to-end solution that enables you to create and run SAP systems as a unified workload on Azure and provides a more seamless foundation for innovation. You can take advantage of the management capabilities for both new and existing Azure-based SAP systems.
+
+## Availability zone support
+Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. Availability zones are designed so that if one zone is affected by a local failure, the remaining two zones can support regional services, capacity, and high availability. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved through redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Availability zone service and regional support](/azure/reliability/availability-zones-service-support).
+
+There are three types of Azure services that support availability zones: zonal, zone-redundant, and always-available services. You can learn more about these types of services and how they promote resiliency in the [Azure services with availability zone support](/azure/reliability/availability-zones-service-support).
+
+Azure Center for SAP solutions supports zone redundancy. When creating a new SAP system through Azure Center for SAP solutions, you can choose the Compute availability option for the infrastructure being deployed. The service itself is zone redundant by default, and you can choose to deploy the SAP system with zone redundancy based on your requirements. [Learn more about deployment type options for SAP systems here](/azure/sap/center-sap-solutions/deploy-s4hana#deployment-types).
+
+### Regional availability
+
+When deploying SAP systems using Azure Center for SAP solutions, you can use Zone-redundant Premium plans in the following regions:
+
+| Americas | Europe | Asia Pacific |
+||-|-|
+| East US 2 | North Europe | Australia East |
+| East US | West Europe | Central India |
+| West US 3 | | East Asia |
+| | | |
+
+### Prerequisites for ensuring Resiliency in Azure Center for SAP solutions
+- You are expected to choose Zone redundancy for SAP workload that you deploy using Azure Center for SAP solutions based on your requirements.
+- Zone redundancy for the SAP system infrastructure that you deploy using Azure Center for SAP solutions can only be chosen when creating the Virtual Instance for SAP solutions (VIS) resource. Once the VIS resource is created and infrastructure is deployed, you cannot change the underlying infrastructure configuration to zone redundant.
+
+#### Deploy an SAP system with availability zone enabled
+This section explains how you can deploy an SAP system with Zone redundancy from the Azure portal. You can also use PowerShell and CLI interfaces to deploy a zone redundant SAP system with Azure Center for SAP solutions. Learn more about [deploying a new SAP system using Azure Center for SAP solutions](/azure/sap/center-sap-solutions/deploy-s4hana).
+
+1. Open the Azure portal and navigate to the **Azure Center for SAP solutions** page.
+
+2. On the **Basics** page, pay special attention to the fields in the following table (also highlighted in the screenshot), which have specific requirements for zone redundancy.
+
+ | Setting | Suggested value | Notes for Zone Redundancy |
+ | | - | -- |
+ | **Deployment Type** | Distributed with High Availability (HA) | You should choose Availability-Zone configuration for Compute Availability|
+
+ ![Screenshot of Zone redundancy option while VIS creation.](./media/azure-center-for-sap-solutions-availability-zone.png)
+
+3. There are no more input fields in the rest of the process that affect zone redundancy. You can proceed with creating the system as per the [deployment guide](/azure/sap/center-sap-solutions/deploy-s4hana).
+
+### Zone down experience
+If you deploy the SAP system infrastructure with zone redundancy, the SAP workload fails over to the secondary virtual machine during a zone outage, and you can access the system without interruption.
+
+## Disaster recovery: cross-region fail over
+The Azure Center for SAP solutions service is zone redundant. Because no paired region exists, the service may experience downtime during a regional outage, and there is no Microsoft-initiated failover in the event of a region outage. This article explains some of the strategies that you can use to achieve cross-region resiliency for Virtual Instance for SAP solutions resources with customer-enabled disaster recovery. It gives detailed steps to follow when the region in which your Virtual Instance for SAP solutions resource exists is down.
+
+| Case # | ACSS Service Region | SAP Workload Region | Scenario | Mitigation Steps |
+|--|--||--||
+| Case 1 | A (Down) | B | ACSS service region is down | Register the workload with the ACSS service available in another region by using PowerShell or the CLI, which lets you select an available service location. |
+| Case 2 | A | B (Down) | SAP workload region is down | 1. Perform workload failover to the DR region (outside of ACSS). <br> 2. Register the failed-over workload with ACSS by using PowerShell or the CLI. |
+| Case 3 | A (Down) | B (Down) | ACSS service and SAP workload regions are down | 1. Perform workload failover to the DR region (outside of ACSS). <br> 2. Register the failed-over workload with the ACSS service available in another region by using PowerShell or the CLI, which lets you select an available service location. |
+
+### Outage detection, notification, and management
+When the service goes down in a region, customers are notified through *Azure Communications*. Customers can also check the Service Health page in the Azure portal, and can configure notifications on service health by following the [steps to create a service health alert](/azure/service-health/alerts-activity-log-service-notifications-portal?toc=%2Fazure%2Fservice-health%2Ftoc.json).
+
+### Capacity and proactive disaster recovery resiliency
+You need to plan the capacity for your workload in the DR region.
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Resiliency in Azure](/azure/reliability/availability-zones-overview)
sap Compliance Cedr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/compliance-cedr.md
+
+ Title: Customer enabled disaster recovery in Azure Center for SAP Solutions
+description: Find out about Customer enabled disaster recovery in Azure Center for SAP Solutions
+++++ Last updated : 05/15/2023+
+# Customer enabled disaster recovery in *Azure Center for SAP solutions*
+The Azure Center for SAP solutions service is zone redundant. Because no paired region exists, the service may experience downtime during a regional outage, and there is no Microsoft-initiated failover in the event of a region outage. This article explains some of the strategies that you can use to achieve cross-region resiliency for Virtual Instance for SAP solutions resources with customer-enabled disaster recovery. It gives detailed steps to follow when the region in which your Virtual Instance for SAP solutions resource exists is down.
+
+You must configure disaster recovery for SAP systems that you deploy using Azure Center for SAP solutions using [Disaster recovery overview and infrastructure guidelines for SAP workload](/azure/sap/workloads/disaster-recovery-overview-guide).
+
+In case of a region outage, customers will be notified about it. This article has the steps you can follow to get the Virtual Instance for SAP solutions resources up and running in a different region.
+
+## Prerequisites for customer-enabled disaster recovery in Azure Center for SAP solutions
+Configure disaster recovery for your SAP system (whether deployed using Azure Center for SAP solutions or otherwise) by following the [Disaster recovery overview and infrastructure guidelines for SAP workload](/azure/sap/workloads/disaster-recovery-overview-guide).
+
+## Region down scenarios and mitigation steps
+
+| Case # | ACSS Service Region | SAP Workload Region | Scenario | Mitigation Steps |
+|--|--||--||
+| Case 1 | A (Down) | B | ACSS service region is down | Register the workload with the ACSS service available in another region by using PowerShell or the CLI, which lets you select an available service location. |
+| Case 2 | A | B (Down) | SAP workload region is down | 1. Perform workload failover to the DR region (outside of ACSS). <br> 2. Register the failed-over workload with ACSS by using PowerShell or the CLI. |
+| Case 3 | A (Down) | B (Down) | ACSS service and SAP workload regions are down | 1. Perform workload failover to the DR region (outside of ACSS). <br> 2. Register the failed-over workload with the ACSS service available in another region by using PowerShell or the CLI, which lets you select an available service location. |
+
+## Steps to re-register the SAP system with Azure Center for SAP solutions in case of a regional outage
+
+1. If the region where your SAP workload exists is down (cases 2 and 3 in the preceding section), perform workload failover to the DR region (outside of ACSS) and have the workload running in a secondary region.
+
+2. If the Azure Center for SAP solutions service is down in the region where your Virtual Instance for SAP solutions resource exists (cases 1 and 3 in the preceding section), register your SAP system with the service in another available region.
+
+ ```azurepowershell-interactive
+ New-AzWorkloadsSapVirtualInstance `
+ -ResourceGroupName 'TestRG' `
+ -Name L46 `
+ -Location eastus `
+ -Environment 'NonProd' `
+ -SapProduct 'S4HANA' `
+ -CentralServerVmId '/subscriptions/sub1/resourcegroups/rg1/providers/microsoft.compute/virtualmachines/l46ascsvm' `
+ -Tag @{k1 = "v1"; k2 = "v2"} `
+ -ManagedResourceGroupName "acss-L46-rg" `
+ -ManagedRgStorageAccountName 'acssstoragel46' `
+ -IdentityType 'UserAssigned' `
+ -UserAssignedIdentity @{'/subscriptions/sub1/resourcegroups/rg1/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ACSS-MSI'= @{}}
+ ```
+3. The following table lists the locations where the Azure Center for SAP solutions service is available. It's recommended that you choose a region within the same geography as your SAP infrastructure resources.
+
+ | **Azure Center for SAP solutions service locations** |
+ | |
+ | East US |
+ | East US 2 |
+ | West US 3 |
+ | West Europe |
+ | North Europe |
+ | Australia East |
+ | East Asia |
+ | Central India |
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Deploy a new SAP system with Azure Center for SAP solutions](/azure/sap/center-sap-solutions/deploy-s4hana)
sap Deploy S4hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/deploy-s4hana.md
Title: Deploy S/4HANA infrastructure (preview)
+ Title: Deploy S/4HANA infrastructure
description: Learn how to deploy S/4HANA infrastructure with Azure Center for SAP solutions through the Azure portal. You can deploy High Availability (HA), non-HA, and single-server configurations.
#Customer intent: As a developer, I want to deploy S/4HANA infrastructure using Azure Center for SAP solutions so that I can manage SAP workloads in the Azure portal.
-# Deploy S/4HANA infrastructure with Azure Center for SAP solutions (preview)
+# Deploy S/4HANA infrastructure with Azure Center for SAP solutions
++ In this how-to guide, you'll learn how to deploy S/4HANA infrastructure in *Azure Center for SAP solutions*. There are [three deployment options](#deployment-types): distributed with High Availability (HA), distributed non-HA, and single server. ## Prerequisites -- An Azure subscription.-- Register the **Microsoft.Workloads** Resource Provider on the subscription in which you are deploying the SAP system.-- An Azure account with **Contributor** role access to the subscriptions and resource groups in which you'll create the Virtual Instance for SAP solutions (VIS) resource.-- A **User-assigned managed identity** which has Contributor role access on the Subscription or atleast all resource groups (Compute, Network,Storage). If you wish to install SAP Software through the Azure Center for SAP solutions, also provide Storage Blob data Reader, Reader and Data Access roles to the identity on SAP bits storage account where you would store the SAP Media.
+- An Azure [subscription](/azure/cost-management-billing/manage/create-subscription#create-a-subscription)
+- [Register](/azure/azure-resource-manager/management/resource-providers-and-types#azure-portal) the **Microsoft.Workloads** Resource Provider on the subscription in which you are deploying the SAP system.
+- An Azure account with **Contributor** [role](/azure/role-based-access-control/role-assignments-portal-subscription-admin) access to the subscriptions and resource groups in which you'll create the Virtual Instance for SAP solutions (VIS) resource.
+- A **User-assigned managed** [identity](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity) which has Contributor role access on the subscription, or at least on all related resource groups (Compute, Network, Storage). If you wish to install the SAP software through Azure Center for SAP solutions, also provide the Storage Blob Data Reader and Reader and Data Access roles to the identity on the storage account where you store the SAP media.
- A [network set up for your infrastructure deployment](prepare-network.md). - Availability of minimum 4 cores of either Standard_D4ds_v4 or Standard_E4s_v3 SKUS which will be used during Infrastructure deployment and Software Installation - [Review the quotas for your Azure subscription](../../quotas/view-quotas.md). If the quotas are low, you might need to create a support request before creating your infrastructure deployment. Otherwise, you might experience deployment failures or an **Insufficient quota** error.
There are three deployment options that you can select for your infrastructure,
- **99.95% (Optimize for cost)** shows three availability sets for all instances. The HA ASCS cluster is deployed in the first availability set. All Application servers are deployed across the second availability set. The HA Database server is deployed in the third availability set. No availability zone names are shown. - **Distributed** creates distributed non-HA architecture. - **Single Server** creates architecture with a single server. This option is available for non-production environments only.+
+## Supported software
+
+Azure Center for SAP solutions supports the following SAP software versions: S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, and S/4HANA 2021 ISS 00.
+
+The following operating system (OS) software versions are compatible with these SAP software versions:
+
+| Publisher | Image and Image Version | Supported SAP Software Version |
+| | -- | |
+| Red Hat | RHEL 82sapha-gen2 latest | S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00 |
+| Red Hat | RHEL 84sapha-gen2 latest | S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00 |
+| SUSE | SLES 15sp3-gen2 latest | S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00 |
+| SUSE | SLES 12sp4-gen2 latest | S/4HANA 1909 SPS 03 |
+
+- You can use `latest` if you want to use the latest image rather than a specific older version. If the *latest* image version is newly released in the marketplace and has an unforeseen issue, the deployment may fail. If you're using the portal for deployment, we recommend choosing a different image *SKU train* (for example, 12-SP4 instead of 15-SP3) until the issue is resolved. However, if you're deploying via the API or CLI, you can provide any other *image version* that is available. To view and select the available image versions from a publisher, use the following commands:
++
+ ```Powershell
+ # Example values - replace with your own region, publisher, offer, and SKU
+ $locName = "eastus"
+ $pubName = "RedHat"
+ $offerName = "RHEL-SAP-HA"
+ $skuName = "82sapha-gen2"
+
+ # List the available image versions for this publisher, offer, and SKU
+ Get-AzVMImage -Location $locName -PublisherName $pubName -Offer $offerName -Sku $skuName | Select-Object Version
+ ```
+
## Create deployment 1. Sign in to the [Azure portal](https://portal.azure.com).
There are three deployment options that you can select for your infrastructure,
1. For **Network**, create the [network you created previously with subnets](prepare-network.md).
- 1. For **Application subnet** and **Database subnet**, map the IP address ranges as required. It's recommended to use a different subnet for each deployment.
+ 1. For **Application subnet** and **Database subnet**, map the IP address ranges as required. It's recommended to use a different subnet for each deployment. The names AzureFirewallSubnet, AzureFirewallManagementSubnet, AzureBastionSubnet, and GatewaySubnet are reserved within Azure. Don't use them as subnet names.
1. Under **Operating systems**, enter the OS details.
There are three deployment options that you can select for your infrastructure,
1. For **Authentication type**, keep the setting as **SSH public**.
- 1. For **Username**, enter a username.
+ 1. For **Username**, enter an SAP administrator username.
1. For **SSH public key source**, select a source for the public key. You can choose to generate a new key pair, use an existing key stored in Azure, or use an existing public key stored on your local computer. If you don't have keys already saved, it's recommended to generate a new key pair.
There are three deployment options that you can select for your infrastructure,
1. Under **SAP Transport Directory**, enter how you want to set up the transport directory on this SID. This is applicable for Distributed with High Availability and Distributed deployments only.
- 1. For **SAP Transport Options**, you can choose to **Create a new SAP transport Directory** or **Use an existing SAP transport Directory** or completely skip the creation of transport directory by choosing **Dont include SAP transport directory** option. Currently, only NFS on AFS storage account fileshares are supported.
+ 1. For **SAP Transport Options**, you can choose to **Create a new SAP transport Directory**, **Use an existing SAP transport Directory**, or completely skip the creation of a transport directory by choosing the **Don't include SAP transport directory** option. Currently, only NFS on AFS storage account fileshares are supported.
1. If you choose to **Create a new SAP transport Directory**, this will create and mount a new transport fileshare on the SID. By Default, this option will create an NFS on AFS storage account and a transport fileshare in the resource group where SAP system will be deployed. However, you can choose to create this storage account in a different resource group by providing the resource group name in **Transport Resource Group**. You can also provide a custom name for the storage account to be created under **Storage account name** section. Leaving the **Storage account name** will create the storage account with service default name **""SIDname""nfs""random characters""** in the chosen transport resource group. Creating a new transport directory will create a ZRS based replication for zonal deployments and LRS based replication for non-zonal deployments. If your region doesn't support ZRS replication deploying a zonal VIS will lead to a failure. In such cases, you can deploy a transport fileshare outside ACSS with ZRS replication and then create a zonal VIS where you select **Use an existing SAP transport Directory** to mount the pre-created fileshare.
- 1. If you choose to **Use an existing SAP transport Directory**, select the pre - existing NFS fileshare under **File share name** option. The existing transport fileshare will be only mounted on this SID. The selected fileshare shall be in the same region as that of SAP system being created. Currently, file shares existing in a different region can not be selected. Provide the associated privated endpoint of the storage account where the selected fileshare exists under **Private Endpoint** option.
+ 1. If you choose to **Use an existing SAP transport Directory**, select the pre-existing NFS fileshare under the **File share name** option. The existing transport fileshare will only be mounted on this SID. The selected fileshare must be in the same region as the SAP system being created. Currently, file shares existing in a different region cannot be selected. Provide the associated private endpoint of the storage account where the selected fileshare exists under the **Private Endpoint** option.
- 1. You can skip the creation of transport file share by selecting **Dont include SAP transport directory** option. The transport fileshare will neither be created or mounted for this SID.
+ 1. You can skip the creation of the transport file share by selecting the **Don't include SAP transport directory** option. The transport fileshare will be neither created nor mounted for this SID.
1. Under **Configuration Details**, enter the FQDN for your SAP System.
- 1. For **SAP FQDN**, provide only the domain name for you system such "sap.contoso.com"
+ 1. For **SAP FQDN**, provide only the domain name for your system, such as "sap.contoso.com"
1. Under **User assigned managed identity**, provide the identity which Azure Center for SAP solutions will use to deploy infrastructure. 1. For **Managed identity source**, choose if you want the service to create a new managed identity or you can instead use an existing identity. If you wish to allow the service to create a managed identity, acknowledge the checkbox which asks for your consent for the identity to be created and the contributor role access to be added for all resource groups.
- 1. For **Managed identity name**, enter a name for a new identity you want to create or select an existing identity from the drop down menu. If you are selecting an existing identity, it should have **Contributor** role access on the Subscription or on Resource Groups related to this SAP system you are trying to deploy. That is, it requires Contributor access to the SAP application Resource Group, Virtual Network Resource Group and Resource Group which has the existing SSHKEY. If you wish to later install the SAP system using ACSS, we also recommend to give the **Storage Blob Data Reader and Reader** and **Data Access roles** on the Storage Account which has the SAP software media.
+ 1. For **Managed identity name**, enter a name for a new identity you want to create, or select an existing identity from the drop-down menu. If you're selecting an existing identity, it should have **Contributor** role access on the subscription or on the resource groups related to the SAP system you're trying to deploy. That is, it requires Contributor access to the SAP application resource group, the virtual network resource group, and the resource group which has the existing SSH key. If you wish to later install the SAP system using ACSS, we also recommend giving the identity the **Storage Blob Data Reader** and **Reader and Data Access** roles on the storage account which has the SAP software media.
1. Select **Next: Virtual machines**.
sap Get Quality Checks Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/get-quality-checks-insights.md
Title: Get quality checks and insights for a Virtual Instance for SAP solutions (preview)
+ Title: Get quality checks and insights for a Virtual Instance for SAP solutions
description: Learn how to get quality checks and insights for a Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions through the Azure portal.
#Customer intent: As a developer, I want to use the quality checks feature so that I can learn more insights about virtual machines within my Virtual Instance for SAP resource.
-# Get quality checks and insights for a Virtual Instance for SAP solutions (preview)
+# Get quality checks and insights for a Virtual Instance for SAP solutions
-The *Quality Insights* Azure workbook in *Azure Center for SAP solutions* provides insights about the SAP system resources. The feature is part of the monitoring capabilities built in to the *Virtual Instance for SAP solutions (VIS)*. These quality checks make sure that your SAP system uses Azure and SAP best practices for reliability and performance.
-In this how-to guide, you'll learn how to use quality checks and insights to get more information about virtual machine (VM) configurations within your SAP system.
+The *Quality Insights* Azure workbook in *Azure Center for SAP solutions* provides insights about the SAP system resources as a result of running *more than 100 quality checks on the VIS*. The feature is part of the monitoring capabilities built in to the *Virtual Instance for SAP solutions (VIS)*. These quality checks make sure that your SAP system uses Azure and SAP best practices for reliability and performance.
+
+In this how-to guide, you'll learn how to use quality checks and insights to get more information about various configurations within your SAP system.
## Prerequisites
There are multiple sections in the workbook:
## Get Advisor Recommendations
-The **Quality checks** feature in Azure Center for SAP solutions runs validation checks for all VIS resources. These quality checks validate the SAP system configurations follow the best practices recommended by SAP and Azure. If a VIS doesn't follow these best practices, you receive a recommendation from Azure Advisor.
+The **Quality checks** feature in Azure Center for SAP solutions runs validation checks for all VIS resources. These quality checks validate that the SAP system configurations follow the best practices recommended for SAP on Azure. If a VIS doesn't follow these best practices, you receive a recommendation from Azure Advisor.
+Azure Center for SAP solutions runs more than 100 quality checks on all VIS resources. These checks span across the following categories:
+
+- Azure Infrastructure checks
+- OS parameter checks
+- High availability (HA) Load Balancer checks
+- HANA DB file system checks
+- OS parameter checks for ANF file system
+- Pacemaker configuration checks for HANA DB and ASCS Instance for SUSE and Red Hat
+- OS Configuration checks for Application Instances
The table in the **Advisor Recommendations** tab shows all the recommendations for ASCS, Application and Database instances in the VIS.
Select an instance name to see all recommendations, including which action to ta
:::image type="content" source="media/get-quality-checks-insights/recommendation-detail.png" lightbox="media/get-quality-checks-insights/recommendation-detail.png" alt-text="Screenshot of detailed advisor recommendations for an instance and which actions to take to resolve each issue.":::
-The following checks are run for each VIS:
+### Set Alerts for Quality check recommendations
+Because the Quality checks recommendations in Azure Center for SAP solutions are integrated with *Azure Advisor*, you can set alerts for the recommendations. See how to [Configure alerts for recommendations](/azure/advisor/advisor-alerts-portal).
-- Checks that the VMs used for different instances in the VIS are certified by SAP. For better performance and support, make sure that a VM is certified for SAP on Azure. For more details, see [SAP note 1928533] (https://launchpad.support.sap.com/#/notes/1928533).-- Checks that accelerated networking is enabled for the NICs attached to the different VMs. Network latency between Application VMs and Database VMs for SAP workloads must be 0.7 ms or less. If accelerated networking isn't enabled, network latency can increase beyond the threshold of 0.7 ms. For more details, see the [planning and deployment checklist for SAP workloads on Azure](../workloads/deployment-checklist.md).-- Checks that the network configuration is optimized for HANA and the OS. Makes sure that as many client ports as possible are available for HANA internal communication. You must explicitly exclude the ports used by processes and applications which bind to specific ports by adjusting the parameter `net.ipv4.ip_local_reserved_ports` to a range of 9000-64999. For more details, see [SAP note 2382421](https://launchpad.support.sap.com/#/notes/2382421).-- Checks that swap space is set to 2 GB in HANA systems. For SLES and RHEL, configure a small swap space of 2 GB to avoid performance regressions at times of high memory utilization in the OS. Typically, it's recommended that activities terminate with "out of memory" errors. This setting makes sure that the overall system is still usable and only certain requests are terminated. For more details, see [SAP note 1999997](https://launchpad.support.sap.com/#/notes/1999997).-- Checks that **fstrim** is disabled in SAP systems that run on SUSE OS. **fstrim** scans the filesystem and sends `UNMAP` commands for each unused block found. This setting is useful in a thin-provisioned system, if the system is over-provisioned. It's not recommended to run SAP HANA on an over-provisioned storage array. Active **fstrim** can cause XFS metadata corruption. For more information, see [SAP note 2205917](https://launchpad.support.sap.com/#/notes/2205917) and [Disabling fstrim - under which conditions?](https://www.suse.com/support/kb/doc/?id=000019447).
+> [!NOTE]
+> These quality checks run on all VIS instances at a regular frequency of once every hour. The corresponding recommendations in Azure Advisor also refresh at the same 1-hour frequency. If you take action on one or more recommendations from Azure Center for SAP solutions, wait for the next refresh to see any new recommendations from Azure Advisor.
+> [!IMPORTANT]
+> Azure Advisor filters out recommendations for deleted Azure resources for 7 days. Therefore, if you delete a VIS and then re-register it, you'll see Advisor recommendations only 7 days after re-registration.
-> [!NOTE]
-> These quality checks run on all VIS instances at a regular frequency of 12 hours. The corresponding recommendations in Azure Advisor also refresh at the same 12-hour frequency.
-If you take action on one or more recommendations from Azure Center for SAP solutions, wait for the next refresh to see any new recommendations from Azure Advisor.
## Get VM information
sap Get Sap Installation Media https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/get-sap-installation-media.md
Title: Get SAP installation media (preview)
+ Title: Get SAP installation media
description: Learn how to download the necessary SAP media for installing the SAP software and upload it for use with Azure Center for SAP solutions. Note that this is an *optional* step; the media and BOM can be obtained with whatever method the customer prefers.
#Customer intent: As a developer, I want to download the necessary SAP media for installing the SAP software and upload it for use with Azure Center for SAP solutions.
-# Get SAP installation media (preview)
+# Get SAP installation media
+ After you've [created infrastructure for your new SAP system using *Azure Center for SAP solutions*](deploy-s4hana.md), you need to install the SAP software on your SAP system. However, before you can do this installation, you need to get and upload the SAP installation media for use with Azure Center for SAP solutions.
sap Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/install-software.md
Title: Install SAP software (preview)
+ Title: Install SAP software
description: Learn how to install SAP software on an SAP system that you created using Azure Center for SAP solutions. You can either install the SAP software with Azure Center for SAP solutions, or install the software outside the service and detect the installed system.
#Customer intent: As a developer, I want to install SAP software so that I can use Azure Center for SAP solutions.
-# Install SAP software (preview)
+# Install SAP software
++ After you've created infrastructure for your new SAP system using *Azure Center for SAP solutions*, you need to install the SAP software.
Review the prerequisites for your preferred installation method: [through the Az
- An Azure subscription. - An Azure account with **Contributor** role access to the subscriptions and resource groups in which the Virtual Instance for SAP solutions exists. - A user-assigned managed identity with **Storage Blob Data Reader** and **Reader and Data Access** roles on the Storage Account which has the SAP software. -- A [network set up for your infrastructure deployment](prepare-network.md).
+- A [network set up for your SAP deployment](prepare-network.md).
- A deployment of S/4HANA infrastructure.-- The SSH private key for the virtual machines in the SAP system. You generated this key during the infrastructure deployment. - If you are installing an SAP System through Azure Center for SAP solutions, you should have the SAP installation media available in a storage account. For more information, see [how to download the SAP installation media](get-sap-installation-media.md).-- If you're installing a Highly Available (HA) SAP system, get the Service Principal identifier (SPN ID) and password to authorize the Azure fence agent (fencing device) against Azure resources. For more information, see [Use Azure CLI to create an Azure AD app and configure it to access Media Services API](/azure/media-services/previous/media-services-cli-create-and-configure-aad-app).
+- If you're installing a Highly Available (HA) SAP system, get the Service Principal identifier (SPN ID) and password to authorize the Azure fence agent (fencing device) against Azure resources. For more information, see [using a service principal to authorize the Azure fence agent](/azure/sap/workloads/high-availability-guide-suse-pacemaker#using-service-principal).
- For an example, see the Red Hat documentation for [Creating an Azure Active Directory Application](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/deploying_red_hat_enterprise_linux_7_on_public_cloud_platforms/configuring-rhel-high-availability-on-azure_cloud-content#azure-create-an-azure-directory-application-in-ha_configuring-rhel-high-availability-on-azure). - To avoid frequent password expiry, use the Azure Command-Line Interface (Azure CLI) to create the Service Principal identifier and password instead of the Azure portal.
sap Manage Virtual Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/manage-virtual-instance.md
Title: Manage a Virtual Instance for SAP solutions (preview)
+ Title: Manage a Virtual Instance for SAP solutions
description: Learn how to configure a Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions through the Azure portal.
#Customer intent: As a SAP Basis Admin, I want to view and manage my SAP systems using Virtual Instance for SAP solutions resource where I can find SAP system properties.
-# Manage a Virtual Instance for SAP solutions (preview)
+# Manage a Virtual Instance for SAP solutions
In this article, you'll learn how to view the *Virtual Instance for SAP solutions (VIS)* resource created in *Azure Center for SAP solutions* through the Azure portal. You can use these steps to find your SAP system's properties and connect parts of the VIS to other resources like databases.
sap Manage With Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/manage-with-azure-rbac.md
Title: Manage Azure Center for SAP solutions resources with Azure RBAC (preview)
+ Title: Manage Azure Center for SAP solutions resources with Azure RBAC
description: Use Azure role-based access control (Azure RBAC) to manage access to your SAP workloads within Azure Center for SAP solutions.
Last updated 02/03/2023
-# Management of Azure Center for SAP solutions resources with Azure RBAC (preview)
+# Management of Azure Center for SAP solutions resources with Azure RBAC
[Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) enables granular access management for Azure. You can use Azure RBAC to manage Virtual Instance for SAP solutions resources within Azure Center for SAP solutions. For example, you can separate duties within your team and grant only the amount of access that users need to perform their jobs.
sap Monitor Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/monitor-portal.md
Title: Monitor SAP system from the Azure portal (preview)
+ Title: Monitor SAP system from the Azure portal
description: Learn how to monitor the health and status of your SAP system, along with important SAP metrics, using the Azure Center for SAP solutions within the Azure portal.
#Customer intent: As a developer, I want to set up monitoring for my Virtual Instance for SAP solutions, so that I can monitor the health and status of my SAP system in Azure Center for SAP solutions.
-# Monitor SAP system from Azure portal (preview)
+# Monitor SAP system from Azure portal
+ In this how-to guide, you'll learn how to monitor the health and status of your SAP system with *Azure Center for SAP solutions* through the Azure portal. The following capabilities are available for your *Virtual Instance for SAP solutions* resource:
To register an existing Azure Monitor for SAP solutions resource, select the ins
:::image type="content" source="media/monitor-portal/ams-registration.png" lightbox="media/monitor-portal/ams-registration.png" alt-text="Screenshot of Azure Monitor for SAP solutions registration page, showing the selection of an existing Azure Monitor for SAP solutions resource.":::
-## Unregister Azure Monitor for SAP solutions from VIS
+### Unregister Azure Monitor for SAP solutions from VIS
> [!NOTE] > This operation only unregisters the Azure Monitor for SAP solutions resource from the VIS. To delete the Azure Monitor for SAP solutions resource, you need to delete the Azure Monitor for SAP solutions instance.
To remove the link between your Azure Monitor for SAP solutions resource and you
1. Wait for the confirmation message, **Azure Monitor for SAP solutions has been unregistered successfully**.
+## Troubleshooting issues with Health and Status on VIS
+If an error appears on a successfully registered or deployed Virtual Instance for SAP solutions resource indicating that the service is unable to fetch health and status data, use the guidance in this section to fix the problem.
+
+### Error - Unable to fetch health and status data from primary SAP Central services VM
+**Possible causes:**
+1. The SAP central services VM might not be running.
+2. The monitoring VM extension might not be running or encountered an unexpected failure on the central services VM.
+3. The storage account in the managed resource group isn't reachable from the central services VM(s), or the storage account or the underlying container/blob required by the monitoring service has been deleted.
+4. The central services VM(s) system-assigned managed identity doesn't have 'Storage Blob Data Owner' access on the managed resource group, or this managed identity has been disabled.
+5. The sapstartsrv process might not be running for the SAP instance or for the SAP hostctrl agent on the primary central services VM.
+6. The monitoring VM extension couldn't execute the script to fetch health and status information due to policies or restrictions in place on the VM.
+
+**Solution:**
+1. If the SAP central services VM isn't running, bring up the virtual machine and the SAP services on the VM. Then wait a few minutes and check whether health and status data appears on the VIS resource.
+2. Navigate to the SAP central services VM in the Azure portal and check whether the status of **Microsoft.Workloads.MonitoringExtension** on the **Extensions + applications** tab shows **Provisioning Succeeded** (see the sketch after this list). If not, raise a support ticket.
+3. Navigate to the VIS resource and open the managed resource group from the **Essentials** section on **Overview**. Check whether a storage account exists in this resource group. If it exists, check whether your virtual network allows connectivity from the SAP central services VM to this storage account, and enable connectivity if needed. If the storage account doesn't exist, delete the VIS resource and register the system again.
+4. Check whether the SAP central services VM system-assigned managed identity has 'Storage Blob Data Owner' access on the managed resource group of the VIS. If not, provide the necessary access. If the system-assigned managed identity doesn't exist, delete the VIS and re-register the system.
+5. Ensure the sapstartsrv process for the SAP instance and for SAP hostctrl is running on the central services VM.
+6. If everything mentioned above is in place, then log a support ticket.
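+
+For steps 1 and 2 above, a minimal sketch of the checks using the Azure CLI. The resource group and VM names are placeholders, and the extension name mirrors the portal display, so verify it against the **Extensions + applications** tab before relying on it:
+
+```azurecli-interactive
+# Check whether the central services VM is running (placeholder names).
+az vm get-instance-view -g <Resource-group-name> -n <CentralServicesVmName> \
+  --query "instanceView.statuses[?starts_with(code,'PowerState/')].displayStatus" -o tsv
+
+# Check the provisioning state of the monitoring VM extension.
+az vm extension show -g <Resource-group-name> --vm-name <CentralServicesVmName> \
+  --name Microsoft.Workloads.MonitoringExtension --query provisioningState -o tsv
+```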
+ ## Next steps - [Get quality checks and insights for your VIS](get-quality-checks-insights.md)
sap Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/overview.md
Title: Azure Center for SAP solutions (preview)
+ Title: Azure Center for SAP solutions
description: Azure Center for SAP solutions is an Azure offering that makes SAP a top-level workload on Azure. You can use Azure Center for SAP solutions to deploy or manage SAP systems on Azure seamlessly.
#Customer intent: As a developer, I want to learn about Azure Center for SAP solutions so that I can decide to use the service with a new or existing SAP system.
-# What is Azure Center for SAP solutions? (preview)
+# What is Azure Center for SAP solutions?
+ *Azure Center for SAP solutions* is an Azure offering that makes SAP a top-level workload on Azure. Azure Center for SAP solutions is an end-to-end solution that enables you to create and run SAP systems as a unified workload on Azure and provides a more seamless foundation for innovation. You can take advantage of the management capabilities for both new and existing Azure-based SAP systems.
sap Prepare Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/prepare-network.md
Title: Prepare network for infrastructure deployment (preview)
+ Title: Prepare network for infrastructure deployment
description: Learn how to prepare a network for use with an S/4HANA infrastructure deployment with Azure Center for SAP solutions through the Azure portal.
#Customer intent: As a developer, I want to create a virtual network so that I can deploy S/4HANA infrastructure in Azure Center for SAP solutions.
-# Prepare network for infrastructure deployment (preview)
+# Prepare network for infrastructure deployment
+ In this how-to guide, you'll learn how to prepare a virtual network to deploy S/4 HANA infrastructure using *Azure Center for SAP solutions*. This article provides general guidance about creating a virtual network. Your individual environment and use case will determine how you need to configure your own network settings for use with a *Virtual Instance for SAP (VIS)* resource.
sap Quick Stop Start Sap Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/quick-stop-start-sap-cli.md
+
+ Title: Quickstart - Start and stop SAP systems from Azure Center for SAP solutions with CLI
+description: Learn how to start or stop an SAP system through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions through Azure CLI.
+++ Last updated : 05/04/2023++
+#Customer intent: As a developer, I want to start and stop SAP systems in Azure Center for SAP solutions so that I can control instances through the Virtual Instance for SAP resource.
+
+# Quickstart: Start and stop SAP systems from Azure Center for SAP solutions with CLI
+
+The Azure CLI is used to create and manage Azure resources from the command line or in scripts.
+
+In this how-to guide, you'll learn how to start and stop your SAP systems through the *Virtual Instance for SAP solutions (VIS)* resource in *Azure Center for SAP solutions* using the Azure CLI.
+
+Through the Azure CLI, you can start and stop:
+
+- The entire SAP Application tier, which includes ABAP SAP Central Services (ASCS) and Application Server instances.
+- Individual SAP instances, which include Central Services and Application server instances.
+- HANA Database
+- You can start and stop instances in the following types of deployments:
+ - Single-Server
+ - High Availability (HA)
+ - Distributed Non-HA
+- SAP systems that run on Windows and Linux operating systems (OS).
+- SAP HA systems that use Linux Pacemaker clustering software and Windows Server Failover Clustering (WSFC). Other certified cluster software isn't currently supported.
+
+## Prerequisites
+- An SAP system that you've [created in Azure Center for SAP solutions](prepare-network.md) or [registered with Azure Center for SAP solutions](register-existing-system.md) as a *Virtual Instance for SAP solutions* resource.
+- For the start operation to work, the underlying virtual machines (VMs) of the SAP instances must be running; see the sketch after this list for how to start them. This capability starts or stops the SAP application instances, not the VMs that make up the SAP system resources.
+- The `sapstartsrv` service must be running on all VMs related to the SAP system.
+- For HA deployments, the HA interface cluster connector for SAP (`sap_vendor_cluster_connector`) must be installed on the ASCS instance. For more information, see the [SUSE connector specifications](https://www.suse.com/c/sap-netweaver-suse-cluster-integration-new-sap_suse_cluster_connector-version-3-0-0/) and [RHEL connector specifications](https://access.redhat.com/solutions/3606101).
+- The stop operation for the HANA database can only be initiated when the cluster maintenance mode is in **Disabled** status. Similarly, the start operation can only be initiated when the cluster maintenance mode is in **Enabled** status.
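+
+As a quick way to satisfy the running-VM prerequisite above, a minimal sketch (placeholder names) that starts a single VM with the Azure CLI; repeat it for each VM in the SAP system:
+
+```azurecli-interactive
+# Start a stopped or deallocated VM that hosts an SAP instance (placeholder names).
+az vm start -g <Resource-group-name> -n <SAP-VM-name>
+```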
+
+## Start SAP system
+To start an SAP system represented as a *Virtual Instance for SAP solutions* resource:
+
+Use the [az workloads sap-virtual-instance start](/cli/azure/workloads/sap-virtual-instance#az-workloads-sap-virtual-instance-start) command:
+
+Option 1:
+
+Use the Virtual Instance for SAP solutions resource Name and ResourceGroupName to identify the system you intend to start.
+
+```azurecli-interactive
+ az workloads sap-virtual-instance start -g <Resource-group-name> -n <ResourceName>
+```
+Option 2:
+
+Use the `id` parameter and pass the resource ID of the Virtual Instance for SAP solutions resource you intend to start.
+
+```azurecli-interactive
+ az workloads sap-virtual-instance start --id <ResourceID>
+```
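+
+If you don't have the resource ID handy, a minimal sketch (placeholder names) that looks it up with the `show` command and then starts the VIS:
+
+```azurecli-interactive
+# Look up the resource ID of the VIS, then start it (placeholder names).
+VIS_ID=$(az workloads sap-virtual-instance show -g <Resource-group-name> -n <ResourceName> --query id -o tsv)
+az workloads sap-virtual-instance start --id $VIS_ID
+```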
+
+## Stop SAP system
+
+To stop an SAP system represented as a *Virtual Instance for SAP solutions* resource:
+
+Use the [az workloads sap-virtual-instance stop](/cli/azure/workloads/sap-virtual-instance#az-workloads-sap-virtual-instance-stop) command:
+
+Option 1:
+
+Use the Virtual Instance for SAP solutions resource Name and ResourceGroupName to identify the system you intend to stop.
+
+```azurecli-interactive
+ az workloads sap-virtual-instance stop -g <Resource-group-name> -n <ResourceName>
+```
+Option 2:
+
+Use the `id` parameter and pass the resource ID of the Virtual Instance for SAP solutions resource you intend to stop.
+
+```azurecli-interactive
+ az workloads sap-virtual-instance stop --id <ResourceID>
+```
+
+ ## Next steps
+- [Monitor SAP system from the Azure portal](monitor-portal.md)
sap Quick Stop Start Sap Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/quick-stop-start-sap-powershell.md
+
+ Title: Quickstart - Start and stop SAP systems from Azure Center for SAP solutions with PowerShell
+description: Learn how to start or stop an SAP system through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions through Azure PowerShell module.
+++ Last updated : 05/04/2023++
+#Customer intent: As a developer, I want to start and stop SAP systems in Azure Center for SAP solutions so that I can control instances through the Virtual Instance for SAP resource.
+
+# Quickstart: Start and stop SAP systems from Azure Center for SAP solutions with PowerShell
+
+The [Azure PowerShell AZ](/powershell/azure/new-azureps-module-az) module is used to create and manage Azure resources from the command line or in scripts.
+
+In this how-to guide, you'll learn to start and stop your SAP systems through the *Virtual Instance for SAP solutions (VIS)* resource in *Azure Center for SAP solutions* using PowerShell.
+
+Through the Azure PowerShell module, you can start and stop:
+
+- The entire SAP Application tier, which includes ABAP SAP Central Services (ASCS) and Application Server instances.
+- Individual SAP instances, which include Central Services and Application server instances.
+- HANA Database
+- You can start and stop instances in the following types of deployments:
+ - Single-Server
+ - High Availability (HA)
+ - Distributed Non-HA
+- SAP systems that run on Windows and Linux operating systems (OS).
+- SAP HA systems that use Linux Pacemaker clustering software and Windows Server Failover Clustering (WSFC). Other clustering software solutions aren't currently supported.
+
+## Prerequisites
+
+The following prerequisites must be met before you use the start or stop capability on the Virtual Instance for SAP solutions resource.
+- An SAP system that you've [created in Azure Center for SAP solutions](prepare-network.md) or [registered with Azure Center for SAP solutions](register-existing-system.md) as a *Virtual Instance for SAP solutions* resource.
+- For the start operation to work, the underlying virtual machines (VMs) of the SAP instances must be running. This capability starts or stops the SAP application instances, not the VMs that make up the SAP system resources.
+- The `sapstartsrv` service must be running on all VMs related to the SAP system.
+- For HA deployments, the HA interface cluster connector for SAP (`sap_vendor_cluster_connector`) must be installed on the ASCS instance. For more information, see the [SUSE connector specifications](https://www.suse.com/c/sap-netweaver-suse-cluster-integration-new-sap_suse_cluster_connector-version-3-0-0/) and [RHEL connector specifications](https://access.redhat.com/solutions/3606101).
+- The stop operation for the HANA database can only be initiated when the cluster maintenance mode is in **Disabled** status. Similarly, the start operation can only be initiated when the cluster maintenance mode is in **Enabled** status.
+
+## Start SAP system
+
+To start an SAP system represented as a *Virtual Instance for SAP solutions* resource:
+
+Use the [Start-AzWorkloadsSapVirtualInstance](/powershell/module/az.workloads/Start-AzWorkloadsSapVirtualInstance) command:
+
+Option 1:
+
+Use the Virtual Instance for SAP solutions resource Name and ResourceGroupName to identify the system you intend to start.
+
+```powershell
+ Start-AzWorkloadsSapVirtualInstance -Name DB0 -ResourceGroupName db0-vis-rg
+```
+
+Option 2:
+
+Use the InputObject parameter and pass the resource ID of the Virtual Instance for SAP solutions resource you intend to start.
+
+ ```powershell
+ Start-AzWorkloadsSapVirtualInstance -InputObject /subscriptions/sub1/resourceGroups/rg1/providers/Microsoft.Workloads/sapVirtualInstances/DB0
+ ```
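+
+If you don't have the resource ID handy, a minimal sketch that retrieves the VIS with the `Get` cmdlet and then starts it by resource ID; it assumes the returned object exposes an `Id` property and reuses the example names above:
+
+```powershell
+# Look up the VIS, then start it by resource ID (example names from above).
+$vis = Get-AzWorkloadsSapVirtualInstance -ResourceGroupName db0-vis-rg -Name DB0
+Start-AzWorkloadsSapVirtualInstance -InputObject $vis.Id
+```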
+
+## Stop SAP system
+
+To stop an SAP system represented as a *Virtual Instance for SAP solutions* resource:
+
+Use the [Stop-AzWorkloadsSapVirtualInstance](/powershell/module/az.workloads/Stop-AzWorkloadsSapVirtualInstance) command:
+
+Option 1:
+
+Use the Virtual Instance for SAP solutions resource Name and ResourceGroupName to identify the system you intend to stop.
+
+```powershell
+ Stop-AzWorkloadsSapVirtualInstance -Name DB0 -ResourceGroupName db0-vis-rg
+```
+
+Option 2:
+
+Use the InputObject parameter and pass the resource ID of the Virtual Instance for SAP solutions resource you intend to stop.
+
+```powershell
+Stop-AzWorkloadsSapVirtualInstance -InputObject /subscriptions/sub1/resourceGroups/rg1/providers/Microsoft.Workloads/sapVirtualInstances/DB0
+```
+
+## Next steps
+
+- [Monitor SAP system from the Azure portal](monitor-portal.md)
sap Quickstart Create Distributed Non High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/quickstart-create-distributed-non-high-availability.md
+
+ Title: Quickstart - Create a distributed non-HA SAP system with Azure Center for SAP solutions with PowerShell
+description: Learn how to create a distributed non-HA SAP system in Azure Center for SAP solutions through Azure PowerShell module.
+++ Last updated : 05/04/2023++
+#Customer intent: As a developer, I want to create a distributed non-HA SAP system so that I can use the system with Azure Center for SAP solutions.
+
+# Quickstart: Create infrastructure for a distributed non-high-availability SAP system with *Azure Center for SAP solutions*
+
+The [Azure PowerShell AZ](/powershell/azure/new-azureps-module-az) module is used to create and manage Azure resources from the command line or in scripts.
+
+[Azure Center for SAP solutions](overview.md) enables you to deploy and manage SAP systems on Azure. This article shows you how to deploy infrastructure for an SAP system with a non-highly-available (non-HA) distributed architecture on Azure with *Azure Center for SAP solutions* using the Az PowerShell module. Alternatively, you can deploy SAP systems using the Azure CLI or in the Azure portal.
+
+After you deploy infrastructure and [install SAP software](install-software.md) with *Azure Center for SAP solutions*, you can use its visualization, management and monitoring capabilities through the Azure portal. For example, you can:
+
+- View and track the SAP system as an Azure resource, called the *Virtual Instance for SAP solutions (VIS)*.
+- Get recommendations for your SAP infrastructure, Operating System configurations etc. based on quality checks that evaluate best practices for SAP on Azure.
+- Get health and status information about your SAP system.
+- Start and Stop SAP application tier.
+- Start and Stop individual instances of ASCS, App server and HANA Database.
+- Monitor the Azure infrastructure metrics for the SAP system resources.
+- View Cost Analysis for the SAP system.
+
+## Prerequisites
+
+- An Azure subscription.
+- If you're using Azure Center for SAP solutions for the first time, register the **Microsoft.Workloads** resource provider on the subscription in which you're deploying the SAP system. Use [Register-AzResourceProvider](/powershell/module/az.Resources/Register-azResourceProvider), as follows:
+
+ ```powershell
+ Register-AzResourceProvider -ProviderNamespace "Microsoft.Workloads"
+ ```
+
+- An Azure account with **Azure Center for SAP solutions administrator** and **Managed Identity Operator** role access to the subscriptions and resource groups in which you'll create the Virtual Instance for SAP solutions (VIS) resource.
+- A **user-assigned managed identity** that has **Azure Center for SAP solutions service role** access on the subscription, or at least on all resource groups (compute, network, storage). If you want to install SAP software through Azure Center for SAP solutions, also grant the **Reader and Data Access** role to the identity on the storage account where you store the SAP media.
+- A [network set up for your infrastructure deployment](prepare-network.md).
+- Availability of at least 4 cores of either the Standard_D4ds_v4 or Standard_E4s_v3 SKU, which is used during infrastructure deployment and software installation.
+- [Review the quotas for your Azure subscription](../../quotas/view-quotas.md). If the quotas are low, you might need to create a support request before creating your infrastructure deployment. Otherwise, you might experience deployment failures or an **Insufficient quota** error.
+- Note the SAP Application Performance Standard (SAPS) and database memory size that you need, so that Azure Center for SAP solutions can size your SAP system. If you're not sure, you can also select the VMs yourself. There are:
+ - A single or cluster of ASCS VMs, which make up a single ASCS instance in the VIS.
+ - A single or cluster of Database VMs, which make up a single Database instance in the VIS.
+ - A single Application Server VM, which makes up a single Application instance in the VIS. Depending on the number of Application Servers being deployed or registered, there can be multiple application instances.
+
+- Azure Cloud Shell or Azure PowerShell.
+
+ The steps in this quickstart run the Azure PowerShell cmdlets interactively in [Azure Cloud Shell](/azure/cloud-shell/overview). To run the commands in the Cloud Shell, select **Open Cloudshell** at the upper-right corner of a code block. Select **Copy** to copy the code and then paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal.
+
+ You can also [install Azure PowerShell locally](/powershell/azure/install-Az-ps) to run the cmdlets. The steps in this article require Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find your installed version. If you need to upgrade, see [Update the Azure PowerShell module](/powershell/azure/install-Az-ps#update-the-azure-powershell-module).
+
+ If you run PowerShell locally, run `Connect-AzAccount` to connect to Azure.
+
+## Right Size the SAP system you want to deploy
+
+Use [Invoke-AzWorkloadsSapSizingRecommendation](/powershell/module/az.workloads/invoke-azworkloadssapsizingrecommendation) to get SAP system sizing recommendations by providing the SAPS input for the application tier and the memory required for the database tier:
+
+```powershell
+Invoke-AzWorkloadsSapSizingRecommendation -Location eastus -AppLocation eastus -DatabaseType HANA -DbMemory 256 -DeploymentType ThreeTier -Environment NonProd -SapProduct S4HANA -Sap 10000 -DbScaleMethod ScaleUp
+```
+
+## Create *json* configuration file
+
+Prepare a *json* file with the payload to use for the deployment of the SAP system infrastructure. You can edit this [sample payload](https://github.com/Azure/Azure-Center-for-SAP-solutions-preview/blob/main/Payload_Samples/CreatePayloadDistributedNon-HA.json) or use the examples listed in the [REST API documentation](/rest/api/workloads) for Azure Center for SAP solutions.
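+
+As a starting point, a minimal sketch (assuming the sample file is still published at that path) that downloads the sample payload to the file name used in the deployment command below:
+
+```powershell
+# Download the sample payload and edit it locally before deploying (raw URL derived from the link above).
+Invoke-WebRequest -Uri "https://raw.githubusercontent.com/Azure/Azure-Center-for-SAP-solutions-preview/main/Payload_Samples/CreatePayloadDistributedNon-HA.json" -OutFile .\CreatePayload.json
+```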
+
+## Deploy infrastructure for your SAP system
+
+Use [New-AzWorkloadsSapVirtualInstance](/powershell/module/az.workloads/new-azworkloadssapvirtualinstance) to deploy infrastructure for your SAP system with a three-tier non-HA architecture:
+
+```powershell
+New-AzWorkloadsSapVirtualInstance -ResourceGroupName 'PowerShell-CLI-TestRG' -Name L46 -Location eastus -Environment 'NonProd' -SapProduct 'S4HANA' -Configuration .\CreatePayload.json -Tag @{k1 = "v1"; k2 = "v2"} -IdentityType 'UserAssigned' -ManagedResourceGroupName "L46-rg" -UserAssignedIdentity @{'/subscriptions/49d64d54-e966-4c46-a868-1999802b762c/resourcegroups/SAP-E2ETest-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/E2E-RBAC-MSI'= @{}}
+```
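+
+To check on the Virtual Instance for SAP solutions resource after the deployment command completes, or from another session while it runs, a minimal sketch using the `Get` cmdlet with the names from the example above:
+
+```powershell
+# Check the provisioning state of the VIS created by the deployment.
+Get-AzWorkloadsSapVirtualInstance -ResourceGroupName 'PowerShell-CLI-TestRG' -Name L46
+```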
++
+## Next steps
+In this quickstart, you deployed infrastructure in Azure for an SAP system using Azure Center for SAP solutions. Continue to the next article to learn how to install SAP software on the infrastructure deployed.
+> [!div class="nextstepaction"]
+> [Install SAP software](install-software.md)
++++++
sap Quickstart Create High Availability Namecustom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/quickstart-create-high-availability-namecustom.md
+
+ Title: Quickstart - Create a distributed highly available SAP system with Azure Center for SAP solutions with Azure CLI
+description: Learn how to create a distributed highly available SAP system in Azure Center for SAP solutions through Azure CLI.
+++ Last updated : 05/04/2023++
+#Customer intent: As a developer, I want to Create a Distributed Highly available SAP system so that I can use the system with Azure Center for SAP solutions.
+
+# Quickstart: Use Azure CLI to create infrastructure for a distributed highly available (HA) SAP system with Azure Center for SAP solutions with customized resource names
+
+The [Azure CLI](/cli/azure/) is used to create and manage Azure resources from the command line or in scripts.
+
+[Azure Center for SAP solutions](overview.md) enables you to deploy and manage SAP systems on Azure. This article shows you how to use the Azure CLI to deploy infrastructure for an SAP system with a highly available (HA) three-tier distributed architecture. You also see how to customize resource names for the Azure infrastructure that gets deployed. Alternatively, you can deploy SAP systems with customized resource names using the [Azure PowerShell module](/powershell/module/az.workloads/new-azworkloadssapvirtualinstance).
+
+After you deploy infrastructure and [install SAP software](install-software.md) with *Azure Center for SAP solutions*, you can use its visualization, management and monitoring capabilities through the Azure portal. For example, you can:
+
+- View and track the SAP system as an Azure resource, called the *Virtual Instance for SAP solutions (VIS)*.
+- Get recommendations for your SAP infrastructure, Operating System configurations etc. based on quality checks that evaluate best practices for SAP on Azure.
+- Get health and status information about your SAP system.
+- Start and Stop SAP application tier.
+- Start and Stop individual instances of ASCS, App server and HANA Database.
+- Monitor the Azure infrastructure metrics for the SAP system resources.
+- View Cost Analysis for the SAP system.
+
+## Prerequisites
+
+- An Azure subscription.
+- If you're using Azure Center for SAP solutions for the first time, register the **Microsoft.Workloads** resource provider on the subscription in which you're deploying the SAP system:
+
+ ```azurecli-interactive
+ az provider register --namespace 'Microsoft.Workloads'
+ ```
+
+- An Azure account with **Azure Center for SAP solutions administrator** and **Managed Identity Operator** role access to the subscriptions and resource groups in which you create the Virtual Instance for SAP solutions (VIS) resource.
+- A **user-assigned managed identity** that has **Azure Center for SAP solutions service role** access on the subscription, or at least on all resource groups (compute, network, storage). If you want to install SAP software through Azure Center for SAP solutions, also grant the **Reader and Data Access** role to the identity on the storage account where you store the SAP media.
+- A [network set up for your infrastructure deployment](prepare-network.md).
+- Availability of at least 4 cores of either the Standard_D4ds_v4 or Standard_E4s_v3 SKU, which is used during infrastructure deployment and software installation.
+- [Review the quotas for your Azure subscription](../../quotas/view-quotas.md). If the quotas are low, you might need to create a support request before creating your infrastructure deployment. Otherwise, you might experience deployment failures or an **Insufficient quota** error.
+- Note the SAP Application Performance Standard (SAPS) and database memory size that you need, so that Azure Center for SAP solutions can size your SAP system. If you're not sure, you can also select the VMs yourself. There are:
+ - A single or cluster of ASCS VMs, which make up a single ASCS instance in the VIS.
+ - A single or cluster of Database VMs, which make up a single Database instance in the VIS.
+ - A single Application Server VM, which makes up a single Application instance in the VIS. Depending on the number of Application Servers being deployed or registered, there can be multiple application instances.
++
+## Right Size the SAP system you want to deploy
+
+Use [az workloads sap-sizing-recommendation](/cli/azure/workloads?view=azure-cli-latest#az-workloads-sap-sizing-recommendation&preserve-view=true) to get SAP system sizing recommendations by providing the SAPS input for the application tier and the memory required for the database tier:
+
+```azurecli-interactive
+az workloads sap-sizing-recommendation --app-location "eastus" --database-type "HANA" --db-memory 1024 --deployment-type "ThreeTier" --environment "Prod" --high-availability-type "AvailabilitySet" --sap-product "S4HANA" --saps 75000 --location "eastus2" --db-scale-method ScaleUp
+```
+
+## Create *json* configuration file with custom resource names
+
+- Prepare a *json* file with the configuration (payload) to use for the deployment of the SAP system infrastructure. You can edit this [sample payload](https://github.com/Azure/Azure-Center-for-SAP-solutions-preview/blob/main/Payload_Samples/CreatePayload_withTransportDirectory_withHAAvSet_withCustomResourceName.json) or use the examples listed in the [REST API documentation](/rest/api/workloads) for Azure Center for SAP solutions; a download sketch follows this list.
+- In this json file, provide the custom resource names for the infrastructure that is deployed for your SAP system.
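+
+A minimal sketch (assuming the sample file is still published at that path) that downloads the sample payload for local editing:
+
+```azurecli-interactive
+# Download the sample payload with custom resource names (raw URL derived from the link above).
+curl -o CreatePayload.json https://raw.githubusercontent.com/Azure/Azure-Center-for-SAP-solutions-preview/main/Payload_Samples/CreatePayload_withTransportDirectory_withHAAvSet_withCustomResourceName.json
+```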
+
+## Deploy infrastructure for your SAP system
+
+Use [az workloads sap-virtual-instance create](/cli/azure/workloads/sap-virtual-instance?view=azure-cli-latest#az-workloads-sap-virtual-instance-create&preserve-view=true) to deploy infrastructure for your SAP system with a three-tier HA architecture:
+
+```azurecli-interactive
+az workloads sap-virtual-instance create -g <Resource Group Name> -n <VIS Name> --environment NonProd --sap-product s4hana --configuration <Payload file path> --identity "{type:UserAssigned,userAssignedIdentities:{<Managed_Identity_ResourceID>:{}}}"
+```
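+
+To check on the Virtual Instance for SAP solutions resource after the create command completes, or from another session while it runs, a minimal sketch using the `show` command with the placeholders from the example above:
+
+```azurecli-interactive
+# Check the provisioning state of the VIS created by the deployment (placeholder names).
+az workloads sap-virtual-instance show -g <Resource Group Name> -n <VIS Name>
+```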
++
+## Next steps
+In this quickstart, you deployed infrastructure in Azure for an SAP system using Azure Center for SAP solutions. You used custom resource names for the infrastructure. Continue to the next article to learn how to install SAP software on the infrastructure deployed.
+> [!div class="nextstepaction"]
+> [Install SAP software](install-software.md)
sap Quickstart Install Distributed Non High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/quickstart-install-distributed-non-high-availability.md
+
+ Title: Quickstart - Install software for a distributed non-HA SAP system with Azure Center for SAP solutions with PowerShell
+description: Learn how to install software for a distributed non-HA SAP system in Azure Center for SAP solutions through Azure PowerShell module.
+++ Last updated : 05/04/2023++
+#Customer intent: As a developer, I want to Create a Distributed non-HA SAP system so that I can use the system with Azure Center for SAP solutions.
+
+# Quickstart: Install software for a distributed non-high-availability (HA) SAP system with Azure Center for SAP solutions using Azure PowerShell
+
+The [Azure PowerShell AZ](/powershell/azure/new-azureps-module-az) module is used to create and manage Azure resources from the command line or in scripts.
+
+[Azure Center for SAP solutions](overview.md) enables you to deploy and manage SAP systems on Azure. This article shows you how to install SAP software on infrastructure deployed for an SAP system. In the [previous step](deploy-s4hana.md), you created infrastructure for an SAP system with a non-highly-available (non-HA) distributed architecture on Azure with *Azure Center for SAP solutions* using the Az PowerShell module.
+
+After you [deploy infrastructure](deploy-s4hana.md) and install SAP software with *Azure Center for SAP solutions*, you can use its visualization, management and monitoring capabilities through the [Virtual Instance for SAP solutions](manage-virtual-instance.md). For example, you can:
+
+- View and track the SAP system as an Azure resource, called the *Virtual Instance for SAP solutions (VIS)*.
+- Get recommendations for your SAP infrastructure, Operating System configurations etc. based on quality checks that evaluate best practices for SAP on Azure.
+- Get health and status information about your SAP system.
+- Start and Stop SAP application tier.
+- Start and Stop individual instances of ASCS, App server and HANA Database.
+- Monitor the Azure infrastructure metrics for the SAP system resources.
+- View Cost Analysis for the SAP system.
+
+## Prerequisites
+- An Azure subscription.
+- An Azure account with **Azure Center for SAP solutions administrator** and **Managed Identity Operator** role access to the subscriptions and resource groups in which you'll create the Virtual Instance for SAP solutions (VIS) resource.
+- A **user-assigned managed identity** that has **Azure Center for SAP solutions service role** access on the subscription, or at least on all resource groups (compute, network, storage).
+- A storage account where you store the SAP media.
+- The **Reader and Data Access** role granted to the **user-assigned managed identity** on the storage account where you store the SAP media.
+- A [network set up for your infrastructure deployment](prepare-network.md).
+- A deployment of S/4HANA infrastructure.
+- The SSH private key for the virtual machines in the SAP system. You generated this key during the infrastructure deployment.
+- You should have the SAP installation media available in a storage account. For more information, see [how to download the SAP installation media](get-sap-installation-media.md).
+- The *json* configuration file that you used to create infrastructure in the [previous step](deploy-s4hana.md) for SAP system using PowerShell or Azure CLI.
+
+## Create *json* configuration file
+
+- The json file for installation of SAP software is similar to the one used to deploy infrastructure for SAP, with an added section for SAP software configuration.
+- The software configuration section requires the following inputs:
+ - Software installation type: Keep this as "SAPInstallWithoutOSConfig"
+ - BOM URL: This is the BOM file path. Example: `https://<your-storage-account>.blob.core.windows.net/sapbits/sapfiles/boms/S41909SPS03_v0010ms.yaml`
+ - Software version: Azure Center for SAP solutions supports three SAP software versions: **SAP S/4HANA 1909 SPS03**, **SAP S/4HANA 2020 SPS 03**, and **SAP S/4HANA 2021 ISS 00**
+ - Storage account ID: This is the resource ID for the storage account where the BOM file is created
+- You can use the [sample software installation payload file](https://github.com/Azure/Azure-Center-for-SAP-solutions-preview/blob/main/Payload_Samples/InstallPayloadDistributedNon-HA.json); a download sketch follows this list.
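+
+A minimal sketch (assuming the sample file is still published at that path) that downloads the sample installation payload to the file name used in the command below:
+
+```powershell
+# Download the sample installation payload and edit it locally (raw URL derived from the link above).
+Invoke-WebRequest -Uri "https://raw.githubusercontent.com/Azure/Azure-Center-for-SAP-solutions-preview/main/Payload_Samples/InstallPayloadDistributedNon-HA.json" -OutFile .\InstallPayload.json
+```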
+
+## Install SAP software
+Use [New-AzWorkloadsSapVirtualInstance](/powershell/module/az.workloads/new-azworkloadssapvirtualinstance) to install SAP software
+```powershell
+New-AzWorkloadsSapVirtualInstance -ResourceGroupName 'PowerShell-CLI-TestRG' -Name L46 -Location eastus -Environment 'NonProd' -SapProduct 'S4HANA' -Configuration .\InstallPayload.json -Tag @{k1 = "v1"; k2 = "v2"} -IdentityType 'UserAssigned' -ManagedResourceGroupName "L46-rg" -UserAssignedIdentity @{'/subscriptions/49d64d54-e966-4c46-a868-1999802b762c/resourcegroups/SAP-E2ETest-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/E2E-RBAC-MSI'= @{}}
+```
+
+## Next steps
+In this quickstart, you installed SAP software on the deployed infrastructure in Azure for an SAP system using Azure Center for SAP solutions. Continue to the next article to learn how to manage your SAP system on Azure using the [Virtual Instance for SAP solutions](manage-virtual-instance.md) resource.
+> [!div class="nextstepaction"]
+> [Manage a Virtual Instance for SAP solutions](manage-virtual-instance.md)
+
sap Quickstart Install High Availability Namecustom Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/quickstart-install-high-availability-namecustom-cli.md
+
+ Title: Quickstart - Install software for a Distributed HA SAP system with Azure Center for SAP solutions with custom resource names using Azure CLI
+description: Learn how to Install software for a Distributed HA SAP system in Azure Center for SAP solutions through Azure CLI.
+++ Last updated : 05/05/2023++
+#Customer intent: As a developer, I want to Create a Distributed HA SAP system with custom resource names so that I can use the system with Azure Center for SAP solutions.
+
+# Quickstart: Install software for a Distributed High-Availability (HA) SAP system and customized resource names with Azure Center for SAP solutions using Azure CLI
+
+The [Azure CLI](/cli/azure/) is used to create and manage Azure resources from the command line or in scripts.
+
+[Azure Center for SAP solutions](overview.md) enables you to deploy and manage SAP systems on Azure. This article shows you how to install SAP software on infrastructure deployed for an SAP system. In the [previous step](tutorial-create-high-availability-name-custom.md), you created infrastructure for an SAP system with a highly available (HA) distributed architecture on Azure with *Azure Center for SAP solutions* using the Azure CLI. You also provided customized resource names for the deployed Azure resources.
+
+After you [deploy infrastructure](deploy-s4hana.md) and install SAP software with *Azure Center for SAP solutions*, you can use its visualization, management and monitoring capabilities through the [Virtual Instance for SAP solutions](manage-virtual-instance.md). For example, you can:
+
+- View and track the SAP system as an Azure resource, called the *Virtual Instance for SAP solutions (VIS)*.
+- Get recommendations for your SAP infrastructure, Operating System configurations etc. based on quality checks that evaluate best practices for SAP on Azure.
+- Get health and status information about your SAP system.
+- Start and Stop SAP application tier.
+- Start and Stop individual instances of ASCS, App server and HANA Database.
+- Monitor the Azure infrastructure metrics for the SAP system resources.
+- View Cost Analysis for the SAP system.
+
+## Prerequisites
+- An Azure subscription.
+- An Azure account with **Azure Center for SAP solutions administrator** and **Managed Identity Operator** role access to the subscriptions and resource groups in which you'll create the Virtual Instance for SAP solutions (VIS) resource.
+- A **user-assigned managed identity** that has **Azure Center for SAP solutions service role** access on the subscription, or at least on all resource groups (compute, network, storage).
+- A storage account where you store the SAP media.
+- The **Reader and Data Access** role granted to the **user-assigned managed identity** on the storage account where you store the SAP media.
+- A [network set up for your infrastructure deployment](prepare-network.md).
+- A deployment of S/4HANA infrastructure.
+- The SSH private key for the virtual machines in the SAP system. You generated this key during the infrastructure deployment.
+- You should have the SAP installation media available in a storage account. For more information, see [how to download the SAP installation media](get-sap-installation-media.md).
+- The *json* configuration file that you used to create infrastructure in the [previous step](tutorial-create-high-availability-name-custom.md) for SAP system using PowerShell or Azure CLI.
+- As you're installing a Highly Available (HA) SAP system, get the Service Principal identifier (SPN ID) and password to authorize the Azure fence agent (fencing device) against Azure resources; see the sketch after this list. For more information, see [Use Azure CLI to create an Azure AD app for the Azure fence agent](/azure/sap/workloads/high-availability-guide-suse-pacemaker#using-service-principal).
+ - For an example, see the Red Hat documentation for [Creating an Azure Active Directory Application](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/deploying_red_hat_enterprise_linux_7_on_public_cloud_platforms/configuring-rhel-high-availability-on-azure_cloud-content#azure-create-an-azure-directory-application-in-ha_configuring-rhel-high-availability-on-azure).
+ - To avoid frequent password expiry, use the Azure Command-Line Interface (Azure CLI) to create the Service Principal identifier and password instead of the Azure portal.
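+
+A minimal sketch of creating that service principal with the Azure CLI; the display name is hypothetical, and the `appId` and `password` values in the output map to the fencing client ID and password used later in the payload:
+
+```azurecli-interactive
+# Create a service principal for the Azure fence agent (hypothetical display name).
+# Record the appId (fencing client ID) and password (fencing client password) from the output.
+az ad sp create-for-rbac --name "sap-ha-fence-agent"
+```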
++
+## Create *json* configuration file
+
+- The json file for installation of SAP software is similar to the one used to deploy infrastructure for SAP, with an added section for SAP software configuration.
+- The software configuration section requires the following inputs:
+ - Software installation type: Keep this as "SAPInstallWithoutOSConfig"
+ - BOM URL: This is the BOM file path. Example: `https://<your-storage-account>.blob.core.windows.net/sapbits/sapfiles/boms/S41909SPS03_v0010ms.yaml`
+ - Software version: Azure Center for SAP solutions supports three SAP software versions: **SAP S/4HANA 1909 SPS03**, **SAP S/4HANA 2020 SPS 03**, and **SAP S/4HANA 2021 ISS 00**
+ - Storage account ID: This is the resource ID for the storage account where the BOM file is created
+ - As you are deploying an HA system, you need to provide the High Availability software configuration with the following two inputs:
+ - Fencing Client ID: The client identifier for the STONITH Fencing Agent service principal
+ - Fencing Client Password: The password for the Fencing Agent service principal
+- You can use the [sample software installation payload file](https://github.com/Azure/Azure-Center-for-SAP-solutions-preview/blob/main/Payload_Samples/CreatePayload_withTransportDirectory_withHAAvSet_withCustomResourceName.json)
+
+## Install SAP software
+Use [az workloads sap-virtual-instance create](/cli/azure/workloads/sap-virtual-instance?view=azure-cli-latest#az-workloads-sap-virtual-instance-create&preserve-view=true) to install SAP software
+
+```azurecli-interactive
+az workloads sap-virtual-instance create -g <Resource Group Name> -n <VIS Name> --environment NonProd --sap-product s4hana --configuration <Payload file path> --identity "{type:UserAssigned,userAssignedIdentities:{<Managed_Identity_ResourceID>:{}}}"
+```
+
+> [!NOTE]
+> The commands for infrastructure deployment and installation are the same, but the payload file for the two needs to be different.
+
+## Next steps
+In this quickstart, you installed SAP software on the deployed infrastructure in Azure for an SAP system with a highly available architecture using Azure Center for SAP solutions. You also noted that the resource names were customized for the system while deploying the infrastructure. Continue to the next article to learn how to manage your SAP system on Azure using the Virtual Instance for SAP solutions resource.
+> [!div class="nextstepaction"]
+> [Manage a Virtual Instance for SAP solutions](manage-virtual-instance.md)
+
sap Quickstart Register System Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/quickstart-register-system-cli.md
+
+ Title: Quickstart - Register an existing system with Azure Center for SAP solutions with CLI
+description: Learn how to register an existing SAP system in Azure Center for SAP solutions through Azure CLI.
+++ Last updated : 05/04/2023++
+#Customer intent: As a developer, I want to register my existing SAP system so that I can use the system with Azure Center for SAP solutions.
+
+# Quickstart: Register an existing SAP system with Azure Center for SAP solutions with CLI
+
+The Azure CLI is used to create and manage Azure resources from the command line or in scripts.
+
+[Azure Center for SAP solutions](overview.md) enables you to deploy and manage SAP systems on Azure. This article shows you how to register an existing SAP system running on Azure with *Azure Center for SAP solutions* using the Azure CLI. Alternatively, you can register systems using Azure PowerShell or in the Azure portal.
+After you register an SAP system with *Azure Center for SAP solutions*, you can use its visualization, management, and monitoring capabilities through the Azure portal.
+
+This quickstart enables you to register an existing SAP system with *Azure Center for SAP solutions*.
+
+## Prerequisites for registering a system
+- Check that you're trying to register a [supported SAP system configuration](register-existing-system.md).
+- Grant access to Azure Storage accounts from the virtual network where the SAP system exists. Use one of these options:
+ - Allow outbound internet connectivity for the VMs.
+ - Use a [**Storage** service tag](../../virtual-network/service-tags-overview.md) to allow connectivity to any Azure storage account from the VMs.
+ - Use a [**Storage** service tag with regional scope](../../virtual-network/service-tags-overview.md) to allow storage account connectivity to the Azure storage accounts in the same region as the VMs.
+ - Allowlist the region-specific IP addresses for Azure Storage.
+- The first time you use Azure Center for SAP solutions, you must register the **Microsoft.Workloads** resource provider in the subscription where you have the SAP system with [az provider register](/cli/azure/provider#az-provider-register), as follows:
+
+ ```azurecli-interactive
+ az provider register --namespace 'Microsoft.Workloads'
+ ```
+- Check that your Azure account has **Azure Center for SAP solutions administrator** and **Managed Identity Operator** or equivalent role access on the subscription or resource groups where you have the SAP system resources.
+- A **User-assigned managed identity** which has **Azure Center for SAP solutions service role** access on the Compute resource group and **Reader** role access on the Virtual Network resource group of the SAP system. Azure Center for SAP solutions service uses this identity to discover your SAP system resources and register the system as a VIS resource.
+- Make sure ASCS, Application Server and Database virtual machines of the SAP system are in **Running** state.
+- sapcontrol and saphostctrl exe files must exist on ASCS, App server and Database.
+ - File path on Linux VMs: /usr/sap/hostctrl/exe
+ - File path on Windows VMs: C:\Program Files\SAP\hostctrl\exe\
+- Make sure the **sapstartsrv** process is running on all **SAP instances** and for **SAP hostctrl agent** on all the VMs in the SAP system.
+ - To start hostctrl sapstartsrv, use this command on Linux VMs: `hostexecstart -start`
+ - To start instance sapstartsrv, use the command: `sapcontrol -nr <instanceNr> -function StartService S0S`
+ - To check the status of hostctrl sapstartsrv on Windows VMs, use this command: `C:\Program Files\SAP\hostctrl\exe\saphostexec -status`
+- For successful discovery and registration of the SAP system, ensure there is network connectivity between the ASCS, App, and DB VMs. A `ping` of the App instance hostname must succeed from the ASCS VM, and a `ping` of the database hostname must succeed from the App server VM.
+- In the App server profile, the SAPDBHOST, DBTYPE, and DBID parameters must have the right values configured for the discovery and registration of database instance details.
+
+## Register SAP system
+
+To register an existing SAP system in Azure Center for SAP solutions:
+
+1. Use the [az workloads sap-virtual-instance create](/cli/azure/workloads/sap-virtual-instance#az-workloads-sap-virtual-instance-create) command to register an existing SAP system as a *Virtual Instance for SAP solutions* resource. To look up the resource ID of the ASCS VM for the **central-server-vm** parameter, see the sketch after this list.
+
+ ```azurecli-interactive
+ az workloads sap-virtual-instance create -g <Resource Group Name> \
+ -n C36 \
+ --environment NonProd \
+ --sap-product s4hana \
+ --central-server-vm <Virtual Machine resource ID> \
+ --identity "{type:UserAssigned,userAssignedIdentities:{<Managed Identity resource ID>:{}}}"
+ ```
+ - The **-n** parameter specifies the SAP System ID (SID) that you're registering with Azure Center for SAP solutions.
+ - The **environment** parameter specifies the type of SAP environment you're registering. Valid values are *NonProd* and *Prod*.
+ - The **sap-product** parameter specifies the type of SAP product you're registering. Valid values are *S4HANA*, *ECC*, and *Other*.
+
+2. Once you trigger the registration process, you can view its status by getting the status of the Virtual Instance for SAP solutions resource that gets deployed as part of the registration process.
+
+ ```azurecli-interactive
+ az workloads sap-virtual-instance show -g <Resource-group-name> -n C36
+ ```
+
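+If you don't have the resource ID of the ASCS (central services) virtual machine handy, a minimal sketch (placeholder names) to look it up for the **central-server-vm** parameter:
+
+```azurecli-interactive
+# Look up the resource ID of the ASCS VM to pass to --central-server-vm (placeholder names).
+az vm show -g <Resource Group Name> -n <ASCS-VM-name> --query id -o tsv
+```
+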
+## Next steps
+
+- [Monitor SAP system from Azure portal](monitor-portal.md)
+- [Manage a VIS](manage-virtual-instance.md)
sap Quickstart Register System Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/quickstart-register-system-powershell.md
+
+ Title: Quickstart - Register an existing system with Azure Center for SAP solutions with PowerShell
+description: Learn how to register an existing SAP system in Azure Center for SAP solutions through Azure PowerShell module.
++++ Last updated : 05/04/2023++
+#Customer intent: As a developer, I want to register my existing SAP system so that I can use the system with Azure Center for SAP solutions.
+
+# Quickstart: Register an existing SAP system with Azure Center for SAP solutions with PowerShell
+
+The [Azure PowerShell AZ](/powershell/azure/new-azureps-module-az) module is used to create and manage Azure resources from the command line or in scripts.
+
+[Azure Center for SAP solutions](overview.md) enables you to deploy and manage SAP systems on Azure. This article shows you how to register an existing SAP system running on Azure with *Azure Center for SAP solutions* using the Az PowerShell module. Alternatively, you can register systems using the Azure CLI or in the Azure portal.
+After you register an SAP system with *Azure Center for SAP solutions*, you can use its visualization, management and monitoring capabilities through the Azure portal.
+
+This quickstart requires the Az PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+
+## Prerequisites for registering a system
+- Check that you're trying to register a [supported SAP system configuration](register-existing-system.md).
+- Grant access to Azure Storage accounts from the virtual network where the SAP system exists. Use one of these options:
+ - Allow outbound internet connectivity for the VMs.
+ - Use a [**Storage** service tag](../../virtual-network/service-tags-overview.md) to allow connectivity to any Azure storage account from the VMs.
+ - Use a [**Storage** service tag with regional scope](../../virtual-network/service-tags-overview.md) to allow storage account connectivity to the Azure storage accounts in the same region as the VMs.
+ - Allowlist the region-specific IP addresses for Azure Storage.
+- The first time you use Azure Center for SAP solutions, you must register the **Microsoft.Workloads** Resource Provider in the subscription where you have the SAP system with [Register-AzResourceProvider](/powershell/module/az.Resources/Register-azResourceProvider), as follows:
+
+ ```powershell
+ Register-AzResourceProvider -ProviderNamespace "Microsoft.Workloads"
+ ```
+- Check that your Azure account has **Azure Center for SAP solutions administrator** and **Managed Identity Operator** or equivalent role access on the subscription or resource groups where you have the SAP system resources.
+- A **User-assigned managed identity** which has **Azure Center for SAP solutions service role** access on the Compute resource group and **Reader** role access on the Virtual Network resource group of the SAP system. Azure Center for SAP solutions service uses this identity to discover your SAP system resources and register the system as a VIS resource.
+- Make sure ASCS, Application Server and Database virtual machines of the SAP system are in **Running** state.
+- sapcontrol and saphostctrl exe files must exist on ASCS, App server and Database.
+ - File path on Linux VMs: /usr/sap/hostctrl/exe
+ - File path on Windows VMs: C:\Program Files\SAP\hostctrl\exe\
+- Make sure the **sapstartsrv** process is running on all **SAP instances** and for **SAP hostctrl agent** on all the VMs in the SAP system.
+ - To start hostctrl sapstartsrv, use this command on Linux VMs: `hostexecstart -start`
+ - To start instance sapstartsrv, use the command: `sapcontrol -nr <instanceNr> -function StartService S0S`
+ - To check the status of hostctrl sapstartsrv on Windows VMs, use this command: `C:\Program Files\SAP\hostctrl\exe\saphostexec -status`
+- For successful discovery and registration of the SAP system, ensure there is network connectivity between the ASCS, App, and DB VMs. A `ping` of the App instance hostname must succeed from the ASCS VM, and a `ping` of the database hostname must succeed from the App server VM.
+- In the App server profile, the SAPDBHOST, DBTYPE, and DBID parameters must have the right values configured for the discovery and registration of database instance details.
+
+## Register SAP system
+
+To register an existing SAP system in Azure Center for SAP solutions:
+
+1. Use the [New-AzWorkloadsSapVirtualInstance](/powershell/module/az.workloads/New-AzWorkloadsSapVirtualInstance) cmdlet to register an existing SAP system as a *Virtual Instance for SAP solutions* resource. To look up the resource ID of the ASCS VM for the **CentralServerVmId** parameter, see the sketch after this list.
+
+ ```powershell
+ New-AzWorkloadsSapVirtualInstance `
+ -ResourceGroupName 'TestRG' `
+ -Name L46 `
+ -Location eastus `
+ -Environment 'NonProd' `
+ -SapProduct 'S4HANA' `
+ -CentralServerVmId '/subscriptions/sub1/resourcegroups/rg1/providers/microsoft.compute/virtualmachines/l46ascsvm' `
+ -Tag @{k1 = "v1"; k2 = "v2"} `
+ -ManagedResourceGroupName "acss-L46-rg" `
+ -ManagedRgStorageAccountName 'acssstoragel46' `
+ -IdentityType 'UserAssigned' `
+ -UserAssignedIdentity @{'/subscriptions/sub1/resourcegroups/rg1/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ACSS-MSI'= @{}}
+ ```
+ - The **Name** parameter specifies the SAP System ID (SID) that you're registering with Azure Center for SAP solutions.
+ - The **Location** parameter specifies the Azure Center for SAP solutions service location. The following table maps your SAP application location to the right service location, based on where your SAP system infrastructure is located on Azure.
+
+ | **SAP application location** | **Azure Center for SAP solutions service location** |
+ | | |
+ | East US | East US |
+ | East US 2 | East US 2|
+ | South Central US | East US 2 |
+ | Central US | East US 2|
+ | West US 2 | West US 3 |
+ | West US 3 | West US 3 |
+ | West Europe | West Europe |
+ | North Europe | North Europe |
+ | Australia East | Australia East |
+ | Australia Central | Australia East |
+ | East Asia | East Asia |
+ | Southeast Asia | East Asia |
+ | Central India | Central India |
+
+ - The **Environment** parameter specifies the type of SAP environment you're registering. Valid values are *NonProd* and *Prod*.
+ - The **SapProduct** parameter specifies the type of SAP product you're registering. Valid values are *S4HANA*, *ECC*, and *Other*.
+
+2. Once you trigger the registration process, you can view its status by getting the status of the Virtual Instance for SAP solutions resource that gets deployed as part of the registration process.
+
+ ```powershell
+ Get-AzWorkloadsSapVirtualInstance -ResourceGroupName TestRG -Name L46
+ ```
+
+## Next steps
+
+- [Monitor SAP system from Azure portal](monitor-portal.md)
+- [Manage a VIS](manage-virtual-instance.md)
sap Register Existing System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/register-existing-system.md
Title: Register existing SAP system (preview)
+ Title: Register existing SAP system
description: Learn how to register an existing SAP system in Azure Center for SAP solutions through the Azure portal. You can visualize, manage, and monitor your existing SAP system through Azure Center for SAP solutions.
#Customer intent: As a developer, I want to register my existing SAP system so that I can use the system with Azure Center for SAP solutions.
-# Register existing SAP system (preview)
+# Register existing SAP system
In this how-to guide, you'll learn how to register an existing SAP system with *Azure Center for SAP solutions*. After you register an SAP system with Azure Center for SAP solutions, you can use its visualization, management and monitoring capabilities through the Azure portal. For example, you can:
sap Start Stop Sap Systems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/start-stop-sap-systems.md
Title: Start and stop SAP systems (preview)
+ Title: Start and stop SAP systems
description: Learn how to start or stop an SAP system through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions through the Azure portal.
#Customer intent: As a developer, I want to start and stop SAP systems in Azure Center for SAP solutions so that I can control instances through the Virtual Instance for SAP resource.
-# Start and stop SAP systems (preview)
+# Start and stop SAP systems
+ In this how-to guide, you'll learn to start and stop your SAP systems through the *Virtual Instance for SAP solutions (VIS)* resource in *Azure Center for SAP solutions*.
sap Tutorial Create High Availability Name Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/tutorial-create-high-availability-name-custom.md
+
+ Title: Tutorial - Create a distributed highly available SAP system with Azure Center for SAP solutions using Azure CLI
+description: In this tutorial you learn to create a distributed highly available SAP system in Azure Center for SAP solutions through Azure CLI.
+ Last updated : 05/04/2023
+#Customer intent: As a developer, I want to create a distributed highly available SAP system so that I can use the system with Azure Center for SAP solutions.
+
+# Tutorial: Use Azure CLI to create infrastructure for a distributed highly available (HA) SAP system with *Azure Center for SAP solutions* using customized resource names
+
+[Azure Center for SAP solutions](overview.md) enables you to deploy and manage SAP systems on Azure. After you deploy infrastructure and [install SAP software](install-software.md) with *Azure Center for SAP solutions*, you can use its visualization, management and monitoring capabilities through the [Virtual Instance for SAP solutions](https://github.com/MicrosoftDocs/azure-docs-pr/blob/release-azure-center-sap-g) resource.
+
+## Introduction
+The [Azure CLI](/cli/azure/) is used to create and manage Azure resources from the command line or in scripts.
+
+This tutorial shows you how to use the Azure CLI to deploy infrastructure for an SAP system with a highly available (HA) three-tier distributed architecture. You also see how to customize resource names for the Azure infrastructure that gets deployed. The tutorial covers the following steps:
+> [!div class="checklist"]
+> * Complete the prerequisites
+> * Understand the SAP SKUs available for your deployment type
+> * Check for recommended SKUs for SAPS and Memory requirements for your SAP system
+> * Create a json configuration file with custom resource names
+> * Deploy infrastructure for your SAP system
++
+## Prerequisites
+
+- An Azure subscription.
+- If you're using Azure Center for SAP solutions for the first time, register the **Microsoft.Workloads** Resource Provider on the subscription in which you're deploying the SAP system:
+
+ ```azurecli-interactive
+ az provider register --namespace 'Microsoft.Workloads'
+ ```
+
+- An Azure account with **Azure Center for SAP solutions administrator** and **Managed Identity Operator** role access to the subscriptions and resource groups in which you create the Virtual Instance for SAP solutions (VIS) resource.
+- A **User-assigned managed identity** which has **Azure Center for SAP solutions service role** access on the subscription, or at least on all resource groups (Compute, Network, Storage). If you wish to install SAP software through Azure Center for SAP solutions, also grant the **Reader and Data Access** role to the identity on the storage account where you store the SAP media. A hedged CLI sketch for creating and authorizing this identity follows this list.
+- A [network set up for your infrastructure deployment](prepare-network.md).
+- Availability of a minimum of 4 cores of either the Standard_D4ds_v4 or Standard_E4s_v3 SKU, which is used during infrastructure deployment and software installation.
+- [Review the quotas for your Azure subscription](../../quotas/view-quotas.md). If the quotas are low, you might need to create a support request before creating your infrastructure deployment. Otherwise, you might experience deployment failures or an **Insufficient quota** error.
+- Note the SAP Application Performance Standard (SAPS) and database memory size that you need, so that Azure Center for SAP solutions can size your SAP system. If you're not sure, you can also select the VMs yourself. The deployment consists of:
+ - A single or cluster of ASCS VMs, which make up a single ASCS instance in the VIS.
+ - A single or cluster of Database VMs, which make up a single Database instance in the VIS.
+ - A single Application Server VM, which makes up a single Application instance in the VIS. Depending on the number of Application Servers being deployed or registered, there can be multiple application instances.
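+
+The identity setup isn't shown elsewhere in this tutorial, so the following is a hedged sketch that assumes you use the Azure CLI; the resource group, identity name, and scope are placeholders rather than values from this article:
+
+```azurecli-interactive
+# Create a user-assigned managed identity (hypothetical names)
+az identity create --resource-group contoso-sap-rg --name acss-deployment-identity
+
+# Look up its principal ID and grant it the Azure Center for SAP solutions service role at subscription scope
+principalId=$(az identity show --resource-group contoso-sap-rg --name acss-deployment-identity --query principalId --output tsv)
+az role assignment create \
+    --assignee-object-id "$principalId" \
+    --assignee-principal-type ServicePrincipal \
+    --role "Azure Center for SAP solutions service role" \
+    --scope "/subscriptions/<subscription-id>"
+```
+
+If you also plan to install SAP software through the service, a similar `az role assignment create` with the **Reader and Data Access** role on the software storage account applies.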
++
+## Understand the SAP certified Azure SKUs available for your deployment type
+
+Use [az workloads sap-supported-sku](/cli/azure/workloads?view=azure-cli-latest#az-workloads-sap-supported-sku&preserve-view=true) to get a list of SKUs supported for your SAP system deployment type from Azure Center for SAP solutions.
+
+```azurecli-interactive
+az workloads sap-supported-sku --app-location "eastus" --database-type "HANA" --deployment-type "ThreeTier" --environment "Prod" --high-availability-type "AvailabilitySet" --sap-product "S4HANA" --location "eastus"
+```
+You can use any of these SKUs recommended for the App tier and Database tier when deploying infrastructure in the later steps. Or you can use the SKUs recommended by *Azure Center for SAP solutions* in the next step.
+
+## Check for recommended SKUs for SAPS and Memory requirements for your SAP system
+
+Use [az workloads sap-sizing-recommendation](/cli/azure/workloads?view=azure-cli-latest#az-workloads-sap-sizing-recommendation&preserve-view=true) to get SAP system sizing recommendations by providing the SAPS input for the application tier and the memory required for the database tier.
+
+```azurecli-interactive
+az workloads sap-sizing-recommendation --app-location "eastus" --database-type "HANA" --db-memory 1024 --deployment-type "ThreeTier" --environment "Prod" --high-availability-type "AvailabilitySet" --sap-product "S4HANA" --saps 75000 --location "eastus2" --db-scale-method ScaleUp
+```
+
+## Create a *json* configuration file with custom resource names
+
+- Prepare a *json* file with the configuration (payload) to use for the deployment of SAP system infrastructure. You can make edits in this [sample payload](https://github.com/Azure/Azure-Center-for-SAP-solutions-preview/blob/main/Payload_Samples/CreatePayload_withTransportDirectory_withHAAvSet_withCustomResourceName.json) or use the examples listed in the [REST API documentation](/rest/api/workloads) for Azure Center for SAP solutions.
+- In this json file, provide the custom resource names for the infrastructure that is deployed for your SAP system.
+- The parameters available for customization are:
+ - VM Name
+ - Host Name
+ - Network interface name
+ - OS Disk Name
+ - Load Balancer Name
+ - Frontend IP Configuration Names
+ - Backend Pool Names
+ - Health Probe Names
+ - Data Disk Names: default, hanaData or hana/data, hanaLog or hana/log, usrSap or usr/sap, hanaShared or hana/shared, backup
+ - Shared Storage Account Name
+ - Shared Storage Account Private End Point Name
+
+You can download the [sample payload](https://github.com/Azure/Azure-Center-for-SAP-solutions-preview/blob/main/Payload_Samples/CreatePayload_withTransportDirectory_withHAAvSet_withCustomResourceName.json) and replace the resource names and any other parameters as needed.
+
+## Deploy infrastructure for your SAP system
+
+Use [az workloads sap-virtual-instance create](/cli/azure/workloads/sap-virtual-instance?view=azure-cli-latest#az-workloads-sap-virtual-instance-create&preserve-view=true) to deploy infrastructure for your SAP system with a three-tier HA architecture.
+
+```azurecli-interactive
+az workloads sap-virtual-instance create -g <Resource Group Name> -n <VIS Name> --environment NonProd --sap-product s4hana --configuration <Payload file path> --identity "{type:UserAssigned,userAssignedIdentities:{<Managed_Identity_ResourceID>:{}}}"
+```
+This command deploys your SAP system and the Virtual Instance for SAP solutions (VIS) resource representing your SAP system in Azure.
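+
+To confirm that the registration completed, a minimal check is to retrieve the VIS resource, assuming the same resource group and VIS name and that the `az workloads` extension is installed:
+
+```azurecli-interactive
+# Retrieve the Virtual Instance for SAP solutions resource to verify its state
+az workloads sap-virtual-instance show -g <Resource Group Name> -n <VIS Name> --output table
+```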
+
+## Cleanup
+If you no longer wish to use the VIS resource, you can delete it by using [az workloads sap-virtual-instance delete](/cli/azure/workloads/sap-virtual-instance?view=azure-cli-latest#az-workloads-sap-virtual-instance-delete&preserve-view=true):
+
+```azurecli-interactive
+az workloads sap-virtual-instance delete -g <Resource_Group_Name> -n <VIS Name>
+```
+This command deletes only the VIS and other resources created by Azure Center for SAP solutions. It doesn't delete the deployed infrastructure, such as VMs and disks.
++
+## Next steps
+In this tutorial, you deployed infrastructure in Azure for an SAP system using Azure Center for SAP solutions. You used custom resource names for the infrastructure. Continue to the next article to learn how to install SAP software on the infrastructure deployed.
+> [!div class="nextstepaction"]
+> [Install SAP software](install-software.md)
sap View Cost Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/view-cost-analysis.md
Title: View post-deployment cost analysis in Azure Center for SAP solutions (preview)
+ Title: View post-deployment cost analysis in Azure Center for SAP solutions
description: Learn how to view the cost of running an SAP system through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions.
#Customer intent: As an SAP Basis Admin, I want to understand the cost incurred for running SAP systems on Azure.
-# View post-deployment cost analysis for SAP system (preview)
+# View post-deployment cost analysis for SAP system
+ In this how-to guide, you'll learn how to view the running cost of your SAP systems through the *Virtual Instance for SAP solutions (VIS)* resource in *Azure Center for SAP solutions*.
To view the post-deployment costs of running an SAP system registered as a VIS r
- [Monitor SAP system from the Azure portal](monitor-portal.md) - [Get quality checks and insights for a VIS resource](get-quality-checks-insights.md)-- [Start and Stop SAP systems](start-stop-sap-systems.md)
+- [Start and Stop SAP systems](start-stop-sap-systems.md)
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-rbac.md
Previously updated : 04/14/2023 Last updated : 05/16/2023
Azure provides a global [role-based access control authorization system](../role
Per-user access over search results (sometimes referred to as row-level security or document-level security) isn't supported. As a workaround, [create security filters](search-security-trimming-for-azure-search.md) that trim results by user identity, removing documents for which the requestor shouldn't have access.
-## Built-in roles used in Search
-
-Built-in roles include generally available and preview roles. If these roles are insufficient, [create a custom role](#create-a-custom-role) instead.
-
-| Role | Description and availability |
-| - | - |
-| [Owner](../role-based-access-control/built-in-roles.md#owner) | (Generally available) Full access to the search resource, including the ability to assign Azure roles. Subscription administrators are members by default.</br></br> (Preview) This role has the same access as the Search Service Contributor role on the data plane. It includes access to all data plane actions except the ability to query the search index or index documents. |
-| [Contributor](../role-based-access-control/built-in-roles.md#contributor) | (Generally available) Same level of access as Owner, minus the ability to assign roles or change authorization options. </br></br> (Preview) This role has the same access as the Search Service Contributor role on the data plane. It includes access to all data plane actions except the ability to query the search index or index documents. |
-| [Reader](../role-based-access-control/built-in-roles.md#reader) | (Generally available) Limited access to partial service information. In the portal, the Reader role can access information in the service Overview page, in the Essentials section and under the Monitoring tab. All other tabs and pages are off limits. </br></br>This role has access to service information: service name, resource group, service status, location, subscription name and ID, tags, URL, pricing tier, replicas, partitions, and search units. This role also has access to service metrics: search latency, percentage of throttled requests, average queries per second. </br></br>This role doesn't allow access to API keys, role assignments, content (indexes or synonym maps), or content metrics (storage consumed, number of objects). </br></br> (Preview) When you enable the RBAC preview for the data plane, the Reader role has read access across the entire service. This allows you to read search metrics, content metrics (storage consumed, number of objects), and the definitions of data plane resources (indexes, indexers, etc.). The Reader role still won't have access to read API keys or read content within indexes. |
-| [Search Service Contributor](../role-based-access-control/built-in-roles.md#search-service-contributor) | (Generally available) This role is identical to the Contributor role and applies to control plane operations. </br></br>(Preview) When you enable the RBAC preview for the data plane, this role also provides full access to all data plane actions on indexes, synonym maps, indexers, data sources, and skillsets as defined by [`Microsoft.Search/searchServices/*`](../role-based-access-control/resource-provider-operations.md#microsoftsearch). This role doesn't give you access to query search indexes or index documents. This role is for search service administrators who need to manage the search service and its objects, but without the ability to view or access object data. </br></br>Like Contributor, members of this role can't make or manage role assignments or change authorization options. To use the preview capabilities of this role, your service must have the preview feature enabled, as described in this article. |
-| [Search Index Data Contributor](../role-based-access-control/built-in-roles.md#search-index-data-contributor) | (Preview) Provides full data plane access to content in all indexes on the search service. This role is for developers or index owners who need to import, refresh, or query the documents collection of an index. |
-| [Search Index Data Reader](../role-based-access-control/built-in-roles.md#search-index-data-reader) | (Preview) Provides read-only data plane access to search indexes on the search service. This role is for apps and users who run queries. |
- > [!NOTE] > In Cognitive Search, "control plane" refers to operations supported in the [Management REST API](/rest/api/searchmanagement/) or equivalent client libraries. The "data plane" refers to operations against the search service endpoint, such as indexing or queries, or any other operation specified in the [Search REST API](/rest/api/searchservice/) or equivalent client libraries.
-<a name="preview-limitations"></a>
+## Built-in roles used in Search
-## Preview capabilities and limitations
+The following roles are built in. If these roles are insufficient, [create a custom role](#create-a-custom-role).
-+ Role-based access control for data plane operations, such as creating an index or querying an index, is currently in public preview and available under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+| Role | Plane | Description |
+| - | - | - |
+| [Owner](../role-based-access-control/built-in-roles.md#owner) | Control & Data | Full access to the control plane of the search resource, including the ability to assign Azure roles. Only the Owner role can enable or disable authentication options or manage roles for other users. Subscription administrators are members by default. </br></br>On the data plane, this role has the same access as the Search Service Contributor role. It includes access to all data plane actions except the ability to query or index documents.|
+| [Contributor](../role-based-access-control/built-in-roles.md#contributor) | Control & Data | Same level of control plane access as Owner, minus the ability to assign roles or change authentication options. </br></br>On the data plane, this role has the same access as the Search Service Contributor role. It includes access to all data plane actions except the ability to query or index documents.|
+| [Reader](../role-based-access-control/built-in-roles.md#reader) | Control & Data | Read access across the entire service, including search metrics, content metrics (storage consumed, number of objects), and the object definitions of data plane resources (indexes, indexers, and so on). However, it can't read API keys or read content within indexes. |
+| [Search Service Contributor](../role-based-access-control/built-in-roles.md#search-service-contributor) | Control & Data | Read-write access to object definitions (indexes, synonym maps, indexers, data sources, and skillsets). See [`Microsoft.Search/searchServices/*`](../role-based-access-control/resource-provider-operations.md#microsoftsearch) for the permissions list. This role can't access content in an index, so no querying or indexing, but it can create, delete, and list indexes, return index definitions and statistics, and test analyzers. This role is for search service administrators who need to manage the search service and its objects, but without content access. |
+| [Search Index Data Contributor](../role-based-access-control/built-in-roles.md#search-index-data-contributor) | Data | Read-write access to content in all indexes on the search service. This role is for developers or index owners who need to import, refresh, or query the documents collection of an index. |
+| [Search Index Data Reader](../role-based-access-control/built-in-roles.md#search-index-data-reader) | Data | Read-only access to all search indexes on the search service. This role is for apps and users who run queries. |
-+ There are no regional, tier, or pricing restrictions for using Azure RBAC preview, but your search service must be in the Azure public cloud. The preview isn't available in Azure Government, Azure Germany, or Azure China 21Vianet.
+> [!NOTE]
+> If you disable Azure role-based access, built-in roles for the control plane (Owner, Contributor, Reader) continue to be available. Disabling Azure RBAC removes just the data-related permissions associated with those roles. In a disabled-RBAC scenario, Search Service Contributor is equivalent to control-plane Contributor.
-+ If you migrate your Azure subscription to a new tenant, the Azure RBAC preview will need to be re-enabled.
+## Limitations
+ Adoption of role-based access control might increase the latency of some requests. Each unique combination of service resource (index, indexer, etc.) and service principal used on a request triggers an authorization check. These authorization checks can add up to 200 milliseconds of latency to a request.
+ In rare cases where requests originate from a high number of different service principals, all targeting different service resources (indexes, indexers, etc.), it's possible for the authorization checks to result in throttling. Throttling would only happen if hundreds of unique combinations of search service resource and service principal were used within a second.
-+ Role-based access control is supported in Azure portal and in the following search clients:
-
- + [Search REST APIs](/rest/api/searchservice/) (all supported versions)
- + [azure.search.documents (Azure SDK for .NET) version 11.4](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/CHANGELOG.md)
- + [azure.search.documents (Azure SDK for Python) version 11.3](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/search/azure-search-documents/CHANGELOG.md)
- + [azure-search-documents (Azure SDK for Java) beta versions of 11.5 and 11.6](https://github.com/Azure/azure-sdk-for-jav),
- + [@azure/search-documents (Azure SDK for JavaScript), version 11.3 (see change log)](https://www.npmjs.com/package/@azure/search-documents?activeTab=explore).
- ## Configure role-based access for data plane **Applies to:** Search Index Data Contributor, Search Index Data Reader, Search Service Contributor
In this step, configure your search service to recognize an **authorization** he
1. Choose an **API access control** option. We recommend **Both** if you want flexibility or need to migrate apps.
- | Option | Status | Description |
- |--|--|-|
- | API Key | Generally available (default) | Requires an [admin or query API keys](search-security-api-keys.md) on the request header for authorization. No roles are used. |
- | Role-based access control | Preview | Requires membership in a role assignment to complete the task, described in the next step. It also requires an authorization header. |
- | Both | Preview | Requests are valid using either an API key or role-based access control. |
+ | Option | Description |
+ |--|--|
+ | API Key | (default). Requires an [admin or query API key](search-security-api-keys.md) on the request header for authorization. No roles are used. |
+ | Role-based access control | Requires membership in a role assignment to complete the task, described in the next step. It also requires an authorization header. |
+ | Both | Requests are valid using either an API key or role-based access control. |
The change is effective immediately, but wait a few seconds before testing.
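+
+If you prefer scripting over the portal, the same API access control setting can be applied with the Azure CLI. The following is a hedged sketch: the service and resource group names are placeholders, and the parameter names assume the current `az search service update` syntax (check `az search service update --help` for your CLI version):
+
+```azurecli
+# Allow both API keys and role-based access control on the search service
+az search service update --name my-search-service --resource-group my-resource-group \
+    --auth-options aadOrApiKey --aad-auth-failure-mode http401WithBearerChallenge
+```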
Role assignments in the portal are service-wide. If you want to [grant permissio
+ Owner + Contributor + Reader
- + Search Service Contributor (preview for data plane requests)
- + Search Index Data Contributor (preview)
- + Search Index Data Reader (preview)
+ + Search Service Contributor
+ + Search Index Data Contributor
+ + Search Index Data Reader
1. On the **Members** tab, select the Azure AD user or group identity.
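+
+Role assignments made in the portal apply service-wide. If you'd rather script an assignment, or scope a data plane role to a single index, the following is a hedged Azure CLI sketch; the names are placeholders, and the index-level scope path is an assumption based on the Microsoft.Search resource hierarchy:
+
+```azurecli
+# Assign Search Index Data Reader to a user for one index only
+az role assignment create \
+    --role "Search Index Data Reader" \
+    --assignee "someuser@contoso.com" \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Search/searchServices/<service-name>/indexes/<index-name>"
+```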
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md
Previously updated : 04/10/2023 Last updated : 05/16/2023
Learn about the latest updates to Azure Cognitive Search functionality, docs, an
> [!NOTE] > Looking for preview features? Previews are announced here, but we also maintain a [preview features list](search-api-preview.md) so you can find them in one place.
+## May 2023
+
+| Item&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Type | Description |
+|--||--|
+| [**Azure RBAC (role-based access control)**](search-security-rbac.md) | Feature | Announcing general availability. |
+| [**2022-09-01 Management REST API**](/rest/api/searchmanagement) | API | New stable version of the Management REST APIs, with support for configuring search to use Azure RBAC. The **Az.Search** module of Azure PowerShell and **Az search** module of the Azure CLI are updated to support search service authentication options. You can also use the [**Terraform provider**](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/search_service) to configure authentication options (see this [Terraform quickstart](search-get-started-terraform.md) for details). |
+ ## April 2023 | Item&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Type | Description |
Learn about the latest updates to Azure Cognitive Search functionality, docs, an
| Month | Item | |-||
-| November | **Add search to websites** updated versions of React and Azure SDK client libraries: <ul><li>[C#](tutorial-csharp-overview.md)</li><li>[Python](tutorial-python-overview.md)</li><li>[JavaScript](tutorial-javascript-overview.md) </li></ul> "Add search to websites" is a tutorial series with sample code available in three languages. This series was . If you're integrating client code with a search index, these samples demonstrate an end-to-end approach to integration. |
+| November | **Add search to websites** series, updated versions of React and Azure SDK client libraries: <ul><li>[C#](tutorial-csharp-overview.md)</li><li>[Python](tutorial-python-overview.md)</li><li>[JavaScript](tutorial-javascript-overview.md) </li></ul> "Add search to websites" is a tutorial series with sample code available in three languages. If you're integrating client code with a search index, these samples demonstrate an end-to-end approach to integration. |
| November | **Retired** - [Visual Studio Code extension for Azure Cognitive Search](https://github.com/microsoft/vscode-azurecognitivesearch/blob/master/README.md). | | November | [Query performance dashboard](https://github.com/Azure-Samples/azure-samples-search-evaluation). This Application Insights sample demonstrates an approach for deep monitoring of query usage and performance of an Azure Cognitive Search index. It includes a JSON template that creates a workbook and dashboard in Application Insights and a Jupyter Notebook that populates the dashboard with simulated data. | | October | [Compliance risk analysis using Azure Cognitive Search](/azure/architecture/guide/ai/compliance-risk-analysis). Published on Azure Architecture Center, this guide covers the implementation of a compliance risk analysis solution that uses Azure Cognitive Search. |
security Trusted Hardware Identity Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/trusted-hardware-identity-management.md
THIM defines the Azure security baseline for Azure Confidential computing (ACC)
## Frequently asked questions
+### How do I use THIM with Intel processors?
+
+To generate Intel SGX and Intel TDX quotes, the Intel Quote Generation Library (QGL) needs access to quote generation/verification collateral. All or parts of this collateral must be fetched from THIM. This can be done using the Intel Quote Provider Library (QPL) or Azure DCAP Client Library.
+ - To learn more on how to use Intel QPL with THIM, please see: [How do I use the Intel Quote Provider Library (QPL) with THIM?](#how-do-i-use-the-intel-quote-provider-library-qpl-with-thim)
+ - To learn more on how to use Azure DCAP with THIM, please see: [Azure DCAP library](#what-is-the-azure-dcap-library)
++ ### The "next update" date of the Azure-internal caching service API, used by Microsoft Azure Attestation, seems to be out of date. Is it still in operation and can it be used? The "tcbinfo" field contains the TCB information. The THIM service by default provides an older tcbinfo--updating to the latest tcbinfo from Intel would cause attestation failures for those customers who haven't migrated to the latest Intel SDK, and could results in outages.
The certificates are fetched and cached in THIM service using platform manifest
To retrieve the certificate, you must install the [Azure DCAP library](#what-is-the-azure-dcap-library) that replaces Intel QPL. This library directs the fetch requests to THIM service running in Azure cloud. For the downloading the latest DCAP packages, see: [Where can I download the latest DCAP packages?](#where-can-i-download-the-latest-dcap-packages)
+### How do I use the Intel Quote Provider Library (QPL) with THIM?
+
+Customers may want the flexibility to use the Intel Quote Provider Library (QPL) to interact with THIM without having to download another dependency from Microsoft (that is, the Azure DCAP Client Library). Customers who want to use Intel QPL with the THIM service must adjust Intel QPL's configuration file ("sgx_default_qcnl.conf"), which is provided with the Intel QPL.
+
+The quote generation/verification collateral used to generate the Intel SGX or Intel TDX quotes can be split into the PCK certificate and all other quote generation/verification collateral. The customer has the following options to retrieve the two parts:
+ - Retrieve PCK certificate: the customer must use a THIM endpoint.
+ - Retrieve other quote generation/verification collateral: the customer can either use a THIM or an Intel Provisioning Certification Service (PCS) endpoint.
+
+The Intel QPL configuration file ("sgx_default_qcnl.conf") contains three keys used to define the collateral endpoint(s). The "pccs_url" key defines the endpoint used to retrieve the PCK certificates. The "collateral_service" key can be used to define the endpoint used to retrieve all other quote generation/verification collateral. If the "collateral_service" key is not defined, all quote verification collateral will be retrieved from the endpoint defined with the "pccs_url" key.
+
+The following table lists how these keys can be set.
+| Name | Possible Endpoints |
+| -- | -- |
+| "pccs_url" | THIM endpoint: "https://global.acccache.azure.net/sgx/certification/v3" |
+| "collateral_service" | THIM endpoint: "https://global.acccache.azure.net/sgx/certification/v3" or Intel PCS endpoint: The following file will always list the most up-to-date endpoint in the "collateral_service" key: [sgx_default_qcnl.conf](https://github.com/intel/SGXDataCenterAttestationPrimitives/blob/master/QuoteGeneration/qcnl/linux/sgx_default_qcnl.conf#L13) |
+
+The following is a code snippet from an Intel QPL configuration file example:
+
+```json
+ {
+ "pccs_url": "https://global.acccache.azure.net/sgx/certification/v3/",
+ "use_secure_cert": true,
+ "collateral_service": "https://global.acccache.azure.net/sgx/certification/v3/",
+ "pccs_api_version": "3.1",
+ "retry_times": 6,
+ "retry_delay": 5,
+ "local_pck_url": "http://169.254.169.254/metadata/THIM/sgx/certification/v3/",
+ "pck_cache_expire_hours": 24,
+ "verify_collateral_cache_expire_hours": 24,
+ "custom_request_options": {
+ "get_cert": {
+ "headers": {
+ "metadata": "true"
+ },
+ "params": {
+ "api-version ": "2021-07-22-preview"
+ }
+ }
+ }
+ }
+```
+
+The following sections explain how to change the Intel QPL configuration file and how to activate the changes.
+
+#### On Windows
+ 1. Make desired changes to the configuration file.
+ 2. Ensure that there are read permissions to the file from the following registry location and key/value.
+ ```bash
+ [HKEY_LOCAL_MACHINE\SOFTWARE\Intel\SGX\QCNL]
+ "CONFIG_FILE"="<Full File Path>"
+ ```
+ 3. Restart AESMD service. For instance, open PowerShell as an administrator and use the following commands:
+ ```powershell
+ Restart-Service -Name "AESMService" -ErrorAction Stop
+ Get-Service -Name "AESMService"
+ ```
+
+#### On Linux
+ 1. Make desired changes to the configuration file. For example, vim can be used for the changes using the following command:
+ ```bash
+ sudo vim /etc/sgx_default_qcnl.conf
+ ```
+ 2. Restart AESMD service. Open any terminal and execute the following commands:
+ ```bash
+ sudo systemctl restart aesmd
+ systemctl status aesmd
+ ```
+ ### How do I request collateral in a Confidential Virtual Machine (CVM)? Use the following sample in a CVM guest for requesting AMD collateral that includes the VCEK certificate and certificate chain. For details on this collateral and where it originates from, see [Versioned Chip Endorsement Key (VCEK) Certificate and KDS Interface Specification](https://www.amd.com/system/files/TechDocs/57230.pdf).
storage File Sync Troubleshoot Sync Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-sync-errors.md
Disconnect all previous connections to the server or shared resource and try aga
Run the following PowerShell command on the server to reset the certificate: `Reset-AzStorageSyncServerCertificate -ResourceGroupName <string> -StorageSyncServiceName <string>`
-### Common troubleshooting steps
+## Common troubleshooting steps
+ <a id="troubleshoot-storage-account"></a>**Verify the storage account exists.** # [Portal](#tab/azure-portal) 1. Navigate to the sync group within the Storage Sync Service.
storage Files Troubleshoot Smb Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-troubleshoot-smb-authentication.md
description: Troubleshoot problems using identity-based authentication to connec
Previously updated : 03/31/2023 Last updated : 05/15/2023
This error is most likely triggered by a syntax error in the `Join-AzStorageAcco
## Azure Files on-premises AD DS Authentication support for AES-256 Kerberos encryption
-Azure Files supports AES-256 Kerberos encryption for AD DS authentication beginning with the AzFilesHybrid module v0.2.2. AES-256 is the recommended authentication method. If you've enabled AD DS authentication with a module version lower than v0.2.2, you'll need to [download the latest AzFilesHybrid module](https://github.com/Azure-Samples/azure-files-samples/releases) and run the PowerShell below. If you haven't enabled AD DS authentication on your storage account yet, follow this [guidance](./storage-files-identity-ad-ds-enable.md#option-one-recommended-use-azfileshybrid-powershell-module) for enablement.
+Azure Files supports AES-256 Kerberos encryption for AD DS authentication beginning with the AzFilesHybrid module v0.2.2. AES-256 is the recommended encryption method, and it's the default encryption method beginning in AzFilesHybrid module v0.2.5. If you've enabled AD DS authentication with a module version lower than v0.2.2, you'll need to [download the latest AzFilesHybrid module](https://github.com/Azure-Samples/azure-files-samples/releases) and run the PowerShell below. If you haven't enabled AD DS authentication on your storage account yet, follow this [guidance](./storage-files-identity-ad-ds-enable.md#option-one-recommended-use-azfileshybrid-powershell-module).
+
+> [!IMPORTANT]
+> If you were previously using RC4 encryption and update the storage account to use AES-256, you should run `klist purge` on the client and then remount the file share to get new Kerberos tickets with AES-256.
```PowerShell $ResourceGroupName = "<resource-group-name-here>"
storage Storage Files Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-faq.md
Title: Frequently asked questions (FAQ) for Azure Files
description: Get answers to Azure Files frequently asked questions. You can mount Azure file shares concurrently on cloud or on-premises Windows, Linux, or macOS deployments. Previously updated : 02/16/2023 Last updated : 05/15/2023
storage Storage Files Identity Ad Ds Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-enable.md
$NewPassword = ConvertTo-SecureString -String $KerbKey -AsPlainText -Force
Set-ADAccountPassword -Identity <domain-object-identity> -Reset -NewPassword $NewPassword ```
+> [!IMPORTANT]
+> If you were previously using RC4 encryption and update the storage account to use AES-256, you should run `klist purge` on the client and then remount the file share to get new Kerberos tickets with AES-256.
+ ### Debugging If needed, you can run the `Debug-AzStorageAccountAuth` cmdlet to conduct a set of basic checks on your AD configuration with the logged on AD user. This cmdlet is supported on AzFilesHybrid v0.1.2+ version and higher. For more information on the checks performed in this cmdlet, see [Unable to mount Azure file shares with AD credentials](files-troubleshoot-smb-authentication.md#unable-to-mount-azure-file-shares-with-ad-credentials).
storage Storage Files Identity Auth Active Directory Domain Service Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-domain-service-enable.md
Set-ADUser $userObject -KerberosEncryptionType AES256
Get-ADUser $userObject -properties KerberosEncryptionType ```
+> [!IMPORTANT]
+> If you were previously using RC4 encryption and update the storage account to use AES-256, you should run `klist purge` on the client and then remount the file share to get new Kerberos tickets with AES-256.
+ [!INCLUDE [storage-files-aad-permissions-and-mounting](../../../includes/storage-files-aad-permissions-and-mounting.md)] ## Next steps
stream-analytics Event Hubs Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/event-hubs-managed-identity.md
Title: Use managed identities to access Event Hub from an Azure Stream Analytics job
+ Title: Use managed identities to access Event Hubs from an Azure Stream Analytics job
description: This article describes how to use managed identities to authenticate your Azure Stream Analytics job to Azure Event Hubs input and output. Previously updated : 07/07/2021 Last updated : 05/15/2023
-# Use managed identities to access Event Hub from an Azure Stream Analytics job
+# Use managed identities to access Event Hubs from an Azure Stream Analytics job
Azure Stream Analytics supports Managed Identity authentication for both Azure Event Hubs input and output. Managed identities eliminate the limitations of user-based authentication methods, like the need to reauthenticate because of password changes or user token expirations that occur every 90 days. When you remove the need to manually authenticate, your Stream Analytics deployments can be fully automated. 
-A managed identity is a managed application registered in Azure Active Directory that represents a given Stream Analytics job. The managed application is used to authenticate to a targeted resource, including Event Hubs that are behind a firewall or virtual network (VNet). For more information about how to bypass firewalls, see [Allow access to Azure Event Hubs namespaces via private endpoints](../event-hubs/private-link-service.md#trusted-microsoft-services).
+A managed identity is a managed application registered in Azure Active Directory that represents a given Stream Analytics job. The managed application is used to authenticate to a targeted resource, including event hubs that are behind a firewall or virtual network (VNet). For more information about how to bypass firewalls, see [Allow access to Azure Event Hubs namespaces via private endpoints](../event-hubs/private-link-service.md#trusted-microsoft-services).
-This article shows you how to enable Managed Identity for an Event Hubs input or output of a Stream Analytics job through the Azure portal. Before you enabled Managed Identity, you must first have a Stream Analytics job and Event Hub resource.
+This article shows you how to enable Managed Identity for an event hub input or output of a Stream Analytics job through the Azure portal. Before you enable Managed Identity, you must first have a Stream Analytics job and an Event Hubs resource.
## Create a managed identity 
First, you create a managed identity for your Azure Stream Analytics job.
The service principal has the same name as the Stream Analytics job. For example, if the name of your job is `MyASAJob`, the name of the service principal is also `MyASAJob`.
-## Grant the Stream Analytics job permissions to access the Event Hub
+## Grant the Stream Analytics job permissions to access Event Hubs
-For the Stream Analytics job to access your Event Hub using managed identity, the service principal you created must have special permissions to the Event Hub.
+For the Stream Analytics job to access your event hub using managed identity, the service principal you created must have special permissions to the event hub.
1. Select **Access control (IAM)**.
For the Stream Analytics job to access your Event Hub using managed identity, th
1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
- | Setting | Value |
- | | |
- | Role | Azure Event Hubs Data Owner |
- | Assign access to | User, group, or service principal |
- | Members | \<Name of your Stream Analytics job> |
+> [!NOTE]
+> When you give access to any resource, grant the least access needed. Depending on whether you're configuring Event Hubs as an input or an output, you might not need to assign the Azure Event Hubs Data Owner role, which grants more access than needed to your Event Hubs resource. For more information, see [Authenticate an application with Azure Active Directory to access Event Hubs resources](../event-hubs/authenticate-application.md).
+
+ | Setting | Value |
+ | | |
+ | Role | Azure Event Hubs Data Owner |
+ | Assign access to | User, group, or service principal |
+ | Members | \<Name of your Stream Analytics job> |
- ![Screenshot that shows Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
+ ![Screenshot that shows Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
-You can also grant this role at the Event Hub Namespace level, which will naturally propagate the permissions to all Event Hubs created under it. That is, all Event Hubs under a Namespace can be used as a managed-identity-authenticating resource in your Stream Analytics job.
+You can also grant this role at the Event Hubs Namespace level, which will naturally propagate the permissions to all event hubs created under it. That is, all event hubs under a Namespace can be used as a managed-identity-authenticating resource in your Stream Analytics job.
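+
+A hedged CLI sketch of such a namespace-level assignment follows; the IDs and names are placeholders, and you could substitute the narrower Azure Event Hubs Data Sender or Azure Event Hubs Data Receiver role per the least-privilege note above:
+
+```azurecli
+# Grant the Stream Analytics job's managed identity access at the Event Hubs namespace scope
+az role assignment create \
+    --assignee-object-id "<stream-analytics-principal-id>" \
+    --assignee-principal-type ServicePrincipal \
+    --role "Azure Event Hubs Data Owner" \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventHub/namespaces/<namespace-name>"
+```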
> [!NOTE] > Due to global replication or caching latency, there may be a delay when permissions are revoked or granted. Changes should be reflected within 8 minutes.
-## Create an Event Hub input or output 
+## Create an Event Hubs input or output 
-Now that your managed identity is configured, you're ready to add the Event Hub resource as an input or output to your Stream Analytics job. 
+Now that your managed identity is configured, you're ready to add the event hub resource as an input or output to your Stream Analytics job. 
-### Add the Event Hub as an input
+### Add Event Hubs as an input
1. Go to your Stream Analytics job and navigate to the **Inputs** page under **Job Topology**.
-1. Select **Add Stream Input > Event Hub**. In the input properties window, search and select your Event Hub and select **Managed Identity** from the *Authentication mode* drop-down menu.
+1. Select **Add Stream Input > Event Hub**. In the input properties window, search and select your event hub and select **Managed Identity** from the *Authentication mode* drop-down menu.
1. Fill out the rest of the properties and select **Save**.
-### Add the Event Hub as an output
+### Add Event Hubs as an output
1. Go to your Stream Analytics job and navigate to the **Outputs** page under **Job Topology**.
-1. Select **Add > Event Hub**. In the output properties window, search and select your Event Hub and select **Managed Identity** from the *Authentication mode* drop-down menu.
+1. Select **Add > Event Hub**. In the output properties window, search and select your event hub and select **Managed Identity** from the *Authentication mode* drop-down menu.
1. Fill out the rest of the properties and select **Save**.
stream-analytics Postgresql Database Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/postgresql-database-output.md
Previously updated : 12/09/2022 Last updated : 05/12/2023 # Azure Database for PostgreSQL output from Azure Stream Analytics
-You can use [Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql/) as an output for data that is relational in nature or for applications that depend on the content being hosted in a relational database. Azure Stream Analytics jobs write to an existing table in PostgreSQL Database. The table schema must exactly match the fields and their types in your job's output.
-
-Azure Database for PostgreSQL powered by the PostgreSQL community edition is available in two deployment options:
-* Single Server
-* Flexible Server
-
+You can use [Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql/) as an output for data that is relational in nature or for applications that depend on the content being hosted in a relational database. Azure Stream Analytics jobs write to an existing table in PostgreSQL Database. Azure Database for PostgreSQL output from Azure Stream Analytics is available for flexible server.
For more information about Azure Database for PostgreSQL please visit the: [What is Azure Database for PostgreSQL documentation.](../postgresql/overview.md) To learn more about how to create an Azure Database for PostgreSQL server by using the Azure portal please visit:
-* [Quick start for Azure Database for PostgreSQL ΓÇô Single server](../postgresql/quickstart-create-server-database-portal.md)
* [Quick start for Azure Database for PostgreSQL - Flexible server](../postgresql/flexible-server/quickstart-create-server-portal.md) - > [!NOTE]
-> Managed identities for Azure Database for PostgreSQL output in Azure Stream Analytics is currently not supported.
+> Single server is being deprecated.
+> To write to Citus/Hyperscale when using Azure Database for PostgreSQL, use Azure Cosmos DB for PostgreSQL.
## Output configuration
The following table lists the property names and their description for creating
Partitioning needs to be enabled and is based on the PARTITION BY clause in the query. When the Inherit Partitioning option is enabled, it follows the input partitioning for [fully parallelizable queries](stream-analytics-scale-jobs.md).
+## Limitations
+* The table schema must exactly match the fields and their types in your job's output.
+* Managed identities for Azure Database for PostgreSQL output in Azure Stream Analytics aren't currently supported.
++ ## Next steps * [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md)
virtual-desktop Connect Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-windows.md
Title: Connect to Azure Virtual Desktop with the Remote Desktop client for Windo
description: Learn how to connect to Azure Virtual Desktop using the Remote Desktop client for Windows. Previously updated : 10/04/2022 Last updated : 05/16/2023
Before you can access your resources, you'll need to meet the prerequisites:
> Support for Windows 7 ended on January 10, 2023. - Download the Remote Desktop client installer, choosing the correct version for your device:
- - [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2068602) *(most common)*
- - [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2098960)
- - [Windows on Arm](https://go.microsoft.com/fwlink/?linkid=2098961)
+ - [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369) *(most common)*
+ - [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456)
+ - [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370)
- .NET Framework 4.6.2 or later. You may need to install this on Windows Server 2012 R2, Windows Server 2016, and some versions of Windows 10. To download the latest version, see [Download .NET Framework](https://dotnet.microsoft.com/download/dotnet-framework).
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
description: Learn about recent changes to the Remote Desktop client for Windows
Previously updated : 05/09/2023 Last updated : 05/16/2023 # What's new in the Remote Desktop client for Windows
The following table lists the current versions available for the public and Insi
| Release | Latest version | Download | ||-|-|
-| Public | 1.2.4159 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370) |
+| Public | 1.2.4240 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370) |
| Insider | 1.2.4240 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) |
-## Updates for version 1.2.4240 (Insider)
+## Updates for version 1.2.4240
-*Date published: May 4, 2023*
+*Date published: May 16, 2023*
-Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368)
+Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370)
In this release, we've made the following changes:
In this release, we've made the following changes:
*Date published: May 9, 2023*
-Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370)
+Download: [Windows 64-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW13yd3), [Windows 32-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW13yd4), [Windows ARM64](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW13nJY)
In this release, we've made the following changes:
In this release, we've made the following changes:
*Date published: March 28, 2023*
-Download: [Windows 64-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW10DEa), [Windows 32-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW10GYu), [Windows ARM64](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW10GYw)
- In this release, we've made the following changes: - General improvements to Narrator experience.
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
By default, service tags reflect the ranges for the entire cloud. Some service t
| **AzureAttestation** | Azure Attestation. | Outbound | No | Yes | | **AzureBackup** |Azure Backup.<br/><br/>**Note**: This tag has a dependency on the **Storage** and **AzureActiveDirectory** tags. | Outbound | No | Yes | | **AzureBotService** | Azure Bot Service. | Outbound | No | Yes |
-| **AzureCloud** | All [datacenter public IP addresses](https://www.microsoft.com/download/details.aspx?id=56519). | Both | Yes | Yes |
+| **AzureCloud** | All [datacenter public IP addresses](https://www.microsoft.com/download/details.aspx?id=56519). Doesn't include IPv6. | Both | Yes | Yes |
| **AzureCognitiveSearch** | Azure Cognitive Search. <br/><br/>This tag or the IP addresses covered by this tag can be used to grant indexers secure access to data sources. For more information about indexers, see [indexer connection documentation](../search/search-indexer-troubleshooting.md#connection-errors). <br/><br/> **Note**: The IP of the search service isn't included in the list of IP ranges for this service tag and **also needs to be added** to the IP firewall of data sources. | Inbound | No | Yes | | **AzureConnectors** | This tag represents the IP addresses used for managed connectors that make inbound webhook callbacks to the Azure Logic Apps service and outbound calls to their respective services, for example, Azure Storage or Azure Event Hubs. | Both | Yes | Yes | | **AzureContainerAppsService** | Azure Container Apps Service | Both | Yes | No |
By default, service tags reflect the ranges for the entire cloud. Some service t
| **GuestAndHybridManagement** | Azure Automation and Guest Configuration. | Outbound | No | Yes | | **HDInsight** | Azure HDInsight. | Inbound | Yes | Yes | | **Internet** | The IP address space that's outside the virtual network and reachable by the public internet.<br/><br/>The address range includes the [Azure-owned public IP address space](https://www.microsoft.com/download/details.aspx?id=56519). | Both | No | No |
+| **KustoAnalytics** | Kusto Analytics. | Both | No | No |
| **LogicApps** | Logic Apps. | Both | No | Yes | | **LogicAppsManagement** | Management traffic for Logic Apps. | Inbound | No | Yes | | **Marketplace** | Represents the entire suite of Azure 'Commercial Marketplace Experiences' services. | Both | No | Yes |
virtual-wan About Virtual Hub Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/about-virtual-hub-routing.md
The following sections describe the key concepts in virtual hub routing.
A virtual hub route table can contain one or more routes. A route includes its name, a label, a destination type, a list of destination prefixes, and next hop information for a packet to be routed. A **Connection** typically will have a routing configuration that associates or propagates to a route table. ### <a name= "hub-route"></a> Hub routing intent and policies
->[!NOTE]
-> Hub Routing Policies are currently in Managed Preview.
->
->To obtain access to this preview, please reach out to previewinterhub@microsoft.com with the Virtual WAN ID, Subscription ID and Azure Region you wish to configure Routing Policies in. Please expect a response within 24-48 hours with confirmation of feature enablement.
->
-> For more information on how to configure Routing Intent and Policies please view the following [document](how-to-routing-policies.md).
+Routing Intent and Routing policies allow you to configure your Virtual WAN hub to send Internet-bound and Private (Point-to-site, Site-to-site, ExpressRoute, Network Virtual Appliances inside the Virtual WAN Hub and Virtual Network) Traffic via an Azure Firewall, Next-Generation Firewall NVA or software-as-a-service solution deployed in the Virtual WAN hub. There are two types of Routing Policies: Internet Traffic and Private Traffic Routing Policies. Each Virtual WAN Hub may have at most one Internet Traffic Routing Policy and one Private Traffic Routing Policy, each with a Next Hop resource.
-Customers using Azure Firewall manager to set up policies for public and private traffic now can set up their networks in a much simpler manner using Routing Intent and Routing Policies.
-
-Routing Intent and Routing policies allow you to specify how the Virtual WAN hub forwards Internet-bound and Private (Point-to-site, Site-to-site, ExpressRoute, Network Virtual Appliances inside the Virtual WAN Hub and Virtual Network) Traffic. There are two types of Routing Policies: Internet Traffic and Private Traffic Routing Policies. Each Virtual WAN Hub may have at most one Internet Traffic Routing Policy and one Private Traffic Routing Policy, each with a Next Hop resource.
-
-While Private Traffic includes both branch and Virtual Network address prefixes, Routing Policies considers them as one entity within the Routing Intent Concept.
+While Private Traffic includes both branch and Virtual Network address prefixes, Routing Policies consider them as one entity within the Routing Intent concept.
* **Internet Traffic Routing Policy**: When an Internet Traffic Routing Policy is configured on a Virtual WAN hub, all branch (User VPN (Point-to-site VPN), Site-to-site VPN and ExpressRoute) and Virtual Network connections to that Virtual WAN Hub will forward Internet-bound traffic to the Azure Firewall resource or Third-Party Security provider specified as part of the Routing Policy. -
-* **Private Traffic Routing Policy**: When a Private Traffic Routing Policy is configured on a Virtual WAN hub, **all** branch and Virtual Network traffic in and out of the Virtual WAN Hub including inter-hub traffic will be forwarded to the Next Hop Azure Firewall resource that was specified in the Private Traffic Routing Policy.
+* **Private Traffic Routing Policy**: When a Private Traffic Routing Policy is configured on a Virtual WAN hub, **all** branch and Virtual Network traffic in and out of the Virtual WAN Hub including inter-hub traffic will be forwarded to the Next Hop Azure Firewall resource that was specified in the Private Traffic Routing Policy.
For more information on how to configure Routing Intent and Policies, see [How to configure Virtual WAN Hub routing intent and routing policies](how-to-routing-policies.md).
Route tables now have features for association and propagation. A pre-existing r
## <a name="reset"></a>Hub reset
-Virtual hub **Reset** is available only in the Azure portal. Resetting provides you a way to bring any failed resources such as route tables, hub router, or the virtual hub resource itself back to its rightful provisioning state. Consider resetting the hub prior to contacting Microsoft for support. This operation doesn't reset any of the gateways in a virtual hub.
+Virtual hub **Reset** is available only in the Azure portal. Resetting provides you with a way to bring any failed resources such as route tables, hub router, or the virtual hub resource itself back to its rightful provisioning state. Consider resetting the hub prior to contacting Microsoft for support. This operation doesn't reset any of the gateways in a virtual hub.
## <a name="considerations"></a>Additional considerations
Consider the following when configuring Virtual WAN routing:
* All branch connections (Point-to-site, Site-to-site, and ExpressRoute) need to be associated to the Default route table. That way, all branches will learn the same prefixes.
* All branch connections need to propagate their routes to the same set of route tables. For example, if you decide that branches should propagate to the Default route table, this configuration should be consistent across all branches. As a result, all connections associated to the Default route table will be able to reach all of the branches.
* Branch-to-branch via Azure Firewall is currently not supported.
-* When using Azure Firewall in multiple regions, all spoke virtual networks must be associated to the same route table. For example, having a subset of the VNets going through the Azure Firewall while other VNets bypass the Azure Firewall in the same virtual hub isn't possible.
+* When you use Azure Firewall in multiple regions, all spoke virtual networks must be associated to the same route table. For example, having a subset of the VNets going through the Azure Firewall while other VNets bypass the Azure Firewall in the same virtual hub isn't possible.
* You may specify multiple next hop IP addresses on a single Virtual Network connection. However, Virtual Network Connection doesn't support 'multiple/unique' next hop IP to the 'same' network virtual appliance in a SPOKE Virtual Network 'if' one of the routes with next hop IP is indicated to be public IP address or 0.0.0.0/0 (internet)
* All information pertaining to 0.0.0.0/0 route is confined to a local hub's route table. This route doesn't propagate across hubs.
* You can only use Virtual WAN to program routes in a spoke if the prefix is shorter (less specific) than the virtual network prefix. For example, in the diagram above the spoke VNET1 has the prefix 10.1.0.0/16: in this case, Virtual WAN wouldn't be able to inject a route that matches the virtual network prefix (10.1.0.0/16) or any of the subnets (10.1.0.0/24, 10.1.1.0/24). In other words, Virtual WAN can't attract traffic between two subnets that are in the same virtual network.
virtual-wan How To Routing Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-routing-policies.md
# How to configure Virtual WAN Hub routing intent and routing policies
-Customers using Azure Firewall manager to set up policies for public and private traffic now can set up their networks in a much simpler manner using Routing Intent and Routing Policies.
+>[!NOTE]
+> The rollout for routing intent capabilities to support inter-region traffic is currently underway. Inter-region capabilities may not be immediately available.
->[!NOTE]
-> Hub Routing Intent is currently in gated public preview.
-> The preview for Hub Routing Intent impacts routing and route advertisements for **all** connections to the Virtual Hub (Point-to-site VPN, Site-to-site VPN, ExpressRoute, NVA, Virtual Network).
-
-Routing Intent and Routing policies allow you to specify how the Virtual WAN hub forwards Internet-bound and Private (Point-to-site, Site-to-site, ExpressRoute, Network Virtual Appliances inside the Virtual WAN Hub and Virtual Network) Traffic. There are two types of Routing Policies: Internet Traffic and Private Traffic Routing Policies. Each Virtual WAN Hub may have at most one Internet Traffic Routing Policy and one Private Traffic Routing Policy, each with a single Next Hop resource.
+Virtual WAN Hub routing intent allows you to set up simple and declarative routing policies to send traffic to bump-in-the-wire security solutions like Azure Firewall, Network Virtual Appliances or software-as-a-service (SaaS) solutions deployed within the Virtual WAN hub.
-While Private Traffic includes both branch and Virtual Network address prefixes, Routing Policies considers them as one entity within the Routing Intent Concepts.
+## Background
->[!NOTE]
-> Inter-region traffic **cannot** be inspected by Azure Firewall or NVA. Additionally, configuring both private and internet routing policies is currently **not** supported in most Azure regions. Doing so will put Gateways (ExpressRoute, Site-to-site VPN and Point-to-site VPN) in a failed state and break connectivity from on-premises branches to Azure. Please ensure you only have one type of routing policy on each Virtual WAN hub. For more information, please contact previewinterhub@microsoft.com.
+Routing Intent and Routing Policies allow you to configure the Virtual WAN hub to forward Internet-bound and Private (Point-to-site VPN, Site-to-site VPN, ExpressRoute, Virtual Network and Network Virtual Appliance) Traffic to an Azure Firewall, Next-Generation Firewall Network Virtual Appliance (NGFW-NVA) or security software-as-a-service (SaaS) solution deployed in the virtual hub.
+There are two types of Routing Policies: Internet Traffic and Private Traffic Routing Policies. Each Virtual WAN Hub may have at most one Internet Traffic Routing Policy and one Private Traffic Routing Policy, each with a single Next Hop resource. While Private Traffic includes both branch and Virtual Network address prefixes, Routing Policies consider them as one entity within the Routing Intent concept.
-* **Internet Traffic Routing Policy**: When an Internet Traffic Routing Policy is configured on a Virtual WAN hub, all branch (User VPN (Point-to-site VPN), Site-to-site VPN, and ExpressRoute) and Virtual Network connections to that Virtual WAN Hub will forward Internet-bound traffic to the Azure Firewall resource, Third-Party Security provider or **Network Virtual Appliance** specified as part of the Routing Policy.
+* **Internet Traffic Routing Policy**: When an Internet Traffic Routing Policy is configured on a Virtual WAN hub, all branch (Remote User VPN (Point-to-site VPN), Site-to-site VPN, and ExpressRoute) and Virtual Network connections to that Virtual WAN Hub forward Internet-bound traffic to the **Azure Firewall**, **Third-Party Security provider**, **Network Virtual Appliance** or **SaaS solution** specified as part of the Routing Policy.
- In other words, when Traffic Routing Policy is configured on a Virtual WAN hub, the Virtual WAN advertises a **default** route to all spokes, Gateways and Network Virtual Appliances (deployed in the hub or spoke). This includes the **Network Virtual Appliance** that is the next hop for the Internet Traffic routing policy.
+ In other words, when an Internet Traffic Routing Policy is configured on a Virtual WAN hub, the Virtual WAN advertises a default (0.0.0.0/0) route to all spokes, Gateways and Network Virtual Appliances (deployed in the hub or spoke).
-* **Private Traffic Routing Policy**: When a Private Traffic Routing Policy is configured on a Virtual WAN hub, **all** branch and Virtual Network traffic in and out of the Virtual WAN Hub including inter-hub traffic will be forwarded to the Next Hop Azure Firewall resource or Network Virtual Appliance resource that was specified in the Private Traffic Routing Policy.
+* **Private Traffic Routing Policy**: When a Private Traffic Routing Policy is configured on a Virtual WAN hub, **all** branch and Virtual Network traffic in and out of the Virtual WAN Hub including inter-hub traffic is forwarded to the Next Hop **Azure Firewall**, **Network Virtual Appliance** or **SaaS solution** resource.
- In other words, when a Private Traffic Routing Policy is configured on the Virtual WAN Hub, all branch-to-branch, branch-to-virtual network, virtual network-to-branch and inter-hub traffic will be sent via Azure Firewall or a Network Virtual Appliance deployed in the Virtual WAN Hub.
+ In other words, when a Private Traffic Routing Policy is configured on the Virtual WAN Hub, all branch-to-branch, branch-to-virtual network, virtual network-to-branch and inter-hub traffic is sent via Azure Firewall, Network Virtual Appliance or SaaS solution deployed in the Virtual WAN Hub.
-## Preview notes
+## Use Cases
-This preview is provided without a service-level agreement and isn't recommended for production workloads. Some features might be unsupported or have constrained capabilities. For more information, see [Supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+The following section describes two common scenarios where Routing Policies are applied to Secured Virtual WAN hubs.
-To obtain access to the preview, please deploy any Virtual WAN hubs and gateways (Site-to-site VPN Gateways, Point-to-site Gateways and ExpressRouteGateways) and then reach out to previewinterhub@microsoft.com with the Virtual WAN ID, Subscription ID and Azure Region you wish to configure Routing Intent in. Expect a response within 48 business hours (Monday-Friday) with confirmation of feature enablement. Please note that any gateways created after feature enablement will need to be upgraded by the Virtual WAN team.
+### All Virtual WAN Hubs are secured (deployed with Azure Firewall, NVA or SaaS solution)
+In this scenario, all Virtual WAN hubs are deployed with an Azure Firewall, NVA or SaaS solution in them. You may configure an Internet Traffic Routing Policy, a Private Traffic Routing Policy, or both on each Virtual WAN Hub.
-## Key considerations
-* You will **not** be able to enable routing policies on your deployments with existing Custom Route tables configured or if there are static routes configured in your Default Route Table.
-* Currently, Private Traffic Routing Policies aren't supported in Hubs with Encrypted ExpressRoute connections (Site-to-site VPN Tunnel running over ExpressRoute Private connectivity).
-* In the gated public preview of Virtual WAN Hub routing policies, inter-hub traffic between hubs in different Azure regions is dropped.
-* Routing Intent and Routing Policies currently must be configured via the custom portal link provided in Step 3 of **Prerequisites**. Routing Intents and Policies aren't supported via Terraform, PowerShell, and CLI.
+Consider the following configuration where Hub 1 and Hub 2 have Routing Policies for both Private and Internet Traffic.
+**Hub 1 configuration:**
+* Private Traffic Policy with Next Hop Hub 1 Azure Firewall, NVA or SaaS solution
+* Internet Traffic Policy with Next Hop Hub 1 Azure Firewall, NVA or SaaS solution
-## Prerequisites
+**Hub 2 configuration:**
+* Private Traffic Policy with Next Hop Hub 2 Azure Firewall, NVA or SaaS solution
+* Internet Traffic Policy with Next Hop Hub 2 Azure Firewall, NVA or SaaS solution
-1. Create a Virtual WAN. Make sure you create at least two Virtual Hubs if you wish to inspect inter-hub traffic. For instance, you may create a Virtual WAN with two Virtual Hubs in East US. If you only wish to inspect branch-to-branch traffic, you may deploy a single Virtual WAN Hub as opposed to multiple hubs.
-2. Convert your Virtual WAN Hub into a Secured Virtual WAN Hub by deploying an Azure Firewall into the Virtual Hubs in the chosen region. For more information on converting your Virtual WAN Hub to a Secured Virtual WAN Hub, see [How to secure your Virtual WAN Hub](howto-firewall.md).
-3. Deploy any Site-to-site VPN, Point-to-site VPN and ExpressRoute Gateways you'll use for testing. Reach out to **previewinterhub@microsoft.com** with the **Virtual WAN Resource ID** and the **Azure Virtual hub Region** you wish to configure Routing Policies in. To locate the Virtual WAN ID, open Azure portal, navigate to your Virtual WAN resource and select Settings > Properties > Resource ID. For example:
- ```
- /subscriptions/<subscriptionID>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/virtualWans/<virtualWANname>
- ```
-4. Expect a response within 24-48 hours with confirmation of feature enablement.
-5. Ensure that your Virtual Hubs do **not** have any Custom Route Tables or any routes you may have added into the defaultRouteTable. You will **not** be able to enable routing policies on your deployments with existing Custom Route tables configured or if there are static routes configured in your Default Route Table.
-6. If you're using an **Azure Firewall** deployed in the Virtual WAN Hub, see [Configuring routing policies (through Azure Firewall Manager)](#azurefirewall) to configure routing intent and routing policies. If you're using a **Network Virtual Appliance** deployed in the Virtual WAN Hub, see [Configuring routing policies (through Virtual WAN Portal)](#nva) to configure routing intent and routing policies.
+The following are the traffic flows that result from such a configuration.
-## <a name="azurefirewall"></a> Configure routing policies (through Azure Firewall Manager)
+> [!NOTE]
+> Internet Traffic must egress through the **local** security solution in the hub because the default route (0.0.0.0/0) does **not** propagate across hubs.
-1. From the custom portal Link provided in the confirmation email from Step 3 in the **Prerequisites** section, navigate to the Virtual WAN Hub that you want to configure Routing Policies on.
-1. Under Security, select **Secured Virtual hub settings** and then **Manage security provider and route settings for this Secured virtual hub in Azure Firewall Manager**
-1. Select the Hub you want to configure your Routing Policies on from the menu.
-1. Select **Security configuration** under **Settings**
-1. If you want to configure an Internet Traffic Routing Policy, select **Azure Firewall** or the relevant Internet Security provider from the dropdown for **Internet Traffic**. If not, select **None**
-1. If you want to configure a Private Traffic Routing Policy (for branch and Virtual Network traffic) via Azure Firewall, select **Azure Firewall** from the dropdown for **Private Traffic**. If not, select **Bypass Azure Firewall**.
+| From | To | Hub 1 VNets | Hub 1 branches | Hub 2 VNets | Hub 2 branches| Internet|
+| -- | -- | -- | -- | -- | -- | -- |
+| Hub 1 VNets | &#8594;| Hub 1 AzFW, NVA or SaaS | Hub 1 AzFW, NVA or SaaS | Hub 1 and 2 AzFW, NVA or SaaS | Hub 1 and 2 AzFW, NVA or SaaS | Hub 1 AzFW, NVA or SaaS |
+| Hub 1 Branches | &#8594;| Hub 1 AzFW, NVA or SaaS | Hub 1 AzFW, NVA or SaaS | Hub 1 and 2 AzFW, NVA or SaaS | Hub 1 and 2 AzFW, NVA or SaaS | Hub 1 AzFW, NVA or SaaS|
+| Hub 2 VNets | &#8594;| Hub 1 and 2 AzFW, NVA or SaaS| Hub 1 and 2 AzFW, NVA or SaaS | Hub 2 AzFW, NVA or SaaS | Hub 2 AzFW, NVA or SaaS| Hub 2 AzFW, NVA or SaaS|
+| Hub 2 Branches | &#8594;| Hub 1 and 2 AzFW, NVA or SaaS| Hub 1 and 2 AzFW, NVA or SaaS | Hub 2 AzFW, NVA or SaaS | Hub 2 AzFW, NVA or SaaS | Hub 2 AzFW, NVA or SaaS|
- :::image type="content" source="./media/routing-policies/configure-intents.png"alt-text="Screenshot showing how to configure routing policies."lightbox="./media/routing-policies/configure-intents.png":::
-7. If you want to configure a Private Traffic Routing Policy and have branches or virtual networks advertising non-IANA RFC1918 Prefixes, select **Private Traffic Prefixes** and specify the non-IANA RFC1918 prefix ranges in the text box that comes up. Select **Done**.
+### Deploying both secured and regular Virtual WAN Hubs
- :::image type="content" source="./media/routing-policies/private-prefixes.png"alt-text="Screenshot showing how to edit private traffic prefixes."lightbox="./media/routing-policies/private-prefixes.png":::
+In this scenario, not all hubs in the WAN are Secured Virtual WAN Hubs (hubs that have a security solution deployed in them).
-8. Select **Inter-hub** to be **Enabled**. Enabling this option ensures your Routing Policies are applied to the Routing Intent of this Virtual WAN Hub.
-9. Select **Save**. This operation takes around 10 minutes to complete.
-10. Repeat steps 2-8 for other Secured Virtual WAN hubs that you want to configure Routing policies for.
-11. At this point, you're ready to send test traffic. Make sure your Firewall Policies are configured appropriately to allow/deny traffic based on your desired security configurations.
+Consider the following configuration where Hub 1 (Normal) and Hub 2 (Secured) are deployed in a Virtual WAN. Hub 2 has Routing Policies for both Private and Internet Traffic.
-## <a name="nva"></a> Configure routing policies for network virtual appliances (through Virtual WAN portal)
+**Hub 1 Configuration:**
+* N/A (can't configure Routing Policies if hub isn't deployed with Azure Firewall, NVA or SaaS solution)
->[!NOTE]
-> The only Network Virtual Appliance deployed in the Virtual WAN hub compatible with routing intent and routing policies are listed in the [Partners section](about-nva-hub.md) as dual-role connectivity and Next-Generation Firewall solution providers.
+**Hub 2 Configuration:**
+* Private Traffic Policy with Next Hop Hub 2 Azure Firewall, NVA or SaaS solution.
+* Internet Traffic Policy with Next Hop Hub 2 Azure Firewall, NVA or SaaS solution.
-1. From the custom portal link provided in the confirmation email from Step 3 in the **Prerequisites** section, navigate to the Virtual WAN hub that you want to configure routing policies on.
-1. Under Routing, select **Routing Policies**.
- :::image type="content" source="./media/routing-policies/routing-policies-vwan-ui.png"alt-text="Screenshot showing how to navigate to routing policies."lightbox="./media/routing-policies/routing-policies-vwan-ui.png":::
+ The following are the traffic flows that result from such a configuration. Branches and Virtual Networks connected to Hub 1 **can't** access the Internet via a security solution deployed in the Hub because the default route (0.0.0.0/0) does **not** propagate across hubs.
-3. If you want to configure a Private Traffic Routing Policy (for branch and Virtual Network Traffic), select **Network Virtual Appliance** under **Private Traffic** and under **Next Hop Resource** select the Network Virtual Appliance resource you wish to send traffic to.
+| From | To | Hub 1 VNets | Hub 1 branches | Hub 2 VNets | Hub 2 branches| Internet |
+| -- | -- | -- | -- | -- | -- | -- |
+| Hub 1 VNets | &#8594;| Direct | Direct | Hub 2 AzFW, NVA or SaaS| Hub 2 AzFW, NVA or SaaS | - |
+| Hub 1 Branches | &#8594;| Direct | Direct | Hub 2 AzFW, NVA or SaaS | Hub 2 AzFW, NVA or SaaS | - |
+| Hub 2 VNets | &#8594;| Hub 2 AzFW, NVA or SaaS| Hub 2 AzFW, NVA or SaaS | Hub 2 AzFW, NVA or SaaS| Hub 2 AzFW, NVA or SaaS | Hub 2 AzFW, NVA or SaaS|
+| Hub 2 Branches | &#8594;| Hub 2 AzFW, NVA or SaaS | Hub 2 AzFW, NVA or SaaS | Hub 2 AzFW, NVA or SaaS| Hub 2 AzFW, NVA or SaaS | Hub 2 AzFW, NVA or SaaS|
- :::image type="content" source="./media/routing-policies/routing-policies-private-nva.png"alt-text="Screenshot showing how to configure NVA private routing policies."lightbox="./media/routing-policies/routing-policies-private-nva.png":::
-4. If you want to configure a Private Traffic Routing Policy and have branches or virtual networks using non-IANA RFC1918 Prefixes, select **Additional Prefixes** and specify the non-IANA RFC1918 prefix ranges in the text box that comes up. Select **Done**.
+## <a name="knownlimitations"></a> Known Limitations
- > [!NOTE]
- > At this point in time, Routing Policies for **Network Virtual Appliances** do not allow you to edit the RFC1918 prefixes. Virtual WAN will propagate the RFC1918 aggregate prefixes to all spoke Virtual networks, Gateways as well as the **Network Virtual Appliances**. Be mindful of the implications about the propagation of these prefixes into your environment and create the appropriate policies inside your **Network Virtual Appliance** to control routing behavior.
+* Routing Intent is currently Generally Available in the Azure public cloud. Azure China Cloud and Azure Government Cloud are currently on the roadmap.
+* Routing Intent simplifies routing by managing route table associations and propagations for all connections (Virtual Network, Site-to-site VPN, Point-to-site VPN and ExpressRoute). Virtual WANs with custom route tables and customized policies therefore can't be used with the Routing Intent constructs.
+* The following connectivity use cases are **not** supported with Routing Intent:
+ * Encrypted ExpressRoute (Site-to-site VPN tunnels running over ExpressRoute circuits) is **not** supported in hubs where routing intent is configured. Connectivity between Encrypted ExpressRoute connected sites and Azure is impacted if routing intent is configured on a hub.
+ * Static routes in the defaultRouteTable that point to a Virtual Network connection can't be used in conjunction with routing intent. However, you can use the [BGP peering feature](scenario-bgp-peering-hub.md).
+ * Routing Intent only supports a single Network Virtual Appliance in each Virtual WAN hub. Support for multiple Network Virtual Appliances is currently on the roadmap.
+ * Network Virtual Appliances (NVAs) can only be specified as the next hop resource for routing intent if they're Next-Generation Firewall or dual-role Next-Generation Firewall and SD-WAN NVAs. Currently, **checkpoint**, **fortinet-ngfw** and **fortinet-ngfw-and-sdwan** are the only NVAs eligible to be configured to be the next hop for routing intent. If you attempt to specify another NVA, Routing Intent creation fails. You can check the type of the NVA by navigating to your Virtual Hub -> Network Virtual Appliances and then looking at the **Vendor** field.
+ * Routing Intent users who want to connect multiple ExpressRoute circuits to Virtual WAN and want to send traffic between them via a security solution deployed in the hub can open a support case to enable this use case. For more information, see [Enabling connectivity across ExpressRoute circuits](#expressroute).
- :::image type="content" source="./media/routing-policies/private-prefixes-nva.png"alt-text="Screenshot showing how to configure additional private prefixes for NVA routing policies."lightbox="./media/routing-policies/private-prefixes-nva.png":::
+## Considerations
-5. If you want to configure an Internet Traffic Routing Policy, under **Internet traffic** select **Network Virtual Appliance** and under **Next Hop Resource** select the Network Virtual Appliance you want to send internet-bound traffic to.
+Customers who are currently using Azure Firewall in the Virtual WAN hub without Routing Intent may enable routing intent using Azure Firewall Manager, the Virtual WAN hub routing portal, or other Azure management tools (PowerShell, CLI, REST API).
- :::image type="content" source="./media/routing-policies/public-routing-policy-nva.png"alt-text="Screenshot showing how to configure public routing policies for NVA."lightbox="./media/routing-policies/public-routing-policy-nva.png":::
+Before enabling routing intent, consider the following:
+* Routing intent can only be configured on hubs where there are no custom route tables and no static routes in the defaultRouteTable with next hop Virtual Network Connection. For more information, see [prerequisites](#prereq).
+* Save a copy of your gateways, connections and route tables prior to enabling routing intent. The system won't automatically save and apply previous configurations. For more information, see [rollback strategy](#rollback).
+* Routing intent changes the static routes in the defaultRouteTable. Due to Azure portal optimizations, the state of the defaultRouteTable after routing intent is configured may be different if you configure routing intent using REST, CLI or PowerShell. For more information, see [static routes](#staticroute).
+* Enabling routing intent affects the advertisement of prefixes to on-premises. See [prefix advertisements](#prefixadvertisments) for more information.
+* You may open a support case to enable connectivity across ExpressRoute circuits via a Firewall appliance in the hub. Enabling this connectivity pattern modifies the prefixes advertised to ExpressRoute circuits. See [About ExpressRoute](#expressroute) for more information.
-6. To apply your routing intent and routing policies configuration, click **Save**.
+### <a name="prereq"></a> Prerequisites
- :::image type="content" source="./media/routing-policies/save-nva.png"alt-text="Screenshot showing how to save routing policies configurations"lightbox="./media/routing-policies/save-nva.png":::
+To enable routing intent and policies, your Virtual Hub must meet the following prerequisites:
-7. Repeat for all hubs you would like to configure routing policies for.
+* There are no custom route tables deployed with the Virtual Hub. The only route tables that exist are the noneRouteTable and the defaultRouteTable.
+* You can't have static routes with next hop Virtual Network Connection. You may have static routes in the defaultRouteTable with next hop Azure Firewall.
-8. At this point, you're ready to send test traffic. Make sure your Firewall Policies are configured appropriately to allow/deny traffic based on your desired security configurations.
+The option to configure routing intent is greyed out for hubs that don't meet the above requirements.
-## Routing policy configuration examples
+Using routing intent (enable inter-hub option) in Azure Firewall Manager has an additional requirement:
-The following section describes two common scenarios customers of applying Routing Policies to Secured Virtual WAN hubs.
+* Routes created by Azure Firewall Manager follow the naming convention of **private_traffic**, **internet_traffic** or **all_traffic**. Therefore, all routes in the defaultRouteTable must follow this convention.
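If an existing static route in the defaultRouteTable doesn't follow this naming convention, one option is to recreate it under a conforming name before enabling inter-hub. The following Azure PowerShell sketch illustrates the idea using Az.Network cmdlets; the resource names (`rg-vwan`, `hub1`, `azfw1`) are placeholders, exact parameter names can vary by module version, and replacing the route collection this way should be validated in a non-production hub first.

```powershell
# Sketch only: requires the Az.Network module and an authenticated session (Connect-AzAccount).
# Resource names below are placeholders.
$fw = Get-AzFirewall -ResourceGroupName "rg-vwan" -Name "azfw1"

# Recreate the private-traffic static route under the Firewall Manager naming convention.
$privateRoute = New-AzVHubRoute -Name "private_traffic" `
    -Destination @("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16") `
    -DestinationType "CIDR" `
    -NextHop $fw.Id `
    -NextHopType "ResourceId"

# Note: -Route replaces the full route collection of the table, so include every
# route you want to keep, not just the renamed one.
Update-AzVHubRouteTable -ResourceGroupName "rg-vwan" -VirtualHubName "hub1" `
    -Name "defaultRouteTable" -Route @($privateRoute)
```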
-### All Virtual WAN Hubs are secured (deployed with Azure Firewall or NVA)
+### <a name="rollback"></a> Rollback strategy
-In this scenario, all Virtual WAN hubs are deployed with an Azure Firewall or NVA in them. In this scenario, you may configure an Internet Traffic Routing Policy, a Private Traffic Routing Policy or both on each Virtual WAN Hub.
+> [!NOTE]
+> When routing intent configuration is completely removed from a hub, all connections to the hub are set to propagate to the default label (which applies to the defaultRouteTables of all hubs in the Virtual WAN). As a result, if you're considering implementing Routing Intent in Virtual WAN, save a copy of your existing configurations (gateways, connections, route tables) to apply if you wish to revert to the original configuration. The system doesn't automatically restore your previous configuration.
+Routing Intent simplifies routing and configuration by managing route associations and propagations of all connections in a hub.
-Consider the following configuration where Hub 1 and Hub 2 have Routing Policies for both Private and Internet Traffic.
+The following table describes the associated route table and propagated route tables of all connections once routing intent is configured.
-**Hub 1 configuration:**
-* Private Traffic Policy with Next Hop Hub 1 Azure Firewall or NVA
-* Internet Traffic Policy with Next Hop Hub 1 Azure Firewall or NVA
+|Routing Intent configuration | Associated route table| Propagated route tables|
+| --| --| --|
+|Internet|defaultRouteTable| default label (defaultRouteTable of all hubs in the Virtual WAN)|
+| Private| defaultRouteTable| noneRouteTable|
+|Internet and Private| defaultRouteTable| noneRouteTable|
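Because the system doesn't restore previous configurations for you, a simple approach is to export the hub's routing state to local files before enabling routing intent. The following Azure PowerShell sketch shows one way to do that; the resource names are placeholders, the parameter aliases assume a recent Az.Network module, and you may want to capture VPN and ExpressRoute gateway settings in the same way.

```powershell
# Sketch only: saves a local JSON snapshot of the hub's route tables and
# VNet connections before routing intent is enabled. Placeholders: rg-vwan, hub1.
$rg  = "rg-vwan"
$hub = "hub1"

Get-AzVHubRouteTable -ResourceGroupName $rg -VirtualHubName $hub |
    ConvertTo-Json -Depth 10 | Set-Content "$hub-route-tables.json"

Get-AzVirtualHubVnetConnection -ResourceGroupName $rg -VirtualHubName $hub |
    ConvertTo-Json -Depth 10 | Set-Content "$hub-vnet-connections.json"
```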
-**Hub 2 configuration:**
-* Private Traffic Policy with Next Hop Hub 2 Azure Firewall or NVA
-* Internet Traffic Policy with Next Hop Hub 2 Azure Firewall or NVA
+### <a name="staticroute"></a> Static routes in defaultRouteTable
+
+The following section describes how routing intent manages static routes in the defaultRouteTable when routing intent is enabled on a hub. The modifications that Routing Intent makes to the defaultRouteTable are irreversible.
+
+If you remove routing intent, you'll have to manually restore your previous configuration. Therefore, we recommend saving a snapshot of your configuration before enabling routing intent.
-The following are the traffic flows that result from such a configuration.
+#### Azure Firewall Manager and Virtual WAN Hub Portal
+
+When routing intent is enabled on the hub, static routes corresponding to the configured routing policies are created automatically in the defaultRouteTable. These routes are:
+
+| Route Name | Prefixes | Next Hop Resource|
+|--|--|--|
+| _policy_PrivateTraffic | 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12| Azure Firewall |
| _policy_InternetTraffic | 0.0.0.0/0 | Azure Firewall |
> [!NOTE]
-> Internet Traffic must egress through the **local** Azure Firewall as the default route (0.0.0.0/0) does **not** propagate across hubs.
+> Any static routes in the defaultRouteTable containing prefixes that aren't exact matches with 0.0.0.0/0 or the RFC1918 super-nets (10.0.0.0/8, 192.168.0.0/16 and 172.16.0.0/12) are automatically consolidated into a single static route, named **private_traffic**. Prefixes in the defaultRouteTable that match RFC1918 supernets or 0.0.0.0/0 are always automatically removed once routing intent is configured, regardless of the policy type.
-| From | To | Hub 1 VNets | Hub 1 branches | Hub 2 VNets | Hub 2 branches| Internet|
-| -- | -- | - | | | | |
-| Hub 1 VNets | &#8594;| Hub 1 AzFW or NVA| Hub 1 AzFW or NVA | Hub 1 and 2 AzFW or NVA | Hub 1 and 2 AzFW or NVA | Hub 1 AzFW or NVA |
-| Hub 1 Branches | &#8594;| Hub 1 AzFW or NVA| Hub 1 AzFW or NVA | Hub 1 and 2 AzFW or NVA | Hub 1 and 2 AzFW or NVA | Hub 1 AzFW or NVA|
-| Hub 2 VNets | &#8594;| Hub 1 and 2 AzFW or NVA| Hub 1 and 2 AzFW or NVA | Hub 2 AzFW or NVA | Hub 2 AzFW or NVA| Hub 2 AzFW or NVA|
-| Hub 2 Branches | &#8594;| Hub 1 and 2 AzFW or NVA | Hub 1 and 2 AzFW or NVA | Hub 2 AzFW or NVA | Hub 2 AzFW or NVA | Hub 2 AzFW or NVA|
+For example, consider the scenario where the defaultRouteTable has the following routes prior to configuring routing intent:
+| Route Name | Prefixes | Next Hop Resource|
+|--|--|--|
+| private_traffic | 192.168.0.0/16, 172.16.0.0/12, 40.0.0.0/24, 10.0.0.0/24| Azure Firewall |
| to_internet | 0.0.0.0/0 | Azure Firewall |
| additional_private | 10.0.0.0/8, 50.0.0.0/24 | Azure Firewall |
-### Mixture of secured and regular Virtual WAN Hubs
+Enabling routing intent on this hub would result in the following end state of the defaultRouteTable. All prefixes that aren't RFC1918 or 0.0.0.0/0 are consolidated into a single route named private_traffic.
-In this scenario, not all Virtual WAN hubs are deployed with an Azure Firewall or Network Virtual Appliances in them. In this scenario, you may configure an Internet Traffic Routing Policy, a Private Traffic Routing Policy on the secured Virtual WAN Hubs.
+| Route Name | Prefixes | Next Hop Resource|
+|--|--|--|
+| _policy_PrivateTraffic | 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12| Azure Firewall |
| _policy_InternetTraffic | 0.0.0.0/0 | Azure Firewall |
+| private_traffic | 40.0.0.0/24, 10.0.0.0/24, 50.0.0.0/24| Azure Firewall |
-Consider the following configuration where Hub 1 (Normal) and Hub 2 (Secured) are deployed in a Virtual WAN. Hub 2 has Routing Policies for both Private and Internet Traffic.
+#### Other methods (PowerShell, REST, CLI)
-**Hub 1 Configuration:**
-* N/A (cannot configure Routing Policies if hub is not deployed with Azure Firewall or NVA)
+Creating routing intent using non-Portal methods automatically creates the corresponding policy routes in the defaultRouteTable and also removes any prefixes in static routes that are exact matches with 0.0.0.0/0 or RFC1918 supernets (10.0.0.0/8, 192.168.0.0/16 or 172.16.0.0/12). However, other static routes are **not** automatically consolidated.
-**Hub 2 Configuration:**
-* Private Traffic Policy with Next Hop Hub 2 Azure Firewall or NVA
-* Internet Traffic Policy with Next Hop Hub 2 Azure Firewall or NVA
+For example, consider the scenario where the defaultRouteTable has the following routes prior to configuring routing intent:
+| Route Name | Prefixes | Next Hop Resource|
+|--|--|--|
+| firewall_route_1 | 10.0.0.0/8 | Azure Firewall |
+| firewall_route_2 | 192.168.0.0/16, 10.0.0.0/24 | Azure Firewall|
+| firewall_route_3 | 40.0.0.0/24| Azure Firewall|
| to_internet | 0.0.0.0/0 | Azure Firewall |
+The following table represents the final state of the defaultRouteTable after routing intent creation succeeds. Note that firewall_route_1 and to_internet were automatically removed because the only prefixes in those routes were 10.0.0.0/8 and 0.0.0.0/0, and firewall_route_2 was modified to remove 192.168.0.0/16 because that prefix is an RFC1918 aggregate prefix.
+
+| Route Name | Prefixes | Next Hop Resource|
+|--|--|--|
+| _policy_PrivateTraffic | 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12| Azure Firewall |
+| _policy_InternetTraffic| 0.0.0.0/0| Azure Firewall |
+| firewall_route_2 | 10.0.0.0/24 | Azure Firewall|
+| firewall_route_3 | 40.0.0.0/24| Azure Firewall|
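For reference, the following Azure PowerShell sketch shows one way routing intent might be created outside the portal, with an Azure Firewall in the hub as the next hop for both policies. Cmdlet and parameter names (`New-AzRoutingPolicy`, `New-AzRoutingIntent`, `-HubName`) assume a recent Az.Network module, and the resource names are placeholders.

```powershell
# Sketch only: creates Private and Internet routing policies with next hop
# Azure Firewall, then applies them as routing intent on the hub.
$fw = Get-AzFirewall -ResourceGroupName "rg-vwan" -Name "azfw1"

$privatePolicy  = New-AzRoutingPolicy -Name "PrivateTraffic" -Destination @("PrivateTraffic") -NextHop $fw.Id
$internetPolicy = New-AzRoutingPolicy -Name "PublicTraffic" -Destination @("Internet") -NextHop $fw.Id

New-AzRoutingIntent -ResourceGroupName "rg-vwan" -HubName "hub1" `
    -Name "hubRoutingIntent" -RoutingPolicy @($privatePolicy, $internetPolicy)
```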
+## <a name="prefixadvertisments"></a> Prefix advertisement to on-premises
- The following are the traffic flows that result from such a configuration. Branches and Virtual Networks connected to Hub 1 **cannot** access the Internet via Azure Firewall in the Hub because the default route (0.0.0.0/0) does **not** propagate across hubs.
+The following section describes how Virtual WAN advertises routes to on-premises after Routing Intent has been configured on a Virtual Hub.
-| From | To | Hub 1 VNets | Hub 1 branches | Hub 2 VNets | Hub 2 branches| Internet |
-| -- | -- | - | | | | |
-| Hub 1 VNets | &#8594;| Direct | Direct | Hub 2 AzFW or NVA| Hub 2 AzFW or NVA | - |
-| Hub 1 Branches | &#8594;| Direct | Direct | Hub 2 AzFW or NVA | Hub 2 AzFW or NVA | - |
-| Hub 2 VNets | &#8594;| Hub 2 AzFW or NVA| Hub 2 AzFW or NVA | Hub 2 AzFW or NVA| Hub 2 AzFW or NVA| Hub 2 AzFW or NVA|
-| Hub 2 Branches | &#8594;| Hub 2 AzFW or NVA | Hub 2 AzFW | Hub 2 AzFW or NVA| Hub 2 AzFW or NVA | Hub 2 AzFW or NVA|
+### Internet routing policy
+> [!NOTE]
+> The 0.0.0.0/0 default route is **not** advertised across virtual hubs.
-## Troubleshooting
+If you enable Internet routing policies on the Virtual Hub, the 0.0.0.0/0 default route is advertised to all connections to the hub (Virtual Network, ExpressRoute, Site-to-site VPN, Point-to-site VPN, NVA in the hub and BGP connections) where the **Propagate default route** or **Enable internet security** flag is set to true. You may set this flag to false for all connections that shouldn't learn the default route.
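As a hedged example, the following Azure PowerShell sketch turns off **Enable internet security** for a single virtual network connection so that it doesn't learn the default route. The connection and hub names are placeholders, and the exact parameter names may differ by Az.Network version.

```powershell
# Sketch only: opt one spoke connection out of the advertised 0.0.0.0/0 route.
Update-AzVirtualHubVnetConnection -ResourceGroupName "rg-vwan" `
    -VirtualHubName "hub1" -Name "spoke1-connection" -EnableInternetSecurity $false
```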
-The following section describes common issues encountered when you configure Routing Policies on your Virtual WAN Hub. Read the below sections and if your issue is still unresolved, reach out to previewinterhub@microsoft.com for support. Expect a response within 48 business hours (Monday through Friday).
-### Troubleshooting configuration issues
+### Private routing policy
-* Make sure that you have gotten confirmation from previewinterhub@microsoft.com that access to the gated public preview has been granted to your subscription and chosen region. You will **not** be able to configure routing policies without being granted access to the preview.
-* After enabling the Routing Policy feature on your deployment, ensure you **only** use the custom portal link provided as part of your confirmation email. Don't use PowerShell, CLI, or REST API calls to manage your Virtual WAN deployments. This includes creating new Branch (Site-to-site VPN, Point-to-site VPN or ExpressRoute) connections.
+When a Virtual hub is configured with a Private Routing policy, Virtual WAN advertises routes to local on-premises connections in the following manner:
- >[!NOTE]
- > If you are using Terraform, routing policies are currently not supported.
+* Routes corresponding to prefixes learned from local hub's Virtual Networks, ExpressRoute, Site-to-site VPN, Point-to-site VPN, NVA-in-the-hub or BGP connections connected to the current hub.
+* Routes corresponding to prefixes learned from remote hub Virtual Networks, ExpressRoute, Site-to-site VPN, Point-to-site VPN, NVA-in-the-hub or BGP connections where Private Routing policies are configured.
+* Routes corresponding to prefixes learned from remote hub Virtual Networks, ExpressRoute, Site-to-site VPN, Point-to-site VPN, NVA-in-the-hub and BGP connections where Routing Intent isn't configured **and** the remote connections propagate to the defaultRouteTable of the local hub.
+* Prefixes learned from one ExpressRoute circuit aren't advertised to other ExpressRoute circuits unless Global Reach is enabled. If you want to enable ExpressRoute to ExpressRoute transit through a security solution deployed in the hub, open a support case. For more information, see [Enabling connectivity across ExpressRoute circuits](#expressroute).
-* Ensure that your Virtual Hubs don't have any Custom Route Tables or any static routes in the defaultRouteTable. You will **not** be able to select **Enable interhub** from Firewall Manager on your Virtual WAN Hub if there are Custom Route tables configured or if there are static routes in your defaultRouteTable.
+### <a name="expressroute"></a> Transit connectivity between ExpressRoute circuits with routing intent
-### Troubleshooting data path
+Transit connectivity between ExpressRoute circuits within Virtual WAN is provided through ExpressRoute Global Reach capabilities. Traffic between Global Reach enabled ExpressRoute circuits is sent directly between the two circuits and doesn't transit the Virtual Hub.
-* Currently, using Azure Firewall to inspect inter-hub traffic is available for Virtual WAN hubs that are deployed in the **same** Azure Region.
-* Currently, Private Traffic Routing Policies aren't supported in Hubs with Encrypted ExpressRoute connections (Site-to-site VPN Tunnel running over ExpressRoute Private connectivity).
-* You can verify that the Routing Policies have been applied properly by checking the Effective Routes of the DefaultRouteTable. If Private Routing Policies are configured, you should see routes in the DefaultRouteTable for private traffic prefixes with next hop Azure Firewall. If Internet Traffic Routing Policies are configured, you should see a default (0.0.0.0/0) route in the DefaultRouteTable with next hop Azure Firewall.
-* If there are any Site-to-site VPN gateways or Point-to-site VPN gateways created **after** the feature has been confirmed to be enabled on your deployment, you'll have to reach out again to previewinterhub@microsoft.com to get the feature enabled.
-* If you're using Private Routing Policies with ExpressRoute, note that your ExpressRoute circuit can't advertise exact address ranges for the RFC1918 address ranges (you can't advertise 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16). Ensure you're advertising more specific subnets (within RFC1918 ranges) as opposed to aggregate supernets. Additionally, if your ExpressRoute circuit is advertising a non-RFC1918 prefix to Azure, please make sure the address ranges that you put in the Private Traffic Prefixes text box are less specific than ExpressRoute advertised routes. For example, if the ExpressRoute Circuit is advertising 40.0.0.0/24 from on-premises, put a /23 CIDR range or larger in the Private Traffic Prefix text box (example: 40.0.0.0/23).
-* Make sure you don't have both private and internet routing policies configured on a single Virtual WAN hub. Configuring both private and internet routing policies on the same hub is currently unsupported and will cause Point-to-site VPN, ExpressRoute and Site-to-site VPN gateways to go into a failed state and interrupt datapath connectivity to Azure.
+>[!NOTE]
+>However, you may raise a support case with Azure to enable one ExpressRoute circuit to send traffic to another ExpressRoute circuit via a security solution deployed in a hub with routing intent private routing policies configured. This capability doesn't require Global Reach to be enabled on the circuits.
+
+Connectivity across ExpressRoute circuits via a Firewall appliance in the hub is available in the following configurations:
+
+* Both ExpressRoute circuits are connected to the same hub and a private routing policy is configured on that hub.
+* ExpressRoute circuits are connected to different hubs and private routing policies are configured on both hubs. Therefore, both hubs must have a security solution deployed.
+
+#### Routing considerations with ExpressRoute
+
+After transit connectivity across ExpressRoute circuits using a firewall appliance deployed in the Virtual Hub is enabled, you can expect the following changes in behavior in how routes are advertised to ExpressRoute on-premises:
+* Virtual WAN automatically advertises RFC1918 aggregate prefixes (10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12) to the ExpressRoute-connected on-premises. These aggregate routes are advertised in addition to the routes described in the previous section.
+* Virtual WAN automatically advertises all static routes in the defaultRouteTable to ExpressRoute circuit-connected on-premises. This means Virtual WAN advertises the routes specified in the private traffic prefix text box to on-premises.
+
+ Because of these route advertisement changes, ExpressRoute-connected on-premises sites can't advertise exact address ranges for the RFC1918 aggregate address ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16). Ensure you're advertising more specific subnets (within RFC1918 ranges) as opposed to aggregate supernets, as well as any prefixes in the Private Traffic text box.
+
+Additionally, if your ExpressRoute circuit is advertising a non-RFC1918 prefix to Azure, please make sure the address ranges that you put in the Private Traffic Prefixes text box are less specific than ExpressRoute advertised routes. For example, if the ExpressRoute Circuit is advertising 40.0.0.0/24 from on-premises, put a /23 CIDR range or larger in the Private Traffic Prefix text box (example: 40.0.0.0/23).
+
+## <a name="azurefirewall"></a> Configure routing intent and policies through Azure Firewall Manager
+
+The following steps describe how to configure routing intent and routing policies on your Virtual Hub using Azure Firewall Manager. Note that Azure Firewall Manager only supports next hop resources of type Azure Firewall.
+
+1. Navigate to the Virtual WAN Hub that you want to configure Routing Policies on.
+1. Under Security, select **Secured Virtual hub settings** and then **Manage security provider and route settings for this Secured virtual hub in Azure Firewall Manager**.
+1. Select the Hub you want to configure your Routing Policies on from the menu.
+1. Select **Security configuration** under **Settings**
+1. If you want to configure an Internet Traffic Routing Policy, select **Azure Firewall** or the relevant Internet Security provider from the dropdown for **Internet Traffic**. If not, select **None**
+1. If you want to configure a Private Traffic Routing Policy (for branch and Virtual Network traffic) via Azure Firewall, select **Azure Firewall** from the dropdown for **Private Traffic**. If not, select **Bypass Azure Firewall**.
+
+ :::image type="content" source="./media/routing-policies/configure-intents.png" alt-text="Screenshot showing how to configure routing policies." lightbox="./media/routing-policies/configure-intents.png":::
+
+7. If you want to configure a Private Traffic Routing Policy and have branches or virtual networks advertising non-IANA RFC1918 Prefixes, select **Private Traffic Prefixes** and specify the non-IANA RFC1918 prefix ranges in the text box that comes up. Select **Done**.
+
+ :::image type="content" source="./media/routing-policies/private-prefixes.png" alt-text="Screenshot showing how to edit private traffic prefixes." lightbox="./media/routing-policies/private-prefixes.png":::
+
+8. Select **Inter-hub** to be **Enabled**. Enabling this option ensures your Routing Policies are applied to the Routing Intent of this Virtual WAN Hub.
+9. Select **Save**.
+10. Repeat steps 2-8 for other Secured Virtual WAN hubs that you want to configure Routing policies for.
+11. At this point, you're ready to send test traffic. Please make sure your Firewall Policies are configured appropriately to allow/deny traffic based on your desired security configurations.
+
+## <a name="nva"></a> Configure routing intent and policies through Virtual WAN portal
+
+The following steps describe how to configure routing intent and routing policies on your Virtual Hub using Virtual WAN portal.
+
+1. Navigate to the Virtual WAN hub that you want to configure routing policies on.
+1. Under Routing, select **Routing Policies**.
+
+ :::image type="content" source="./media/routing-policies/routing-policies-vwan-ui.png" alt-text="Screenshot showing how to navigate to routing policies." lightbox="./media/routing-policies/routing-policies-vwan-ui.png":::
+
+3. If you want to configure a Private Traffic Routing Policy (for branch and Virtual Network Traffic), select **Azure Firewall**, **Network Virtual Appliance** or **SaaS solutions** under **Private Traffic**. Under **Next Hop Resource**, select the relevant next hop resource.
+
+ :::image type="content" source="./media/routing-policies/routing-policies-private-nva.png" alt-text="Screenshot showing how to configure NVA private routing policies." lightbox="./media/routing-policies/routing-policies-private-nva.png":::
+
+4. If you want to configure a Private Traffic Routing Policy and have branches or virtual networks using non-IANA RFC1918 Prefixes, select **Additional Prefixes** and specify the non-IANA RFC1918 prefix ranges in the text box that comes up. Select **Done**.
+
+ :::image type="content" source="./media/routing-policies/private-prefixes-nva.png" alt-text="Screenshot showing how to configure additional private prefixes for NVA routing policies." lightbox="./media/routing-policies/private-prefixes-nva.png":::
+
+5. If you want to configure an Internet Traffic Routing Policy, select **Azure Firewall**, **Network Virtual Appliance** or **SaaS solution** under **Internet Traffic**. Under **Next Hop Resource**, select the relevant next hop resource.
+
+ :::image type="content" source="./media/routing-policies/public-routing-policy-nva.png" alt-text="Screenshot showing how to configure public routing policies for NVA." lightbox="./media/routing-policies/public-routing-policy-nva.png":::
+
+6. To apply your routing intent and routing policies configuration, click **Save**.
+
+ :::image type="content" source="./media/routing-policies/save-nva.png" alt-text="Screenshot showing how to save routing policies configurations." lightbox="./media/routing-policies/save-nva.png":::
+
+7. Repeat for all hubs you would like to configure routing policies for.
+
+8. At this point, you're ready to send test traffic. Ensure your Firewall Policies are configured appropriately to allow/deny traffic based on your desired security configurations.
-### Troubleshooting Azure Firewall
+## Troubleshooting
+
+The following section describes common ways to troubleshoot when you configure routing intent and policies on your Virtual WAN Hub.
+
+### Effective Routes
+
+When private routing policies are configured on the Virtual Hub, all traffic between on-premises and Virtual Networks is inspected by the Azure Firewall, Network Virtual Appliance or SaaS solution in the Virtual hub.
-* If you're using non [IANA RFC1918](https://datatracker.ietf.org/doc/html/rfc1918) prefixes in your branches/Virtual Networks, make sure you have specified those prefixes in the "Private Prefixes" text box in Firewall Manager.
-* If you have specified non RFC1918 addresses as part of the **Private Traffic Prefixes** text box in Firewall Manager, you may need to configure SNAT policies on your Firewall to disable SNAT for non-RFC1918 private traffic. For more information, reference the following [document](../firewall/snat-private-range.md).
-* Configure and view Azure Firewall logs to help troubleshoot and analyze your network traffic. For more information on how to set-up monitoring for Azure Firewall, reference the following [document](../firewall/firewall-diagnostics.md). An overview of the different types of Firewall logs can be found [here](../firewall/logs-and-metrics.md).
-* For more information on Azure Firewall, review [Azure Firewall Documentation](../firewall/overview.md).
+Therefore, the effective routes of the defaultRouteTable show the RFC1918 aggregate prefixes (10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12) with next hop Azure Firewall or Network Virtual Appliance. This reflects that all traffic between Virtual Networks and branches is routed to Azure Firewall, NVA or SaaS solution in the hub for inspection.
-## Frequently asked questions
+ :::image type="content" source="./media/routing-policies/default-route-table-effective-routes.png" alt-text="Screenshot showing effective routes for defaultRouteTable." lightbox="./media/routing-policies/public-routing-policy-nva.png":::
-### Why can't I edit the defaultRouteTable from the custom portal link provided by previewinterhub@microsoft.com?
+After the Firewall inspects the packet (and the packet is allowed per Firewall rule configuration), Virtual WAN forwards the packet to its final destination. To see which routes Virtual WAN uses to forward inspected packets, view the effective route table of the Firewall or Network Virtual Appliance.
-As part of the gated public preview of Routing Policies, your Virtual WAN hub routing is managed entirely by Firewall Manager. Additionally, the managed preview of Routing Policies is **not** supported alongside Custom Routing. Custom Routing with Routing Policies will be supported at a later date.
+ :::image type="content" source="./media/routing-policies/firewall-nva-effective-routes.png" alt-text="Screenshot showing effective routes for Azure Firewall." lightbox="./media/routing-policies/public-routing-policy-nva.png":::
+
+The Firewall effective route table helps narrow down and isolate issues in your network such as mis-configurations or issues with certain branches and Virtual networks.
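If you prefer to check effective routes outside the portal, the following Azure PowerShell sketch queries the effective routes of the defaultRouteTable. The subscription ID and resource names are placeholders, and the `Get-AzVHubEffectiveRoute` parameters shown assume a recent Az.Network module.

```powershell
# Sketch only: list the effective routes of the hub's defaultRouteTable to
# confirm the RFC1918 aggregates (and any private traffic prefixes) point to the firewall.
$rtId = "/subscriptions/<subscriptionID>/resourceGroups/rg-vwan/providers/Microsoft.Network/virtualHubs/hub1/hubRouteTables/defaultRouteTable"

Get-AzVHubEffectiveRoute -ResourceGroupName "rg-vwan" -HubName "hub1" `
    -ResourceId $rtId -VirtualWanResourceType "RouteTable"
```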
+
+### Troubleshooting configuration issues
+If you're troubleshooting configuration issues, consider the following:
+* Make sure you don't have custom route tables or static routes in the defaultRouteTable with next hop Virtual Network connection.
+ * The option to configure routing intent is greyed out in Azure portal if your deployment doesn't meet the requirements above.
+ * If you're using CLI, PowerShell or REST, the routing intent creation operation fails. Delete the failed routing intent, remove the custom route tables and static routes and then try re-creating.
+ * If you're using Azure Firewall Manager, ensure existing routes in the defaultRouteTable are named private_traffic, internet_traffic or all_traffic. The option to configure routing intent (enable inter-hub) is greyed out if routes are named differently.
+* After configuring routing intent on a hub, ensure any updates to existing connections or new connections to the hub are created with the optional associated and propagated route table fields set to empty. Setting the optional associations and propagations as empty is done automatically for all operations performed through Azure portal.
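For example, the following Azure PowerShell sketch creates a virtual network connection without specifying any associated or propagated route tables, which lets Virtual WAN apply the routing-intent defaults. Resource names are placeholders and parameter aliases may vary by Az.Network version.

```powershell
# Sketch only: create a spoke connection on a routing-intent hub with the
# default (empty) routing configuration.
$vnet = Get-AzVirtualNetwork -ResourceGroupName "rg-spokes" -Name "spoke1-vnet"

New-AzVirtualHubVnetConnection -ResourceGroupName "rg-vwan" -VirtualHubName "hub1" `
    -Name "spoke1-connection" -RemoteVirtualNetwork $vnet
```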
-However, you can still view the Effective Routes of the DefaultRouteTable by navigating to the **Effective Routes** Tab.
+### Troubleshooting data path
-If you have configured private traffic routing policies on your Virtual WAN hub, the Effective Route Table will only contain routes for RFC1918 supernets and any additional address prefixes that were specified in the Additional Private Traffic Prefixes text box.
+Assuming you have already reviewed the [Known Limitations](#knownlimitations) section, here are some ways to troubleshoot datapath and connectivity:
-### Can I configure a Routing Policy for Private Traffic and also send Internet Traffic (0.0.0.0/0) via a Network Virtual Appliance in a Spoke Virtual Network?
+* Troubleshooting with Effective Routes:
+ * **If Private Routing Policies are configured**, you should see routes with next hop Firewall in the effective routes of the defaultRouteTable for RFC1918 aggregates (10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12) as well as any prefixes specified in the private traffic text box. Ensure that all Virtual Network and on-premises prefixes are subnets within the static routes in the defaultRouteTable. If an on-premises or Virtual Network is using an address space that isn't a subnet within the effective routes in the defaultRouteTable, add the prefix into the private traffic textbox.
+ * **If Internet Traffic Routing Policies are configured**, you should see a default (0.0.0.0/0) route in the effective routes of the defaultRouteTable.
+ * Once you have verified that the effective routes of the defaultRouteTable have the correct prefixes, **view the Effective Routes of the Network Virtual Appliance or Azure Firewall**. Effective routes on the Firewall show which routes Virtual WAN has selected and determines which destinations Firewall can forward packets to. Figuring out which prefixes are missing or in an incorrect state helps narrow down data-path issues and point to the right VPN, ExpressRoute, NVA or BGP connection to troubleshoot.
+* Scenario-specific troubleshooting:
+ * **If you have a nonsecured hub (hub without Azure Firewall or NVA) in your Virtual WAN**, make sure connections to the nonsecured hub are propagating to the defaultRouteTable of the hubs with routing intent configured. If propagations aren't set to the defaultRouteTable, connections to the secured hub won't be able to send packets to the nonsecured hub.
+ * **If you have Internet Routing Policies configured**, make sure the 'Propagate Default Route' or 'Enable Internet Security' setting is set to 'true' for all connections that should learn the 0.0.0.0/0 default route. Connections where this setting is set to 'false' won't learn the 0.0.0.0/0 route, even if Internet Routing Policies are configured.
+ * **If you're using Private Endpoints deployed in Virtual Networks connected to the Virtual Hub**, traffic destined for Private Endpoints deployed in Virtual Networks connected to the Virtual WAN hub by default bypasses the routing intent next hop Azure Firewall and NVA. To ensure Private Endpoint traffic is inspected by Azure Firewall or NVA, make sure you enable [User-Defined Routing network policies](../private-link/disable-private-endpoint-network-policy.md) on the subnets where Private Endpoints are deployed. Alternatively, you may put a /32 route corresponding to all Private Endpoint private IP addresses in the Private Traffic text box.
-This scenario isn't supported in the gated public preview. However, reach out to previewinterhub@microsoft.com to express interest in implementing this scenario.
+### Troubleshooting Azure Firewall routing issues
-### Does the default route (0.0.0.0/0) propagate across hubs?
+* Make sure the provisioning state of the Azure Firewall is **succeeded** before trying to configure routing intent.
+* If you're using non-[IANA RFC1918](https://datatracker.ietf.org/doc/html/rfc1918) prefixes in your branches/Virtual Networks, make sure you have specified those prefixes in the "Private Prefixes" text box.
+* If you have specified non-RFC1918 addresses in the **Private Traffic Prefixes** text box in Firewall Manager, you may need to configure SNAT policies on your Firewall to disable SNAT for non-RFC1918 private traffic. For more information, reference [Azure Firewall SNAT ranges](../firewall/snat-private-range.md).
+* Configure and view Azure Firewall logs to help troubleshoot and analyze your network traffic (a minimal log-query sketch follows this list). For more information on how to set up monitoring for Azure Firewall, reference [Azure Firewall diagnostics](../firewall/firewall-diagnostics.md). For an overview of the different types of Firewall logs, see [Azure Firewall logs and metrics](../firewall/logs-and-metrics.md).
+* For more information on Azure Firewall, review [Azure Firewall Documentation](../firewall/overview.md).
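As a companion to the logging bullet above, the following is a minimal sketch for pulling recent Azure Firewall network-rule log entries from a Log Analytics workspace. It assumes the firewall's diagnostic settings already send logs to the workspace, that the legacy `AzureDiagnostics` schema is in use (resource-specific tables may apply in your setup), and that the workspace ID placeholder is replaced.

```python
# Minimal sketch: query Azure Firewall network-rule logs from a Log Analytics workspace
# to check whether private traffic is being allowed or denied by firewall rules.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

workspace_id = "<log-analytics-workspace-id>"  # placeholder

client = LogsQueryClient(DefaultAzureCredential())

# Legacy AzureDiagnostics schema assumed; with resource-specific diagnostics enabled,
# a table such as AZFWNetworkRule would be queried instead.
query = """
AzureDiagnostics
| where Category == "AzureFirewallNetworkRule"
| project TimeGenerated, msg_s
| take 50
"""

response = client.query_workspace(workspace_id, query, timespan=timedelta(hours=1))

# A partial result exposes its tables under a different attribute, so fall back gracefully.
tables = getattr(response, "tables", None) or getattr(response, "partial_data", [])
for table in tables:
    for row in table.rows:
        print(row)
```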
-No. Currently, branches and Virtual Networks will egress to the internet using an Azure Firewall deployed inside of the Virtual WAN hub the branches and Virtual Networks are connected to. You can't configure a connection to access the Internet via the Firewall in a remote hub.
+### Troubleshooting Network Virtual Appliances
-### Why do I see RFC1918 aggregate prefixes advertised to my on-premises devices?
+* Make sure the provisioning state of the Network Virtual Appliance is **succeeded** before trying to configure routing intent (a minimal sketch for checking this programmatically follows this list).
+* If you're using non-[IANA RFC1918](https://datatracker.ietf.org/doc/html/rfc1918) prefixes in your connected on-premises sites or Virtual Networks, make sure you have specified those prefixes in the "Private Prefixes" text box.
+* If you have specified non-RFC1918 addresses in the **Private Traffic Prefixes** text box, you may need to configure SNAT policies on your NVA to disable SNAT for certain non-RFC1918 private traffic.
+* Check NVA Firewall logs to see if traffic is being dropped or denied by your Firewall rules.
+* Reach out to your NVA provider for more support and guidance on troubleshooting.
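A minimal sketch of the provisioning-state check from the first bullet above, assuming the `azure-mgmt-network` package and placeholder resource names:

```python
# Minimal sketch: confirm that an NVA deployed in the Virtual WAN hub reports a
# provisioning state of "Succeeded" before configuring routing intent.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"  # placeholder
resource_group = "<resource-group>"    # placeholder
nva_name = "<nva-name>"                # placeholder

client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

nva = client.network_virtual_appliances.get(resource_group, nva_name)
print(nva.name, nva.provisioning_state)  # expect "Succeeded" before configuring routing intent
```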
-When Private Traffic Routing Policies are configured, Virtual WAN Gateways will automatically advertise static routes that are in the default route table (RFC1918 prefixes: 10.0.0.0/8,172.16.0.0/12,192.168.0.0/16) in addition to the explicit branch and Virtual Network prefixes.
+### Troubleshooting software-as-a-service
-### Why are my Gateways (Site-to-site VPN, Point-to-site VPN, ExpressRoute) in a failed state?
+* Make sure the SaaS solution's provisioning state is **succeeded** before trying to configure routing intent.
+* For more troubleshooting tips, see the troubleshooting section in [Virtual WAN documentation](how-to-palo-alto-cloud-ngfw.md) or see [Palo Alto Networks Cloud NGFW documentation](https://docs.paloaltonetworks.com/cloud-ngfw/azure/cloud-ngfw-for-azure).
-There is currently a limitation where if Internet and private routing policies are configured concurrently on the same hub, Gateways go into a failed state, meaning your branches can't communicate with Azure. For more information on when this limitation will be lifted, contact previewinterhub@microsoft.com.
## Next steps
For more information about virtual hub routing, see [About virtual hub routing](about-virtual-hub-routing.md).
virtual-wan Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/whats-new.md
You can also find the latest Azure Virtual WAN updates and subscribe to the RSS
| Type |Area |Name |Description | Date added | Limitations | | ||||||
+| Feature| Routing | [Routing intent](how-to-routing-policies.md)| Routing intent is the mechanism through which you can configure Virtual WAN to send private or internet traffic via a security solution deployed in the hub.|May 2023|Support for inter-region is currently rolling out. Routing Intent is Generally Available in Azure public cloud. See documentation for [additional limitations](how-to-routing-policies.md#knownlimitations).|
|Feature| Routing |[Virtual hub routing preference](about-virtual-hub-routing-preference.md)|Hub routing preference gives you more control over your infrastructure by allowing you to select how your traffic is routed when a virtual hub router learns multiple routes across S2S VPN, ER and SD-WAN NVA connections. |October 2022| | |Feature| Routing|[Bypass next hop IP for workloads within a spoke VNet connected to the virtual WAN hub generally available](how-to-virtual-hub-routing.md)|Bypassing next hop IP for workloads within a spoke VNet connected to the virtual WAN hub lets you deploy and access other resources in the VNet with your NVA without any additional configuration.|October 2022| | |SKU/Feature/Validation | Routing | [BGP end point (General availability)](scenario-bgp-peering-hub.md) | The virtual hub router now exposes the ability to peer with it, thereby exchanging routing information directly through Border Gateway Protocol (BGP) routing protocol. | June 2022 | |
You can also find the latest Azure Virtual WAN updates and subscribe to the RSS
| Type |Area |Name |Description | Date added | Limitations | | ||||||
+|Feature|Network Virtual Appliances (NVAs)/Integrated Third-party solutions in Virtual WAN hubs|[Fortinet NGFW](https://www.fortinet.com/products/next-generation-firewall)|General Availability of [Fortinet NGFW](https://aka.ms/fortinetngfwdocumentation) and [Fortinet SD-WAN/NGFW dual-role](https://aka.ms/fortinetdualroledocumentation) NVAs.|May 2023| Same limitations as routing intent. Doesn't support internet inbound scenario.|
+|Feature|Network Virtual Appliances (NVAs)/Integrated Third-party solutions in Virtual WAN hubs|[Check Point CloudGuard Network Security for Azure Virtual WAN](https://www.checkpoint.com/cloudguard/microsoft-azure-security/wan/) |General Availability of [Check Point CloudGuard Network Security NVA deployable from Azure Marketplace](https://sc1.checkpoint.com/documents/IaaS/WebAdminGuides/EN/CP_CloudGuard_Network_for_Azure_vWAN_AdminGuide/Content/Topics-Azure-vWAN/Introduction.htm) within the Virtual WAN hub in all Azure regions.|May 2023|Same limitations as routing intent. Doesn't support internet inbound scenario.|
|Feature|Software-as-a-service|Palo Alto Networks Cloud NGFW|Public preview of [Palo Alto Networks Cloud NGFW](https://aka.ms/pancloudngfwdocs), the first software-as-a-service security offering deployable within the Virtual WAN hub.|May 2023|Palo Alto Networks Cloud NGFW is only deployable in newly created Virtual WAN hubs in some Azure regions. See [Limitations of Palo Alto Networks Cloud NGFW](how-to-palo-alto-cloud-ngfw.md) for a full list of limitations.| | Feature| Network Virtual Appliances (NVAs)/Integrated Third-party solutions in Virtual WAN hubs| [Fortinet SD-WAN](https://docs.fortinet.com/document/fortigate-public-cloud/7.2.2/azure-vwan-sd-wan-deployment-guide/12818/deployment-overview)| General availability of Fortinet SD-WAN solution in Virtual WAN. Next-Generation Firewall use cases in preview.| October 2022| SD-WAN solution generally available. Next Generation Firewall use cases in preview.| |Feature |Network Virtual Appliances (NVAs)/Integrated Third-party solutions in Virtual WAN hubs| [Versa SD-WAN](about-nva-hub.md#partners)|Preview of Versa SD-WAN.|November 2021| |
The following features are currently in gated public preview. After working with
| Managed preview | Route-maps | This feature allows you to perform route aggregation, route filtering, and modify BGP attributes for your routes in Virtual WAN. | preview-route-maps@microsoft.com | Known limitations are displayed here: [About Route-maps (preview)](route-maps-about.md#key-considerations). |Managed preview|Configure user groups and IP address pools for P2S User VPNs| This feature allows you to configure P2S User VPNs to assign users IP addresses from specific address pools based on their identity or authentication credentials by creating **User Groups**.|| Known limitations are displayed here: [Configure User Groups and IP address pools for P2S User VPNs (preview)](user-groups-create.md).| |Managed preview|Aruba EdgeConnect SD-WAN| Deployment of Aruba EdgeConnect SD-WAN NVA into the Virtual WAN hub| preview-vwan-aruba@microsoft.com| |
-|Managed preview|Routing intent and policies enabling inter-hub security|This feature allows you to configure internet-bound, private, or inter-hub traffic flow through the Azure Firewall. For more information, see [Routing intent and policies](how-to-routing-policies.md).|For access to the preview, contact previewinterhub@microsoft.com|Not compatible with NVA in a spoke, but compatible with BGP peering.<br><br>For additional limitations, see [How to configure Virtual WAN hub routing intent and routing policies](how-to-routing-policies.md#key-considerations).|
-|Managed preview|Checkpoint NGFW|Deployment of Checkpoint NGFW NVA into the Virtual WAN hub|DL-vwan-support-preview@checkpoint.com, previewinterhub@microsoft.com|Same limitations as routing intent.<br><br>Doesn't support internet inbound scenario.|
-|Managed preview|Fortinet NGFW/SD-WAN|Deployment of Fortinet dual-role SD-WAN/NGFW NVA into the Virtual WAN hub|azurevwan@fortinet.com, previewinterhub@microsoft.com|Same limitations as routing intent.<br><br>Doesn't support internet inbound scenario.|
+|Managed preview|Checkpoint NGFW|Deployment of Checkpoint NGFW NVA into the Virtual WAN hub|DL-vwan-support-preview@checkpoint.com, previewinterhub@microsoft.com|Same limitations as routing intent. Doesn't support internet inbound scenario.|
+|Managed preview|Fortinet NGFW/SD-WAN|Deployment of Fortinet dual-role SD-WAN/NGFW NVA into the Virtual WAN hub|azurevwan@fortinet.com, previewinterhub@microsoft.com|Same limitations as routing intent. Doesn't support internet inbound scenario.|
|Public preview/Self serve|Virtual hub routing preference|This feature allows you to influence routing decisions for the virtual hub router. For more information, see [Virtual hub routing preference](about-virtual-hub-routing-preference.md).|For questions or feedback, contact preview-vwan-hrp@microsoft.com|If a route-prefix is reachable via ER or VPN connections, and via virtual hub SD-WAN NVA, then the latter route is ignored by the route-selection algorithm. Therefore, flows to prefixes reachable only via the virtual hub SD-WAN NVA will take the route through the NVA. This is a limitation during the preview phase of the hub routing preference feature.| |Public preview/Self serve|Hub-to-hub traffic flows instead of an ER circuit connected to different hubs (Hub-to-hub over ER)|This feature allows traffic between 2 hubs to traverse through the Azure Virtual WAN router in each hub and use a hub-to-hub path, instead of the ExpressRoute path (which traverses through Microsoft's edge routers/MSEE). For more information, see the [Hub-to-hub over ER](virtual-wan-faq.md#expressroute-bow-tie) preview link.|For questions or feedback, contact preview-vwan-hrp@microsoft.com|
The following features are currently in gated public preview. After working with
|#|Issue|Description |Date first reported|Mitigation| ||||||
-|1|Virtual hub upgrade to VMSS-based infrastructure: Compatibility with NVA in a hub.|For deployments with an NVA provisioned in the hub, the virtual hub router can't be upgraded to Virtual Machine Scale Sets.| July 2022|The Virtual WAN team is working on a fix that will allow Virtual hub routers to be upgraded to Virtual Machine Scale Sets, even if an NVA is provisioned in the hub. After upgrading, users will have to re-peer the NVA with the hub router's new IP addresses (instead of having to delete the NVA).|
+|1|Virtual hub upgrade to VMSS-based infrastructure: Compatibility with NVA in a hub.|For deployments with an NVA provisioned in the hub, the virtual hub router can't be upgraded to Virtual Machine Scale Sets.| July 2022|The Virtual WAN team is working on a fix that will allow Virtual hub routers to be upgraded to Virtual Machine Scale Sets, even if an NVA is provisioned in the hub. After you upgrade the hub router, you will have to re-peer the NVA with the hub router's new IP addresses (instead of having to delete the NVA).|
|2|Virtual hub upgrade to VMSS-based infrastructure: Compatibility with NVA in a spoke VNet.|For deployments with an NVA provisioned in a spoke VNet, the customer will have to delete and recreate the BGP peering with the spoke NVA.|March 2022|The Virtual WAN team is working on a fix to remove the need for users to delete and recreate the BGP peering with a spoke NVA after upgrading.| |3|Virtual hub upgrade to VMSS-based infrastructure: Compatibility with spoke VNets in different regions |If your Virtual WAN hub is connected to a combination of spoke virtual networks in the same region as the hub and a separate region from the hub, then you may experience a lack of connectivity to these respective spoke virtual networks after upgrading your hub router to VMSS-based infrastructure.|March 2023|To resolve this and restore connectivity to these virtual networks, you can modify any of the virtual network connection properties (for example, modify the connection to propagate to a dummy label; a sketch follows this table). We are actively working on removing this requirement. | |4|Virtual hub upgrade to VMSS-based infrastructure: Compatibility with more than 100 spoke VNets |If your Virtual WAN hub is connected to more than 100 spoke VNets, then the upgrade may time out, causing your virtual hub to remain on Cloud Services-based infrastructure.|March 2023|The Virtual WAN team is working on a fix to support upgrades when there are more than 100 spoke VNets connected.|
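For known issue 3, the mitigation suggests modifying any virtual network connection property, such as propagating to a dummy label. The following is a minimal, assumption-laden sketch (placeholder names, the `azure-mgmt-network` SDK, and a connection that already has a routing configuration), not the documented mitigation procedure:

```python
# Minimal sketch: nudge a hub virtual network connection by adding a dummy propagation
# label, which triggers an update of the connection and can help restore connectivity
# after a hub router upgrade. All names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"        # placeholder
resource_group = "<resource-group>"          # placeholder
hub_name = "<virtual-hub-name>"              # placeholder
connection_name = "<vnet-connection-name>"   # placeholder

client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Read the current connection, append a dummy propagation label, and write it back.
# Assumes the connection already carries a routing configuration with propagated route tables.
conn = client.hub_virtual_network_connections.get(resource_group, hub_name, connection_name)
labels = list(conn.routing_configuration.propagated_route_tables.labels or [])
if "dummy" not in labels:
    labels.append("dummy")
conn.routing_configuration.propagated_route_tables.labels = labels

client.hub_virtual_network_connections.begin_create_or_update(
    resource_group, hub_name, connection_name, conn
).result()
```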