Updates from: 09/26/2022 01:06:35
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-aws.md
This article describes how to onboard an Amazon Web Services (AWS) account on Pe
> [!NOTE] > A *global administrator* or *super admin* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable Permissions Management on your Azure Active Directory tenant](onboard-enable-tenant.md).
+## Explanation
+
+There are several moving parts across AWS and Azure that must be configured before onboarding.
+
+* An Azure AD OIDC App
+* An AWS OIDC account
+* An (optional) AWS Master account
+* An (optional) AWS Central logging account
+* An AWS OIDC role
+* An AWS Cross Account role assumed by OIDC role
+
+
+<!-- diagram from gargi -->
+ ## Onboard an AWS account 1. If the **Data Collectors** dashboard isn't displayed when Permissions Management launches:
This article describes how to onboard an Amazon Web Services (AWS) account on Pe
Select the **Enable AWS SSO** checkbox if the AWS account access is configured through AWS SSO.
-Choose from 3 options to manage AWS accounts.
+Choose from three options to manage AWS accounts.
#### Option 1: Automatically manage
-Choose this option to automatically detect and add to monitored account list, without additional configuration. Steps to detect list of accounts and onboard for collection:
+Choose this option to automatically detect and add to the monitored account list, without extra configuration. Steps to detect the list of accounts and onboard them for collection:
- Deploy Master account CFT (Cloudformation template) which creates organization account role that grants permission to OIDC role created earlier to list accounts, OUs and SCPs. - If AWS SSO is enabled, organization account CFT also adds policy needed to collect AWS SSO configuration details. -- Deploy Member account CFT in all the accounts that need to be monitored by Entra Permissions Management. This creates a cross account role that trusts the OIDC role created earlier. The SecurityAudit policy is attached to the role created for data collection.
+- Deploy Member account CFT in all the accounts that need to be monitored by Entra Permissions Management. These actions create a cross account role that trusts the OIDC role created earlier. The SecurityAudit policy is attached to the role created for data collection.
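A hedged sketch of deploying one of those CloudFormation templates with the AWS CLI is shown below; the stack name, template file name, and region are placeholders, and the actual template comes from the Permissions Management onboarding screen.

```bash
# Hypothetical sketch: deploy the member-account CFT downloaded from the
# Permissions Management onboarding screen. Stack name, template file name,
# and region are placeholders -- substitute your own values.
aws cloudformation create-stack \
  --stack-name mciem-member-account-role \
  --template-body file://member-account-cft.yaml \
  --capabilities CAPABILITY_NAMED_IAM \
  --region us-east-1
```

The `CAPABILITY_NAMED_IAM` flag is required because the template creates an IAM role.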
Any current or future accounts found get onboarded automatically.
This option detects all AWS accounts that are accessible through OIDC role acces
- Deploy Master account CFT (Cloudformation template) which creates organization account role that grants permission to OIDC role created earlier to list accounts, OUs and SCPs. - If AWS SSO is enabled, organization account CFT also adds policy needed to collect AWS SSO configuration details. -- Deploy Member account CFT in all the accounts that need to be monitored by Entra Permissions Management. This creates a cross account role that trusts the OIDC role created earlier. The SecurityAudit policy is attached to the role created for data collection.
+- Deploy Member account CFT in all the accounts that need to be monitored by Entra Permissions Management. These actions create a cross account role that trusts the OIDC role created earlier. The SecurityAudit policy is attached to the role created for data collection.
- Click Verify and Save. - Navigate to the newly created Data Collector row under AWS data collectors. - Click on the Status column when the row has “Pending” status
active-directory Onboard Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-azure.md
This article describes how to onboard a Microsoft Azure subscription or subscrip
> [!NOTE] > A *global administrator* or *super admin* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable Permissions Management on your Azure Active Directory tenant](onboard-enable-tenant.md).
+## Explanation
+
+The Permissions Management service is built on Azure. Because you're onboarding your Azure subscriptions to be monitored and managed, setup is simple, with few moving parts to configure. The following is required to configure onboarding:
+
+* When your tenant is onboarded, an application is created in the tenant.
+* This app requires 'Reader' permissions on the subscriptions.
+* For controller functionality, the app requires 'User Access Administrator' to create and implement right-sized roles.
+ ## Prerequisites To add Permissions Management to your Azure AD tenant:
To add Permissions Management to your Azure AD tenant:
### 1. Add Azure subscription details
-Choose from 3 options to manage Azure subscriptions.
+Choose from three options to manage Azure subscriptions.
#### Option 1: Automatically manage
-This option allows subscriptions to be automatically detected and monitored without extra configuration.A key benefit of automatic management is that any current or future subscriptions found get onboarded automatically. Steps to detect list of subscriptions and onboard for collection:
+This option allows subscriptions to be automatically detected and monitored without further work required. A key benefit of automatic management is that any current or future subscriptions found will be onboarded automatically. The steps to detect a list of subscriptions and onboard for collection are as follows:
-- Firstly, grant Reader role to Cloud Infrastructure Entitlement Management application at management group or subscription scope.
+- First, grant the Reader role to the Cloud Infrastructure Entitlement Management application at the management group or subscription scope. To do this:
1. In the EPM portal, select the cog on the top right-hand side. 1. Navigate to the data collectors tab
This option allows subscriptions to be automatically detected and monitored with
1. Click ‘Create Configuration’ 1. For onboarding mode, select ‘Automatically Manage’
-The steps listed on the screen outline how to create the role assignment for the Cloud Infrastructure Entitlements Management application. This can be performed manually in the Entra console, or programatically with PowerShell or the Azure CLI.
+ > [!NOTE]
+ > The steps listed on the screen outline how to create the role assignment for the Cloud Infrastructure Entitlements Management application. This can be performed manually in the Entra console, or programmatically with PowerShell or the Azure CLI, as in the sketch that follows.
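As a minimal sketch of the programmatic route (the service principal object ID and subscription ID are placeholders you'd replace with the values shown on the onboarding screen):

```azurecli
# Hypothetical sketch: grant the Reader role to the Cloud Infrastructure
# Entitlement Management service principal at subscription scope.
# Replace the object ID and subscription ID with your own values.
az role assignment create \
  --assignee "<service-principal-object-id>" \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>"
```

To assign at management group scope instead, use a scope of the form `/providers/Microsoft.Management/managementGroups/<management-group-id>`.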
-Lastly, Click ‘Verify Now & Save’
+- Once complete, click ‘Verify Now & Save’.
To view status of onboarding after saving the configuration:
To view status of onboarding after saving the configuration:
You can specify only certain subscriptions to manage and monitor with MEPM (up to 10 per collector). Follow the steps below to configure these subscriptions to be monitored:
-1. For each subscription you wish to manage, ensure that the ‘Reader’ role has been granted to Cloud Infrastructure Entitlement Management application for this subscription.
+1. For each subscription you wish to manage, ensure that the ‘Reader’ role has been granted to the Cloud Infrastructure Entitlement Management application for the subscription.
1. In the EPM portal, click the cog on the top right-hand side. 1. Navigate to data collectors tab 1. Ensure 'Azure' is selected 1. Click ‘Create Configuration’ 1. Select ‘Enter Authorization Systems’
-1. Under the Subscription IDs section, enter a desired subscription ID into the input box. Click the “+” up to 9 additional times, putting a single subscription ID into each respective input box.
+1. Under the Subscription IDs section, enter a desired subscription ID into the input box. Click the “+” up to nine extra times, putting a single subscription ID into each respective input box.
1. Once you have input all of the desired subscriptions, click next 1. Click ‘Verify Now & Save’ 1. Once the access to read and collect data is verified, collection will begin.
This option detects all subscriptions that are accessible by the Cloud Infrastru
1. Click ‘Create Configuration’ 1. For onboarding mode, select ‘Automatically Manage’
-The steps listed on the screen outline how to create the role assignment for the Cloud Infrastructure Entitlements Management application. You can do this manually in the Entra console, or programatically with PowerShell or the Azure CLI.
+ > [!NOTE]
+ > The steps listed on the screen outline how to create the role assignment for the Cloud Infrastructure Entitlements Management application. You can do this manually in the Entra console, or programmatically with PowerShell or the Azure CLI.
-Lastly, Click ‘Verify Now & Save’
+- Once complete, click ‘Verify Now & Save’.
To view status of onboarding after saving the configuration:
active-directory Onboard Enable Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-tenant.md
This article describes how to enable Permissions Management in your organization
> [!NOTE] > To complete this task, you must have *global administrator* permissions as a user in that tenant. You can't enable Permissions Management as a user from another tenant who has signed in via B2B or via Azure Lighthouse. + ## Prerequisites To enable Permissions Management in your organization:
active-directory Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-gcp.md
This article describes how to onboard a Google Cloud Platform (GCP) project on P
> [!NOTE] > A *global administrator* or *super admin* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable Permissions Management on your Azure Active Directory tenant](onboard-enable-tenant.md).
+## Explanation
+
+For GCP, permissions management is scoped to a *GCP project*. A GCP project is a logical collection of your resources in GCP, similar to a subscription in Azure, though with additional configurations you can perform, such as application registrations and OIDC configurations.
+
+<!-- Diagram from Gargi-->
+
+There are several moving parts across GCP and Azure that must be configured before onboarding.
+
+* An Azure AD OIDC App
+* A Workload Identity in GCP
+* Use of OAuth2 confidential client grants
+* A GCP service account with permissions to collect
++ ## Onboard a GCP project 1. If the **Data Collectors** dashboard isn't displayed when Permissions Management launches:
This article describes how to onboard a Google Cloud Platform (GCP) project on P
> [!NOTE] > 1. To confirm that the app was created, open **App registrations** in Azure and, on the **All applications** tab, locate your app. > 1. Select the app name to open the **Expose an API** page. The **Application ID URI** displayed in the **Overview** page is the *audience value* used while making an OIDC connection with your GCP account.-
- 1. Return to Permissions Management, and in the **Permissions Management Onboarding - Azure AD OIDC App Creation**, select **Next**.
+ > 1. Return to the Permissions Management window, and in the **Permissions Management Onboarding - Azure AD OIDC App Creation**, select **Next**.
### 2. Set up a GCP OIDC project.
Choose from 3 options to manage GCP projects.
#### Option 1: Automatically manage
-This option allows projects to be automatically detected and monitored without additional configuration. Steps to detect list of projects and onboard for collection:
+The automatically manage option allows projects to be automatically detected and monitored without extra configuration. Steps to detect the list of projects and onboard them for collection:
First, grant the Viewer and Security Reviewer roles to the service account created in the previous step at the organization, folder, or project scope.
-Once done, the steps are listed in the screen to do this manually in the GPC console, or programatically with the gcloud CLI.
+Once done, the steps listed on the screen show how to complete this configuration manually in the GCP console, or programmatically with the gcloud CLI, as in the sketch below.
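A rough gcloud sketch of those grants at project scope follows; the project ID and service account email are placeholders.

```bash
# Hypothetical sketch: grant Viewer and Security Reviewer to the collector
# service account at project scope. Project ID and service account email
# are placeholders.
gcloud projects add-iam-policy-binding my-gcp-project \
  --member="serviceAccount:mciem-collector@my-gcp-project.iam.gserviceaccount.com" \
  --role="roles/viewer"

gcloud projects add-iam-policy-binding my-gcp-project \
  --member="serviceAccount:mciem-collector@my-gcp-project.iam.gserviceaccount.com" \
  --role="roles/iam.securityReviewer"
```

For organization or folder scope, the equivalent commands are `gcloud organizations add-iam-policy-binding` and `gcloud resource-manager folders add-iam-policy-binding`.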
-Once this has been configured, click next, then 'Verify Now & Save'.
+Once everything has been configured, click next, then 'Verify Now & Save'.
Any current or future projects found get onboarded automatically.
To view status of onboarding after saving the configuration:
This option detects all projects that are accessible by the Cloud Infrastructure Entitlement Management application. - Firstly, grant Viewer and Security Reviewer role to service account created in previous step at organization, folder or project scope-- Once done, the steps are listed in the screen to do this manually in the GPC console, or programatically with the gcloud CLI
+- Once done, the steps listed on the screen show how to configure this manually in the GCP console, or programmatically with the gcloud CLI
- Click Next - Click 'Verify Now & Save' - Navigate to the newly created Data Collector row under GCP data collectors
This option detects all projects that are accessible by the Cloud Infrastructure
The **Welcome to Permissions Management GCP onboarding** screen appears, displaying steps you must complete to onboard your GCP project.
-### 5. Paste the environment vars from the Permissions Management portal.
+### 5. Paste the environment variables from the Permissions Management portal.
1. Return to Permissions Management and select **Copy export variables**. 1. In the GCP Onboarding shell editor, paste the variables you copied, and then press **Enter**.
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/overview.md
## Overview
-Permissions Management is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities. For example, over-privileged workload and user identities, actions, and resources across multi-cloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP).
+Permissions Management is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities. For example, over-privileged workload and user identities, actions, and resources across multicloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP).
Permissions Management detects, automatically right-sizes, and continuously monitors unused and excessive permissions. Organizations have to consider permissions management as a central piece of their Zero Trust security to implement least privilege access across their entire infrastructure: -- Organizations are increasingly adopting multi-cloud strategy and are struggling with the lack of visibility and the increasing complexity of managing access permissions.
+- Organizations are increasingly adopting a multicloud strategy and are struggling with the lack of visibility and the increasing complexity of managing access permissions.
- With the proliferation of identities and cloud services, the number of high-risk cloud permissions is exploding, expanding the attack surface for organizations. - IT security teams are under increased pressure to ensure access to their expanding cloud estate is secure and compliant. - The inconsistency of cloud providers' native access management models makes it even more complex for Security and Identity to manage permissions and enforce least privilege access policies across their entire environment.
Organizations have to consider permissions management as a central piece of thei
Permissions Management allows customers to address three key use cases: *discover*, *remediate*, and *monitor*.
-Permissions Management has been designed in such a way that we recommended your organization sequentially 'step-through' each of the below phases in order to gain insights into permissions across the organization. This is because you generally cannot action what is yet to be discovered, likewise you cannot continually evaluate what is yet to be remediated.
+Permissions Management has been designed in such a way that we recommend you step through each of the following phases in order to gain insights into permissions across the organization. This is because you generally can't action what is yet to be discovered; likewise, you can't continually evaluate what is yet to be remediated.
### Discover
Permissions Management deepens Zero Trust security strategies by augmenting the
- Automate least privilege access: Use access analytics to ensure identities have the right permissions, at the right time. - Unify access policies across infrastructure as a service (IaaS) platforms: Implement consistent security policies across your cloud infrastructure.
-Once your organization has explored and implemented the discover, remediation and monitor phases, you have established one of the core pillars of a modern zero-trust security strategy.
+Once your organization has explored and implemented the discover, remediation and monitor phases, you've established one of the core pillars of a modern zero-trust security strategy.
## Next steps
active-directory Ui Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-dashboard.md
Permissions Management provides a summary of key statistics and data about your authorization system regularly. This information is available for Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). + ## View metrics related to avoidable risk The data provided by Permissions Management includes metrics related to avoidable risk. These metrics allow the Permissions Management administrator to identify areas where they can reduce risks related to the principle of least permissions.
The Permissions Management **Dashboard** displays the following information:
## The PCI heat map + The **Permission Creep Index** heat map shows the incurred risk of users with access to high-risk permissions, and provides information about: - Users who were given access to high-risk permissions but aren't actively using them. *High-risk permissions* include the ability to modify or delete information in the authorization system.
active-directory Howto Vm Sign In Azure Ad Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md
To improve the security of Linux virtual machines (VMs) in Azure, you can integr
This article shows you how to create and configure a Linux VM and log in with Azure AD by using OpenSSH certificate-based authentication. > [!IMPORTANT]
-> This capability is now generally available. The previous version that made use of device code flow was [deprecated on August 15, 2021](../../virtual-machines/linux/login-using-aad.md). To migrate from the old version to this version, see the section [Migrate from the previous (preview) version](#migrate-from-the-previous-preview-version).
+> This capability is now generally available. The previous version that made use of device code flow was [deprecated on August 15, 2021](/azure-docs-archive-pr/virtual-machines/linux/login-using-aad). To migrate from the old version to this version, see the section [Migrate from the previous (preview) version](#migrate-from-the-previous-preview-version).
There are many security benefits of using Azure AD with OpenSSH certificate-based authentication to log in to Linux VMs in Azure. They include:
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
To configure role assignments for your Azure AD-enabled Windows Server 2019 Data
The following example uses [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) to assign the Virtual Machine Administrator Login role to the VM for your current Azure user. You obtain the username of your current Azure account by using [az account show](/cli/azure/account#az-account-show), and you set the scope to the VM created in a previous step by using [az vm show](/cli/azure/vm#az-vm-show).
-You can also assign the scope at a resource group or subscription level. Normal Azure RBAC inheritance permissions apply. For more information, see [Log in to a Linux virtual machine in Azure by using Azure Active Directory authentication](../../virtual-machines/linux/login-using-aad.md).
+You can also assign the scope at a resource group or subscription level. Normal Azure RBAC inheritance permissions apply. For more information, see [Log in to a Linux virtual machine in Azure by using Azure Active Directory authentication](/azure-docs-archive-pr/virtual-machines/linux/login-using-aad).
```AzureCLI $username=$(az account show --query user.name --output tsv)
active-directory Secure With Azure Ad Resource Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-resource-management.md
When a requirement exists to deploy IaaS workloads to Azure that require identit
![Diagram that shows Azure AD authentication to Azure VMs.](media/secure-with-azure-ad-resource-management/sign-into-vm.png)
-**Supported operating systems**: Signing into virtual machines in Azure using Azure AD authentication is currently supported in Windows and Linux. For more specifics on supported operating systems, refer to the documentation for [Windows](../devices/howto-vm-sign-in-azure-ad-windows.md) and [Linux](../../virtual-machines/linux/login-using-aad.md).
+**Supported operating systems**: Signing into virtual machines in Azure using Azure AD authentication is currently supported in Windows and Linux. For more specifics on supported operating systems, refer to the documentation for [Windows](../devices/howto-vm-sign-in-azure-ad-windows.md) and [Linux](/azure-docs-archive-pr/virtual-machines/linux/login-using-aad).
**Credentials**: One of the key benefits of signing into virtual machines in Azure using Azure AD authentication is the ability to use the same federated or managed Azure AD credentials that you normally use for access to Azure AD services for sign-in to the virtual machine. >[!NOTE] >The Azure AD tenant that is used for sign-in in this scenario is the Azure AD tenant that is associated with the subscription that the virtual machine has been provisioned into. This Azure AD tenant can be one that has identities synchronized from on-premises AD DS. Organizations should make an informed choice that aligns with their isolation principles when choosing which subscription and Azure AD tenant they wish to use for sign-in to these servers.
-**Network Requirements**: These virtual machines will need to access Azure AD for authentication so you must ensure that the virtual machines network configuration permits outbound access to Azure AD endpoints on 443. See the documentation for [Windows](../devices/howto-vm-sign-in-azure-ad-windows.md) and [Linux](../../virtual-machines/linux/login-using-aad.md) for more information.
+**Network Requirements**: These virtual machines will need to access Azure AD for authentication, so you must ensure that the virtual machines' network configuration permits outbound access to Azure AD endpoints on port 443. See the documentation for [Windows](../devices/howto-vm-sign-in-azure-ad-windows.md) and [Linux](/azure-docs-archive-pr/virtual-machines/linux/login-using-aad) for more information.
**Role-based Access Control (RBAC)**: Two RBAC roles are available to provide the appropriate level of access to these virtual machines. These RBAC roles can be configured via the Azure AD Portal or via the Azure Cloud Shell Experience. For more information, see [Configure role assignments for the VM](../devices/howto-vm-sign-in-azure-ad-windows.md).
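For example, a hedged Azure CLI sketch of granting one of these roles on a single VM (resource group, VM name, and user are placeholders):

```azurecli
# Hypothetical sketch: grant a user the Virtual Machine Administrator Login
# role scoped to a single VM. Resource group, VM name, and user are placeholders.
vm_id=$(az vm show --resource-group myResourceGroup --name myVM --query id --output tsv)
az role assignment create \
  --role "Virtual Machine Administrator Login" \
  --assignee "user@contoso.com" \
  --scope "$vm_id"
```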
active-directory Configure Permission Classifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-permission-classifications.md
In this article you'll learn how to configure permissions classifications in Azu
Currently, only the "Low impact" permission classification is supported. Only delegated permissions that don't require admin consent can be classified as "Low impact".
-The minimum permissions needed to do basic sign in are `openid`, `profile`, `email`, `User.Read` and `offline_access`, which are all delegated permissions on the Microsoft Graph. With these permissions an app can read the full profile details of the signed-in user and can maintain this access even when the user is no longer using the app.
+The minimum permissions needed for basic sign-in are `openid`, `profile`, `email`, and `offline_access`, which are all delegated permissions on the Microsoft Graph. With these permissions an app can read details of the signed-in user's profile, and can maintain this access even when the user is no longer using the app.
## Prerequisites
active-directory Datawiza Azure Ad Sso Oracle Peoplesoft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-azure-ad-sso-oracle-peoplesoft.md
To integrate Oracle PeopleSoft with Azure AD:
|:--|:-| | Platform | Web | | App Name | Enter a unique application name|
- | Public Domain | For example: https://ps-external.example.com <br>For testing, you can use localhost DNS. If you aren't deploying DAB behind a load balancer, use the Public Domain port. |
+ | Public Domain | For example: `https://ps-external.example.com` <br>For testing, you can use localhost DNS. If you aren't deploying DAB behind a load balancer, use the Public Domain port. |
| Listen Port | The port that DAB listens on. | | Upstream Servers | The Oracle PeopleSoft implementation URL and port to be protected.|
api-management Api Management Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-key-concepts.md
Title: Azure API Management overview and key concepts | Microsoft Docs
-description: Learn about key scenarios, capabilities, and concepts of the Azure API Management service.
+ Title: Azure API Management - Overview and key concepts
+description: Introduction to key scenarios, capabilities, and concepts of the Azure API Management service. API Management supports the full API lifecycle.
documentationcenter: ''
editor: ''
Previously updated : 01/07/2022 Last updated : 09/23/2022 -
-adobe-target: true
-adobe-target-activity: DocsExp–458741–A/B–Docs/APIManagement–Content–FY23Q1
-adobe-target-experience: Experience B
-adobe-target-content: ./api-management-key-concepts-experiment
-# About API Management
+# What is Azure API Management?
+
+This article provides an overview of common scenarios and key components of Azure API Management. Azure API Management is a hybrid, multicloud management platform for APIs across all environments. As a platform-as-a-service, API Management supports the complete API lifecycle.
-Azure API Management is a hybrid, multicloud management platform for APIs across all environments. This article provides an overview of common scenarios and key components of API Management.
+> [!TIP]
+> If you're already familiar with API Management and ready to start, see these resources:
+> * [Features and service tiers](api-management-features.md)
+> * [Create an API Management instance](get-started-create-service-instance.md)
+> * [Import and publish an API](import-and-publish.md)
+> * [API Management policies](api-management-howto-policies.md)
## Scenarios APIs enable digital experiences, simplify application integration, underpin new digital products, and make data and services reusable and universally accessible. With the proliferation and increasing dependency on APIs, organizations need to manage them as first-class assets throughout their lifecycle. ++ Azure API Management helps customers meet these challenges:
Common scenarios include:
Azure API Management is made up of an API *gateway*, a *management plane*, and a *developer portal*. These components are Azure-hosted and fully managed by default. API Management is available in various [tiers](api-management-features.md) differing in capacity and features.
-### API gateway
+## API gateway
All requests from client applications first reach the API gateway, which then forwards them to respective backend services. The API gateway acts as a facade to the backend services, allowing API providers to abstract API implementations and evolve backend architecture without impacting API consumers. The gateway enables consistent configuration of routing, security, throttling, caching, and observability. [!INCLUDE [api-management-gateway-role](../../includes/api-management-gateway-role.md)]
+### Self-hosted gateway
With the [self-hosted gateway](self-hosted-gateway-overview.md), customers can deploy the API gateway to the same environments where they host their APIs, to optimize API traffic and ensure compliance with local regulations and guidelines. The self-hosted gateway enables customers with hybrid IT infrastructure to manage APIs hosted on-premises and across clouds from a single API Management service in Azure.
-The self-hosted gateway is packaged as a Linux-based Docker container and is commonly deployed to Kubernetes, including to Azure Kubernetes Service and [Azure Arc-enabled Kubernetes](how-to-deploy-self-hosted-gateway-azure-arc.md).
+The self-hosted gateway is packaged as a Linux-based Docker container and is commonly deployed to Kubernetes, including to Azure Kubernetes Service and [Azure Arc-enabled Kubernetes](how-to-deploy-self-hosted-gateway-azure-arc.md).
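As an illustrative sketch only (the exact image tag, port mappings, and the configuration endpoint and gateway key placed in `env.conf` are generated for you on the gateway's deployment page in the portal), running the container locally looks roughly like this:

```bash
# Illustrative only: run the self-hosted gateway container locally.
# env.conf holds the configuration endpoint and gateway auth key provided
# by the API Management portal; image tag and port mappings are placeholders.
docker run -d -p 8080:8080 -p 8081:8081 \
  --name apim-self-hosted-gateway \
  --env-file env.conf \
  mcr.microsoft.com/azure-api-management/gateway:latest
```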
More information: * [API gateway in Azure API Management](api-management-gateways-overview.md)
-### Management plane
+## Management plane
API providers interact with the service through the management plane, which provides full access to the API Management service capabilities.
Use the management plane to:
* Manage users
-### Developer portal
+## Developer portal
The open-source [developer portal][Developer portal] is an automatically generated, fully customizable website with the documentation of your APIs. + API providers can customize the look and feel of the developer portal by adding custom content, customizing styles, and adding their branding. Extend the developer portal further by [self-hosting](developer-portal-self-host.md). App developers use the open-source developer portal to discover the APIs, onboard to use them, and learn how to consume them in applications. (APIs can also be exported to the [Power Platform](export-api-power-platform.md) for discovery and use by citizen developers.)
Using the developer portal, developers can:
## Integration with Azure services
-API Management integrates with many complementary Azure services, including:
+API Management integrates with many complementary Azure services to create enterprise solutions, including:
* [Azure Key Vault](../key-vault/general/overview.md) for secure safekeeping and management of [client certificates](api-management-howto-mutual-certificates.md) and [secrets](api-management-howto-properties.md) * [Azure Monitor](api-management-howto-use-azure-monitor.md) for logging, reporting, and alerting on management operations, systems events, and API requests * [Application Insights](api-management-howto-app-insights.md) for live metrics, end-to-end tracing, and troubleshooting
-* [Virtual networks](virtual-network-concepts.md) and [Application Gateway](api-management-howto-integrate-internal-vnet-appgateway.md) for network-level protection
+* [Virtual networks](virtual-network-concepts.md), [private endpoints](private-endpoint.md), and [Application Gateway](api-management-howto-integrate-internal-vnet-appgateway.md) for network-level protection
* Azure Active Directory for [developer authentication](api-management-howto-aad.md) and [request authorization](api-management-howto-protect-backend-with-aad.md) * [Event Hubs](api-management-howto-log-event-hubs.md) for streaming events * Several Azure compute offerings commonly used to build and host APIs on Azure, including [Functions](import-function-app-as-api.md), [Logic Apps](import-logic-app-as-api.md), [Web Apps](import-app-service-as-api.md), [Service Fabric](how-to-configure-service-fabric-backend.md), and others.
+**More information**:
+* [Basic enterprise integration](/azure/architecture/reference-architectures/enterprise-integration/basic-enterprise-integration?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json)
+* [Landing zone accelerator](/azure/cloud-adoption-framework/scenarios/app-platform/api-management/landing-zone-accelerator?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json)
++ ## Key concepts ### APIs
APIs are the foundation of an API Management service instance. Each API represen
Operations in API Management are highly configurable, with control over URL mapping, query and path parameters, request and response content, and operation response caching.
-More information:
+**More information**:
* [Import and publish your first API][How to create APIs] * [Mock API responses][How to add operations to an API]
Products are how APIs are surfaced to developers. Products in API Management hav
When a product is ready for use by developers, it can be published. Once published, it can be viewed or subscribed to by developers. Subscription approval is configured at the product level and can either require an administrator's approval or be automatic.
-More information:
+**More information**:
* [Create and publish a product][How to create and publish a product] * [Subscriptions in API Management](api-management-subscriptions.md)
Groups are used to manage the visibility of products to developers. API Manageme
Administrators can also create custom groups or use external groups in an [associated Azure Active Directory tenant](api-management-howto-aad.md) to give developers visibility and access to API products. For example, create a custom group for developers in a partner organization to access a specific subset of APIs in a product. A user can belong to more than one group.
-More information:
+**More information**:
* [How to create and use groups][How to create and use groups] ### Developers Developers represent the user accounts in an API Management service instance. Developers can be created or invited to join by administrators, or they can sign up from the [developer portal][Developer portal]. Each developer is a member of one or more groups, and can subscribe to the products that grant visibility to those groups.
-When developers subscribe to a product, they are granted the primary and secondary key for the product for use when calling the product's APIs.
+When developers subscribe to a product, they're granted the primary and secondary key for the product for use when calling the product's APIs.
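For illustration, a client typically passes one of those keys in the `Ocp-Apim-Subscription-Key` header (the default header name, which is configurable); the gateway URL, API path, and key below are placeholders.

```bash
# Illustrative only: call an API through the API Management gateway using a
# subscription key. URL, path, and key are placeholders.
curl -H "Ocp-Apim-Subscription-Key: <primary-or-secondary-key>" \
  "https://contoso.azure-api.net/myapi/operation"
```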
-More information:
+**More information**:
* [How to manage user accounts][How to create or invite developers] ### Policies
Policy expressions can be used as attribute values or text values in any of the
Policies can be applied at different scopes, depending on your needs: global (all APIs), a product, a specific API, or an API operation.
-More information:
+**More information**:
* [Transform and protect your API][How to create and configure advanced product settings]. * [Policy expressions](./api-management-policy-expressions.md)
azure-arc Preview Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/preview-testing.md
To install a pre-release version, follow these pre-requisite instructions:
If you use the Azure CLI extension: - Uninstall the Azure CLI extension (`az extension remove -n arcdata`).-- Download the latest pre-release Azure CLI extension `.whl` file from [https://aka.ms/az-cli-arcdata-ext](https://aka.ms/az-cli-arcdata-ext).
+- Download the latest pre-release Azure CLI extension `.whl` file from the link in the [Current preview release information](#current-preview-release-information).
- Install the latest pre-release Azure CLI extension (`az extension add -s <location of downloaded .whl file>`). If you use the Azure Data Studio extension to install: - Uninstall the Azure Data Studio extension. Select the Extensions panel and select on the **Azure Arc** extension, select **Uninstall**.-- Download the latest pre-release Azure Data Studio extension .vsix files from [https://aka.ms/ads-arcdata-ext](https://aka.ms/ads-arcdata-ext) and [https://aka.ms/ads-azcli-ext](https://aka.ms/ads-azcli-ext).
+- Download the latest pre-release Azure Data Studio extension .vsix files from the links in the [Current preview release information](#current-preview-release-information).
- Install the extensions by choosing File -> Install Extension from VSIX package and then browsing to the download location of the .vsix files. Install the `azcli` extension first and then `arc`. ### Install using Azure CLI
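Putting the Azure CLI steps above together, a sketch of the swap looks like this; the `.whl` file name is a placeholder for whichever pre-release build you downloaded.

```azurecli
# Sketch of the pre-release extension swap described above. The .whl file
# name is a placeholder for the downloaded pre-release build.
az extension remove -n arcdata
az extension add -s ./arcdata-<version>-py2.py3-none-any.whl
```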
azure-arc Version Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md
This article identifies the component versions with each release of Azure Arc-en
|Container images tag |`v1.11.0_2022-09-13`| |CRD names and version|`datacontrollers.arcdata.microsoft.com`: v1beta1, v1 through v6<br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`kafkas.arcdata.microsoft.com`: v1beta1<br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1 through v6<br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2, v1beta3<br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1<br/>`failovergroups.sql.arcdata.microsoft.com`: v1beta1, v1beta2, v1<br/>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1, v1beta2<br/>`sqlmanagedinstancereprovisionreplicatask.tasks.sql.arcdata.microsoft.com`: v1beta1<br/>`otelcollectors.arcdata.microsoft.com`: v1beta1<br/>`telemetryrouters.arcdata.microsoft.com`: v1beta1<br/>| |Azure Resource Manager (ARM) API version|2022-03-01-preview (No change)|
-|`arcdata` Azure CLI extension version|1.4.6 ([Download](https://aka.ms/az-cli-arcdata-ext))|
+|`arcdata` Azure CLI extension version|1.4.6|
|Arc enabled Kubernetes helm chart extension version|1.11.0 (Note: This versioning scheme is new, starting from this release. The scheme follows the semantic versioning scheme of the container images.)|
-|Arc Data extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.5.4 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.5.4 ([Download](https://aka.ms/ads-azcli-ext))|
+|Arc Data extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.5.4</br>1.5.4|
## August 9, 2022
This article identifies the component versions with each release of Azure Arc-en
|Container images tag |`v1.10.0_2022-08-09`| |CRD names and version|`datacontrollers.arcdata.microsoft.com`: v1beta1, v1 through v6<br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`kafkas.arcdata.microsoft.com`: v1beta1<br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1 through v6<br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2<br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1<br/>`failovergroups.sql.arcdata.microsoft.com`: v1beta1, v1beta2, v1<br/>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1, v1beta2<br/>| |Azure Resource Manager (ARM) API version|2022-03-01-preview (No change)|
-|`arcdata` Azure CLI extension version|1.4.5 ([Download](https://arcdataazurecliextension.blob.core.windows.net/stage/arcdata-1.4.5-py2.py3-none-any.whl))|
+|`arcdata` Azure CLI extension version|1.4.5|
|Arc enabled Kubernetes helm chart extension version|1.2.20381002|
-|Arc Data extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.5.1 ([Download](https://azuredatastudioarcext.blob.core.windows.net/stage/arc-1.5.1.vsix))</br>1.5.1 ([Download](https://azuredatastudioarcext.blob.core.windows.net/stage/azcli-1.5.1.vsix))|
+|Arc Data extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.5.1</br>1.5.1|
## July 12, 2022
This article identifies the component versions with each release of Azure Arc-en
|Container images tag |`v1.9.0_2022-07-12`| |CRD names and version|`datacontrollers.arcdata.microsoft.com`: v1beta1, v1 through v6<br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`kafkas.arcdata.microsoft.com`: v1beta1<br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1 through v5<br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2<br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1<br/>`failovergroups.sql.arcdata.microsoft.com`: v1beta1, v1beta2, v1<br/>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1, v1beta2<br/>| |Azure Resource Manager (ARM) API version|2022-03-01-preview (No change)|
-|`arcdata` Azure CLI extension version|1.4.3 ([Download](https://arcdataazurecliextension.blob.core.windows.net/stage/arcdata-1.4.3-py2.py3-none-any.whl))|
+|`arcdata` Azure CLI extension version|1.4.3|
|Arc enabled Kubernetes helm chart extension version|1.2.20031002|
-|Arc Data extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.3.0 ([Download](https://azuredatastudioarcext.blob.core.windows.net/stage/arc-1.3.0.vsix))</br>1.3.0 ([Download](https://azuredatastudioarcext.blob.core.windows.net/stage/azcli-1.3.0.vsix))|
+|Arc Data extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.3.0</br>1.3.0|
## June 14, 2022
This article identifies the component versions with each release of Azure Arc-en
|Container images tag |`v1.8.0_2022-06-14`| |CRD names and version|`datacontrollers.arcdata.microsoft.com`: v1beta1, v1 through v6<br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`kafkas.arcdata.microsoft.com`: v1beta1<br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1 through v5<br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2<br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1<br/>`failovergroups.sql.arcdata.microsoft.com`: v1beta1, v1beta2, v1<br/>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1, v1beta2<br/>| |ARM API version|2022-03-01-preview (No change)|
-|`arcdata` Azure CLI extension version|1.4.2 ([Download](https://arcdataazurecliextension.blob.core.windows.net/stage/arcdata-1.4.2-py2.py3-none-any.whl))|
+|`arcdata` Azure CLI extension version|1.4.2|
|Arc enabled Kubernetes helm chart extension version|1.2.19831003|
-|Arc Data extension for Azure Data Studio|1.3.0 (No change)([Download](https://aka.ms/ads-arcdata-ext))|
+|Arc Data extension for Azure Data Studio|1.3.0 (No change)|
## May 24, 2022
azure-maps Quick Android Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-android-map.md
description: 'Quickstart: Learn how to create an Android app using the Azure Maps Android SDK.' Previously updated : 02/09/2022 Last updated : 09/22/2022
The next step in building your application is to install the Azure Maps Android
:::image type="content" source="./media/quick-android-map/project-settings-file.png" alt-text="A screenshot of the project settings file in Android Studio.":::
-3. Open the application **build.gradle** file and do the following:
+3. Open the project's **gradle.properties** file and verify that `android.useAndroidX` and `android.enableJetifier` are both set to `true`.
+
+ If the **gradle.properties** file does not include `android.useAndroidX` and `android.enableJetifier`, add the next two lines to the end of the file:
+
+ ```gradle
+ android.useAndroidX=true
+ android.enableJetifier=true
+ ```
+
+
+4. Open the application **build.gradle** file and do the following:
1. Verify your project's **minSdk** is **21** or higher.
The next step in building your application is to install the Azure Maps Android
:::image type="content" source="./media/quick-android-map/build-gradle-file.png" alt-text="A screenshot showing the application build dot gradle file in Android Studio.":::
-4. Add a map fragment to the main activity:
+5. Add a map fragment to the main activity:
```xml <com.azure.android.maps.control.MapControl
The next step in building your application is to install the Azure Maps Android
::: zone pivot="programming-language-java-android"
-5. In the **MainActivity.java** file you'll need to:
+6. In the **MainActivity.java** file you'll need to:
* Add imports for the Azure Maps SDK. * Set your Azure Maps authentication information.
The next step in building your application is to install the Azure Maps Android
::: zone pivot="programming-language-kotlin"
-5. In the **MainActivity.kt** file you'll need to:
+7. In the **MainActivity.kt** file you'll need to:
* add imports for the Azure Maps SDK * set your Azure Maps authentication information
The next step in building your application is to install the Azure Maps Android
::: zone-end
-6. Select the run button from the toolbar, as shown in the following image (or press `Control` + `R` on a Mac), to build your application.
+8. Select the run button from the toolbar, as shown in the following image (or press `Control` + `R` on a Mac), to build your application.
:::image type="content" source="media/quick-android-map/run-app.png" alt-text="A screenshot showing the run button in Android Studio.":::
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
This section shows you how to download the auto-instrumentation jar file.
#### Download the jar file
-Download the [applicationinsights-agent-3.4.0.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.0/applicationinsights-agent-3.4.0.jar) file.
+Download the [applicationinsights-agent-3.4.1.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.1/applicationinsights-agent-3.4.1.jar) file.
> [!WARNING] >
Download the [applicationinsights-agent-3.4.0.jar](https://github.com/microsoft/
#### Point the JVM to the jar file
-Add `-javaagent:"path/to/applicationinsights-agent-3.4.0.jar"` to your application's JVM args.
+Add `-javaagent:"path/to/applicationinsights-agent-3.4.1.jar"` to your application's JVM args.
> [!TIP] > For help with configuring your application's JVM args, see [Tips for updating your JVM args](./java-standalone-arguments.md).
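For instance, launching a jar directly with the agent attached looks like this; the agent path and application jar name are placeholders.

```bash
# Attach the Application Insights Java agent at JVM startup.
# The agent path and application jar name are placeholders.
java -javaagent:"path/to/applicationinsights-agent-3.4.1.jar" -jar myapp.jar
```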
Add `-javaagent:"path/to/applicationinsights-agent-3.4.0.jar"` to your applicati
APPLICATIONINSIGHTS_CONNECTION_STRING=<Copy connection string from Application Insights Resource Overview> ```
- - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.4.0.jar` with the following content:
+ - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.4.1.jar` with the following content:
```json {
azure-monitor Java Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-spring-boot.md
You can enable the Azure Monitor Application Insights agent for Java by adding a
### Usual case
-Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.0.jar"` somewhere before `-jar`, for example:
+Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.1.jar"` somewhere before `-jar`, for example:
```
-java -javaagent:"path/to/applicationinsights-agent-3.4.0.jar" -jar <myapp.jar>
+java -javaagent:"path/to/applicationinsights-agent-3.4.1.jar" -jar <myapp.jar>
``` ### Spring Boot via Docker entry point
-If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.0.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.1.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.0.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.1.jar", "-jar", "<myapp.jar>"]
```
-If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.0.jar"` somewhere before `-jar`, for example:
+If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.1.jar"` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.0.jar" -jar <myapp.jar>
+ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.1.jar" -jar <myapp.jar>
``` ## Programmatic configuration
To use the programmatic configuration and attach the Application Insights agent
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-runtime-attach</artifactId>
- <version>3.4.0</version>
+ <version>3.4.1</version>
</dependency> ```
azure-monitor Java Standalone Arguments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-arguments.md
Read the Spring Boot documentation [here](../app/java-in-process-agent.md).
If you installed Tomcat via `apt-get` or `yum`, then you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file: ```
-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.0.jar"
+JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.1.jar"
``` ### Tomcat installed via download and unzip
JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.0.jar"
If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), then you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content: ```
-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.0.jar"
+CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.1.jar"
```
-If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.0.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.1.jar` to `CATALINA_OPTS`.
## Tomcat 8 (Windows)
If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and a
Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content: ```
-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.0.jar
+set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.1.jar
``` Quotes aren't necessary, but if you want to include them, the proper placement is: ```
-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.0.jar"
+set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.1.jar"
```
-If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.0.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.1.jar` to `CATALINA_OPTS`.
### Running Tomcat as a Windows service
-Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.0.jar` to the `Java Options` under the `Java` tab.
+Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.1.jar` to the `Java Options` under the `Java` tab.
## JBoss EAP 7 ### Standalone server
-Add `-javaagent:path/to/applicationinsights-agent-3.4.0.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
+Add `-javaagent:path/to/applicationinsights-agent-3.4.1.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
```java ...
- JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.0.jar -Xms1303m -Xmx1303m ..."
+ JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.1.jar -Xms1303m -Xmx1303m ..."
... ``` ### Domain server
-Add `-javaagent:path/to/applicationinsights-agent-3.4.0.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.4.1.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
```xml ...
Add `-javaagent:path/to/applicationinsights-agent-3.4.0.jar` to the existing `jv
<jvm-options> <option value="-server"/> <!--Add Java agent jar file here-->
- <option value="-javaagent:path/to/applicationinsights-agent-3.4.0.jar"/>
+ <option value="-javaagent:path/to/applicationinsights-agent-3.4.1.jar"/>
<option value="-XX:MetaspaceSize=96m"/> <option value="-XX:MaxMetaspaceSize=256m"/> </jvm-options>
Add these lines to `start.ini`
``` --exec--javaagent:path/to/applicationinsights-agent-3.4.0.jar
+-javaagent:path/to/applicationinsights-agent-3.4.1.jar
``` ## Payara 5
-Add `-javaagent:path/to/applicationinsights-agent-3.4.0.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.4.1.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
```xml ... <java-config ...> <!--Edit the JVM options here--> <jvm-options>
- -javaagent:path/to/applicationinsights-agent-3.4.0.jar>
+ -javaagent:path/to/applicationinsights-agent-3.4.1.jar>
</jvm-options> ... </java-config>
Java and Process Management > Process definition > Java Virtual Machine
``` In "Generic JVM arguments" add the following JVM argument: ```--javaagent:path/to/applicationinsights-agent-3.4.0.jar
+-javaagent:path/to/applicationinsights-agent-3.4.1.jar
``` After that, save and restart the application server.
After that, save and restart the application server.
Create a new file `jvm.options` in the server directory (for example `<openliberty>/usr/servers/defaultServer`), and add this line: ```--javaagent:path/to/applicationinsights-agent-3.4.0.jar
+-javaagent:path/to/applicationinsights-agent-3.4.1.jar
``` ## Others
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
You will find more details and additional configuration options below.
## Configuration file path
-By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.0.jar`.
+By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.1.jar`.
You can specify your own configuration file path using either * `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable, or * `applicationinsights.configuration.file` Java system property
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.4.0.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.4.1.jar` is located.
Alternatively, instead of using a configuration file, you can specify the entire _content_ of the json configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`.
You can also set the connection string using the environment variable `APPLICATI
You can also set the connection string by specifying a file to load the connection string from.
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.4.0.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.4.1.jar` is located.
```json {
Furthermore, sampling is trace ID based, to help ensure consistent sampling deci
### Rate-Limited Sampling
-Starting from 3.4.0, rate-limited sampling is available, and is now the default.
+Starting from 3.4.1, rate-limited sampling is available, and is now the default.
If no sampling has been configured, the default is now rate-limited sampling configured to capture at most (approximately) 5 requests per second, along with all the dependencies and logs on those requests.
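As a rough sketch, assuming the `requestsPerSecond` setting is what controls the rate limit, lowering the default to roughly one request per second might look like:

```json
{
  "sampling": {
    "requestsPerSecond": 1.0
  }
}
```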
Starting from version 3.2.0, if you want to set a custom dimension programmatica
## Connection string overrides (preview)
-This feature is in preview, starting from 3.4.0.
+This feature is in preview, starting from 3.4.1.
Connection string overrides allow you to override the [default connection string](#connection-string), for example: * Set one connection string for one http path prefix `/myapp1`.
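A hedged sketch of such an override, assuming the preview `connectionStringOverrides` setting; the path prefix and connection string are placeholders:

```json
{
  "preview": {
    "connectionStringOverrides": [
      {
        "httpPathPrefix": "/myapp1",
        "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000"
      }
    ]
  }
}
```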
You can enable code properties (_FileName_, _ClassName_, _MethodName_, _LineNumb
> > This feature could add a performance overhead.
-This feature is in preview, starting from 3.4.0.
+This feature is in preview, starting from 3.4.1.
### LoggingLevel
To disable auto-collection of Micrometer metrics (including Spring Boot Actuator
Literal values in JDBC queries are masked by default in order to avoid accidentally capturing sensitive data.
-Starting from 3.4.0, this behavior can be disabled if desired, e.g.
+Starting from 3.4.1, this behavior can be disabled if desired, e.g.
```json {
Starting from 3.4.0, this behavior can be disabled if desired, e.g.
Literal values in Mongo queries are masked by default in order to avoid accidentally capturing sensitive data.
-Starting from 3.4.0, this behavior can be disabled if desired, e.g.
+Starting from 3.4.1, this behavior can be disabled if desired, e.g.
```json {
and the console, corresponding to this configuration:
`level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`. `path` can be an absolute or relative path. Relative paths are resolved against the directory where
-`applicationinsights-agent-3.4.0.jar` is located.
+`applicationinsights-agent-3.4.1.jar` is located.
`maxSizeMb` is the max size of the log file before it rolls over.
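Putting those settings together, a hypothetical self-diagnostics block might look like the following sketch (the `selfDiagnostics` nesting and the `destination` and `maxHistory` keys shown here are assumptions):

```json
{
  "selfDiagnostics": {
    "destination": "file+console",
    "level": "DEBUG",
    "file": {
      "path": "applicationinsights.log",
      "maxSizeMb": 5,
      "maxHistory": 1
    }
  }
}
```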
azure-monitor Java Standalone Sampling Overrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-sampling-overrides.md
To begin, create a configuration file named *applicationinsights.json*. Save it
"sampling": { "overrides": [ {
- "telemetryKind": "request",
+ "telemetryType": "request",
"attributes": [ ... ], "percentage": 0 }, {
- "telemetryKind": "request",
+ "telemetryType": "request",
"attributes": [ ... ],
To begin, create a configuration file named *applicationinsights.json*. Save it
## How it works
-`telemetryKind` must be one of `request`, `dependency`, `trace` (log), or `exception`.
+`telemetryType` must be one of `request`, `dependency`, `trace` (log), or `exception`.
When a span is started, the type of span and the attributes present on it at that time are used to check if any of the sampling overrides match.
This will also suppress collecting any downstream spans (dependencies) that woul
"sampling": { "overrides": [ {
- "telemetryKind": "request",
+ "telemetryType": "request",
"attributes": [ { "key": "http.url",
This will suppress collecting telemetry for all `GET my-noisy-key` redis calls.
"sampling": { "overrides": [ {
- "telemetryKind": "dependency",
+ "telemetryType": "dependency",
"attributes": [ { "key": "db.system",
those will also be collected for all '/login' requests.
"sampling": { "overrides": [ {
- "telemetryKind": "request",
+ "telemetryType": "request",
"attributes": [ { "key": "http.url",
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
auto-instrumentation which is provided by the 3.x Java agent.
| 2.x dependency | Action | Remarks | |-|--||
-| `applicationinsights-core` | Update the version to `3.4.0` or later | |
-| `applicationinsights-web` | Update the version to `3.4.0` or later, and remove the Application Insights web filter your `web.xml` file. | |
-| `applicationinsights-web-auto` | Replace with `3.4.0` or later of `applicationinsights-web` | |
+| `applicationinsights-core` | Update the version to `3.4.1` or later | |
+| `applicationinsights-web` | Update the version to `3.4.1` or later, and remove the Application Insights web filter from your `web.xml` file. | |
+| `applicationinsights-web-auto` | Replace with `3.4.1` or later of `applicationinsights-web` | |
| `applicationinsights-logging-log4j1_2` | Remove the dependency and remove the Application Insights appender from your log4j configuration. | No longer needed since Log4j 1.2 is auto-instrumented in the 3.x Java agent. | | `applicationinsights-logging-log4j2` | Remove the dependency and remove the Application Insights appender from your log4j configuration. | No longer needed since Log4j 2 is auto-instrumented in the 3.x Java agent. | | `applicationinsights-logging-logback` | Remove the dependency and remove the Application Insights appender from your logback configuration. | No longer needed since Logback is auto-instrumented in the 3.x Java agent. |
-| `applicationinsights-spring-boot-starter` | Replace with `3.4.0` or later of `applicationinsights-web` | The cloud role name will no longer default to `spring.application.name`, see the [3.x configuration docs](./java-standalone-config.md#cloud-role-name) for configuring the cloud role name. |
+| `applicationinsights-spring-boot-starter` | Replace with `3.4.1` or later of `applicationinsights-web` | The cloud role name will no longer default to `spring.application.name`; see the [3.x configuration docs](./java-standalone-config.md#cloud-role-name) for configuring the cloud role name. |
## Step 2: Add the 3.x Java agent Add the 3.x Java agent to your JVM command-line args, for example ```--javaagent:path/to/applicationinsights-agent-3.4.0.jar
+-javaagent:path/to/applicationinsights-agent-3.4.1.jar
``` If you were using the Application Insights 2.x Java agent, just replace your existing `-javaagent:...` with the above.
azure-monitor Logs Export Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-export-logic-app.md
The method described in this article describes a scheduled export from a log que
- One time export to local machine using PowerShell script. See [Invoke-AzOperationalInsightsQueryExport](https://www.powershellgallery.com/packages/Invoke-AzOperationalInsightsQueryExport). ## Overview
-This procedure uses the [Azure Monitor Logs connector](/connectors/azuremonitorlogs.md) which lets you run a log query from a Logic App and use its output in other actions in the workflow. The [Azure Blob Storage connector](/connectors/azureblob.md) is used in this procedure to send the query output to Azure storage.
+This procedure uses the [Azure Monitor Logs connector](/connectors/azuremonitorlogs), which lets you run a log query from a Logic App and use its output in other actions in the workflow. The [Azure Blob Storage connector](/connectors/azureblob) is used in this procedure to send the query output to Azure storage.
[![Logic app overview](media/logs-export-logic-app/logic-app-overview.png "Screenshot of Logic app flow.")](media/logs-export-logic-app/logic-app-overview.png#lightbox)
azure-resource-manager Bicep Config Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-linter.md
Title: Linter settings for Bicep config description: Describes how to customize configuration values for the Bicep linter Previously updated : 09/21/2022 Last updated : 09/23/2022 # Add linter settings in the Bicep config file
The following example shows the rules that are available for configuration.
"use-protectedsettings-for-commandtoexecute-secrets": { "level": "warning" },
+ "use-resource-id-functions": {
+ "level": "warning"
+ },
"use-stable-resource-identifiers": { "level": "warning" },
azure-resource-manager Linter Rule No Hardcoded Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-no-hardcoded-location.md
Title: Linter rule - no hard-coded locations
-description: Linter rule - no hard-coded locations
+ Title: Linter rule - no hardcoded locations
+description: Linter rule - no hardcoded locations
Last updated 1/6/2022
-# Linter rule - no hard-coded locations
+# Linter rule - no hardcoded locations
This rule finds uses of Azure location values that aren't parameterized.
Use the following value in the [Bicep configuration file](bicep-config-linter.md
## Solution
-Template users may have limited access to regions where they can create resources. A hard-coded resource location might block users from creating a resource, thus preventing them from using the template. By providing a location parameter that defaults to the resource group location, users can use the default value when convenient but also specify a different location.
+Template users may have limited access to regions where they can create resources. A hardcoded resource location might block users from creating a resource, thus preventing them from using the template. By providing a location parameter that defaults to the resource group location, users can use the default value when convenient but also specify a different location.
-Rather than using a hard-coded string or variable value, use a parameter, the string 'global', or an expression (but not `resourceGroup().location` or `deployment().location`, see [no-loc-expr-outside-params](./linter-rule-no-loc-expr-outside-params.md)). Best practice suggests that to set your resources' locations, your template should have a string parameter named `location`. This parameter may default to the resource group or deployment location (`resourceGroup().location` or `deployment().location`).
+Rather than using a hardcoded string or variable value, use a parameter, the string 'global', or an expression (but not `resourceGroup().location` or `deployment().location`, see [no-loc-expr-outside-params](./linter-rule-no-loc-expr-outside-params.md)). Best practice suggests that to set your resources' locations, your template should have a string parameter named `location`. This parameter may default to the resource group or deployment location (`resourceGroup().location` or `deployment().location`).
The following example fails this test because the resource's `location` property uses a string literal:
The following example fails this test because a string literal is being passed i
module m1 'module1.bicep' = { name: 'module1' params: {
- location: 'westus'
+ location: 'westus'
} } ```
azure-resource-manager Linter Rule Use Resource Id Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-use-resource-id-functions.md
+
+ Title: Linter rule - use resource ID functions
+description: Linter rule - use resource ID functions
+ Last updated : 09/23/2022++
+# Linter rule - use resource ID functions
+
+Ensures that, for all properties representing a resource ID, the value comes from a resource's symbolic name or from a suitable resource ID function rather than from a manually constructed ID, such as a concatenated string. Use resource symbolic names whenever possible.
+
+The allowed functions include:
+
+- [`extensionResourceId`](./bicep-functions-resource.md#extensionresourceid)
+- [`resourceId`](./bicep-functions-resource.md#resourceid)
+- [`subscriptionResourceId`](./bicep-functions-resource.md#subscriptionresourceid)
+- [`tenantResourceId`](./bicep-functions-resource.md#tenantresourceid)
+- [`reference`](./bicep-functions-resource.md#reference)
+- [`subscription`](./bicep-functions-scope.md#subscription)
+- [`guid`](./bicep-functions-string.md#guid)
+
+## Linter rule code
+
+Use the following value in the [Bicep configuration file](bicep-config-linter.md) to customize rule settings:
+
+`use-resource-id-functions`
+
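For instance, a `bicepconfig.json` entry that sets this rule's level might look like the following sketch:

```json
{
  "analyzers": {
    "core": {
      "enabled": true,
      "rules": {
        "use-resource-id-functions": {
          "level": "warning"
        }
      }
    }
  }
}
```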
+## Solution
+
+The following example fails this test because the resource's `api/id` property uses a manually created string:
+
+```bicep
+@description('description')
+param connections_azuremonitorlogs_name string
+
+@description('description')
+param location string
+
+@description('description')
+param resourceTags object
+param tenantId string
+
+resource connections_azuremonitorlogs_name_resource 'Microsoft.Web/connections@2016-06-01' = {
+ name: connections_azuremonitorlogs_name
+ location: location
+ tags: resourceTags
+ properties: {
+ displayName: 'azuremonitorlogs'
+ statuses: [
+ {
+ status: 'Connected'
+ }
+ ]
+ nonSecretParameterValues: {
+ 'token:TenantId': tenantId
+ 'token:grantType': 'code'
+ }
+ api: {
+ name: connections_azuremonitorlogs_name
+ displayName: 'Azure Monitor Logs'
+ description: 'Use this connector to query your Azure Monitor Logs across Log Analytics workspace and Application Insights component, to list or visualize results.'
+ iconUri: 'https://connectoricons-prod.azureedge.net/releases/v1.0.1501/1.0.1501.2507/${connections_azuremonitorlogs_name}/icon.png'
+ brandColor: '#0072C6'
+ id: '/subscriptions/<subscription_id_here>/providers/Microsoft.Web/locations/<region_here>/managedApis/${connections_azuremonitorlogs_name}'
+ type: 'Microsoft.Web/locations/managedApis'
+ }
+ }
+}
+```
+
+You can fix it by using the `subscriptionResourceId()` function:
+
+```bicep
+@description('description')
+param connections_azuremonitorlogs_name string
+
+@description('description')
+param location string
+
+@description('description')
+param resourceTags object
+param tenantId string
+
+resource connections_azuremonitorlogs_name_resource 'Microsoft.Web/connections@2016-06-01' = {
+ name: connections_azuremonitorlogs_name
+ location: location
+ tags: resourceTags
+ properties: {
+ displayName: 'azuremonitorlogs'
+ statuses: [
+ {
+ status: 'Connected'
+ }
+ ]
+ nonSecretParameterValues: {
+ 'token:TenantId': tenantId
+ 'token:grantType': 'code'
+ }
+ api: {
+ name: connections_azuremonitorlogs_name
+ displayName: 'Azure Monitor Logs'
+ description: 'Use this connector to query your Azure Monitor Logs across Log Analytics workspace and Application Insights component, to list or visualize results.'
+ iconUri: 'https://connectoricons-prod.azureedge.net/releases/v1.0.1501/1.0.1501.2507/${connections_azuremonitorlogs_name}/icon.png'
+ brandColor: '#0072C6'
+ id: subscriptionResourceId('Microsoft.Web/locations/managedApis', location, connections_azuremonitorlogs_name)
+ type: 'Microsoft.Web/locations/managedApis'
+ }
+ }
+}
+```
+
+## Next steps
+
+For more information about the linter, see [Use Bicep linter](./linter.md).
azure-resource-manager Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter.md
Title: Use Bicep linter description: Learn how to use Bicep linter. Previously updated : 09/21/2022 Last updated : 9/23/2022 # Use Bicep linter
The default set of linter rules is minimal and taken from [arm-ttk test cases](.
- [secure-secrets-in-params](./linter-rule-secure-secrets-in-parameters.md) - [simplify-interpolation](./linter-rule-simplify-interpolation.md) - [use-protectedsettings-for-commandtoexecute-secrets](./linter-rule-use-protectedsettings-for-commandtoexecute-secrets.md)
+- [use-resource-id-functions](./linter-rule-use-resource-id-functions.md)
- [use-stable-resource-identifiers](./linter-rule-use-stable-resource-identifier.md) - [use-stable-vm-image](./linter-rule-use-stable-vm-image.md)
azure-resource-manager Concepts Built In Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/concepts-built-in-policy.md
Title: Deploy associations using policies
-description: Learn about deploying associations for a custom provider using Azure Policy service.
+description: Learn about deploying associations for a custom resource provider using Azure Policy service.
Last updated 09/06/2019
-# Deploy associations for a custom provider using Azure Policy
+# Deploy associations for a custom resource provider using Azure Policy
-Azure policies can be used to deploy associations to associate resources to a custom provider. In this article, we describe a built-in policy that deploys associations and how you can use that policy.
+Azure policies can be used to deploy associations to associate resources to a custom resource provider. In this article, we describe a built-in policy that deploys associations and how you can use that policy.
## Built-in policy to deploy associations
-Deploy associations for a custom provider is a built-in policy that can be used to deploy association to associate a resource to a custom provider. The policy accepts three parameters:
+Deploy associations for a custom resource provider is a built-in policy that can be used to deploy an association to associate a resource with a custom resource provider. The policy accepts three parameters:
-- Custom provider ID - This ID is the resource ID of the custom provider to which the resources need to be associated.-- Resource types to associate - These resource types are the list of resource types to be associated to the custom provider. You can associate multiple resource types to a custom provider using the same policy.
+- Custom resource provider ID - This ID is the resource ID of the custom resource provider to which the resources need to be associated.
+- Resource types to associate - These resource types are the list of resource types to be associated to the custom resource provider. You can associate multiple resource types to a custom resource provider using the same policy.
- Association name prefix - This string is the prefix to be added to the name of the association resource being created. The default value is "DeployedByPolicy". The policy uses DeployIfNotExists evaluation. It runs after a Resource Provider has handled a create or update resource request and the evaluation has returned a success status code. After that, the association resource gets deployed using a template deployment.
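As a rough sketch, the association resource that the policy deploys resembles the following (the name and the target resource ID are placeholders):

```json
{
  "type": "Microsoft.CustomProviders/associations",
  "apiVersion": "2018-09-01-preview",
  "name": "DeployedByPolicy-myAssociation",
  "properties": {
    "targetResourceId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CustomProviders/resourceProviders/{customResourceProviderName}"
  }
}
```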
-For more information on associations, see [Azure Custom Providers resource onboarding](./concepts-resource-onboarding.md)
+For more information on associations, see [Azure Custom Resource Providers resource onboarding](./concepts-resource-onboarding.md)
## How to use the deploy associations built-in policy ### Prerequisites
-If the custom provider needs permissions to the scope of the policy to perform an action, the policy deployment of association resource wouldn't work without granting the permissions.
+If the custom resource provider needs permissions to the scope of the policy to perform an action, the policy deployment of the association resource won't work unless those permissions are granted.
### Policy assignment
-To use the built-in policy, create a policy assignment and assign the Deploy associations for a custom provider policy. The policy will then identify non-compliant resources and deploy association for those resources.
+To use the built-in policy, create a policy assignment and assign the Deploy associations for a custom resource provider policy. The policy will then identify non-compliant resources and deploy associations for those resources.
![Assign built-in policy](media/concepts-built-in-policy/assign-builtin-policy-customprovider.png)
If you have questions about Azure Custom Resource Providers development, try ask
In this article, you learnt about using built-in policy to deploy associations. See these articles to learn more: -- [Concepts: Azure Custom Providers resource onboarding](./concepts-resource-onboarding.md)-- [Tutorial: Resource onboarding with custom providers](./tutorial-resource-onboarding.md)
+- [Concepts: Azure Custom Resource Providers resource onboarding](./concepts-resource-onboarding.md)
+- [Tutorial: Resource onboarding with custom resource providers](./tutorial-resource-onboarding.md)
- [Tutorial: Create custom actions and resources in Azure](./tutorial-get-started-with-custom-providers.md)-- [Quickstart: Create a custom resource provider and deploy custom resources](./create-custom-provider.md)
+- [Quickstart: Create Azure Custom Resource Provider and deploy custom resources](./create-custom-provider.md)
- [How to: Adding custom actions to an Azure REST API](./custom-providers-action-endpoint-how-to.md) - [How to: Adding custom resources to an Azure REST API](./custom-providers-resources-endpoint-how-to.md)
azure-resource-manager Concepts Resource Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/concepts-resource-onboarding.md
Title: Resource onboarding
-description: Learn about performing resource onboarding by using Azure Custom Providers to apply management or configuration to other Azure resource types.
+description: Learn about performing resource onboarding by using Azure Custom Resource Providers to apply management or configuration to other Azure resource types.
Last updated 09/06/2019
-# Azure Custom Providers resource onboarding overview
+# Azure Custom Resource Providers resource onboarding overview
-Azure Custom Providers resource onboarding is an extensibility model for Azure resource types. It allows you to apply operations or management across existing Azure resources at scale. For more information, see [How Azure Custom Providers can extend Azure](overview.md). This article describes:
+Azure Custom Resource Providers resource onboarding is an extensibility model for Azure resource types. It allows you to apply operations or management across existing Azure resources at scale. For more information, see [How Azure Custom Resource Providers can extend Azure](overview.md). This article describes:
- What resource onboarding can do. - Resource onboarding basics and how to use it. - Where to find guides and code samples to get started. > [!IMPORTANT]
-> Custom Providers is currently in public preview.
+> Custom Resource Providers is currently in public preview.
> This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might be unsupported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ## What can resource onboarding do?
-Similar to [Azure Custom Providers custom resources](./custom-providers-resources-endpoint-how-to.md), resource onboarding defines a contract that will proxy "onboarding" requests to an endpoint. Unlike custom resources, resource onboarding doesn't create a new resource type. Instead, it allows the extension of existing resource types. And resource onboarding works with Azure Policy, so management and configuration of resources can be done at scale. Some examples of resource onboarding workflows:
+Similar to [Azure Custom Resource Providers custom resources](./custom-providers-resources-endpoint-how-to.md), resource onboarding defines a contract that will proxy "onboarding" requests to an endpoint. Unlike custom resources, resource onboarding doesn't create a new resource type. Instead, it allows the extension of existing resource types. And resource onboarding works with Azure Policy, so management and configuration of resources can be done at scale. Some examples of resource onboarding workflows:
- Install and manage extensions on virtual machines. - Upload and configure defaults on Azure storage accounts.
Similar to [Azure Custom Providers custom resources](./custom-providers-resource
## Resource onboarding basics
-You configure resource onboarding through Azure Custom Providers by using Microsoft.CustomProviders/resourceProviders and Microsoft.CustomProviders/associations resource types. To enable resource onboarding for a custom provider, during the configuration process, create a **resourceType** called "associations" with a **routingType** that includes "Extension". The Microsoft.CustomProviders/associations and Microsoft.CustomProviders/resourceProviders don't need to belong to the same resource group.
+You configure resource onboarding through Azure Custom Resource Providers by using Microsoft.CustomProviders/resourceProviders and Microsoft.CustomProviders/associations resource types. To enable resource onboarding for a custom resource provider, during the configuration process, create a **resourceType** called "associations" with a **routingType** that includes "Extension". The Microsoft.CustomProviders/associations and Microsoft.CustomProviders/resourceProviders don't need to belong to the same resource group.
-Here's a sample Azure custom provider:
+Here's a sample Azure custom resource provider:
```JSON {
name | Yes | The name of the endpoint definition. For resource onboarding, the n
routingType | Yes | Determines the type of contract with the endpoint. For resource onboarding, the valid **routingTypes** are "Proxy,Cache,Extension" and "Webhook,Cache,Extension". endpoint | Yes | The endpoint to route the requests to. This will handle the response and any side effects of the request.
-After you create the custom provider with the associations resource type, you can target using Microsoft.CustomProviders/associations. Microsoft.CustomProviders/associations is an extension resource that can extend any other Azure resource. When an instance of Microsoft.CustomProviders/associations is created, it will take a property **targetResourceId**, which should be a valid Microsoft.CustomProviders/resourceProviders or Microsoft.Solutions/applications resource ID. In these cases, the request will be forwarded to the associations resource type on the Microsoft.CustomProviders/resourceProviders instance you created.
+After you create the custom resource provider with the associations resource type, you can target it by using Microsoft.CustomProviders/associations. Microsoft.CustomProviders/associations is an extension resource that can extend any other Azure resource. When an instance of Microsoft.CustomProviders/associations is created, it will take a property **targetResourceId**, which should be a valid Microsoft.CustomProviders/resourceProviders or Microsoft.Solutions/applications resource ID. In these cases, the request will be forwarded to the associations resource type on the Microsoft.CustomProviders/resourceProviders instance you created.
> [!NOTE] > If a Microsoft.Solutions/applications resource ID is provided as the **targetResourceId**, there must be a Microsoft.CustomProviders/resourceProviders deployed in the managed resource group with the name "public".
-Sample Azure Custom Providers association:
+Sample Azure Custom Resource Providers association:
```JSON {
targetResourceId | Yes | The resource ID of the Microsoft.CustomProviders/resour
Resource onboarding works by extending other resources with the Microsoft.CustomProviders/associations extension resource. In the following sample, the request is made for a virtual machine, but any resource can be extended.
-First, you need to create a custom provider resource with an associations resource type. This will declare the callback URL that will be used when a corresponding Microsoft.CustomProviders/associations resource is created, which targets the custom provider.
+First, you need to create a custom resource provider resource with an associations resource type. This will declare the callback URL that will be used when a corresponding Microsoft.CustomProviders/associations resource is created, which targets the custom resource provider.
Sample Microsoft.CustomProviders/resourceProviders create request:
Content-Type: application/json
} ```
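A hedged sketch of the request body, based on the endpoint definition fields described earlier (the location and endpoint URL are hypothetical placeholders):

```json
{
  "location": "eastus",
  "properties": {
    "resourceTypes": [
      {
        "name": "associations",
        "routingType": "Proxy,Cache,Extension",
        "endpoint": "https://my-onboarding-endpoint.azurewebsites.net/api"
      }
    ]
  }
}
```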
-After you create the custom provider, you can target other resources and apply the side effects of the custom provider to them.
+After you create the custom resource provider, you can target other resources and apply the side effects of the custom resource provider to them.
Sample Microsoft.CustomProviders/associations create request:
Content-Type: application/json
} ```
-This request will then be forwarded to the endpoint specified in the custom provider you created, which is referenced by the **targetResourceId** in this form:
+This request will then be forwarded to the endpoint specified in the custom resource provider you created, which is referenced by the **targetResourceId** in this form:
``` HTTP PUT https://{endpointURL}/?api-version=2018-09-01-preview
If you have questions about Azure Custom Resource Providers development, try ask
## Next steps
-In this article, you learned about custom providers. See these articles to learn more:
+In this article, you learned about custom resource providers. See these articles to learn more:
-- [Tutorial: Resource onboarding with custom providers](./tutorial-resource-onboarding.md)
+- [Tutorial: Resource onboarding with custom resource providers](./tutorial-resource-onboarding.md)
- [Tutorial: Create custom actions and resources in Azure](./tutorial-get-started-with-custom-providers.md)-- [Quickstart: Create a custom resource provider and deploy custom resources](./create-custom-provider.md)
+- [Quickstart: Create Azure Custom Resource Provider and deploy custom resources](./create-custom-provider.md)
- [How to: Adding custom actions to an Azure REST API](./custom-providers-action-endpoint-how-to.md) - [How to: Adding custom resources to an Azure REST API](./custom-providers-resources-endpoint-how-to.md)
azure-resource-manager Create Custom Provider Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/create-custom-provider-quickstart-powershell.md
In this quickstart, you learn how to create your own Azure custom resource provi
[Az.CustomProviders](/powershell/module/az.customproviders) PowerShell module. > [!CAUTION]
-> Azure Custom Providers is currently in public preview. This preview version is provided without a
+> Azure Custom Resource Providers is currently in public preview. This preview version is provided without a
> service level agreement. It's not recommended for production workloads. Certain features might not > be supported or might have constrained capabilities. For more information, see > [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
azure-resource-manager Create Custom Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/create-custom-provider.md
-# Quickstart: Create a custom resource provider and deploy custom resources
+# Quickstart: Create Azure Custom Resource Provider and deploy custom resources
-In this quickstart, you create a custom resource provider and deploy custom resources for that resource provider. For more information about custom providers, see [Azure Custom Resource Providers Overview](overview.md).
+In this quickstart, you create a custom resource provider and deploy custom resources for that resource provider. For more information about custom resource providers, see [Azure Custom Resource Providers Overview](overview.md).
## Prerequisites
Azure CLI examples use `az rest` for `REST` requests. For more information, see
-## Deploy custom provider
+## Deploy custom resource provider
To set up the custom resource provider, deploy an [example template](https://github.com/Azure/azure-docs-json-samples/blob/master/custom-providers/customprovider.json) to your Azure subscription. The template deploys the following resources to your subscription: - Function app with the operations for the resources and actions.-- Storage account for storing users that are created through the custom provider.
+- Storage account for storing users that are created through the custom resource provider.
- Custom resource provider that defines the custom resource types and actions. It uses the function app endpoint for sending requests. - Custom resource from the custom resource provider.
Remove-AzResourceGroup -Name $rgName
## Next steps
-For an introduction to custom providers, see the following article:
+For an introduction to custom resource providers, see the following article:
> [!div class="nextstepaction"]
-> [Azure Custom Providers Preview overview](overview.md)
+> [Azure Custom Resource Providers Overview](overview.md)
azure-resource-manager Custom Providers Resources Endpoint How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/custom-providers-resources-endpoint-how-to.md
Sample Azure Resource Manager Template:
Parameter | Required | Description ||
-resourceTypeName | *yes* | The **name** of the **resourceType** defined in the custom provider.
+resourceTypeName | *yes* | The **name** of the **resourceType** defined in the custom resource provider.
resourceProviderName | *yes* | The custom resource provider instance name. customResourceName | *yes* | The custom resource name.
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/overview.md
Title: Overview of custom providers
+ Title: Overview of custom resource providers
description: Learn about Azure Custom Resource Providers and how to extend the Azure API plane to fit your workflows.
Azure Custom Resource Providers is an extensibility platform to Azure. It allows
- How to utilize Azure Custom Resource Providers to extend existing workflows. - Where to find guides and code samples to get started.
-![Custom provider overview](./media/overview/overview.png)
> [!IMPORTANT]
-> Custom Providers is currently in public preview.
+> Custom Resource Providers is currently in public preview.
> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
If you have questions for Azure Custom Resource Provider development, try asking
## Next steps
-In this article, you learned about custom providers. Go to the next article to create a custom provider.
+In this article, you learned about custom resource providers. Go to the next article to create a custom resource provider.
- [Quickstart: Create Azure Custom Resource Provider and deploy custom resources](./create-custom-provider.md) - [Tutorial: Create custom actions and resources in Azure](./tutorial-get-started-with-custom-providers.md)
azure-resource-manager Reference Custom Providers Csharp Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/reference-custom-providers-csharp-endpoint.md
Title: Custom provider C# RESTful endpoint reference
-description: Provides basic reference for an Azure Custom Providers C# RESTful endpoint. The endpoint is provided through an Azure function app.
+ Title: Custom resource provider C# RESTful endpoint reference
+description: Provides basic reference for an Azure Custom Resource Providers C# RESTful endpoint. The endpoint is provided through an Azure function app.
Last updated 05/15/2022
-# Custom provider C# RESTful endpoint reference
+# Custom resource provider C# RESTful endpoint reference
-This article is a basic reference for a custom provider C# RESTful endpoint. If you're unfamiliar with Azure Custom Providers, see [the overview on custom resource providers](overview.md).
+This article is a basic reference for a custom resource provider C# RESTful endpoint. If you're unfamiliar with Azure Custom Resource Providers, see [the overview on custom resource providers](overview.md).
## Azure Functions RESTful endpoint
-The following code works with a function app in Azure. To learn how to set up an function app to work with Azure Custom Providers, see [the tutorial on setting up Azure Functions for Azure Custom Providers](./tutorial-custom-providers-function-setup.md).
+The following code works with a function app in Azure. To learn how to set up a function app to work with Azure Custom Resource Providers, see [the tutorial on setting up Azure Functions for Azure Custom Resource Providers](./tutorial-custom-providers-function-setup.md).
```csharp #r "Newtonsoft.Json"
public class CustomResource : TableEntity
} /// <summary>
-/// Entry point for the Azure Function webhook that acts as the service behind a custom provider.
+/// Entry point for the Azure Function webhook that acts as the service behind a custom resource provider.
/// </summary> /// <param name="requestMessage">The HTTP request message.</param> /// <param name="log">The logger.</param>
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, ILogge
"application/json"); }
- log.LogInformation($"The Custom Provider Function received a request '{req.Method}' for resource '{requestPath}'.");
+ log.LogInformation($"The Custom Resource Provider Function received a request '{req.Method}' for resource '{requestPath}'.");
// Determines if it is a collection level call or action. var isResourceRequest = requestPath.Split('/').Length % 2 == 1;
public static async Task<HttpResponseMessage> TriggerCustomAction(HttpRequestMes
/// </summary> /// <param name="requestMessage">The HTTP request message.</param> /// <param name="tableStorage">The Azure Table storage account.</param>
-/// <param name="partitionKey">The partition key for storage. This is the custom provider ID.</param>
+/// <param name="partitionKey">The partition key for storage. This is the custom resource provider ID.</param>
/// <param name="resourceType">The resource type of the enumeration.</param> /// <returns>The HTTP response containing a list of resources stored under 'value'.</returns> public static async Task<HttpResponseMessage> EnumerateAllCustomResources(HttpRequestMessage requestMessage, CloudTable tableStorage, string partitionKey, string resourceType)
public static async Task<HttpResponseMessage> EnumerateAllCustomResources(HttpRe
/// </summary> /// <param name="requestMessage">The HTTP request message.</param> /// <param name="tableStorage">The Azure Table storage account.</param>
-/// <param name="partitionKey">The partition key for storage. This is the custom provider ID.</param>
+/// <param name="partitionKey">The partition key for storage. This is the custom resource provider ID.</param>
/// <param name="rowKey">The row key for storage. This is '{resourceType}:{customResourceName}'.</param> /// <returns>The HTTP response containing the existing custom resource.</returns> public static async Task<HttpResponseMessage> RetrieveCustomResource(HttpRequestMessage requestMessage, CloudTable tableStorage, string partitionKey, string rowKey)
public static async Task<HttpResponseMessage> RetrieveCustomResource(HttpRequest
/// <param name="requestMessage">The HTTP request message.</param> /// <param name="tableStorage">The Azure Table storage account.</param> /// <param name="azureResourceId">The parsed Azure resource ID.</param>
-/// <param name="partitionKey">The partition key for storage. This is the custom provider ID.</param>
+/// <param name="partitionKey">The partition key for storage. This is the custom resource provider ID.</param>
/// <param name="rowKey">The row key for storage. This is '{resourceType}:{customResourceName}'.</param> /// <returns>The HTTP response containing the created custom resource.</returns> public static async Task<HttpResponseMessage> CreateCustomResource(HttpRequestMessage requestMessage, CloudTable tableStorage, ResourceId azureResourceId, string partitionKey, string rowKey)
public static async Task<HttpResponseMessage> CreateCustomResource(HttpRequestMe
/// </summary> /// <param name="requestMessage">The HTTP request message.</param> /// <param name="tableStorage">The Azure Table storage account.</param>
-/// <param name="partitionKey">The partition key for storage. This is the custom provider ID.</param>
+/// <param name="partitionKey">The partition key for storage. This is the custom resource provider ID.</param>
/// <param name="rowKey">The row key for storage. This is '{resourceType}:{customResourceName}'.</param> /// <returns>The HTTP response containing the result of the deletion.</returns> public static async Task<HttpResponseMessage> RemoveCustomResource(HttpRequestMessage requestMessage, CloudTable tableStorage, string partitionKey, string rowKey)
public static async Task<HttpResponseMessage> RemoveCustomResource(HttpRequestMe
## Next steps - [Overview of Azure Custom Resource Providers](overview.md)-- [Tutorial: Create an Azure custom resource provider and deploy custom resources](./create-custom-provider.md)
+- [Quickstart: Create Azure Custom Resource Provider and deploy custom resources](./create-custom-provider.md)
- [How to: Adding custom actions to Azure REST API](./custom-providers-action-endpoint-how-to.md) - [Reference: Custom resource cache reference](proxy-cache-resource-endpoint-reference.md)
azure-resource-manager Tutorial Custom Providers Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/tutorial-custom-providers-create.md
Title: Create and use a custom provider
-description: This tutorial shows how to create and use an Azure Custom Provider. Use custom providers to change workflows on Azure.
+ Title: Create and use a custom resource provider
+description: This tutorial shows how to create and use an Azure Custom Resource Provider. Use custom resource providers to change workflows on Azure.
Last updated 05/06/2022
-# Create and use a custom provider
+# Create and use a custom resource provider
-A custom provider is a contract between Azure and an endpoint. With custom providers, you can change workflows on Azure. This tutorial shows the process of creating a custom provider. If you're unfamiliar with Azure Custom Providers, see [the overview of Azure Custom Resource Providers](overview.md).
+A custom resource provider is a contract between Azure and an endpoint. With custom resource providers, you can change workflows on Azure. This tutorial shows the process of creating a custom resource provider. If you're unfamiliar with Azure Custom Resource Providers, see [the overview of Azure Custom Resource Providers](overview.md).
-## Create a custom provider
+## Create a custom resource provider
> [!NOTE] > This tutorial does not show how to author an endpoint. If you don't have a RESTful endpoint, follow the [tutorial on authoring RESTful endpoints](./tutorial-custom-providers-function-authoring.md), which is the foundation for the current tutorial.
-After you create an endpoint, you can create a custom provider to generate a contract between the provider and the endpoint. With a custom provider, you can specify a list of endpoint definitions:
+After you create an endpoint, you can create a custom resource provider to generate a contract between the provider and the endpoint. With a custom resource provider, you can specify a list of endpoint definitions:
```JSON {
The value of **endpoint** is the trigger URL of the Azure function app. The `<yo
## Define custom actions and resources
-The custom provider contains a list of endpoint definitions modeled under the **actions** and **resourceTypes** properties. The **actions** property maps to the custom actions exposed by the custom provider, and the **resourceTypes** property is the custom resources. In this tutorial, the custom provider has an **actions** property named `myCustomAction` and a **resourceTypes** property named `myCustomResources`.
+The custom resource provider contains a list of endpoint definitions modeled under the **actions** and **resourceTypes** properties. The **actions** property maps to the custom actions exposed by the custom resource provider, and the **resourceTypes** property is the custom resources. In this tutorial, the custom resource provider has an **actions** property named `myCustomAction` and a **resourceTypes** property named `myCustomResources`.
```JSON {
The custom provider contains a list of endpoint definitions modeled under the **
} ```
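Filling in that shape, a hedged sketch of such a definition might look like this (the endpoint URLs stand in for your function app trigger URL):

```json
{
  "location": "eastus",
  "properties": {
    "actions": [
      {
        "name": "myCustomAction",
        "routingType": "Proxy",
        "endpoint": "https://my-function-app.azurewebsites.net/api"
      }
    ],
    "resourceTypes": [
      {
        "name": "myCustomResources",
        "routingType": "Proxy",
        "endpoint": "https://my-function-app.azurewebsites.net/api"
      }
    ]
  }
}
```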
-## Deploy the custom provider
+## Deploy the custom resource provider
> [!NOTE] > You must replace the **endpoint** values with the trigger URL from the function app created in the previous tutorial.
-You can deploy the previous custom provider by using an Azure Resource Manager template:
+You can deploy the previous custom resource provider by using an Azure Resource Manager template:
```JSON {
You can deploy the previous custom provider by using an Azure Resource Manager t
## Use custom actions and resources
-After you create a custom provider, you can use the new Azure APIs. The following sections explain how to call and use a custom provider.
+After you create a custom resource provider, you can use the new Azure APIs. The following sections explain how to call and use a custom resource provider.
### Custom actions #### Azure CLI > [!NOTE]
-> You must replace the `{subscriptionId}` and `{resourceGroupName}` placeholders with the subscription and resource group of where you deployed the custom provider.
+> You must replace the `{subscriptionId}` and `{resourceGroupName}` placeholders with the subscription and resource group where you deployed the custom resource provider.
```azurecli-interactive az resource invoke-action --action myCustomAction \
az resource invoke-action --action myCustomAction \
Parameter | Required | Description ||
-*action* | Yes | The name of the action defined in the custom provider.
-*ids* | Yes | The resource ID of the custom provider.
+*action* | Yes | The name of the action defined in the custom resource provider.
+*ids* | Yes | The resource ID of the custom resource provider.
*request-body* | No | The request body that will be sent to the endpoint. ### Custom resources
Parameter | Required | Description
# [Azure CLI](#tab/azure-cli) > [!NOTE]
-> You must replace the `{subscriptionId}` and `{resourceGroupName}` placeholders with the subscription and resource group of where you deployed the custom provider.
+> You must replace the `{subscriptionId}` and `{resourceGroupName}` placeholders with the subscription and resource group where you deployed the custom resource provider.
#### Create a custom resource
az resource create --is-full-object \
Parameter | Required | Description || *is-full-object* | Yes | Indicates whether the properties object includes other options like location, tags, SKU, or plan.
-*id* | Yes | The resource ID of the custom resource. This ID is an extension of the custom provider resource ID.
+*id* | Yes | The resource ID of the custom resource. This ID is an extension of the custom resource provider's resource ID.
*properties* | Yes | The request body that will be sent to the endpoint. #### Delete a custom resource
az resource delete --id /subscriptions/{subscriptionId}/resourceGroups/{resource
Parameter | Required | Description ||
-*id* | Yes | The resource ID of the custom resource. This ID is an extension of the custom provider resource ID.
+*id* | Yes | The resource ID of the custom resource. This ID is an extension of the custom resource provider's resource ID.
#### Retrieve a custom resource
az resource show --id /subscriptions/{subscriptionId}/resourceGroups/{resourceGr
Parameter | Required | Description ||
-*id* | Yes | The resource ID of the custom resource. This ID is an extension of the custom provider resource ID.
+*id* | Yes | The resource ID of the custom resource. This ID is an extension of the custom resource provider's resource ID.
# [Template](#tab/template)
A sample Resource Manager template:
Parameter | Required | Description ||
-*resourceTypeName* | Yes | The `name` value of the **resourceTypes** property defined in the custom provider.
-*resourceProviderName* | Yes | The custom provider instance name.
+*resourceTypeName* | Yes | The `name` value of the **resourceTypes** property defined in the custom resource provider.
+*resourceProviderName* | Yes | The custom resource provider's instance name.
*customResourceName* | Yes | The custom resource name. > [!NOTE]
-> After you finish deploying and using the custom provider, remember to clean up all created resources including the Azure function app.
+> After you finish deploying and using the custom resource provider, remember to clean up all created resources, including the Azure function app.
## Next steps
-In this article, you learned about custom providers. For more information, see:
+In this article, you learned about custom resource providers. For more information, see:
- [How to: Add custom actions to Azure REST API](./custom-providers-action-endpoint-how-to.md) - [How to: Add custom resources to Azure REST API](./custom-providers-resources-endpoint-how-to.md)
azure-resource-manager Tutorial Custom Providers Function Authoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/tutorial-custom-providers-function-authoring.md
Title: Author a RESTful endpoint
-description: This tutorial shows how to author a RESTful endpoint for custom providers. It details how to handle requests and responses for the supported RESTful HTTP methods.
+description: This tutorial shows how to author a RESTful endpoint for custom resource providers. It details how to handle requests and responses for the supported RESTful HTTP methods.
Last updated 05/06/2022
-# Author a RESTful endpoint for custom providers
+# Author a RESTful endpoint for custom resource providers
-A custom provider is a contract between Azure and an endpoint. With custom providers, you can customize workflows on Azure. This tutorial shows how to author a custom provider RESTful endpoint. If you're unfamiliar with Azure Custom Providers, see [the overview on custom resource providers](overview.md).
+A custom resource provider is a contract between Azure and an endpoint. With custom resource providers, you can customize workflows on Azure. This tutorial shows how to author a custom resource provider RESTful endpoint. If you're unfamiliar with Azure Custom Resource Providers, see [the overview on custom resource providers](overview.md).
> [!NOTE]
-> This tutorial builds on the tutorial [Set up Azure Functions for Azure Custom Providers](./tutorial-custom-providers-function-setup.md). Some of the steps in this tutorial work only if a function app has been set up in Azure Functions to work with custom providers.
+> This tutorial builds on the tutorial [Set up Azure Functions for custom resource providers](./tutorial-custom-providers-function-setup.md). Some of the steps in this tutorial work only if a function app has been set up in Azure Functions to work with custom resource providers.
## Work with custom actions and custom resources
-In this tutorial, you update the function app to work as a RESTful endpoint for your custom provider. Resources and actions in Azure are modeled after the following basic RESTful specification:
+In this tutorial, you update the function app to work as a RESTful endpoint for your custom resource provider. Resources and actions in Azure are modeled after the following basic RESTful specification:
- **PUT**: Create a new resource - **GET (instance)**: Retrieve an existing resource
In this tutorial, you update the function app to work as a RESTful endpoint for
## Partition custom resources in storage
-Because you're creating a RESTful service, you need to store the created resources. For Azure Table storage, you need to generate partition and row keys for your data. For custom providers, data should be partitioned to the custom provider. When an incoming request is sent to the custom provider, the custom provider adds the `x-ms-customproviders-requestpath` header to outgoing requests to the endpoint.
+Because you're creating a RESTful service, you need to store the created resources. For Azure Table storage, you need to generate partition and row keys for your data. For custom resource providers, data should be partitioned per custom resource provider instance. When an incoming request is sent to the custom resource provider, the custom resource provider adds the `x-ms-customproviders-requestpath` header to outgoing requests to the endpoint.
The following example shows an `x-ms-customproviders-requestpath` header for a custom resource:
Based on the `x-ms-customproviders-requestpath` header, you can create the *part
Parameter | Template | Description ||
-*partitionKey* | `{subscriptionId}:{resourceGroupName}:{resourceProviderName}` | The *partitionKey* parameter specifies how the data is partitioned. Usually the data is partitioned by the custom provider instance.
+*partitionKey* | `{subscriptionId}:{resourceGroupName}:{resourceProviderName}` | The *partitionKey* parameter specifies how the data is partitioned. Usually the data is partitioned by the custom resource provider instance.
*rowKey* | `{myResourceType}:{myResourceName}` | The *rowKey* parameter specifies the individual identifier for the data. Usually the identifier is the name of the resource. You also need to create a new class to model your custom resource. In this tutorial, you add the following **CustomResource** class to your function app:
public class CustomResource : ITableEntity
**CustomResource** is a simple, generic class that accepts any input data. It's based on **ITableEntity**, which is used to store data. The **CustomResource** class implements all properties from interface **ITableEntity**: **timestamp**, **eTag**, **partitionKey**, and **rowKey**.
-## Support custom provider RESTful methods
+## Support custom resource provider RESTful methods
> [!NOTE] > If you aren't copying the code directly from this tutorial, the response content must be valid JSON that sets the `Content-Type` header to `application/json`.
-Now that you've set up data partitioning, create the basic CRUD and trigger methods for custom resources and custom actions. Because custom providers act as proxies, the RESTful endpoint must model and handle the request and response. The following code snippets show how to handle the basic RESTful operations.
+Now that you've set up data partitioning, create the basic CRUD and trigger methods for custom resources and custom actions. Because custom resource providers act as proxies, the RESTful endpoint must model and handle the request and response. The following code snippets show how to handle the basic RESTful operations.
### Trigger a custom action
-For custom providers, a custom action is triggered through POST requests. A custom action can optionally accept a request body that contains a set of input parameters. The action then returns a response that signals the result of the action and whether it succeeded or failed.
+For custom resource providers, a custom action is triggered through POST requests. A custom action can optionally accept a request body that contains a set of input parameters. The action then returns a response that signals the result of the action and whether it succeeded or failed.
Add the following **TriggerCustomAction** method to your function app:
The **TriggerCustomAction** method accepts an incoming request and echoes back t
### Create a custom resource
-For custom providers, a custom resource is created through PUT requests. The custom provider accepts a JSON request body, which contains a set of properties for the custom resource. Resources in Azure follow a RESTful model. You can use the same request URL to create, retrieve, or delete a resource.
+For custom resource providers, a custom resource is created through PUT requests. The custom resource provider accepts a JSON request body, which contains a set of properties for the custom resource. Resources in Azure follow a RESTful model. You can use the same request URL to create, retrieve, or delete a resource.
Add the following **CreateCustomResource** method to create new resources:
Add the following **CreateCustomResource** method to create new resources:
/// <param name="requestMessage">The HTTP request message.</param> /// <param name="tableClient">The client that allows you to interact with Azure Tables hosted in either Azure storage accounts or Azure Cosmos DB table API.</param> /// <param name="azureResourceId">The parsed Azure resource ID.</param>
-/// <param name="partitionKey">The partition key for storage. This is the custom provider ID.</param>
+/// <param name="partitionKey">The partition key for storage. This is the custom resource provider ID.</param>
/// <param name="rowKey">The row key for storage. This is '{resourceType}:{customResourceName}'.</param> /// <returns>The HTTP response containing the created custom resource.</returns> public static async Task<HttpResponseMessage> CreateCustomResource(HttpRequestMessage requestMessage, TableClient tableClient, ResourceId azureResourceId, string partitionKey, string rowKey)
public static async Task<HttpResponseMessage> CreateCustomResource(HttpRequestMe
} ```
-The **CreateCustomResource** method updates the incoming request to include the Azure-specific fields **id**, **name**, and **type**. These fields are top-level properties used by services across Azure. They let the custom provider interoperate with other services like Azure Policy, Azure Resource Manager templates, and Azure Activity Log.
+The **CreateCustomResource** method updates the incoming request to include the Azure-specific fields **id**, **name**, and **type**. These fields are top-level properties used by services across Azure. They let the custom resource provider interoperate with other services like Azure Policy, Azure Resource Manager templates, and Azure Activity Log.
Property | Example | Description ||
In addition to adding the properties, you also saved the JSON document to Azure
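A minimal sketch of the core of **CreateCustomResource**, using the signature shown above; the `ResourceId` property names and the `Data` column are assumptions rather than the reference sample's exact code:

```csharp
// A sketch only. Assumes the usings from the previous snippets plus:
// using Azure.Data.Tables; using Microsoft.Azure.Management.ResourceManager.Fluent.Core;
public static async Task<HttpResponseMessage> CreateCustomResource(HttpRequestMessage requestMessage,
    TableClient tableClient, ResourceId azureResourceId, string partitionKey, string rowKey)
{
    var myResource = JObject.Parse(await requestMessage.Content.ReadAsStringAsync());

    // Add the Azure-specific top-level properties (property names on ResourceId are assumptions).
    myResource["id"] = azureResourceId.Id;
    myResource["name"] = azureResourceId.Name;
    myResource["type"] = azureResourceId.FullResourceType;

    // Persist the document under the custom resource provider partition.
    await tableClient.UpsertEntityAsync(new CustomResource
    {
        PartitionKey = partitionKey,
        RowKey = rowKey,
        Data = myResource.ToString(),
    });

    // Return the stored resource to the caller.
    return new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StringContent(myResource.ToString(), Encoding.UTF8, "application/json")
    };
}
```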
### Retrieve a custom resource
-For custom providers, a custom resource is retrieved through GET requests. A custom provider *doesn't* accept a JSON request body. For GET requests, the endpoint uses the `x-ms-customproviders-requestpath` header to return the already created resource.
+For custom resource providers, a custom resource is retrieved through GET requests. A custom resource provider *doesn't* accept a JSON request body. For GET requests, the endpoint uses the `x-ms-customproviders-requestpath` header to return the already created resource.
Add the following **RetrieveCustomResource** method to retrieve existing resources:
Add the following **RetrieveCustomResource** method to retrieve existing resourc
/// </summary> /// <param name="requestMessage">The HTTP request message.</param> /// <param name="tableClient">The client that allows you to interact with Azure Tables hosted in either Azure storage accounts or Azure Cosmos DB table API.</param>
-/// <param name="partitionKey">The partition key for storage. This is the custom provider ID.</param>
+/// <param name="partitionKey">The partition key for storage. This is the custom resource provider ID.</param>
/// <param name="rowKey">The row key for storage. This is '{resourceType}:{customResourceName}'.</param> /// <returns>The HTTP response containing the existing custom resource.</returns> public static async Task<HttpResponseMessage> RetrieveCustomResource(HttpRequestMessage requestMessage, TableClient tableClient, string partitionKey, string rowKey)
In Azure, resources follow a RESTful model. The request URL that creates a resou
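A minimal sketch of **RetrieveCustomResource**, assuming the `Azure.Data.Tables` client and the `CustomResource` entity from earlier:

```csharp
// A sketch only; signature as documented above.
public static async Task<HttpResponseMessage> RetrieveCustomResource(HttpRequestMessage requestMessage,
    TableClient tableClient, string partitionKey, string rowKey)
{
    // Look the entity up by the same keys that were used when it was created.
    var existingResource = await tableClient.GetEntityIfExistsAsync<CustomResource>(partitionKey, rowKey);

    if (!existingResource.HasValue)
    {
        return new HttpResponseMessage(HttpStatusCode.NotFound);
    }

    // Return the stored JSON document as-is.
    return new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StringContent(existingResource.Value.Data, Encoding.UTF8, "application/json")
    };
}
```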
### Remove a custom resource
-For custom providers, a custom resource is removed through DELETE requests. A custom provider *doesn't* accept a JSON request body. For a DELETE request, the endpoint uses the `x-ms-customproviders-requestpath` header to delete the already created resource.
+For custom resource providers, a custom resource is removed through DELETE requests. A custom resource provider *doesn't* accept a JSON request body. For a DELETE request, the endpoint uses the `x-ms-customproviders-requestpath` header to delete the already created resource.
Add the following **RemoveCustomResource** method to remove existing resources:
/// </summary> /// <param name="requestMessage">The HTTP request message.</param> /// <param name="tableClient">The client that allows you to interact with Azure Tables hosted in either Azure storage accounts or Azure Cosmos DB table API.</param>
-/// <param name="partitionKey">The partition key for storage. This is the custom provider ID.</param>
+/// <param name="partitionKey">The partition key for storage. This is the custom resource provider ID.</param>
/// <param name="rowKey">The row key for storage. This is '{resourceType}:{customResourceName}'.</param> /// <returns>The HTTP response containing the result of the deletion.</returns> public static async Task<HttpResponseMessage> RemoveCustomResource(HttpRequestMessage requestMessage, TableClient tableClient, string partitionKey, string rowKey)
In Azure, resources follow a RESTful model. The request URL that creates a resou
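A minimal sketch of **RemoveCustomResource** under the same assumptions:

```csharp
// A sketch only; signature as documented above.
public static async Task<HttpResponseMessage> RemoveCustomResource(HttpRequestMessage requestMessage,
    TableClient tableClient, string partitionKey, string rowKey)
{
    // DELETE is idempotent: a missing entity is not an error.
    var existingResource = await tableClient.GetEntityIfExistsAsync<CustomResource>(partitionKey, rowKey);

    if (!existingResource.HasValue)
    {
        return new HttpResponseMessage(HttpStatusCode.NoContent);
    }

    await tableClient.DeleteEntityAsync(partitionKey, rowKey);
    return new HttpResponseMessage(HttpStatusCode.OK);
}
```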
### List all custom resources
-For custom providers, you can enumerate a list of existing custom resources by using collection GET requests. A custom provider *doesn't* accept a JSON request body. For a collection of GET requests, the endpoint uses the `x-ms-customproviders-requestpath` header to enumerate the already created resources.
+For custom resource providers, you can enumerate a list of existing custom resources by using collection GET requests. A custom resource provider *doesn't* accept a JSON request body. For a collection of GET requests, the endpoint uses the `x-ms-customproviders-requestpath` header to enumerate the already created resources.
Add the following **EnumerateAllCustomResources** method to enumerate the existing resources:
Add the following **EnumerateAllCustomResources** method to enumerate the existi
/// </summary> /// <param name="requestMessage">The HTTP request message.</param> /// <param name="tableClient">The client that allows you to interact with Azure Tables hosted in either Azure storage accounts or Azure Cosmos DB table API.</param>
-/// <param name="partitionKey">The partition key for storage. This is the custom provider ID.</param>
+/// <param name="partitionKey">The partition key for storage. This is the custom resource provider ID.</param>
/// <param name="resourceType">The resource type of the enumeration.</param> /// <returns>The HTTP response containing a list of resources stored under 'value'.</returns> public static async Task<HttpResponseMessage> EnumerateAllCustomResources(HttpRequestMessage requestMessage, TableClient tableClient, string partitionKey, string resourceType)
public static async Task<HttpResponseMessage> EnumerateAllCustomResources(HttpRe
> [!NOTE] > The `RowKey` comparisons `QueryComparisons.GreaterThan` and `QueryComparisons.LessThan` are Azure Table storage syntax for performing a "starts with" query on strings.
-To list all existing resources, generate an Azure Table storage query that ensures the resources exist under your custom provider partition. The query then checks that the row key starts with the same `{myResourceType}` value.
+To list all existing resources, generate an Azure Table storage query that ensures the resources exist under your custom resource provider partition. The query then checks that the row key starts with the same `{myResourceType}` value.
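A minimal sketch of **EnumerateAllCustomResources** that expresses the prefix trick as an OData filter on the `Azure.Data.Tables` client; the filter string and response shape are illustrative:

```csharp
// A sketch only; signature as documented above.
public static async Task<HttpResponseMessage> EnumerateAllCustomResources(HttpRequestMessage requestMessage,
    TableClient tableClient, string partitionKey, string resourceType)
{
    // "Starts with '{resourceType}:'" expressed as a lexical range on RowKey:
    // greater than "{resourceType}:" and less than "{resourceType};" (';' follows ':' in ASCII).
    var filter = $"PartitionKey eq '{partitionKey}' and RowKey gt '{resourceType}:' and RowKey lt '{resourceType};'";

    var resources = new JArray();
    await foreach (var entity in tableClient.QueryAsync<CustomResource>(filter))
    {
        resources.Add(JObject.Parse(entity.Data));
    }

    // Azure collection GET responses wrap the items in a top-level "value" array.
    return new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StringContent(new JObject { ["value"] = resources }.ToString(), Encoding.UTF8, "application/json")
    };
}
```

The `gt`/`lt` bounds work because `;` is the next character after `:`, so every row key that starts with `{resourceType}:` falls inside the range.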
## Integrate RESTful operations
After all the RESTful methods are added to the function app, update the main **R
```csharp /// <summary>
-/// Entry point for the function app webhook that acts as the service behind a custom provider.
+/// Entry point for the function app webhook that acts as the service behind a custom resource provider.
/// </summary> /// <param name="requestMessage">The HTTP request message.</param> /// <param name="log">The logger.</param>
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, ILogge
"application/json"); }
- log.LogInformation($"The Custom Provider Function received a request '{req.Method}' for resource '{requestPath}'.");
+ log.LogInformation($"The Custom Resource Provider Function received a request '{req.Method}' for resource '{requestPath}'.");
// Determines if it is a collection level call or action. var isResourceRequest = requestPath.Split('/').Length % 2 == 1;
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, ILogge
} ```
-The updated **Run** method now includes the *tableClient* input binding that you added for Azure Table storage. The first part of the method reads the `x-ms-customproviders-requestpath` header and uses the `Microsoft.Azure.Management.ResourceManager.Fluent` library to parse the value as a resource ID. The `x-ms-customproviders-requestpath` header is sent by the custom provider and specifies the path of the incoming request.
+The updated **Run** method now includes the *tableClient* input binding that you added for Azure Table storage. The first part of the method reads the `x-ms-customproviders-requestpath` header and uses the `Microsoft.Azure.Management.ResourceManager.Fluent` library to parse the value as a resource ID. The `x-ms-customproviders-requestpath` header is sent by the custom resource provider and specifies the path of the incoming request.
By using the parsed resource ID, you can generate the **partitionKey** and **rowKey** values for the data to look up or to store custom resources.
using Newtonsoft.Json;
using Newtonsoft.Json.Linq; ```
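As described above, the parsed resource ID yields the two storage keys. One plausible way to derive them (a sketch only; the full reference sample also handles collection-level calls):

```csharp
// A sketch only. Assumes: using System.Linq; and the Fluent ResourceId parser referenced above.
// Error handling and collection-level calls are omitted.
var requestPath = req.Headers.GetValues("x-ms-customproviders-requestpath").FirstOrDefault();
var azureResourceId = ResourceId.FromString(requestPath);

// partitionKey: {subscriptionId}:{resourceGroupName}:{resourceProviderName}
var partitionKey = $"{azureResourceId.SubscriptionId}:{azureResourceId.ResourceGroupName}:{azureResourceId.Parent.Name}";

// rowKey: {myResourceType}:{myResourceName}
var rowKey = $"{azureResourceId.ResourceType}:{azureResourceId.Name}";
```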
-If you get lost at any point of this tutorial, you can find the complete code sample in the [custom provider C# RESTful endpoint reference](./reference-custom-providers-csharp-endpoint.md). After you've finished the function app, save the function app URL. It can be used to trigger the function app in later tutorials.
+If you get lost at any point of this tutorial, you can find the complete code sample in the [custom resource provider C# RESTful endpoint reference](./reference-custom-providers-csharp-endpoint.md). After you've finished the function app, save the function app URL. It can be used to trigger the function app in later tutorials.
## Next steps
-In this article, you authored a RESTful endpoint to work with an Azure Custom Provider endpoint. To learn how to create a custom provider, go to the article [Tutorial: Creating a custom provider](./tutorial-custom-providers-create.md).
+In this article, you authored a RESTful endpoint to work with an Azure Custom Resource Provider endpoint. To learn how to create a custom resource provider, go to the article [Create and use a custom resource provider](./tutorial-custom-providers-create.md).
azure-resource-manager Tutorial Custom Providers Function Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/tutorial-custom-providers-function-setup.md
Title: Set up Azure Functions
-description: This tutorial describes how to create a function app in Azure Functions that works with Azure Custom Providers.
+description: This tutorial describes how to create a function app in Azure Functions that works with Azure Custom Resource Providers.
Last updated 09/20/2022
-# Set up Azure Functions for custom providers
+# Set up Azure Functions for custom resource providers
-A custom provider is a contract between Azure and an endpoint. With custom providers, you can change workflows in Azure. This tutorial shows how to set up a function app in Azure Functions to work as a custom provider endpoint.
+A custom resource provider is a contract between Azure and an endpoint. With custom resource providers, you can change workflows in Azure. This tutorial shows how to set up a function app in Azure Functions to work as a custom resource provider endpoint.
## Create the function app > [!NOTE]
-> In this tutorial, you create a simple service endpoint that uses a function app in Azure Functions. However, a custom provider can use any publicly accessible endpoint. Alternatives include Azure Logic Apps, Azure API Management, and the Web Apps feature of Azure App Service.
+> In this tutorial, you create a simple service endpoint that uses a function app in Azure Functions. However, a custom resource provider can use any publicly accessible endpoint. Alternatives include Azure Logic Apps, Azure API Management, and the Web Apps feature of Azure App Service.
To start this tutorial, you should first follow the tutorial [Create your first function app in the Azure portal](../../azure-functions/functions-get-started.md). That tutorial creates a .NET core webhook function that can be modified in the Azure portal. It's also the foundation for the current tutorial.
To install the Azure Table storage bindings:
1. In the **Table name** box, enter *myCustomResources*. 1. Select **Save** to save the updated input parameter. ## Update RESTful HTTP methods
-To set up the Azure function to include the custom provider RESTful request methods:
+To set up the Azure function to include the custom resource provider RESTful request methods:
1. Go to the **Integrate** tab for the `HttpTrigger`. 1. Under **Selected HTTP methods**, select **GET**, **POST**, **DELETE**, and **PUT**. ## Add Azure Resource Manager NuGet packages > [!NOTE] > If your C# project file is missing from the project directory, you can add it manually, or it will appear after the `Microsoft.Azure.WebJobs.Extensions.Storage` extension is installed on the function app.
-Next, update the C# project file to include helpful NuGet libraries. These libraries make it easier to parse incoming requests from custom providers. Follow the steps to [add extensions from the portal](../../azure-functions/functions-bindings-register.md) and update the C# project file to include the following package references:
+Next, update the C# project file to include helpful NuGet libraries. These libraries make it easier to parse incoming requests from custom resource providers. Follow the steps to [add extensions from the portal](../../azure-functions/functions-bindings-register.md) and update the C# project file to include the following package references:
```xml <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.Storage" Version="3.0.4" />
The following XML element is an example C# project file:
## Next steps
-In this tutorial, you set up a function app in Azure Functions to work as an Azure Custom Provider endpoint.
+In this tutorial, you set up a function app in Azure Functions to work as an Azure Custom Resource Provider endpoint.
-To learn how to author a RESTful custom provider endpoint, see [Tutorial: Authoring a RESTful custom provider endpoint](./tutorial-custom-providers-function-authoring.md).
+To learn how to author a RESTful custom resource provider endpoint, see [Author a RESTful endpoint for custom resource providers](./tutorial-custom-providers-function-authoring.md).
azure-resource-manager Tutorial Get Started With Custom Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/tutorial-get-started-with-custom-providers.md
# Create custom actions and resources in Azure
-A custom provider is a contract between Azure and an endpoint. With custom providers, you can change workflows in Azure by adding new APIs into Azure Resource Manager. With these custom APIs, Resource Manager can use new deployment and management capabilities.
+A custom resource provider is a contract between Azure and an endpoint. With custom resource providers, you can change workflows in Azure by adding new APIs into Azure Resource Manager. With these custom APIs, Resource Manager can use new deployment and management capabilities.
This tutorial goes through a simple example of how to add new actions and resources to Azure and how to integrate them.
-## Set up Azure Functions for Azure Custom Providers
+## Set up Azure Functions for Azure Custom Resource Providers
-Part one of this tutorial describes how to set up an Azure function app to work with custom providers:
+Part one of this tutorial describes how to set up an Azure function app to work with custom resource providers:
-- [Set up Azure Functions for Azure Custom Providers](./tutorial-custom-providers-function-setup.md)
+- [Set up Azure Functions for Azure Custom Resource Providers](./tutorial-custom-providers-function-setup.md)
-Custom providers can work with any public URL.
+Custom resource providers can work with any public URL.
-## Author a RESTful endpoint for custom providers
+## Author a RESTful endpoint for custom resource providers
-Part two of this tutorial describes how to author a RESTful endpoint for custom providers:
+Part two of this tutorial describes how to author a RESTful endpoint for custom resource providers:
-- [Authoring a RESTful endpoint for custom providers](./tutorial-custom-providers-function-authoring.md)
+- [Authoring a RESTful endpoint for custom resource providers](./tutorial-custom-providers-function-authoring.md)
-## Create and use a custom provider
+## Create and use a custom resource provider
-Part three of this tutorial describes how to create a custom provider and use its custom actions and resources:
+Part three of this tutorial describes how to create a custom resource provider and use its custom actions and resources:
-- [Create and use a custom provider](./tutorial-custom-providers-create.md)
+- [Create and use a custom resource provider](./tutorial-custom-providers-create.md)
## Next steps
-In this tutorial, you learned about custom providers and how to build one. To continue to the next tutorial, see [Tutorial: Set up Azure Functions for Azure Custom Providers](./tutorial-custom-providers-function-setup.md).
+In this tutorial, you learned about custom resource providers and how to build one. To continue to the next tutorial, see [Tutorial: Set up Azure Functions for Azure Custom Resource Providers](./tutorial-custom-providers-function-setup.md).
If you're looking for references or a quickstart, here are some useful links: -- [Quickstart: Create an Azure custom resource provider and deploy custom resources](./create-custom-provider.md)
+- [Quickstart: Create Azure Custom Resource Provider and deploy custom resources](./create-custom-provider.md)
- [How to: Adding custom actions to Azure REST API](./custom-providers-action-endpoint-how-to.md) - [How to: Adding custom resources to Azure REST API](./custom-providers-resources-endpoint-how-to.md)
azure-resource-manager Tutorial Resource Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/tutorial-resource-onboarding.md
Title: Extend resources with custom providers
-description: Resource onboarding through custom providers allows you to manipulate and extend existing Azure resources.
+ Title: Extend resources with custom resource providers
+description: Resource onboarding through custom resource providers allows you to manipulate and extend existing Azure resources.
Last updated 05/06/2022
-# Extend resources with custom providers
+# Extend resources with custom resource providers
-In this tutorial, you deploy a custom resource provider to Azure that extends the Azure Resource Manager API with the Microsoft.CustomProviders/associations resource type. The tutorial shows how to extend existing resources that are outside the resource group where the custom provider instance is located. In this tutorial, the custom resource provider is powered by an Azure logic app, but you can use any public API endpoint.
+In this tutorial, you deploy a custom resource provider to Azure that extends the Azure Resource Manager API with the Microsoft.CustomProviders/associations resource type. The tutorial shows how to extend existing resources that are outside the resource group where the custom resource provider instance is located. In this tutorial, the custom resource provider is powered by an Azure logic app, but you can use any public API endpoint.
## Prerequisites To complete this tutorial, make sure you review the following:
-* The capabilities of [Azure Custom Providers](overview.md).
-* Basic information about [resource onboarding with custom providers](concepts-resource-onboarding.md).
+* The capabilities of [Azure Custom Resource Providers](overview.md).
+* Basic information about [resource onboarding with custom resource providers](concepts-resource-onboarding.md).
## Get started with resource onboarding
-In this tutorial, there are two pieces that need to be deployed: **the custom provider** and **the association**. To make the process easier, you can optionally use a single template that deploys both.
+In this tutorial, there are two pieces that need to be deployed: **the custom resource provider** and **the association**. To make the process easier, you can optionally use a single template that deploys both.
The template will use these resources:
"type": "string", "defaultValue": "[uniqueString(resourceGroup().id)]", "metadata": {
- "description": "Name of the custom provider to be created."
+ "description": "Name of the custom resource provider to be created."
} }, "customResourceProviderId": { "type": "string", "defaultValue": "", "metadata": {
- "description": "The resource ID of an existing custom provider. Provide this to skip deployment of new logic app and custom provider."
+ "description": "The resource ID of an existing custom resource provider. Provide this to skip deployment of new logic app and custom resource provider."
} }, "associationName": {
The template will use these resources:
} ```
-### Deploy the custom provider infrastructure
+### Deploy the custom resource provider infrastructure
-The first part of the template deploys the custom provider infrastructure. This infrastructure defines the effect of the associations resource. If you're not familiar with custom providers, see [Custom provider basics](overview.md).
+The first part of the template deploys the custom resource provider infrastructure. This infrastructure defines the effect of the associations resource. If you're not familiar with custom resource providers, see [Azure Custom Resource Providers Overview](overview.md).
-Let's deploy the custom provider infrastructure. Either copy, save, and deploy the preceding template, or follow along and deploy the infrastructure using the Azure portal.
+Let's deploy the custom resource provider infrastructure. Either copy, save, and deploy the preceding template, or follow along and deploy the infrastructure using the Azure portal.
1. Go to the [Azure portal](https://portal.azure.com).
Let's deploy the custom provider infrastructure. Either copy, save, and deploy t
| Location | Yes | The location for the resources in the template. | | Logic App Name | No | The name of the logic app. | | Custom Resource Provider Name | No | The custom resource provider name. |
- | Custom Resource Provider Id | No | An existing custom resource provider that supports the association resource. If you specify a value here, the logic app and custom provider deployment will be skipped. |
+ | Custom Resource Provider Id | No | An existing custom resource provider that supports the association resource. If you specify a value here, the logic app and custom resource provider deployment will be skipped. |
| Association Name | No | The name of the association resource. | Sample parameters:
Let's deploy the custom provider infrastructure. Either copy, save, and deploy t
Here's the resource group, with **Show hidden types** selected:
- ![Custom provider deployment](media/tutorial-resource-onboarding/showhidden.png)
+ ![Custom resource provider deployment](media/tutorial-resource-onboarding/showhidden.png)
10. Explore the logic app **Runs history** tab to see the calls for the association create:
Let's deploy the custom provider infrastructure. Either copy, save, and deploy t
## Deploy additional associations
-After you have the custom provider infrastructure set up, you can easily deploy more associations. The resource group for additional associations doesn't have to be the same as the resource group where you deployed the custom provider infrastructure. To create an association, you need to have Microsoft.CustomProviders/resourceproviders/write permissions on the specified Custom Resource Provider ID.
+After you have the custom resource provider infrastructure set up, you can easily deploy more associations. The resource group for additional associations doesn't have to be the same as the resource group where you deployed the custom resource provider infrastructure. To create an association, you need to have Microsoft.CustomProviders/resourceproviders/write permissions on the specified Custom Resource Provider ID.
-1. Go to the custom provider **Microsoft.CustomProviders/resourceProviders** resource in the resource group of the previous deployment. You need to select the **Show hidden types** check box:
+1. Go to the custom resource provider **Microsoft.CustomProviders/resourceProviders** resource in the resource group of the previous deployment. You need to select the **Show hidden types** check box:
![Go to the resource](media/tutorial-resource-onboarding/showhidden.png)
-2. Copy the Resource ID property of the custom provider.
+2. Copy the Resource ID property of the custom resource provider.
3. Search for *templates* in **All Services** or by using the main search box:
After you have the custom provider infrastructure set up, you can easily deploy
![Select the previously created template and then select Deploy](media/tutorial-resource-onboarding/templateselectspecific.png)
-5. Enter the settings for the required fields and then select the subscription and a different resource group. For the **Custom Resource Provider Id** setting, enter the Resource ID that you copied from the custom provider that you deployed earlier.
+5. Enter the settings for the required fields and then select the subscription and a different resource group. For the **Custom Resource Provider Id** setting, enter the Resource ID that you copied from the custom resource provider that you deployed earlier.
6. Go to the deployment and wait for it to finish. It should now deploy only the new associations resource:
You can go back to the logic app **Run history** and see that another call was m
## Next steps
-In this article, you deployed a custom resource provider to Azure that extends the Azure Resource Manager API with the Microsoft.CustomProviders/associates resource type. To continue learning about custom providers, see:
+In this article, you deployed a custom resource provider to Azure that extends the Azure Resource Manager API with the Microsoft.CustomProviders/associations resource type. To continue learning about custom resource providers, see:
-* [Deploy associations for a custom provider using Azure Policy](./concepts-built-in-policy.md)
-* [Azure Custom Providers resource onboarding overview](./concepts-resource-onboarding.md)
+* [Deploy associations for a custom resource provider using Azure Policy](./concepts-built-in-policy.md)
+* [Azure Custom Resource Providers resource onboarding overview](./concepts-resource-onboarding.md)
azure-video-indexer Clapperboard Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/clapperboard-metadata.md
+
+ Title: Enable and view a clapperboard with extracted metadata
+description: Learn about how to enable and view a clapperboard with extracted metadata.
+++ Last updated : 09/20/2022+++
+# Enable and view a clapperboard with extracted metadata (preview)
+
+The clapperboard insight is used to detect clapper board instances and information written on each. For example, *head* or *tail* (the board is upside-down), *production*, *roll*, *scene*, *take*, etc. A [clapperboard](https://en.wikipedia.org/wiki/Clapperboard)'s extracted metadata is most useful to customers involved in the movie post-production process.
+
+When the movie is being edited, the slate is removed from the scene, but the metadata written on the clapper board remains important. Azure Video Indexer extracts the data from clapperboards, and preserves and presents the metadata as described in this article.
+
+This insight is most useful to customers involved in the movie post-production process.
+
+## View the insight
+
+### View post-production insights
+
+In order to set the indexing process to include the slate metadata, select the **Video + audio indexing** -> **Advanced** presets.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/slate-detection-process/advanced-setting.png" alt-text="This image shows the advanced setting in order to view post-production clapperboards insights.":::
+
+After the file has been uploaded and indexed, if you want to view the timeline of the insight, select the **Post-production** checkmark from the list of insights.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/slate-detection-process/post-production-checkmark.png" alt-text="This image shows the post-production checkmark needed to view clapperboards.":::
+
+### Clapperboards
+
+Clapperboards contain titles (for example, *production*, *roll*, *scene*, and *take*) and the values associated with each title.
+
+The detection quality of the titles and their values may vary, and they may not always be recognized correctly. For more information, see [limitations](#clapperboard-limitations).
+
+For example, take this clapperboard:
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/slate-detection-process/clapperboard.png" alt-text="This image shows a clapperboard.":::
+
+In this example, the board contains the following fields:
+
+|title|content|
+|||
+|camera|COD|
+|date|FILTER (in this case the board contains no date)|
+|director|John|
+|production|Prod name|
+|scene|FPS|
+|take|99|
+
+#### View the insight
+
+To see the instances on the website, select **Insights** and scroll to **Clapperboards**. You can hover over each clapperboard, or unfold **Show/Hide clapperboard info** and see the metadata:
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/slate-detection-process/clapperboard-metadata.png" alt-text="This image shows the clapperboard metadata.":::
+
+#### View the timeline
+
+If you selected the **Post-production** checkmark, you can also find the clapperboard instance and its timeline (including time and field values) on the **Timeline** tab.
+
+#### View JSON
+
+To display the JSON file:
+
+1. Select Download and then Insights (JSON).
+1. Copy the `clapperboard` element, under `insights`, and paste it into an online JSON viewer.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/slate-detection-process/clapperboard-json.png" alt-text="This image shows the clapperboard metadata in json.":::
+
+The following table describes the fields found in the JSON:
+
+|Name|Description|
+|||
+|`id`|The clapperboard ID.|
+|`thumbnailId`|The ID of the thumbnail.|
+|`isHeadSlate`|Indicates whether the detected board is a head slate (`true`) or a tail slate, where the board is held upside-down (`false`).|
+|`fields`|The fields found in the clapper board; also each field's name and value.|
+|`instances`|A list of time ranges where this element appeared.|
+
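If you consume this JSON programmatically, a hand-written model along these lines can be used for deserialization; the property types are assumptions based on the preceding table, not an official contract:

```csharp
// Illustrative only; not an official Azure Video Indexer SDK type.
using System.Collections.Generic;

public class ClapperboardField
{
    public string Name { get; set; }
    public string Value { get; set; }
}

public class ClapperboardInstance
{
    public string Start { get; set; }
    public string End { get; set; }
}

public class Clapperboard
{
    public int Id { get; set; }
    public string ThumbnailId { get; set; }
    public bool IsHeadSlate { get; set; }
    public List<ClapperboardField> Fields { get; set; }
    public List<ClapperboardInstance> Instances { get; set; }
}
```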
+## Clapperboard limitations
+
+- Field title detection is optimized to identify the most popular fields that appear on clapper boards.
+- Handwritten text or digits may not be correctly identified by the field detection algorithm.
+- The algorithm is optimized to identify field categories that appear horizontally.
+- The clapper board may not be detected if the frame is blurred or if the text written on it can't be identified by the human eye.
+- Empty field values may lead to wrong field categories.
+<!-- If a part of a clapper board is hidden a value with the highest confidence is shown. -->
+
+## Next steps
+
+* [Slate detection overview](slate-detection-insight.md)
+* [How to enable and view digital patterns with color bars](digital-patterns-color-bars.md).
+* [How to enable and view textless slate with matched scene](textless-slate-scene-matching.md).
azure-video-indexer Digital Patterns Color Bars https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/digital-patterns-color-bars.md
+
+ Title: Enable and view digital patterns with color bars
+description: Learn about how to enable and view digital patterns with color bars.
+++ Last updated : 09/20/2022+++
+# Enable and view digital patterns with color bars (preview)
+
+This article shows how to enable and view digital patterns with color bars (preview).
+
+You can view the names of the specific digital patterns. <!-- They are searchable by the color bar type (Color Bar/Test card) in the insights. -->The timeline includes the following types:
+
+- Color bars
+- Test cards
+
+This insight is most useful to customers involved in the movie post-production process.
+
+## View post-production insights
+
+In order to set the indexing process to include the slate metadata, select the **Video + audio indexing** -> **Advanced** presets.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/slate-detection-process/advanced-setting.png" alt-text="This image shows the advanced setting in order to view post-production clapperboards insights.":::
+
+After the file has been uploaded and indexed, if you want to view the timeline of the insight, select the **Post-production** checkmark from the list of insights.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/slate-detection-process/post-production-checkmark.png" alt-text="This image shows the post-production checkmark needed to view clapperboards.":::
+
+### View digital patterns insights
+
+#### View the insight
+
+To see the instances on the website, select **Insights** and scroll to **Labels**.
+The insight shows under **Labels** in the **Insight** tab.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/slate-detection-process/insights-color-bars.png" alt-text="This image shows the color bars under labels.":::
+
+#### View the timeline
+
+If you checked the **Post-production** insight, you can find the color bars instance and timeline under the **Timeline** tab.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/slate-detection-process/timeline-color-bars.png" alt-text="This image shows the color bars under timeline.":::
+
+#### View JSON
+
+To display the JSON file:
+
+1. Select Download and then Insights (JSON).
+1. Copy the `framePatterns` element, under `insights`, and paste it into an online JSON viewer.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/slate-detection-process/color-bar-json.png" alt-text="This image shows the color bars json.":::
+
+The following table describes the fields found in the JSON:
+
+|Name|Description|
+|||
+|`id`|The digital pattern ID.|
+|`patternType`|The following types are supported: ColorBars, TestCards.|
+|`confidence`|The confidence level for color bar accuracy.|
+|`name`|The name of the element. For example, "SMPTE color bars".|
+|`displayName`|The friendly/display name.|
+|`thumbnailId`|The ID of the thumbnail.|
+|`instances`|A list of time ranges where this element appeared.|
+
+## Limitations
+
+- There can be a mismatch if the input video is of low quality (for example, old analog recordings).
+- Digital patterns are identified only in the first 10 minutes and the last 10 minutes of the video.
+
+## Next steps
+
+* [Slate detection overview](slate-detection-insight.md)
+* [How to enable and view clapper board with extracted metadata](clapperboard-metadata.md)
+* [How to enable and view textless slate with matched scene](textless-slate-scene-matching.md)
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
var uploadRequestResult = await client.PostAsync($"{apiUrl}/{accountInfo.Loc
## September 2022
-### General availability of Azure Resource Management (ARM)
+### General availability of ARM-based accounts
-With the ARM-based [paid (unlimited)](accounts-overview.md) account you are able to use:
+With an Azure Resource Management (ARM) based [paid (unlimited)](accounts-overview.md) account you are able to use:
- [Azure role-based access control (RBAC)](../role-based-access-control/overview.md). - Managed Identity to better secure the communication between your Azure Media Services and Azure Video Indexer account, Network Service Tags, and native integration with Azure Monitor to monitor your account (audit and indexing logs). - Scale and automate your [deployment with ARM-template](deploy-with-arm-template.md), [bicep](deploy-with-bicep.md) or terraform. - [Create logic apps connector for ARM-based accounts](logic-apps-connector-arm-accounts.md).
-
+ To create an ARM-based account, see [create an account](create-account-portal.md).
+### Slate detection insights (preview)
+
+The following slate detection (a movie post-production) insights are automatically identified when indexing a video using the advanced indexing option:
+
+* Clapperboard detection with metadata extraction.
+* Digital patterns detection, including color bars.
+* Textless slate detection, including scene matching.
+
+For details, see [Slate detection](slate-detection-insight.md).
+ ### New source languages support for STT, translation, and search Now supporting source languages for STT (speech-to-text), translation, and search in Ukrainian and Vietnamese. This means that transcription, translation, and search features are also supported for these languages in Azure Video Indexer web applications, widgets, and APIs. For more information, see [supported languages](language-support.md).
+### Edit the name of the speakers in the transcription
+
+You can now use the [Azure Video Indexer website](https://www.videoindexer.ai/) to edit the name of the speakers in the transcription.
+
+### Word level time annotation with confidence score
+
+An annotation is any type of additional information that is added to an already existing text, be it a transcription of an audio file or an original text file.
+
+Now supporting word level time annotation with confidence score.
+
+### Azure Monitor integration enabling indexing logs
+
+The new set of logs, described below, enables you to better monitor your indexing pipeline.
+
+Azure Video Indexer now supports diagnostic settings for indexing events. You can now export logs that monitor the upload and re-indexing of media files through diagnostic settings to Azure Log Analytics, Storage, Event Hubs, or a third-party solution.
+ ### Expanded the supported languages in LID and MLID through the API We expanded the list of the languages to be supported in LID (language identification) and MLID (multi language Identification) using APIs.
azure-video-indexer Slate Detection Insight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/slate-detection-insight.md
+
+ Title: Slate detection insights
+description: Learn about slate detection insights.
+++ Last updated : 09/20/2022+++
+# The slate detection insights (preview)
+
+The following slate detection insights are automatically identified when indexing a video using the advanced indexing option. These insights are most useful to customers involved in the movie post-production process.
+
+* [Clapperboard](https://en.wikipedia.org/wiki/Clapperboard) detection with metadata extraction. This insight is used to detect clapperboard instances and the information written on each (for example, *production*, *roll*, *scene*, *take*, and so on).
+* Digital patterns detection, including [color bars](https://en.wikipedia.org/wiki/SMPTE_color_bars).
+* Textless slate detection, including scene matching.
+
+## View post-production insights
+
+### The Insight tab
+
+In order to set the indexing process to include the slate metadata, select the **Video + audio indexing** -> **Advanced** presets.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/slate-detection-process/advanced-setting.png" alt-text="This image shows the advanced setting in order to view post-production insights.":::
+
+### The Timeline tab
+
+After the file has been uploaded and indexed, if you want to view the timeline of the insight, select the **Post-production** checkmark from the list of insights.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/slate-detection-process/post-production-checkmark.png" alt-text="This image shows the post-production checkmark.":::
+
+For details about viewing each slate insight, see:
+
+- [How to enable and view clapper board with extracted metadata](clapperboard-metadata.md).
+- [How to enable and view digital patterns with color bars](digital-patterns-color-bars.md)
+- [How to enable and view textless slate with scene matching](textless-slate-scene-matching.md).
+
+## Next steps
+
+[Overview](video-indexer-overview.md)
azure-video-indexer Textless Slate Scene Matching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/textless-slate-scene-matching.md
+
+ Title: Enable and view a textless slate with matching scene
+description: Learn about how to enable and view a textless slate with matching scene.
+++ Last updated : 09/20/2022+++
+# Enable and view a textless slate with matching scene (preview)
+
+This article shows how to enable and view a textless slate with matching scene (preview).
+
+This insight is most useful to customers involved in the movie post-production process.
+
+## View post-production insights
+
+In order to set the indexing process to include the slate metadata, select the **Video + audio indexing** -> **Advanced** presets.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/slate-detection-process/advanced-setting.png" alt-text="This image shows the advanced setting in order to view post-production clapperboards insights.":::
+
+After the file has been uploaded and indexed, if you want to view the timeline of the insight, select the **Post-production** checkmark from the list of insights.
+
+> [!div class="mx-imgBorder"]
+> :::image type="content" source="./media/slate-detection-process/post-production-checkmark.png" alt-text="This image shows the post-production checkmark needed to view clapperboards.":::
+
+### Insight
+
+This insight can be viewed only in the downloaded JSON file.
+
+## Next steps
+
+* [Slate detection overview](slate-detection-insight.md)
+* [How to enable and view clapper board with extracted metadata](clapperboard-metadata.md).
+* [How to enable and view digital patterns with color bars](digital-patterns-color-bars.md).
azure-video-indexer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-overview.md
Unless specified otherwise, a model is generally available.
* **People's detected clothing** (preview): Detects the clothing types of people appearing in the video and provides information such as long or short sleeves, long or short pants and skirt or dress. The detected clothing is associated with the people wearing it and the exact timestamp (start, end) along with a confidence level for the detection are provided. For more information, see [detected clothing](detected-clothing.md). * **Featured clothing** (preview): captures featured clothing images appearing in a video. You can improve your targeted ads by using the featured clothing insight. For information on how the featured clothing images are ranked and how to get the insights, see [featured clothing](observed-people-featured-clothing.md). * **Matched person** (preview): Matches people that were observed in the video with the corresponding faces detected. The matching between the observed people and the faces contain a confidence level.
+* **Slate detection** (preview): identifies the following movie post-production insights when indexing a video using the advanced indexing option:
+
+ * Clapperboard detection with metadata extraction.
+ * Digital patterns detection, including color bars.
+ * Textless slate detection, including scene matching.
+
+ For details, see [Slate detection](slate-detection-insight.md).
### Audio models
cognitive-services Rest Speech To Text Short https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-speech-to-text-short.md
Previously updated : 05/16/2022 Last updated : 09/25/2022 ms.devlang: csharp
# Speech-to-text REST API for short audio
-Use cases for the speech-to-text REST API for short audio are limited. Use it only in cases where you can't use the [Speech SDK](speech-sdk.md). For [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md), you should always use [Speech to Text REST API](rest-speech-to-text.md).
+Use cases for the speech-to-text REST API for short audio are limited. Use it only in cases where you can't use the [Speech SDK](speech-sdk.md).
Before you use the speech-to-text REST API for short audio, consider the following limitations:
-* Requests that use the REST API for short audio and transmit audio directly can contain no more than 60 seconds of audio.
+* Requests that use the REST API for short audio and transmit audio directly can contain no more than 60 seconds of audio. The input [audio formats](#audio-formats) are more limited compared to the [Speech SDK](speech-sdk.md).
* The REST API for short audio returns only final results. It doesn't provide partial results. * [Speech translation](speech-translation.md) is not supported via REST API for short audio. You need to use [Speech SDK](speech-sdk.md).
+* [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md) are not supported via REST API for short audio. You should always use the [Speech to Text REST API](rest-speech-to-text.md) for batch transcription and Custom Speech.
-> [!TIP]
-> For Azure Government and Azure China endpoints, see [this article about sovereign clouds](sovereign-clouds.md).
--
-### Regions and endpoints
+## Regions and endpoints
The endpoint for the REST API for short audio has this format:
https://<REGION_IDENTIFIER>.stt.speech.microsoft.com/speech/recognition/conversa
Replace `<REGION_IDENTIFIER>` with the identifier that matches the [region](regions.md) of your Speech resource. > [!NOTE]
-> You must append the language parameter to the URL to avoid receiving a 4xx HTTP error. For example, the language set to US English via the West US endpoint is: `https://westus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US`.
+> For Azure Government and Azure China endpoints, see [this article about sovereign clouds](sovereign-clouds.md).
-### Query parameters
+## Audio formats
-These parameters might be included in the query string of the REST request:
+Audio is sent in the body of the HTTP `POST` request. It must be in one of the formats in this table:
-| Parameter | Description | Required or optional |
-|--|-||
-| `language` | Identifies the spoken language that's being recognized. See [Supported languages](language-support.md?tabs=stt-tts). | Required |
-| `format` | Specifies the result format. Accepted values are `simple` and `detailed`. Simple results include `RecognitionStatus`, `DisplayText`, `Offset`, and `Duration`. Detailed responses include four different representations of display text. The default setting is `simple`. | Optional |
-| `profanity` | Specifies how to handle profanity in recognition results. Accepted values are: <br><br>`masked`, which replaces profanity with asterisks. <br>`removed`, which removes all profanity from the result. <br>`raw`, which includes profanity in the result. <br><br>The default setting is `masked`. | Optional |
-| `cid` | When you're using the [Speech Studio](speech-studio-overview.md) to create [custom models](./custom-speech-overview.md), you can take advantage of the **Endpoint ID** value from the **Deployment** page. Use the **Endpoint ID** value as the argument to the `cid` query string parameter. | Optional |
+| Format | Codec | Bit rate | Sample rate |
+|--|-|-|--|
+| WAV | PCM | 256 kbps | 16 kHz, mono |
+| OGG | OPUS | 256 kbps | 16 kHz, mono |
-### Request headers
+> [!NOTE]
+> The preceding formats are supported through the REST API for short audio and WebSocket in the Speech service. The [Speech SDK](speech-sdk.md) supports the WAV format with PCM codec as well as [other formats](how-to-use-codec-compressed-audio-input-streams.md).
+
+## Request headers
This table lists required and optional headers for speech-to-text requests:
This table lists required and optional headers for speech-to-text requests:
| `Expect` | If you're using chunked transfer, send `Expect: 100-continue`. The Speech service acknowledges the initial request and awaits additional data.| Required if you're sending chunked audio data. | | `Accept` | If provided, it must be `application/json`. The Speech service provides results in JSON. Some request frameworks provide an incompatible default value. It's good practice to always include `Accept`. | Optional, but recommended. |
-### Audio formats
+## Query parameters
-Audio is sent in the body of the HTTP `POST` request. It must be in one of the formats in this table:
+These parameters might be included in the query string of the REST request.
-| Format | Codec | Bit rate | Sample rate |
-|--|-|-|--|
-| WAV | PCM | 256 kbps | 16 kHz, mono |
-| OGG | OPUS | 256 kpbs | 16 kHz, mono |
+> [!NOTE]
+> You must append the language parameter to the URL to avoid receiving a 4xx HTTP error. For example, the language set to US English via the West US endpoint is: `https://westus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US`.
->[!NOTE]
->The preceding formats are supported through the REST API for short audio and WebSocket in the Speech service. The [Speech SDK](speech-sdk.md) supports the WAV format with PCM codec as well as [other formats](how-to-use-codec-compressed-audio-input-streams.md).
+| Parameter | Description | Required or optional |
+|--|-||
+| `language` | Identifies the spoken language that's being recognized. See [Supported languages](language-support.md?tabs=stt-tts). | Required |
+| `format` | Specifies the result format. Accepted values are `simple` and `detailed`. Simple results include `RecognitionStatus`, `DisplayText`, `Offset`, and `Duration`. Detailed responses include four different representations of display text. The default setting is `simple`. | Optional |
+| `profanity` | Specifies how to handle profanity in recognition results. Accepted values are: <br><br>`masked`, which replaces profanity with asterisks. <br>`removed`, which removes all profanity from the result. <br>`raw`, which includes profanity in the result. <br><br>The default setting is `masked`. | Optional |
+| `cid` | When you're using the [Speech Studio](speech-studio-overview.md) to create [custom models](./custom-speech-overview.md), you can take advantage of the **Endpoint ID** value from the **Deployment** page. Use the **Endpoint ID** value as the argument to the `cid` query string parameter. | Optional |
### Pronunciation assessment parameters
This table lists required and optional parameters for pronunciation assessment:
| `ReferenceText` | The text that the pronunciation will be evaluated against. | Required | | `GradingSystem` | The point system for score calibration. The `FivePoint` system gives a 0-5 floating point score, and `HundredMark` gives a 0-100 floating point score. Default: `FivePoint`. | Optional | | `Granularity` | The evaluation granularity. Accepted values are:<br><br> `Phoneme`, which shows the score on the full-text, word, and phoneme levels.<br>`Word`, which shows the score on the full-text and word levels. <br>`FullText`, which shows the score on the full-text level only.<br><br> The default setting is `Phoneme`. | Optional |
-| `Dimension` | Defines the output criteria. Accepted values are:<br><br> `Basic`, which shows the accuracy score only. <br>`Comprehensive`, which shows scores on more dimensions (for example, fluency score and completeness score on the full-text level, and error type on the word level).<br><br> To see definitions of different score dimensions and word error types, see [Response parameters](#response-parameters). The default setting is `Basic`. | Optional |
+| `Dimension` | Defines the output criteria. Accepted values are:<br><br> `Basic`, which shows the accuracy score only. <br>`Comprehensive`, which shows scores on more dimensions (for example, fluency score and completeness score on the full-text level, and error type on the word level).<br><br> To see definitions of different score dimensions and word error types, see [Response properties](#response-properties). The default setting is `Basic`. | Optional |
| `EnableMiscue` | Enables miscue calculation. With this parameter enabled, the pronounced words will be compared to the reference text. They'll be marked with omission or insertion based on the comparison. Accepted values are `False` and `True`. The default setting is `False`. | Optional | | `ScenarioId` | A GUID that indicates a customized point system. | Optional |
var pronAssessmentParamsBytes = Encoding.UTF8.GetBytes(pronAssessmentParamsJson)
var pronAssessmentHeader = Convert.ToBase64String(pronAssessmentParamsBytes); ```
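Putting the pieces together, a hedged sketch of building the parameters from the table above and attaching them to the recognition request via the `Pronunciation-Assessment` header; the reference text, region, and key are placeholders:

```csharp
// A sketch only. Assumes: using System; using System.Net.Http; using System.Text; using Newtonsoft.Json;
var pronAssessmentParamsJson = JsonConvert.SerializeObject(new
{
    ReferenceText = "Good morning.",
    GradingSystem = "HundredMark",
    Granularity = "Phoneme",
    Dimension = "Comprehensive"
});

var pronAssessmentHeader = Convert.ToBase64String(Encoding.UTF8.GetBytes(pronAssessmentParamsJson));

var request = new HttpRequestMessage(HttpMethod.Post,
    "https://westus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US");
request.Headers.Add("Ocp-Apim-Subscription-Key", "YOUR_RESOURCE_KEY");
request.Headers.Add("Pronunciation-Assessment", pronAssessmentHeader);
// Attach the audio content and send the request as shown in the sample request section below.
```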
-We strongly recommend streaming (chunked) uploading while you're posting the audio data, which can significantly reduce the latency. To learn how to enable streaming, see the [sample code in various programming languages](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment).
+We strongly recommend streaming ([chunked transfer](#chunked-transfer)) uploading while you're posting the audio data, which can significantly reduce the latency. To learn how to enable streaming, see the [sample code in various programming languages](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment).
->[!NOTE]
+> [!NOTE]
> For more information, see [pronunciation assessment](how-to-pronunciation-assessment.md).
-### Sample request
+## Sample request
The following sample includes the host name and required headers. It's important to note that the service also expects audio data, which is not included in this sample. As mentioned earlier, chunking is recommended but not required.
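For context, here's a hedged `HttpClient` sketch of a complete short-audio request (not the doc's original sample); the region, key, and file name are placeholders, and the audio must match one of the formats listed earlier:

```csharp
// A sketch only. The audio must be 16-kHz, mono PCM WAV (see the audio formats table above).
using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class ShortAudioSample
{
    static async Task Main()
    {
        var endpoint = "https://westus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1"
            + "?language=en-US&format=detailed";

        using var client = new HttpClient();
        using var request = new HttpRequestMessage(HttpMethod.Post, endpoint);
        request.Headers.Add("Ocp-Apim-Subscription-Key", "YOUR_RESOURCE_KEY");
        request.Headers.Add("Accept", "application/json");

        // Send the whole file in one body; chunked transfer (recommended above) reduces latency.
        request.Content = new ByteArrayContent(File.ReadAllBytes("YourAudioFile.wav"));
        request.Content.Headers.TryAddWithoutValidation("Content-Type", "audio/wav; codecs=audio/pcm; samplerate=16000");

        var response = await client.SendAsync(request);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```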
The HTTP status code for each response indicates success or common errors.
| 401 | Unauthorized | A resource key or an authorization token is invalid in the specified region, or an endpoint is invalid. | | 403 | Forbidden | A resource key or authorization token is missing. |
-### Chunked transfer
-
-Chunked transfer (`Transfer-Encoding: chunked`) can help reduce recognition latency. It allows the Speech service to begin processing the audio file while it's transmitted. The REST API for short audio does not provide partial or interim results.
-
-The following code sample shows how to send audio in chunks. Only the first chunk should contain the audio file's header. `request` is an `HttpWebRequest` object that's connected to the appropriate REST endpoint. `audioFile` is the path to an audio file on disk.
-
-```csharp
-var request = (HttpWebRequest)HttpWebRequest.Create(requestUri);
-request.SendChunked = true;
-request.Accept = @"application/json;text/xml";
-request.Method = "POST";
-request.ProtocolVersion = HttpVersion.Version11;
-request.Host = host;
-request.ContentType = @"audio/wav; codecs=audio/pcm; samplerate=16000";
-request.Headers["Ocp-Apim-Subscription-Key"] = "YOUR_RESOURCE_KEY";
-request.AllowWriteStreamBuffering = false;
-
-using (var fs = new FileStream(audioFile, FileMode.Open, FileAccess.Read))
-{
- // Open a request stream and write 1,024-byte chunks in the stream one at a time.
- byte[] buffer = null;
- int bytesRead = 0;
- using (var requestStream = request.GetRequestStream())
- {
- // Read 1,024 raw bytes from the input audio file.
- buffer = new Byte[checked((uint)Math.Min(1024, (int)fs.Length))];
- while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) != 0)
- {
- requestStream.Write(buffer, 0, bytesRead);
- }
-
- requestStream.Flush();
- }
-}
-```
-
-### Response parameters
-
-Results are provided as JSON. The `simple` format includes the following top-level fields:
-
-| Parameter | Description |
-|--|--|
-|`RecognitionStatus`|Status, such as `Success` for successful recognition. See the next table.|
-|`DisplayText`|The recognized text after capitalization, punctuation, inverse text normalization, and profanity masking. Present only on success. Inverse text normalization is conversion of spoken text to shorter forms, such as 200 for "two hundred" or "Dr. Smith" for "doctor smith."|
-|`Offset`|The time (in 100-nanosecond units) at which the recognized speech begins in the audio stream.|
-|`Duration`|The duration (in 100-nanosecond units) of the recognized speech in the audio stream.|
-
-The `RecognitionStatus` field might contain these values:
-
-| Status | Description |
-|--|-|
-| `Success` | The recognition was successful, and the `DisplayText` field is present. |
-| `NoMatch` | Speech was detected in the audio stream, but no words from the target language were matched. This status usually means that the recognition language is different from the language that the user is speaking. |
-| `InitialSilenceTimeout` | The start of the audio stream contained only silence, and the service timed out while waiting for speech. |
-| `BabbleTimeout` | The start of the audio stream contained only noise, and the service timed out while waiting for speech. |
-| `Error` | The recognition service encountered an internal error and could not continue. Try again if possible. |
-
-> [!NOTE]
-> If the audio consists only of profanity, and the `profanity` query parameter is set to `remove`, the service does not return a speech result.
-
-The `detailed` format includes additional forms of recognized results.
-When you're using the `detailed` format, `DisplayText` is provided as `Display` for each result in the `NBest` list.
-
-The object in the `NBest` list can include:
-| Parameter | Description |
-|--|-|
-| `Confidence` | The confidence score of the entry, from 0.0 (no confidence) to 1.0 (full confidence). |
-| `Lexical` | The lexical form of the recognized text: the actual words recognized. |
-| `ITN` | The inverse-text-normalized (ITN) or canonical form of the recognized text, with phone numbers, numbers, abbreviations ("doctor smith" to "dr smith"), and other transformations applied. |
-| `MaskedITN` | The ITN form with profanity masking applied, if requested. |
-| `Display` | The display form of the recognized text, with punctuation and capitalization added. This parameter is the same as what `DisplayText` provides when the format is set to `simple`. |
-| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. The accuracy score at the word and full-text levels is aggregated from the accuracy score at the phoneme level. |
-| `FluencyScore` | Fluency of the provided speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. |
-| `CompletenessScore` | Completeness of the speech, determined by calculating the ratio of pronounced words to reference text input. |
-| `PronScore` | Overall score that indicates the pronunciation quality of the provided speech. This score is aggregated from `AccuracyScore`, `FluencyScore`, and `CompletenessScore` with weight. |
-| `ErrorType` | Value that indicates whether a word is omitted, inserted, or badly pronounced, compared to `ReferenceText`. Possible values are `None` (meaning no error on this word), `Omission`, `Insertion`, and `Mispronunciation`. |
-
-### Sample responses
+## Sample responses
Here's a typical response for `simple` recognition:
Here's a typical response for recognition with pronunciation assessment:
} ```
+### Response properties
+
+Results are provided as JSON. The `simple` format includes the following top-level fields:
+
+| Property | Description |
+|--|--|
+|`RecognitionStatus`|Status, such as `Success` for successful recognition. See the next table.|
+|`DisplayText`|The recognized text after capitalization, punctuation, inverse text normalization, and profanity masking. Present only on success. Inverse text normalization is conversion of spoken text to shorter forms, such as 200 for "two hundred" or "Dr. Smith" for "doctor smith."|
+|`Offset`|The time (in 100-nanosecond units) at which the recognized speech begins in the audio stream.|
+|`Duration`|The duration (in 100-nanosecond units) of the recognized speech in the audio stream.|
+
+The `RecognitionStatus` field might contain these values:
+
+| Status | Description |
+|--|-|
+| `Success` | The recognition was successful, and the `DisplayText` field is present. |
+| `NoMatch` | Speech was detected in the audio stream, but no words from the target language were matched. This status usually means that the recognition language is different from the language that the user is speaking. |
+| `InitialSilenceTimeout` | The start of the audio stream contained only silence, and the service timed out while waiting for speech. |
+| `BabbleTimeout` | The start of the audio stream contained only noise, and the service timed out while waiting for speech. |
+| `Error` | The recognition service encountered an internal error and could not continue. Try again if possible. |
+
+> [!NOTE]
+> If the audio consists only of profanity, and the `profanity` query parameter is set to `remove`, the service does not return a speech result.
+
+The `detailed` format includes additional forms of recognized results.
+When you're using the `detailed` format, `DisplayText` is provided as `Display` for each result in the `NBest` list.
+
+The object in the `NBest` list can include:
+
+| Property | Description |
+|--|-|
+| `Confidence` | The confidence score of the entry, from 0.0 (no confidence) to 1.0 (full confidence). |
+| `Lexical` | The lexical form of the recognized text: the actual words recognized. |
+| `ITN` | The inverse-text-normalized (ITN) or canonical form of the recognized text, with phone numbers, numbers, abbreviations ("doctor smith" to "dr smith"), and other transformations applied. |
+| `MaskedITN` | The ITN form with profanity masking applied, if requested. |
+| `Display` | The display form of the recognized text, with punctuation and capitalization added. This parameter is the same as what `DisplayText` provides when the format is set to `simple`. |
+| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. The accuracy score at the word and full-text levels is aggregated from the accuracy score at the phoneme level. |
+| `FluencyScore` | Fluency of the provided speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. |
+| `CompletenessScore` | Completeness of the speech, determined by calculating the ratio of pronounced words to reference text input. |
+| `PronScore` | Overall score that indicates the pronunciation quality of the provided speech. This score is aggregated from `AccuracyScore`, `FluencyScore`, and `CompletenessScore` with weight. |
+| `ErrorType` | Value that indicates whether a word is omitted, inserted, or badly pronounced, compared to `ReferenceText`. Possible values are `None` (meaning no error on this word), `Omission`, `Insertion`, and `Mispronunciation`. |
+
+## Chunked transfer
+
+Chunked transfer (`Transfer-Encoding: chunked`) can help reduce recognition latency. It allows the Speech service to begin processing the audio file while it's transmitted. The REST API for short audio does not provide partial or interim results.
+
+The following code sample shows how to send audio in chunks. Only the first chunk should contain the audio file's header. `request` is an `HttpWebRequest` object that's connected to the appropriate REST endpoint. `audioFile` is the path to an audio file on disk.
+
+```csharp
+var request = (HttpWebRequest)HttpWebRequest.Create(requestUri);
+request.SendChunked = true;
+request.Accept = @"application/json;text/xml";
+request.Method = "POST";
+request.ProtocolVersion = HttpVersion.Version11;
+request.Host = host;
+request.ContentType = @"audio/wav; codecs=audio/pcm; samplerate=16000";
+request.Headers["Ocp-Apim-Subscription-Key"] = "YOUR_RESOURCE_KEY";
+request.AllowWriteStreamBuffering = false;
+
+using (var fs = new FileStream(audioFile, FileMode.Open, FileAccess.Read))
+{
+ // Open a request stream and write 1,024-byte chunks in the stream one at a time.
+ byte[] buffer = null;
+ int bytesRead = 0;
+ using (var requestStream = request.GetRequestStream())
+ {
+ // Read 1,024 raw bytes from the input audio file.
+ buffer = new Byte[checked((uint)Math.Min(1024, (int)fs.Length))];
+ while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) != 0)
+ {
+ requestStream.Write(buffer, 0, bytesRead);
+ }
+
+ requestStream.Flush();
+ }
+}
+```
++ ## Next steps -- [Create a free Azure account](https://azure.microsoft.com/free/cognitive-services/)-- [Customize acoustic models](./how-to-custom-speech-train-model.md)-- [Customize language models](./how-to-custom-speech-train-model.md)
+- [Customize speech models](./how-to-custom-speech-train-model.md)
- [Get familiar with batch transcription](batch-transcription.md)
iot-hub Iot Hub Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-rm-template.md
- Title: Create an Azure IoT Hub using a template (.NET) | Microsoft Docs
-description: How to use an Azure Resource Manager template to create an IoT Hub with a C# program.
------ Previously updated : 08/08/2017---
-# Create an IoT hub using Azure Resource Manager template (.NET)
--
-You can use Azure Resource Manager to create and manage Azure IoT hubs programmatically. This tutorial shows you how to use an Azure Resource Manager template to create an IoT hub from a C# program.
-
-> [!NOTE]
-> Azure has two different deployment models for creating and working with resources: [Azure Resource Manager and classic](../azure-resource-manager/management/deployment-models.md). This article covers using the Azure Resource Manager deployment model.
--
-To complete this tutorial, you need the following:
-
-* Visual Studio
-* An [Azure Storage account][lnk-storage-account] where you can store your Azure Resource Manager template files
-* [Azure PowerShell module][lnk-powershell-install]
--
-## Prepare your Visual Studio project
-
-1. In Visual Studio, create a Visual C# Windows Classic Desktop project using the **Console App (.NET Framework)** project template. Name the project **CreateIoTHub**.
-
-2. In Solution Explorer, right-click on your project and then click **Manage NuGet Packages**.
-
-3. In NuGet Package Manager, check **Include prerelease**, and on the **Browse** page search for **Microsoft.Azure.Management.ResourceManager**. Select the package, click **Install**, in **Review Changes** click **OK**, then click **I Accept** to accept the licenses.
-
-4. In NuGet Package Manager, search for **Microsoft.IdentityModel.Clients.ActiveDirectory**. Click **Install**, in **Review Changes** click **OK**, then click **I Accept** to accept the license.
- > [!IMPORTANT]
- > The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade, see the [migration guide](../active-directory/develop/msal-migration.md) for more details.
-
-5. In Program.cs, replace the existing **using** statements with the following code:
-
- ```csharp
- using System;
- using Microsoft.Azure.Management.ResourceManager;
- using Microsoft.Azure.Management.ResourceManager.Models;
- using Microsoft.IdentityModel.Clients.ActiveDirectory;
- using Microsoft.Rest;
- ```
-
-6. In Program.cs, add the following static variables replacing the placeholder values. You made a note of **ApplicationId**, **SubscriptionId**, **TenantId**, and **Password** earlier in this tutorial. **Your Azure Storage account name** is the name of the Azure Storage account where you store your Azure Resource Manager template files. **Resource group name** is the name of the resource group you use when you create the IoT hub. The name can be a pre-existing or new resource group. **Deployment name** is a name for the deployment, such as **Deployment_01**.
-
- ```csharp
- static string applicationId = "{Your ApplicationId}";
- static string subscriptionId = "{Your SubscriptionId}";
- static string tenantId = "{Your TenantId}";
- static string password = "{Your application Password}";
- static string storageAddress = "https://{Your storage account name}.blob.core.windows.net";
- static string rgName = "{Resource group name}";
- static string deploymentName = "{Deployment name}";
- ```
--
-## Submit a template to create an IoT hub
-
-Use a JSON template and parameter file to create an IoT hub in your resource group. You can also use an Azure Resource Manager template to make changes to an existing IoT hub.
-
-1. In Solution Explorer, right-click on your project, click **Add**, and then click **New Item**. Add a JSON file called **template.json** to your project.
-
-2. To add a standard IoT hub to the **East US** region, replace the contents of **template.json** with the following resource definition. For the current list of regions that support IoT Hub see [Azure Status][lnk-status]:
-
- ```json
- {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "hubName": {
- "type": "string"
- }
- },
- "resources": [
- {
- "apiVersion": "2016-02-03",
- "type": "Microsoft.Devices/IotHubs",
- "name": "[parameters('hubName')]",
- "location": "East US",
- "sku": {
- "name": "S1",
- "tier": "Standard",
- "capacity": 1
- },
- "properties": {
- "location": "East US"
- }
- }
- ],
- "outputs": {
- "hubKeys": {
- "value": "[listKeys(resourceId('Microsoft.Devices/IotHubs', parameters('hubName')), '2016-02-03')]",
- "type": "object"
- }
- }
- }
- ```
-
-3. In Solution Explorer, right-click on your project, click **Add**, and then click **New Item**. Add a JSON file called **parameters.json** to your project.
-
-4. Replace the contents of **parameters.json** with the following parameter information that sets a name for the new IoT hub such as **{your initials}mynewiothub**. The IoT hub name must be globally unique so it should include your name or initials:
-
- ```json
- {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "hubName": { "value": "mynewiothub" }
- }
- }
- ```
- [!INCLUDE [iot-hub-pii-note-naming-hub](../../includes/iot-hub-pii-note-naming-hub.md)]
-
-5. In **Server Explorer**, connect to your Azure subscription, and in your Azure Storage account create a container called **templates**. In the **Properties** panel, set the **Public Read Access** permissions for the **templates** container to **Blob**.
-
-6. In **Server Explorer**, right-click on the **templates** container and then click **View Blob Container**. Click the **Upload Blob** button, select the two files, **parameters.json** and **templates.json**, and then click **Open** to upload the JSON files to the **templates** container. The URLs of the blobs containing the JSON data are:
-
- ```csharp
- https://{Your storage account name}.blob.core.windows.net/templates/parameters.json
- https://{Your storage account name}.blob.core.windows.net/templates/template.json
- ```
-7. Add the following method to Program.cs:
-
- ```csharp
- static void CreateIoTHub(ResourceManagementClient client)
- {
-
- }
- ```
-
-8. Add the following code to the **CreateIoTHub** method to submit the template and parameter files to Azure Resource Manager:
-
- ```csharp
- var createResponse = client.Deployments.CreateOrUpdate(
- rgName,
- deploymentName,
- new Deployment()
- {
- Properties = new DeploymentProperties
- {
- Mode = DeploymentMode.Incremental,
- TemplateLink = new TemplateLink
- {
- Uri = storageAddress + "/templates/template.json"
- },
- ParametersLink = new ParametersLink
- {
- Uri = storageAddress + "/templates/parameters.json"
- }
- }
- });
- ```
-
-9. Add the following code to the **CreateIoTHub** method that displays the status and the keys for the new IoT hub:
-
- ```csharp
- string state = createResponse.Properties.ProvisioningState;
- Console.WriteLine("Deployment state: {0}", state);
-
- if (state != "Succeeded")
- {
- Console.WriteLine("Failed to create iothub");
- }
- Console.WriteLine(createResponse.Properties.Outputs);
- ```
-
-## Complete and run the application
-
-You can now complete the application by calling the **CreateIoTHub** method before you build and run it.
-
-1. Add the following code to the end of the **Main** method:
-
- ```csharp
- CreateIoTHub(client);
- Console.ReadLine();
- ```
-
-2. Click **Build** and then **Build Solution**. Correct any errors.
-
-3. Click **Debug** and then **Start Debugging** to run the application. It may take several minutes for the deployment to run.
-
-4. To verify your application added the new IoT hub, visit the [Azure portal][lnk-azure-portal] and view your list of resources. Alternatively, use the **Get-AzResource** PowerShell cmdlet.
-
-> [!NOTE]
-> This example application adds an S1 Standard IoT Hub for which you are billed. You can delete the IoT hub through the [Azure portal][lnk-azure-portal] or by using the **Remove-AzResource** PowerShell cmdlet when you are finished.
-
-## Next steps
-Now you have deployed an IoT hub using an Azure Resource Manager template with a C# program, you may want to explore further:
-
-* Read about the capabilities of the [IoT Hub resource provider REST API][lnk-rest-api].
-* Read [Azure Resource Manager overview][lnk-azure-rm-overview] to learn more about the capabilities of Azure Resource Manager.
-* For the JSON syntax and properties to use in templates, see [Microsoft.Devices resource types](/azure/templates/microsoft.devices/iothub-allversions).
-
-To learn more about developing for IoT Hub, see the following articles:
-
-* [Introduction to C SDK][lnk-c-sdk]
-* [Azure IoT SDKs][lnk-sdks]
-
-To further explore the capabilities of IoT Hub, see:
-
-* [Deploying AI to edge devices with Azure IoT Edge][lnk-iotedge]
-
-<!-- Links -->
-[lnk-free-trial]: https://azure.microsoft.com/pricing/free-trial/
-[lnk-azure-portal]: https://portal.azure.com/
-[lnk-status]: https://azure.microsoft.com/status/
-[lnk-powershell-install]: /powershell/azure/install-Az-ps
-[lnk-rest-api]: /rest/api/iothub/iothubresource
-[lnk-azure-rm-overview]: ../azure-resource-manager/management/overview.md
-[lnk-storage-account]:../storage/common/storage-create-storage-account.md
-
-[lnk-c-sdk]: iot-hub-device-sdk-c-intro.md
-[lnk-sdks]: iot-hub-devguide-sdks.md
-
-[lnk-iotedge]: ../iot-edge/quickstart-linux.md
key-vault Tutorial Javascript Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-javascript-virtual-machine.md
az keyvault set-policy --name "<your-unique-keyvault-name>" --object-id "<system
## Log in to the VM
-To sign in to the virtual machine, follow the instructions in [Connect and sign in to an Azure virtual machine running Linux](../../virtual-machines/linux/login-using-aad.md) or [Connect and sign in to an Azure virtual machine running Windows](../../virtual-machines/windows/connect-logon.md).
+To sign in to the virtual machine, follow the instructions in [Connect and sign in to an Azure virtual machine running Linux](/azure-docs-archive-pr/virtual-machines/linux/login-using-aad) or [Connect and sign in to an Azure virtual machine running Windows](../../virtual-machines/windows/connect-logon.md).
To log into a Linux VM, you can use the ssh command with the \<publicIpAddress\> given in the [Create a virtual machine](#create-a-virtual-machine) step:
key-vault Tutorial Net Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-net-virtual-machine.md
Set-AzKeyVaultAccessPolicy -ResourceGroupName <YourResourceGroupName> -VaultName
## Sign in to the virtual machine
-To sign in to the virtual machine, follow the instructions in [Connect and sign in to an Azure Windows virtual machine](../../virtual-machines/windows/connect-logon.md) or [Connect and sign in to an Azure Linux virtual machine](../../virtual-machines/linux/login-using-aad.md).
+To sign in to the virtual machine, follow the instructions in [Connect and sign in to an Azure Windows virtual machine](../../virtual-machines/windows/connect-logon.md) or [Connect and sign in to an Azure Linux virtual machine](/azure-docs-archive-pr/virtual-machines/linux/login-using-aad).
## Set up the console app
key-vault Tutorial Python Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-python-virtual-machine.md
az keyvault set-policy --name "<your-unique-keyvault-name>" --object-id "<system
## Log in to the VM
-To sign in to the virtual machine, follow the instructions in [Connect and sign in to an Azure virtual machine running Linux](../../virtual-machines/linux/login-using-aad.md) or [Connect and sign in to an Azure virtual machine running Windows](../../virtual-machines/windows/connect-logon.md).
+To sign in to the virtual machine, follow the instructions in [Connect and sign in to an Azure virtual machine running Linux](/azure-docs-archive-pr/virtual-machines/linux/login-using-aad) or [Connect and sign in to an Azure virtual machine running Windows](../../virtual-machines/windows/connect-logon.md).
To log into a Linux VM, you can use the ssh command with the \<publicIpAddress\> given in the [Create a virtual machine](#create-a-virtual-machine) step:
key-vault Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/quick-create-template.md
description: Quickstart showing how to create Azure an Azure Key Vault Managed H
Previously updated : 09/15/2020 Last updated : 09/22/2022 tags: azure-resource-manager-+ #Customer intent: As a security admin who is new to Azure, I want to create a managed HSM using an Azure Resource Manager template.
-# Quickstart: Create a Managed HSM using an Azure Resource Manager template
+# Quickstart: Create a Managed HSM using an ARM template
-Managed HSM is a fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguards cryptographic keys for your cloud applications, using **FIPS 140-2 Level 3** validated HSMs.
+This quickstart describes how to use an Azure Resource Manager template (ARM template) to create an Azure Key Vault managed HSM. Managed HSM is a fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard cryptographic keys for your cloud applications, using **FIPS 140-2 Level 3** validated HSMs.
-This quickstart focuses on the process of deploying a Resource Manager template to create a Managed HSM. [Resource Manager template](../../azure-resource-manager/templates/overview.md) is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. If you want to learn more about developing Resource Manager templates, see [Resource Manager documentation](../../azure-resource-manager/index.yml) and the [template reference](/azure/templates/microsoft.keyvault/allversions).
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-## Prerequisites
-
-To complete the steps in this article, you must have the following items:
--- A subscription to Microsoft Azure. If you don't have one, you can sign up for a [free trial](https://azure.microsoft.com/pricing/free-trial).-- The Azure CLI version 2.12.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli)
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+## Prerequisites
-## Sign in to Azure
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-To sign in to Azure using the CLI, you can type:
-```azurecli
-az login
-```
+## Review the template
-For more information on login options via the CLI, see [sign in with Azure CLI](/cli/azure/authenticate-azure-cli)
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/managed-hsm-create):
-## Create a Managed HSM
-The template used in this quickstart is from [Azure Quickstart templates](https://azure.microsoft.com/resources/templates/managed-hsm-create/).
+The Azure resource defined in the template is:
-The Azure resource defined in the template:
+* **Microsoft.KeyVault/managedHSMs**: Create an Azure Key Vault Managed HSM.
-* **Microsoft.KeyVault/managedHSMs**: create an Azure Key Vault Managed HSM.
-
-More Azure Key Vault template samples can be found [here](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Keyvault).
+## Deploy the template
The template requires the object ID associated with your account. To find it, use the Azure CLI [az ad user show](/cli/azure/ad/user#az-ad-user-show) command, passing your email address to the `--id` parameter. You can limit the output to the object ID only with the `--query` parameter.
You may also need your tenant ID. To find it, use the Azure CLI [az ad user show
az account show --query "tenantId" ```
-1. Select the following image to sign in to Azure and open a template. The template creates a Managed HSM.
+You can now deploy the ARM template:
- <a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2F%2Fmaster%2Fquickstarts%2Fmicrosoft.keyvault%2Fmanaged-hsm-create%2Fazuredeploy.json"><img src="../media/deploy-to-azure.svg" alt="deploy to azure"/></a>
+1. Select the following image to sign in to Azure and open a template. The template creates a Managed HSM.
-2. Select or enter the following values.
+ :::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Screenshot of the Deploy to Azure button to deploy resources with a template." link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.keyvault%2Fmanaged-hsm-create%2Fazuredeploy.json":::
- Unless it is specified, use the default value to create the Managed HSM.
+1. Select or enter the following values. Unless specified, use the default value to create the Managed HSM.
- **Subscription**: Select an Azure subscription.
- - **Resource group**: Select **Create new**, enter a unique name for the resource group, and then select **OK**.
+ - **Resource group**: Select **Create new**, enter "myResourceGroup" as the name, and then select **OK**.
- **Location**: Select a location. For example, **West US 3**. - **managedHSMName**: Enter a name for your Managed HSM.
- - **Tenant ID**: The template function automatically retrieves your tenant ID; don't change the default value. If there is no value, enter the Tenant ID that you retrieved in [Prerequisites](#prerequisites).
- * **initialAdminObjectIds**: Enter the Object ID that you retrieved in [Prerequisites](#prerequisites).
+ - **Tenant ID**: The template function automatically retrieves your tenant ID; don't change the default value. If there is no value, enter the Tenant ID that you retrieved above.
+ - **initialAdminObjectIds**: Enter the Object ID that you retrieved above.
-3. Select **Purchase**. After the Managed HSM has been deployed successfully, you get a notification:
+1. Select **Purchase**. After the Managed HSM has been deployed successfully, you get a notification:
The Azure portal is used to deploy the template. In addition to the Azure portal, you can also use the Azure PowerShell, Azure CLI, and REST API. To learn other deployment methods, see [Deploy templates](../../azure-resource-manager/templates/deploy-powershell.md).
+## Validate the deployment
+
+You can verify that the managed HSM was created with the Azure CLI [az keyvault list](/cli/azure/keyvault#az-keyvault-list) command. You will find the output easier to read if you format the results as a table:
+
+```azurecli-interactive
+az keyvault list -o table
+```
+
+You should see the name of your newly created managed HSM.
+
+## Clean up resources
++
+> [!WARNING]
+> Deleting the resource group puts the Managed HSM into a soft-deleted state. The Managed HSM will continue to be billed until it is purged. See [Managed HSM soft-delete and purge protection](recovery.md)
+ ## Next steps In this quickstart, you created a Managed HSM. This Managed HSM will not be fully functional until it is activated. See [Activate your Managed HSM](quick-create-cli.md#activate-your-managed-hsm) to learn how to activate your HSM.
machine-learning Azure Machine Learning Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-glossary.md
+
+ Title: Azure Machine Learning glossary
+description: Glossary of terms for the Azure Machine Learning platform.
+++++++ Last updated : 09/21/2022+
+
+# Azure Machine Learning glossary
+
+The Azure Machine Learning glossary is a short dictionary of terminology for the Azure Machine Learning platform. For the general Azure terminology, see also:
+
+* [Microsoft Azure glossary: A dictionary of cloud terminology on the Azure platform](../azure-glossary-cloud-terminology.md)
+* [Cloud computing terms](https://azure.microsoft.com/overview/cloud-computing-dictionary/) - General industry cloud terms.
+* [Azure fundamental concepts](/azure/cloud-adoption-framework/ready/considerations/fundamental-concepts) - Microsoft Cloud Adoption Framework for Azure.
+
+## Component
+
+An Azure Machine Learning [component](concept-component.md) is a self-contained piece of code that does one step in a machine learning pipeline. Components are the building blocks of advanced machine learning pipelines. Components can do tasks such as data processing, model training, model scoring, and so on. A component is analogous to a function: it has a name and parameters, expects input, and returns output.
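+
+For example, here's a minimal sketch (not from this article) that loads a component defined in a local YAML file; the file path is a hypothetical placeholder, and the exact `load_component` keyword may vary by SDK version:
+
+```python
+from azure.ai.ml import load_component
+
+# Load a component definition from a local YAML file (illustrative path).
+train_component = load_component(source="./components/train.yml")
+print(train_component.name, train_component.inputs, train_component.outputs)
+```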
++
+## Compute
+
+A compute is a designated compute resource where you run your job or host your endpoint. Azure Machine Learning supports the following types of compute:
+
+* **Compute cluster** - a managed-compute infrastructure that allows you to easily create a cluster of CPU or GPU compute nodes in the cloud.
+* **Compute instance** - a fully configured and managed development environment in the cloud. You can use the instance as a training or inference compute for development and testing. It's similar to a virtual machine on the cloud.
+* **Inference cluster** - used to deploy trained machine learning models to Azure Kubernetes Service. You can create an Azure Kubernetes Service (AKS) cluster from your Azure ML workspace, or attach an existing AKS cluster.
+* **Attached compute** - You can attach your own compute resources to your workspace and use them for training and inference.
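+
+For example, here's a minimal sketch (not from this article) that creates a compute cluster with the v2 Python SDK. The cluster name and VM size are illustrative, and `ml_client` is assumed to be an authenticated `MLClient`; see the [Workspace](#workspace) entry for how to create one.
+
+```python
+from azure.ai.ml.entities import AmlCompute
+
+# Define a CPU cluster that autoscales between 0 and 4 nodes.
+# ml_client is an authenticated MLClient (see the Workspace entry below).
+cpu_cluster = AmlCompute(
+    name="cpu-cluster",        # illustrative name
+    size="Standard_DS3_v2",    # illustrative VM size
+    min_instances=0,
+    max_instances=4,
+)
+ml_client.compute.begin_create_or_update(cpu_cluster)
+```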
+
+## Data
+
+Azure Machine Learning allows you to work with different types of data:
+
+* URIs (a location in local/cloud storage)
+ * `uri_folder`
+ * `uri_file`
+* Tables (a tabular data abstraction)
+ * `mltable`
+* Primitives
+ * `string`
+ * `boolean`
+ * `number`
+
+For most scenarios, you'll use URIs (`uri_folder` and `uri_file`) - a location in storage that can be easily mapped to the filesystem of a compute node in a job by either mounting or downloading the storage to the node.
+
+`mltable` is an abstraction for tabular data that is to be used for AutoML Jobs, Parallel Jobs, and some advanced scenarios. If you're just starting to use Azure Machine Learning and aren't using AutoML, we strongly encourage you to begin with URIs.
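+
+For example, here's a minimal sketch (not from this article) of declaring a `uri_file` input for a job; the path is a placeholder:
+
+```python
+from azure.ai.ml import Input
+from azure.ai.ml.constants import AssetTypes
+
+# Reference a single file in cloud storage as a uri_file job input.
+my_input = Input(
+    type=AssetTypes.URI_FILE,
+    path="https://<account_name>.blob.core.windows.net/<container_name>/<file>.csv",
+)
+```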
++
+## Datastore
+
+Azure Machine Learning datastores securely keep the connection information to your data storage on Azure, so you don't have to code it in your scripts. You can register and create a datastore to easily connect to your storage account, and access the data in your underlying storage service. The CLI v2 and SDK v2 support the following types of cloud-based storage:
+
+* Azure Blob Container
+* Azure File Share
+* Azure Data Lake
+* Azure Data Lake Gen2
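+
+For example, here's a minimal sketch (not from this article) that registers a blob container as a datastore. The names are placeholders, credential options are omitted, and `ml_client` is assumed to be an authenticated `MLClient`:
+
+```python
+from azure.ai.ml.entities import AzureBlobDatastore
+
+# Register a blob container as a datastore (identity-based access by default).
+blob_datastore = AzureBlobDatastore(
+    name="my_blob_datastore",
+    account_name="<storage-account-name>",
+    container_name="<container-name>",
+)
+ml_client.datastores.create_or_update(blob_datastore)
+```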
+
+## Environment
+
+Azure Machine Learning environments are an encapsulation of the environment where your machine learning task happens. They specify the software packages, environment variables, and software settings around your training and scoring scripts. The environments are managed and versioned entities within your Machine Learning workspace. Environments enable reproducible, auditable, and portable machine learning workflows across various computes.
+
+### Types of environment
+
+Azure ML supports two types of environments: curated and custom.
+
+Curated environments are provided by Azure Machine Learning and are available in your workspace by default. Intended to be used as is, they contain collections of Python packages and settings to help you get started with various machine learning frameworks. These pre-created environments also allow for faster deployment time. For a full list, see the [curated environments article](resource-curated-environments.md).
+
+In custom environments, you're responsible for setting up your environment. Make sure to install the packages and any other dependencies that your training or scoring script needs on the compute. Azure ML allows you to create your own environment using:
+
+* A docker image
+* A base docker image with a conda YAML to customize further
+* A docker build context
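+
+For example, here's a minimal sketch (not from this article) of a custom environment built from a base Docker image plus a conda file; the image, file name, and `ml_client` handle are assumptions:
+
+```python
+from azure.ai.ml.entities import Environment
+
+# Custom environment: base Docker image plus a conda specification file.
+custom_env = Environment(
+    name="pytorch-env",
+    image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04",
+    conda_file="pytorch-env.yml",
+)
+ml_client.environments.create_or_update(custom_env)
+```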
+
+## Model
+
+Azure Machine Learning models consist of the binary file(s) that represent a machine learning model and any corresponding metadata. Models can be created from a local or remote file or directory. For remote locations, `https`, `wasbs`, and `azureml` locations are supported. The created model will be tracked in the workspace under the specified name and version. Azure ML supports three types of storage format for models:
+
+* `custom_model`
+* `mlflow_model`
+* `triton_model`
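+
+For example, here's a minimal sketch (not from this article) that registers a local model file as a `custom_model`; the path and name are placeholders, and `ml_client` is assumed to be an authenticated `MLClient`:
+
+```python
+from azure.ai.ml.entities import Model
+from azure.ai.ml.constants import ModelType
+
+# Register a local model file under a chosen name.
+local_model = Model(
+    path="./model/sklearn_regression_model.pkl",   # illustrative path
+    type=ModelType.CUSTOM,
+    name="my-model",
+    description="Example model registered from a local file.",
+)
+ml_client.models.create_or_update(local_model)
+```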
+
+## Workspace
+
+The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. The workspace keeps a history of all jobs, including logs, metrics, output, and a snapshot of your scripts. The workspace stores references to resources like datastores and compute. It also holds all assets like models, environments, components, and data assets.
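+
+For example, here's a minimal sketch (not from this article) that gets a handle to an existing workspace; the subscription, resource group, and workspace names are placeholders:
+
+```python
+from azure.ai.ml import MLClient
+from azure.identity import DefaultAzureCredential
+
+# Connect to an existing workspace.
+ml_client = MLClient(
+    credential=DefaultAzureCredential(),
+    subscription_id="<subscription-id>",
+    resource_group_name="<resource-group>",
+    workspace_name="<workspace-name>",
+)
+print(ml_client.workspace_name)
+```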
+
+## Next steps
+
+[What is Azure Machine Learning?](overview-what-is-azure-machine-learning.md)
machine-learning How To Configure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-cli.md
Remove any existing installation of the of `ml` extension and also the CLI v1 `a
Now, install the `ml` extension: Run the help command to verify your installation and see available subcommands:
machine-learning How To Migrate From V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-migrate-from-v1.md
Previously updated : 06/01/2022 Last updated : 09/23/2022
Do consider migrating the code for creating a workspace to v2. Typically Azure r
> [!IMPORTANT] > If your workspace uses a private endpoint, it will automatically have the `v1_legacy_mode` flag enabled, preventing usage of v2 APIs. See [how to configure network isolation with v2](how-to-configure-network-isolation-with-v2.md) for details. + ### Connection (workspace connection in v1) Workspace connections from v1 are persisted on the workspace, and fully available with v2. We recommend migrating the code for creating connections to v2.
+For a comparison of SDK v1 and v2 code, see [Migrate workspace management from SDK v1 to SDK v2](migrate-to-v2-resource-workspace.md).
+ ### Datastore
You can continue using your existing v1 model deployments. For new model deploym
|Azure Kubernetes Service (AKS)|ACI, AKS|Manage your own AKS cluster(s) for model deployment, giving flexibility and granular control at the cost of IT overhead.| |Azure Arc Kubernetes|N/A|Manage your own Kubernetes cluster(s) in other clouds or on-premises, giving flexibility and granular control at the cost of IT overhead.|
+For a comparison of SDK v1 and v2 code, see [Migrate deployment endpoints from SDK v1 to SDK v2](migrate-to-v2-deploy-endpoints.md).
+ ### Jobs (experiments, runs, pipelines in v1) In v2, "experiments", "runs", and "pipelines" are consolidated into jobs. A job has a type. Most jobs are `command` jobs that run a command, like `python main.py`. What runs in a job is agnostic to any programming language, so you can run `bash` scripts, invoke `python` interpreters, run a bunch of `curl` commands, or anything else. Another common type of job is `pipeline`, which defines child jobs that may have input/output relationships, forming a directed acyclic graph (DAG).
What you run *within* the job does not need to be migrated to v2. However, it is
We recommend migrating the code for creating jobs to v2. You can see [how to train models with the CLI (v2)](how-to-train-cli.md) and the [job YAML references](reference-yaml-job-command.md) for authoring jobs in v2 YAMLs.
+For a comparison of SDK v1 and v2 code, see [Migrate script run from SDK v1 to SDK v2](migrate-to-v2-command-job.md).
+ ### Data (datasets in v1) Datasets are renamed to data assets. Interoperability between v1 datasets and v2 data assets is the most complex of any entity in Azure ML.
For details on data in v2, see the [data concept article](concept-data.md).
We recommend migrating the code for [creating data assets](how-to-create-data-assets.md) to v2.
+For a comparison of SDK v1 and v2 code, see [Migrate data management from SDK v1 to v2](migrate-to-v2-assets-data.md).
++ ### Model Models created from v1 can be used in v2. In v2, explicit model types are introduced. Similar to data assets, it may be easier to re-create a v1 model as a v2 model, setting the type appropriately. We recommend migrating the code for creating models with [SDK](how-to-train-sdk.md) or [CLI](how-to-train-cli.md) to v2.
+For a comparison of SDK v1 and v2 code, see
+
+* [Migrate model management from SDK v1 to SDK v2](migrate-to-v2-assets-model.md)
+* [Migrate AutoML from SDK v1 to SDK v2](migrate-to-v2-execution-automl.md)
+* [Migrate hyperparameter tuning from SDK v1 to SDK v2](migrate-to-v2-execution-hyperdrive.md)
+* [Migrate parallel run step from SDK v1 to SDK v2](migrate-to-v2-execution-parallel-run-step.md)
+ ### Environment Environments created from v1 can be used in v2. In v2, environments have new features like creation from a local Docker context. We recommend migrating the code for creating environments to v2.
+## Managing secrets
+
+The management of Key Vault secrets differs significantly in v2 compared to v1. The v1 `set_secret` and `get_secret` SDK methods are not available in v2. Instead, use the Key Vault client libraries to access secrets directly.
+
+For details about Key Vault, see [Use authentication credential secrets in Azure Machine Learning training jobs](how-to-use-secrets-in-runs.md).
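+
+For example, here's a minimal sketch (not from this article) of reading a secret inside a training script with the Key Vault client libraries; the vault URL and secret name are placeholders:
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.keyvault.secrets import SecretClient
+
+# Read a secret directly from Key Vault instead of the removed v1 get_secret().
+credential = DefaultAzureCredential()
+secret_client = SecretClient(
+    vault_url="https://<your-vault-name>.vault.azure.net",
+    credential=credential,
+)
+db_password = secret_client.get_secret("db-password").value
+```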
+ ## Scenarios across the machine learning lifecycle There are a few scenarios that are common across the machine learning lifecycle using Azure ML. We'll look at a few and give general recommendations for migrating to v2.
A MLOps workflow typically involves CI/CD through an external tool. It's recomme
The solution accelerator for MLOps with v2 is being developed at https://github.com/Azure/mlops-v2 and can be used as reference or adopted for setup and automation of the machine learning lifecycle.
-#### A note on GitOps with v2
+### A note on GitOps with v2
A key paradigm with v2 is serializing machine learning entities as YAML files for source control with `git`, enabling better GitOps approaches than were possible with v1. For instance, you could enforce policy by which only a service principal used in CI/CD pipelines can create/update/delete some or all entities, ensuring changes go through a governed process like pull requests with required reviewers. Since the files in source control are YAML, they're easy to diff and track changes over time. You and your team may consider shifting to this paradigm as you migrate to v2.
machine-learning Migrate To V2 Assets Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-assets-data.md
+
+ Title: 'Migrate data management from SDK v1 to v2'
+
+description: Migrate data management from v1 to v2 of Azure Machine Learning SDK
++++++ Last updated : 09/16/2022++++
+# Migrate data management from SDK v1 to v2
+
+In v1, an AzureML dataset can either be a `FileDataset` or a `TabularDataset`.
+In v2, an AzureML data asset can be a `uri_folder`, `uri_file`, or `mltable`.
+You can conceptually map `FileDataset` to `uri_folder` and `uri_file`, and `TabularDataset` to `mltable`.
+
+* URIs (`uri_folder`, `uri_file`) - a Uniform Resource Identifier that is a reference to a storage location on your local computer or in the cloud that makes it easy to access data in your jobs.
+* MLTable - a method to abstract the schema definition for tabular data so that it's easier for consumers of the data to materialize the table into a Pandas/Dask/Spark dataframe.
+
+This article gives a comparison of data scenario(s) in SDK v1 and SDK v2.
+
+## Create a `FileDataset`/URI type of data asset
+
+* SDK v1 - Create a `FileDataset`
+
+ ```python
+ from azureml.core import Workspace, Datastore, Dataset
+
+ # create a FileDataset pointing to files in 'animals' folder and its subfolders recursively
+ datastore_paths = [(datastore, 'animals')]
+ animal_ds = Dataset.File.from_files(path=datastore_paths)
+
+ # create a FileDataset from image and label files behind public web urls
+ web_paths = ['https://azureopendatastorage.blob.core.windows.net/mnist/train-images-idx3-ubyte.gz',
+ 'https://azureopendatastorage.blob.core.windows.net/mnist/train-labels-idx1-ubyte.gz']
+ mnist_ds = Dataset.File.from_files(path=web_paths)
+ ```
+
+* SDK v2
+ * Create a `URI_FOLDER` type data asset
+
+ ```python
+ from azure.ai.ml.entities import Data
+ from azure.ai.ml.constants import AssetTypes
+
+ # Supported paths include:
+ # local: './<path>'
+ # blob: 'https://<account_name>.blob.core.windows.net/<container_name>/<path>'
+ # ADLS gen2: 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/'
+ # Datastore: 'azureml://datastores/<data_store_name>/paths/<path>'
+
+ my_path = '<path>'
+
+ my_data = Data(
+ path=my_path,
+ type=AssetTypes.URI_FOLDER,
+ description="<description>",
+ name="<name>",
+ version='<version>'
+ )
+
+ ml_client.data.create_or_update(my_data)
+ ```
+
+ * Create a `URI_FILE` type data asset.
+ ```python
+ from azure.ai.ml.entities import Data
+ from azure.ai.ml.constants import AssetTypes
+
+ # Supported paths include:
+ # local: './<path>/<file>'
+ # blob: 'https://<account_name>.blob.core.windows.net/<container_name>/<path>/<file>'
+ # ADLS gen2: 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/<file>'
+ # Datastore: 'azureml://datastores/<data_store_name>/paths/<path>/<file>'
+ my_path = '<path>'
+
+ my_data = Data(
+ path=my_path,
+ type=AssetTypes.URI_FILE,
+ description="<description>",
+ name="<name>",
+ version="<version>"
+ )
+
+ ml_client.data.create_or_update(my_data)
+ ```
+
+## Create a tabular dataset/data asset
+
+* SDK v1
+
+ ```python
+ from azureml.core import Workspace, Datastore, Dataset
+
+ datastore_name = 'your datastore name'
+
+ # get existing workspace
+ workspace = Workspace.from_config()
+
+ # retrieve an existing datastore in the workspace by name
+ datastore = Datastore.get(workspace, datastore_name)
+
+ # create a TabularDataset from 3 file paths in datastore
+ datastore_paths = [(datastore, 'weather/2018/11.csv'),
+ (datastore, 'weather/2018/12.csv'),
+ (datastore, 'weather/2019/*.csv')]
+
+ weather_ds = Dataset.Tabular.from_delimited_files(path=datastore_paths)
+ ```
+
+* SDK v2 - Create `mltable` data asset via yaml definition
+
+ ```yaml
+ type: mltable
+
+ paths:
+ - pattern: ./*.txt
+ transformations:
+ - read_delimited:
+ delimiter: ,
+ encoding: ascii
+ header: all_files_same_headers
+ ```
+
+ ```python
+ from azure.ai.ml.entities import Data
+ from azure.ai.ml.constants import AssetTypes
+
+    # my_path must point to a folder containing the MLTable artifact (MLTable file + data)
+ # Supported paths include:
+ # local: './<path>'
+ # blob: 'https://<account_name>.blob.core.windows.net/<container_name>/<path>'
+ # ADLS gen2: 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/'
+ # Datastore: 'azureml://datastores/<data_store_name>/paths/<path>'
+
+ my_path = '<path>'
+
+ my_data = Data(
+ path=my_path,
+ type=AssetTypes.MLTABLE,
+ description="<description>",
+ name="<name>",
+ version='<version>'
+ )
+
+ ml_client.data.create_or_update(my_data)
+ ```
+
+## Use data in an experiment/job
+
+* SDK v1
+
+ ```python
+ from azureml.core import ScriptRunConfig
+
+ src = ScriptRunConfig(source_directory=script_folder,
+ script='train_titanic.py',
+ # pass dataset as an input with friendly name 'titanic'
+ arguments=['--input-data', titanic_ds.as_named_input('titanic')],
+ compute_target=compute_target,
+ environment=myenv)
+
+ # Submit the run configuration for your training run
+ run = experiment.submit(src)
+ run.wait_for_completion(show_output=True)
+ ```
+
+* SDK v2
+
+ ```python
+ from azure.ai.ml import command
+ from azure.ai.ml.entities import Data
+ from azure.ai.ml import Input, Output
+ from azure.ai.ml.constants import AssetTypes
+
+ # Possible Asset Types for Data:
+ # AssetTypes.URI_FILE
+ # AssetTypes.URI_FOLDER
+ # AssetTypes.MLTABLE
+
+ # Possible Paths for Data:
+ # Blob: https://<account_name>.blob.core.windows.net/<container_name>/<folder>/<file>
+    # Datastore: azureml://datastores/<data_store_name>/paths/<folder>/<file>
+ # Data Asset: azureml:<my_data>:<version>
+
+ my_job_inputs = {
+ "raw_data": Input(type=AssetTypes.URI_FOLDER, path="<path>")
+ }
+
+ my_job_outputs = {
+ "prep_data": Output(type=AssetTypes.URI_FOLDER, path="<path>")
+ }
+
+ job = command(
+ code="./src", # local path where the code is stored
+ command="python process_data.py --raw_data ${{inputs.raw_data}} --prep_data ${{outputs.prep_data}}",
+ inputs=my_job_inputs,
+ outputs=my_job_outputs,
+ environment="<environment_name>:<version>",
+ compute="cpu-cluster",
+ )
+
+ # submit the command
+ returned_job = ml_client.create_or_update(job)
+ # get a URL for the status of the job
+ returned_job.services["Studio"].endpoint
+ ```
+
+## Mapping of key functionality in SDK v1 and SDK v2
+
+|Functionality in SDK v1|Rough mapping in SDK v2|
+|-|-|
+|[Method/API in SDK v1](/python/api/azureml-core/azureml.data)|[Method/API in SDK v2](/python/api/azure-ai-ml/azure.ai.ml.entities)|
+
+## Next steps
+
+For more information, see the documentation here:
+* [Data in Azure Machine Learning](concept-data.md?tabs=uri-file-example%2Ccli-data-create-example)
+* [Create data_assets](how-to-create-data-assets.md?tabs=CLI)
+* [Read and write data in a job](how-to-read-write-data-v2.md)
+* [V2 datastore operations](/python/api/azure-ai-ml/azure.ai.ml.operations.datastoreoperations)
machine-learning Migrate To V2 Assets Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-assets-model.md
+
+ Title: Migrate model management from SDK v1 to SDK v2
+
+description: Migrate model management from v1 to v2 of Azure Machine Learning SDK
++++++ Last updated : 09/16/2022++++
+# Migrate model management from SDK v1 to SDK v2
+
+This article gives a comparison of scenario(s) in SDK v1 and SDK v2.
+
+## Create model
+
+* SDK v1
+
+ ```python
+ import urllib.request
+ from azureml.core.model import Model
+
+ # Register model
+ model = Model.register(ws, model_name="local-file-example", model_path="mlflow-model/model.pkl")
+ ```
+
+* SDK v2
+
+ ```python
+ from azure.ai.ml.entities import Model
+ from azure.ai.ml.constants import ModelType
+
+ file_model = Model(
+ path="mlflow-model/model.pkl",
+ type=ModelType.CUSTOM,
+ name="local-file-example",
+ description="Model created from local file."
+ )
+ ml_client.models.create_or_update(file_model)
+ ```
+
+## Use model in an experiment/job
+
+* SDK v1
+
+ ```python
+ model = run.register_model(model_name='run-model-example',
+ model_path='outputs/model/')
+ print(model.name, model.id, model.version, sep='\t')
+ ```
+
+* SDK v2
+
+ ```python
+ from azure.ai.ml.entities import Model
+ from azure.ai.ml.constants import ModelType
+
+ run_model = Model(
+ path="azureml://jobs/$RUN_ID/outputs/artifacts/paths/model/"
+ name="run-model-example",
+ description="Model created from run.",
+ type=ModelType.CUSTOM
+ )
+
+ ml_client.models.create_or_update(run_model)
+ ```
+
+## Mapping of key functionality in SDK v1 and SDK v2
+
+|Functionality in SDK v1|Rough mapping in SDK v2|
+|-|-|
+|[Model.register](/python/api/azureml-core/azureml.core.model(class)#azureml-core-model-register)|[ml_client.models.create_or_update](/python/api/azure-ai-ml/azure.ai.ml.mlclient#azure-ai-ml-mlclient-create-or-update)|
+|[run.register_model](/python/api/azureml-core/azureml.core.run.run#azureml-core-run-run-register-model)|[ml_client.models.create_or_update](/python/api/azure-ai-ml/azure.ai.ml.mlclient#azure-ai-ml-mlclient-create-or-update)|
+|[Model.deploy](/python/api/azureml-core/azureml.core.model(class)#azureml-core-model-deploy)|[ml_client.begin_create_or_update(blue_deployment)](/python/api/azure-ai-ml/azure.ai.ml.mlclient#azure-ai-ml-mlclient-begin-create-or-update)|
+
+## Next steps
+
+For more information, see the documentation here:
+
+* [Create a model in v1](v1/how-to-deploy-and-where.md?tabs=python#register-a-model-from-a-local-file)
+* [Deploy a model in v1](v1/how-to-deploy-and-where.md?tabs=azcli#workflow-for-deploying-a-model)
+* [Create a model in v2](how-to-manage-models.md)
+* [Deploy a model in v2](how-to-deploy-managed-online-endpoints.md)
+
machine-learning Migrate To V2 Command Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-command-job.md
+
+ Title: 'Migrate script run from SDK v1 to SDK v2'
+
+description: Migrate how to run a script from SDK v1 to SDK v2
++++++ Last updated : 09/16/2022++++
+# Migrate script run from SDK v1 to SDK v2
+
+In SDK v2, "experiments" and "runs" are consolidated into jobs.
+
+A job has a type. Most jobs are command jobs that run a `command`, like `python main.py`. What runs in a job is agnostic to any programming language, so you can run `bash` scripts, invoke `python` interpreters, run a bunch of `curl` commands, or anything else.
+
+To migrate, you'll need to change your code for submitting jobs to SDK v2. What you run _within_ the job doesn't need to be migrated to SDK v2. However, it's recommended to remove any code specific to Azure ML from your model training scripts. This separation allows for an easier transition between local and cloud and is considered best practice for mature MLOps. In practice, this means removing `azureml.*` lines of code. Model logging and tracking code should be replaced with MLflow. For more details, see [how to use MLflow in v2](how-to-use-mlflow-cli-runs.md).
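+
+For example, here's a minimal sketch (not from this article) of replacing v1 `Run.log()` calls inside a training script with MLflow tracking; the parameter and metric names are illustrative:
+
+```python
+import mlflow
+
+# Log parameters and metrics with MLflow instead of azureml.core.Run.
+# Azure ML picks these up automatically when the script runs as a job.
+mlflow.log_param("learning_rate", 0.01)
+mlflow.log_metric("accuracy", 0.91)
+```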
+
+This article gives a comparison of scenario(s) in SDK v1 and SDK v2.
+
+## Submit a script run
+
+* SDK v1
+
+ ```python
+ from azureml.core import Workspace, Experiment, Environment, ScriptRunConfig
+
+ # connect to the workspace
+ ws = Workspace.from_config()
+
+ # define and configure the experiment
+ experiment = Experiment(workspace=ws, name='day1-experiment-train')
+ config = ScriptRunConfig(source_directory='./src',
+ script='train.py',
+ compute_target='cpu-cluster')
+
+ # set up pytorch environment
+ env = Environment.from_conda_specification(
+ name='pytorch-env',
+ file_path='pytorch-env.yml')
+ config.run_config.environment = env
+
+ run = experiment.submit(config)
+
+ aml_url = run.get_portal_url()
+ print(aml_url)
+ ```
+
+* SDK v2
+
+ ```python
+ #import required libraries
+ from azure.ai.ml import MLClient, command
+ from azure.ai.ml.entities import Environment
+ from azure.identity import DefaultAzureCredential
+
+ #connect to the workspace
+ ml_client = MLClient.from_config(DefaultAzureCredential())
+
+ # set up pytorch environment
+ env = Environment(
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04",
+ conda_file="pytorch-env.yml",
+ name="pytorch-env"
+ )
+
+ # define the command
+ command_job = command(
+ code="./src",
+ command="train.py",
+ environment=env,
+ compute="cpu-cluster",
+ )
+
+ returned_job = ml_client.jobs.create_or_update(command_job)
+ returned_job
+ ```
+
+## Mapping of key functionality in v1 and v2
+
+|Functionality in SDK v1|Rough mapping in SDK v2|
+|-|-|
+|[experiment.submit](/python/api/azureml-core/azureml.core.experiment.experiment#azureml-core-experiment-experiment-submit)|[MLClient.jobs.create_or_update](/python/api/azure-ai-ml/azure.ai.ml.mlclient#azure-ai-ml-mlclient-create-or-update)|
+|[ScriptRunConfig()](/python/api/azureml-core/azureml.core.scriptrunconfig#constructor)|[command()](/python/api/azure-ai-ml/azure.ai.ml#azure-ai-ml-command)|
+
+## Next steps
+
+For more information, see:
+
+* [V1 - Experiment](/python/api/azureml-core/azureml.core.experiment)
+* [V2 - Command Job](/python/api/azure-ai-ml/azure.ai.ml#azure-ai-ml-command)
+* [Train models with the Azure ML Python SDK v2](how-to-train-sdk.md)
machine-learning Migrate To V2 Deploy Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-deploy-endpoints.md
+
+ Title: Migrate endpoints from SDK v1 to SDK v2
+
+description: Migrate deployment endpoints from v1 to v2 of Azure Machine Learning SDK
++++++ Last updated : 09/16/2022++++
+# Migrate deployment endpoints from SDK v1 to SDK v2
+
+We newly introduced [online endpoints](concept-endpoints.md) and batch endpoints as v2 concepts. There are several deployment options, such as managed online endpoints and [kubernetes online endpoints](how-to-attach-kubernetes-anywhere.md) (including AKS and Arc-enabled Kubernetes) in v2, and ACI and AKS webservices in v1. In this article, we'll focus on the comparison of deploying to ACI webservices (v1) and managed online endpoints (v2).
+
+Examples in this article show how to:
+
+* Deploy your model to Azure
+* Score using the endpoint
+* Delete the webservice/endpoint
+
+## Create inference resources
+
+* SDK v1
+ 1. Configure a model, an environment, and a scoring script:
+ ```python
+ # configure a model. example for registering a model
+ from azureml.core.model import Model
+ model = Model.register(ws, model_name="bidaf_onnx", model_path="./model.onnx")
+
+ # configure an environment
+ from azureml.core import Environment
+ env = Environment(name='myenv')
+ python_packages = ['nltk', 'numpy', 'onnxruntime']
+ for package in python_packages:
+ env.python.conda_dependencies.add_pip_package(package)
+
+ # configure an inference configuration with a scoring script
+ from azureml.core.model import InferenceConfig
+ inference_config = InferenceConfig(
+ environment=env,
+ source_directory="./source_dir",
+ entry_script="./score.py",
+ )
+ ```
+
+ 1. Configure and deploy an **ACI webservice**:
+ ```python
+ from azureml.core.webservice import AciWebservice
+
+    # define compute resources for ACI
+ deployment_config = AciWebservice.deploy_configuration(
+ cpu_cores=0.5, memory_gb=1, auth_enabled=True
+ )
+
+ # define an ACI webservice
+ service = Model.deploy(
+ ws,
+ "myservice",
+ [model],
+ inference_config,
+ deployment_config,
+ overwrite=True,
+ )
+
+ # create the service
+ service.wait_for_deployment(show_output=True)
+ ```
+
+For more information on registering models, see [Register a model from a local file](v1/how-to-deploy-and-where.md?tabs=python#register-a-model-from-a-local-file).
+
+* SDK v2
+
+ 1. Configure a model, an environment, and a scoring script:
+ ```python
+ from azure.ai.ml.entities import Model
+ # configure a model
+ model = Model(path="../model-1/model/sklearn_regression_model.pkl")
+
+ # configure an environment
+ from azure.ai.ml.entities import Environment
+ env = Environment(
+ conda_file="../model-1/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
+ )
+
+ # configure an inference configuration with a scoring script
+ from azure.ai.ml.entities import CodeConfiguration
+ code_config = CodeConfiguration(
+ code="../model-1/onlinescoring", scoring_script="score.py"
+ )
+ ```
+
+ 1. Configure and create an **online endpoint**:
+ ```python
+ import datetime
+ from azure.ai.ml.entities import ManagedOnlineEndpoint
+
+ # create a unique endpoint name with current datetime to avoid conflicts
+ online_endpoint_name = "endpoint-" + datetime.datetime.now().strftime("%m%d%H%M%f")
+
+ # define an online endpoint
+ endpoint = ManagedOnlineEndpoint(
+ name=online_endpoint_name,
+ description="this is a sample online endpoint",
+ auth_mode="key",
+ tags={"foo": "bar"},
+ )
+
+ # create the endpoint:
+ ml_client.begin_create_or_update(endpoint)
+ ```
+
+ 1. Configure and create an **online deployment**:
+ ```python
+ from azure.ai.ml.entities import ManagedOnlineDeployment
+
+ # define a deployment
+ blue_deployment = ManagedOnlineDeployment(
+ name="blue",
+ endpoint_name=online_endpoint_name,
+ model=model,
+ environment=env,
+ code_configuration=code_config,
+ instance_type="Standard_F2s_v2",
+ instance_count=1,
+ )
+
+ # create the deployment:
+ ml_client.begin_create_or_update(blue_deployment)
+
+ # blue deployment takes 100 traffic
+ endpoint.traffic = {"blue": 100}
+ ml_client.begin_create_or_update(endpoint)
+ ```
+
+For more information on concepts for endpoints and deployments, see [What are online endpoints?](concept-endpoints.md#what-are-online-endpoints)
++
+## Submit a request
+
+* SDK v1
+
+ ```python
+ import json
+ data = {
+ "query": "What color is the fox",
+ "context": "The quick brown fox jumped over the lazy dog.",
+ }
+ data = json.dumps(data)
+ predictions = service.run(input_data=data)
+ print(predictions)
+ ```
+
+* SDK v2
+
+ ```python
+ # test the endpoint (the request will route to blue deployment as set above)
+ ml_client.online_endpoints.invoke(
+ endpoint_name=online_endpoint_name,
+ request_file="../model-1/sample-request.json",
+ )
+
+ # test the specific (blue) deployment
+ ml_client.online_endpoints.invoke(
+ endpoint_name=online_endpoint_name,
+ deployment_name="blue",
+ request_file="../model-1/sample-request.json",
+ )
+ ```
+
+## Delete resources
+
+* SDK v1
+
+ ```python
+ service.delete()
+ ```
+
+* SDK v2
+
+ ```python
+ ml_client.online_endpoints.begin_delete(name=online_endpoint_name)
+ ```
+
+## Mapping of key functionality in SDK v1 and SDK v2
+
+|Functionality in SDK v1|Rough mapping in SDK v2|
+|-|-|
+|[azureml.core.model.Model class](/python/api/azureml-core/azureml.core.model.model?view=azure-ml-py&preserve-view=true)|[azure.ai.ml.entities.Model class](/python/api/azure-ai-ml/azure.ai.ml.entities.model?view=azure-python-preview&preserve-view=true)|
+|[azureml.core.Environment class](/python/api/azureml-core/azureml.core.environment%28class%29?view=azure-ml-py&preserve-view=true)|[azure.ai.ml.entities.Environment class](/python/api/azure-ai-ml/azure.ai.ml.entities.environment?view=azure-python-preview&preserve-view=true)|
+|[azureml.core.model.InferenceConfig class](/python/api/azureml-core/azureml.core.model.inferenceconfig?view=azure-ml-py&preserve-view=true)|[azure.ai.ml.entities.CodeConfiguration class](/python/api/azure-ai-ml/azure.ai.ml.entities.codeconfiguration?view=azure-python-preview&preserve-view=true)|
+|[azureml.core.webservice.AciWebservice class](/python/api/azureml-core/azureml.core.webservice.aciwebservice?view=azure-ml-py&preserve-view=true#azureml-core-webservice-aciwebservice-deploy-configuration)|[azure.ai.ml.entities.OnlineDeployment class](/python/api/azure-ai-ml/azure.ai.ml.entities.onlinedeployment?view=azure-python-preview&preserve-view=true) (and [azure.ai.ml.entities.ManagedOnlineEndpoint class](/python/api/azure-ai-ml/azure.ai.ml.entities.managedonlineendpoint?view=azure-python-preview&preserve-view=true))|
+|[Model.deploy](/python/api/azureml-core/azureml.core.model(class)?view=azure-ml-py&preserve-view=true#azureml-core-model-deploy) or [Webservice.deploy](/python/api/azureml-core/azureml.core.webservice%28class%29?view=azure-ml-py&preserve-view=true#azureml-core-webservice-deploy) |[ml_client.begin_create_or_update(online_deployment)](/python/api/azure-ai-ml/azure.ai.ml.mlclient?view=azure-python-preview&preserve-view=true#azure-ai-ml-mlclient-begin-create-or-update)|
|[Webservice.run](/python/api/azureml-core/azureml.core.webservice%28class%29?view=azure-ml-py&preserve-view=true#azureml-core-webservice-run)|[ml_client.online_endpoints.invoke](/python/api/azure-ai-ml/azure.ai.ml.operations.onlineendpointoperations?view=azure-python-preview&preserve-view=true#azure-ai-ml-operations-onlineendpointoperations-invoke)|
|[Webservice.delete](/python/api/azureml-core/azureml.core.webservice%28class%29?view=azure-ml-py&preserve-view=true#azureml-core-webservice-delete)|[ml_client.online_endpoints.begin_delete](/python/api/azure-ai-ml/azure.ai.ml.operations.onlineendpointoperations?view=azure-python-preview&preserve-view=true#azure-ai-ml-operations-onlineendpointoperations-begin-delete)|
+
+## Related documents
+
+For more information, see:
+
+v2 docs:
+* [What are endpoints?](concept-endpoints.md)
+* [Deploy machine learning models to managed online endpoint using Python SDK v2](how-to-deploy-managed-online-endpoint-sdk-v2.md)
+
+v1 docs:
+* [MLOps: ML model management v1](v1/concept-model-management-and-deployment.md)
+* [Deploy machine learning models](v1/how-to-deploy-and-where.md?tabs=python)
machine-learning Migrate To V2 Execution Automl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-automl.md
+
+ Title: Migrate AutoML from SDK v1 to SDK v2
+
+description: Migrate AutoML from v1 to v2 of Azure Machine Learning SDK
++++++ Last updated : 09/16/2022++++
+# Migrate AutoML from SDK v1 to SDK v2
+
+In SDK v2, "experiments" and "runs" are consolidated into jobs.
+
+In SDK v1, AutoML was primarily configured and run using the `AutoMLConfig` class. In SDK v2, this class has been converted to an `AutoML` job. Although there are some differences in the configuration options, by and large, naming and functionality have been preserved in v2.
+
+This article gives a comparison of scenario(s) in SDK v1 and SDK v2.
+
+## Submit AutoML run
+
+* SDK v1: Below is a sample AutoML classification task. For the entire code, check out our [examples repo](https://github.com/azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb).
+
+ ```python
+ # Imports
+
    import logging

    import azureml.core
+ from azureml.core.experiment import Experiment
+ from azureml.core.workspace import Workspace
+ from azureml.core.dataset import Dataset
+ from azureml.train.automl import AutoMLConfig
+ from azureml.train.automl.run import AutoMLRun
+
+ # Load tabular dataset
+ data = "<url_to_data>"
+ dataset = Dataset.Tabular.from_delimited_files(data)
+ training_data, validation_data = dataset.random_split(percentage=0.8, seed=223)
+ label_column_name = "Class"
+
+ # Configure Auto ML settings
+ automl_settings = {
+ "n_cross_validations": 3,
+ "primary_metric": "average_precision_score_weighted",
+ "enable_early_stopping": True,
+ "max_concurrent_iterations": 2,
+ "experiment_timeout_hours": 0.25,
+ "verbosity": logging.INFO,
+ }
+
+ # Put together an AutoML job constructor
+ automl_config = AutoMLConfig(
+ task="classification",
+ debug_log="automl_errors.log",
+ compute_target=compute_target,
+ training_data=training_data,
+ label_column_name=label_column_name,
+ **automl_settings,
+ )
+
+ # Submit run
+ remote_run = experiment.submit(automl_config, show_output=False)
+ azureml_url = remote_run.get_portal_url()
+ print(azureml_url)
+ ```
+
+* SDK v2: Below is a sample AutoML classification task. For the entire code, check out our [examples repo](https://github.com/Azure/azureml-examples/blob/main/sdk/jobs/automl-standalone-jobs/automl-classification-task-bankmarketing/automl-classification-task-bankmarketing-mlflow.ipynb).
+
+ ```python
+ # Imports
+ from azure.ai.ml import automl, Input, MLClient
+
+ from azure.ai.ml.constants import AssetTypes
+ from azure.ai.ml.automl import (
+ classification,
+ ClassificationPrimaryMetrics,
+ ClassificationModels,
+ )
+
+
+ # Create MLTables for training dataset
+ # Note that AutoML Job can also take in tabular data
+ my_training_data_input = Input(
+ type=AssetTypes.MLTABLE, path="./data/training-mltable-folder"
+ )
+
+ # Create the AutoML classification job with the related factory-function.
+ classification_job = automl.classification(
+ compute="<compute_name>",
        experiment_name="<exp_name>",
+ training_data=my_training_data_input,
+ target_column_name="<name_of_target_column>",
+ primary_metric="accuracy",
+ n_cross_validations=5,
+ enable_model_explainability=True,
+ tags={"my_custom_tag": "My custom value"},
+ )
+
+ # Limits are all optional
+ classification_job.set_limits(
+ timeout_minutes=600,
+ trial_timeout_minutes=20,
+ max_trials=5,
        max_concurrent_trials=4,
        max_cores_per_trial=1,
+ enable_early_termination=True,
+ )
+
+ # Training properties are optional
+ classification_job.set_training(
+ blocked_training_algorithms=["LogisticRegression"],
+ enable_onnx_compatible_models=True,
+ )
+
+ # Submit the AutoML job
+ returned_job = ml_client.jobs.create_or_update(classification_job)
+ returned_job
+ ```
+
+## Mapping of key functionality in SDK v1 and SDK v2
+
+|Functionality in SDK v1|Rough mapping in SDK v2|
+|-|-|
+|[AutoMLConfig class](/python/api/azureml-train-automl-client/azureml.train.automl)|[automl package (AutoML job)](/python/api/azure-ai-ml/azure.ai.ml.automl)|
+
+## Next steps
+
+For more information, see:
+
+* [How to train an AutoML model with Python SDKv2](how-to-configure-auto-train.md)
machine-learning Migrate To V2 Execution Hyperdrive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-hyperdrive.md
+
+ Title: Migrate hyperparameter tuning from SDK v1 to SDK v2
+
+description: Migrate hyperparameter tuning from v1 to v2 of Azure Machine Learning SDK
++++++ Last updated : 09/16/2022++++
+# Migrate hyperparameter tuning from SDK v1 to SDK v2
+
+In SDK v2, hyperparameter tuning is consolidated into jobs.
+
+A job has a type. Most jobs are command jobs that run a `command`, like `python main.py`. What runs in a job is agnostic to any programming language, so you can run `bash` scripts, invoke `python` interpreters, run a bunch of `curl` commands, or anything else.
+
+A sweep job is another type of job, which defines sweep settings and can be initiated by calling the `sweep()` method of a command job.
+
+To migrate, you'll need to change your code for defining and submitting your hyperparameter tuning experiment to SDK v2. What you run _within_ the job doesn't need to be migrated to SDK v2. However, it's recommended to remove any code specific to Azure ML from your model training scripts. This separation allows for an easier transition between local and cloud and is considered best practice for mature MLOps. In practice, this means removing `azureml.*` lines of code. Model logging and tracking code should be replaced with MLflow. For more information, see [how to use MLflow in v2](how-to-use-mlflow-cli-runs.md).
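+
+For example, a minimal sketch of what that replacement can look like inside a training script, using MLflow tracking (assuming `mlflow` and `azureml-mlflow` are available in the job environment):
+
+```python
+# train.py (excerpt): log parameters and metrics with MLflow instead of the azureml.core Run object
+import mlflow
+
+mlflow.autolog()                          # optional framework auto-logging
+mlflow.log_param("learning_rate", 0.01)   # replaces run.log(...) / run.tag(...) calls from v1
+mlflow.log_metric("Accuracy", 0.91)       # replaces run.log("Accuracy", ...) from v1
+```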
+
+This article gives a comparison of scenario(s) in SDK v1 and SDK v2.
+
+## Run hyperparameter tuning in an experiment
+
+* SDK v1
+
+ ```python
+ from azureml.core import ScriptRunConfig, Experiment, Workspace
+ from azureml.train.hyperdrive import RandomParameterSampling, BanditPolicy, HyperDriveConfig, PrimaryMetricGoal
+ from azureml.train.hyperdrive import choice, loguniform
+
+ dataset = Dataset.get_by_name(ws, 'mnist-dataset')
+
+ # list the files referenced by mnist dataset
+ dataset.to_path()
+
+ #define the search space for your hyperparameters
+ param_sampling = RandomParameterSampling(
+ {
+ '--batch-size': choice(25, 50, 100),
+ '--first-layer-neurons': choice(10, 50, 200, 300, 500),
+ '--second-layer-neurons': choice(10, 50, 200, 500),
+ '--learning-rate': loguniform(-6, -1)
+ }
+ )
+
+ args = ['--data-folder', dataset.as_named_input('mnist').as_mount()]
+
+ #Set up your script run
+ src = ScriptRunConfig(source_directory=script_folder,
+ script='keras_mnist.py',
+ arguments=args,
+ compute_target=compute_target,
+ environment=keras_env)
+
+ # Set early stopping on this one
+ early_termination_policy = BanditPolicy(evaluation_interval=2, slack_factor=0.1)
+
+ # Define the configurations for your hyperparameter tuning experiment
+ hyperdrive_config = HyperDriveConfig(run_config=src,
+ hyperparameter_sampling=param_sampling,
+ policy=early_termination_policy,
+ primary_metric_name='Accuracy',
+ primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
+ max_total_runs=20,
+ max_concurrent_runs=4)
+ # Specify your experiment details
+ experiment = Experiment(workspace, experiment_name)
+
+ hyperdrive_run = experiment.submit(hyperdrive_config)
+
+ #Find the best model
+ best_run = hyperdrive_run.get_best_run_by_primary_metric()
+ ```
+
+* SDK v2
+
+ ```python
+ from azure.ai.ml import MLClient
+ from azure.ai.ml import command, Input
+ from azure.ai.ml.sweep import Choice, Uniform, MedianStoppingPolicy
+ from azure.identity import DefaultAzureCredential
+
+ # Create your command
+ command_job_for_sweep = command(
+ code="./src",
+ command="python main.py --iris-csv ${{inputs.iris_csv}} --learning-rate ${{inputs.learning_rate}} --boosting ${{inputs.boosting}}",
+ environment="AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu@latest",
+ inputs={
+ "iris_csv": Input(
+ type="uri_file",
+ path="https://azuremlexamples.blob.core.windows.net/datasets/iris.csv",
+ ),
+ #define the search space for your hyperparameters
+ "learning_rate": Uniform(min_value=0.01, max_value=0.9),
+ "boosting": Choice(values=["gbdt", "dart"]),
+ },
+ compute="cpu-cluster",
+ )
+
+ # Call sweep() on your command job to sweep over your parameter expressions
+ sweep_job = command_job_for_sweep.sweep(
+ compute="cpu-cluster",
+ sampling_algorithm="random",
+ primary_metric="test-multi_logloss",
+ goal="Minimize",
+ )
+
+ # Define the limits for this sweep
+ sweep_job.set_limits(max_total_trials=20, max_concurrent_trials=10, timeout=7200)
+
+ # Set early stopping on this one
+ sweep_job.early_termination = MedianStoppingPolicy(delay_evaluation=5, evaluation_interval=2)
+
+ # Specify your experiment details
+ sweep_job.display_name = "lightgbm-iris-sweep-example"
+ sweep_job.experiment_name = "lightgbm-iris-sweep-example"
+ sweep_job.description = "Run a hyperparameter sweep job for LightGBM on Iris dataset."
+
+ # submit the sweep
+ returned_sweep_job = ml_client.create_or_update(sweep_job)
+
+ # get a URL for the status of the job
+ returned_sweep_job.services["Studio"].endpoint
+
+ # Download best trial model output
+ ml_client.jobs.download(returned_sweep_job.name, output_name="model")
+ ```
+
+## Run hyperparameter tuning in a pipeline
+
+* SDK v1
+
+ ````python
+
+ tf_env = Environment.get(ws, name='AzureML-TensorFlow-2.0-GPU')
+ data_folder = dataset.as_mount()
+ src = ScriptRunConfig(source_directory=script_folder,
+ script='tf_mnist.py',
+ arguments=['--data-folder', data_folder],
+ compute_target=compute_target,
+ environment=tf_env)
+
+ #Define HyperDrive configs
+ ps = RandomParameterSampling(
+ {
+ '--batch-size': choice(25, 50, 100),
+ '--first-layer-neurons': choice(10, 50, 200, 300, 500),
+ '--second-layer-neurons': choice(10, 50, 200, 500),
+ '--learning-rate': loguniform(-6, -1)
+ }
+ )
+
+ early_termination_policy = BanditPolicy(evaluation_interval=2, slack_factor=0.1)
+
+ hd_config = HyperDriveConfig(run_config=src,
+ hyperparameter_sampling=ps,
+ policy=early_termination_policy,
+ primary_metric_name='validation_acc',
+ primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
+ max_total_runs=4,
+ max_concurrent_runs=4)
+
+ metrics_output_name = 'metrics_output'
+ metrics_data = PipelineData(name='metrics_data',
+ datastore=datastore,
+ pipeline_output_name=metrics_output_name,
+ training_output=TrainingOutput("Metrics"))
+
+ model_output_name = 'model_output'
+ saved_model = PipelineData(name='saved_model',
+ datastore=datastore,
+ pipeline_output_name=model_output_name,
+ training_output=TrainingOutput("Model",
+ model_file="outputs/model/saved_model.pb"))
+ #Create HyperDriveStep
+ hd_step_name='hd_step01'
+ hd_step = HyperDriveStep(
+ name=hd_step_name,
+ hyperdrive_config=hd_config,
+ inputs=[data_folder],
+ outputs=[metrics_data, saved_model])
+
+ #Find and register best model
+ conda_dep = CondaDependencies()
+ conda_dep.add_pip_package("azureml-sdk")
+
+ rcfg = RunConfiguration(conda_dependencies=conda_dep)
+
+ register_model_step = PythonScriptStep(script_name='register_model.py',
+ name="register_model_step01",
+ inputs=[saved_model],
+ compute_target=cpu_cluster,
+ arguments=["--saved-model", saved_model],
+ allow_reuse=True,
+ runconfig=rcfg)
+
+ register_model_step.run_after(hd_step)
+
+ #Run the pipeline
+ pipeline = Pipeline(workspace=ws, steps=[hd_step, register_model_step])
+ pipeline_run = exp.submit(pipeline)
+
+ ````
+
+* SDK v2
+
+ ```python
+ train_component_func = load_component(path="./train.yml")
+ score_component_func = load_component(path="./predict.yml")
+
+ # define a pipeline
+ @pipeline()
+ def pipeline_with_hyperparameter_sweep():
+ """Tune hyperparameters using sample components."""
+ train_model = train_component_func(
+ data=Input(
+ type="uri_file",
+ path="wasbs://datasets@azuremlexamples.blob.core.windows.net/iris.csv",
+ ),
+ c_value=Uniform(min_value=0.5, max_value=0.9),
+ kernel=Choice(["rbf", "linear", "poly"]),
+ coef0=Uniform(min_value=0.1, max_value=1),
+ degree=3,
+ gamma="scale",
+ shrinking=False,
+ probability=False,
+ tol=0.001,
+ cache_size=1024,
+ verbose=False,
+ max_iter=-1,
+ decision_function_shape="ovr",
+ break_ties=False,
+ random_state=42,
+ )
+ sweep_step = train_model.sweep(
+ primary_metric="training_f1_score",
+ goal="minimize",
+ sampling_algorithm="random",
+ compute="cpu-cluster",
+ )
+ sweep_step.set_limits(max_total_trials=20, max_concurrent_trials=10, timeout=7200)
+
+ score_data = score_component_func(
+ model=sweep_step.outputs.model_output, test_data=sweep_step.outputs.test_data
+ )
+
+
+ pipeline_job = pipeline_with_hyperparameter_sweep()
+
+ # set pipeline level compute
+ pipeline_job.settings.default_compute = "cpu-cluster"
+
+ # submit job to workspace
+ pipeline_job = ml_client.jobs.create_or_update(
+ pipeline_job, experiment_name="pipeline_samples"
+ )
+ pipeline_job
+ ```
+
+## Mapping of key functionality in SDK v1 and SDK v2
+
+|Functionality in SDK v1|Rough mapping in SDK v2|
+|-|-|
+|[HyperDriveRunConfig()](/python/api/azureml-train-core/azureml.train.hyperdrive.hyperdriverunconfig)|[SweepJob()](/python/api/azure-ai-ml/azure.ai.ml.sweep.sweepjob)|
+|[hyperdrive Package](/python/api/azureml-train-core/azureml.train.hyperdrive)|[sweep Package](/python/api/azure-ai-ml/azure.ai.ml.sweep)|
++
+## Next steps
+
+For more information, see:
+
+* [SDK v1 - Tune Hyperparameters](v1/how-to-tune-hyperparameters-v1.md)
+* [SDK v2 - Tune Hyperparameters](/python/api/azure-ai-ml/azure.ai.ml.sweep)
+* [SDK v2 - Sweep in Pipeline](how-to-use-sweep-in-pipeline.md)
machine-learning Migrate To V2 Execution Parallel Run Step https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-parallel-run-step.md
+
+ Title: Migrate parallel run step from SDK v1 to SDK v2
+
+description: Migrate parallel run step from v1 to v2 of Azure Machine Learning SDK
++++++ Last updated : 09/16/2022++++
+# Migrate parallel run step from SDK v1 to SDK v2
+
+In SDK v2, "Parallel run step" is consolidated into job concept as `parallel job`. Parallel job keeps the same target to empower users to accelerate their job execution by distributing repeated tasks on powerful multi-nodes compute clusters. On top of parallel run step, v2 parallel job provides extra benefits:
+
+- Flexible interface, which allows user to define multiple custom inputs and outputs for your parallel job. You can connect them with other steps to consume or manage their content in your entry script
+- Simplify input schema, which replaces `Dataset` as input by using v2 `data asset` concept. You can easily use your local files or blob directory URI as the inputs to parallel job.
+- More powerful features are under developed in v2 parallel job only. For example, resume the failed/canceled parallel job to continue process the failed or unprocessed mini-batches by reusing the successful result to save duplicate effort.
+
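+A minimal sketch of this simplified input schema (paths and names here are illustrative assumptions):
+
+```python
+from azure.ai.ml import Input
+from azure.ai.ml.constants import AssetTypes
+
+# input from a local folder (uploaded automatically when the job is submitted)
+local_input = Input(type=AssetTypes.URI_FOLDER, path="./my_local_data/")
+
+# input from a blob directory URI
+blob_input = Input(type=AssetTypes.URI_FOLDER, path="wasbs://data@myaccount.blob.core.windows.net/my_dir/")
+```
+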
+To migrate your current SDK v1 parallel run step to v2, you'll need to:
+
+- Use `parallel_run_function` to create a parallel job, replacing `ParallelRunConfig` and `ParallelRunStep` from v1.
+- Migrate your v1 pipeline to v2. Then invoke your v2 parallel job as a step in your v2 pipeline. See [how to migrate pipeline from v1 to v2](migrate-to-v2-execution-pipeline.md) for the details about pipeline migration.
+
+> [!NOTE]
+> The user __entry script__ is compatible between the v1 parallel run step and the v2 parallel job, so you can keep using the same entry_script.py when you migrate your parallel run job. A sketch of such a script follows this note.
+
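+The entry script contract is the same in both versions: an optional `init()` that runs once per worker process, and a `run(mini_batch)` that is called for each mini-batch and returns its results. A minimal sketch (the processing logic is an illustrative placeholder):
+
+```python
+# entry_script.py: works with both the v1 parallel run step and the v2 parallel job
+def init():
+    # one-time setup per worker process, for example loading a model into a global
+    print("worker initialized")
+
+def run(mini_batch):
+    # mini_batch is a list of file paths (file data) or a pandas DataFrame (tabular data)
+    results = []
+    for item in mini_batch:
+        results.append(f"processed {item}")
+    return results
+```
+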
+This article gives a comparison of scenario(s) in SDK v1 and SDK v2. In the following examples, we'll build a parallel job to predict input data in a pipelines job. You'll see how to build a parallel job, and how to use it in a pipeline job for both SDK v1 and SDK v2.
+
+## Prerequisites
+
+ - Prepare your SDK v2 environment: [Install the Azure ML SDK v2 for Python](/python/api/overview/azure/ml/installv2)
 - Understand the basics of SDK v2 pipelines: [How to create Azure ML pipeline with python SDK v2](how-to-create-component-pipeline-python.md)
++
+## Create parallel step
+* SDK v1
+
+ ```python
+ # Create the configuration to wrap the inference script
+ from azureml.pipeline.steps import ParallelRunStep, ParallelRunConfig
+
+ parallel_run_config = ParallelRunConfig(
+ source_directory=scripts_folder,
+ entry_script=script_file,
+ mini_batch_size=PipelineParameter(name="batch_size_param", default_value="5"),
+ error_threshold=10,
+ output_action="append_row",
+ append_row_file_name="mnist_outputs.txt",
+ environment=batch_env,
+ compute_target=compute_target,
+ process_count_per_node=PipelineParameter(name="process_count_param", default_value=2),
+ node_count=2
+ )
+
+ # Create the Parallel run step
+ parallelrun_step = ParallelRunStep(
+ name="predict-digits-mnist",
+ parallel_run_config=parallel_run_config,
+ inputs=[ input_mnist_ds_consumption ],
+ output=output_dir,
+ allow_reuse=False
+ )
+ ```
+
+* SDK v2
+
+ ```python
    # imports needed for the parallel job definition
    from azure.ai.ml import Input, Output
    from azure.ai.ml.constants import AssetTypes
    from azure.ai.ml.parallel import parallel_run_function, RunFunction

    # parallel job to process file data
+ file_batch_inference = parallel_run_function(
+ name="file_batch_score",
+ display_name="Batch Score with File Dataset",
+ description="parallel component for batch score",
+ inputs=dict(
+ job_data_path=Input(
+ type=AssetTypes.MLTABLE,
+ description="The data to be split and scored in parallel",
+ )
+ ),
+ outputs=dict(job_output_path=Output(type=AssetTypes.MLTABLE)),
+ input_data="${{inputs.job_data_path}}",
+ instance_count=2,
+ mini_batch_size="1",
+ mini_batch_error_threshold=1,
+ max_concurrency_per_instance=1,
+ task=RunFunction(
+ code="./src",
+ entry_script="file_batch_inference.py",
+ program_arguments="--job_output_path ${{outputs.job_output_path}}",
+ environment="azureml:AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:1",
+ ),
+ )
+ ```
+
+## Use parallel step in pipeline
+
+* SDK v1
+
+ ```python
+ # Run pipeline with parallel run step
+ from azureml.core import Experiment
+
+ pipeline = Pipeline(workspace=ws, steps=[parallelrun_step])
+ experiment = Experiment(ws, 'digit_identification')
+ pipeline_run = experiment.submit(pipeline)
+ pipeline_run.wait_for_completion(show_output=True)
+ ```
+
+* SDK v2
+
+ ```python
+ @pipeline()
+ def parallel_in_pipeline(pipeline_job_data_path, pipeline_score_model):
+
+ prepare_file_tabular_data = prepare_data(input_data=pipeline_job_data_path)
+ # output of file & tabular data should be type MLTable
+ prepare_file_tabular_data.outputs.file_output_data.type = AssetTypes.MLTABLE
+ prepare_file_tabular_data.outputs.tabular_output_data.type = AssetTypes.MLTABLE
+
+ batch_inference_with_file_data = file_batch_inference(
+ job_data_path=prepare_file_tabular_data.outputs.file_output_data
+ )
+ # use eval_mount mode to handle file data
+ batch_inference_with_file_data.inputs.job_data_path.mode = (
+ InputOutputModes.EVAL_MOUNT
+ )
+ batch_inference_with_file_data.outputs.job_output_path.type = AssetTypes.MLTABLE
+
+ batch_inference_with_tabular_data = tabular_batch_inference(
+ job_data_path=prepare_file_tabular_data.outputs.tabular_output_data,
+ score_model=pipeline_score_model,
+ )
+ # use direct mode to handle tabular data
+ batch_inference_with_tabular_data.inputs.job_data_path.mode = (
+ InputOutputModes.DIRECT
+ )
+
+ return {
+ "pipeline_job_out_file": batch_inference_with_file_data.outputs.job_output_path,
+ "pipeline_job_out_tabular": batch_inference_with_tabular_data.outputs.job_output_path,
+ }
+
+ pipeline_job_data_path = Input(
+ path="./dataset/", type=AssetTypes.MLTABLE, mode=InputOutputModes.RO_MOUNT
+ )
+ pipeline_score_model = Input(
+ path="./model/", type=AssetTypes.URI_FOLDER, mode=InputOutputModes.DOWNLOAD
+ )
+ # create a pipeline
+ pipeline_job = parallel_in_pipeline(
+ pipeline_job_data_path=pipeline_job_data_path,
+ pipeline_score_model=pipeline_score_model,
+ )
+ pipeline_job.outputs.pipeline_job_out_tabular.type = AssetTypes.URI_FILE
+
+ # set pipeline level compute
+ pipeline_job.settings.default_compute = "cpu-cluster"
+
+ # run pipeline job
+ pipeline_job = ml_client.jobs.create_or_update(
+ pipeline_job, experiment_name="pipeline_samples"
+ )
+ ```
+
+## Mapping of key functionality in SDK v1 and SDK v2
+
+|Functionality in SDK v1|Rough mapping in SDK v2|
+|-|-|
+|[azureml.pipeline.steps.parallelrunconfig](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallelrunconfig)<br>[azureml.pipeline.steps.parallelrunstep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallelrunstep)|[azure.ai.ml.parallel](/python/api/azure-ai-ml/azure.ai.ml.parallel)|
+|[OutputDatasetConfig](/python/api/azureml-core/azureml.data.output_dataset_config.outputdatasetconfig)|[Output](/python/api/azure-ai-ml/azure.ai.ml.output)|
+|[dataset as_mount](/python/api/azureml-core/azureml.data.filedataset#azureml-data-filedataset-as-mount)|[Input](/python/api/azure-ai-ml/azure.ai.ml.input)|
+
+## Parallel job configurations and settings mapping
+
+| SDK v1| SDK v2| Description |
+|-|-|-|
+|ParallelRunConfig.environment|parallel_run_function.task.environment|Environment that training job will run in. |
+|ParallelRunConfig.entry_script|parallel_run_function.task.entry_script |User script that will be run in parallel on multiple nodes. |
+|ParallelRunConfig.error_threshold| parallel_run_function.error_threshold |The number of failed mini-batches that could be ignored in this parallel job. If the count of failed mini-batches is higher than this threshold, the parallel job will be marked as failed.<br><br>"-1" is the default value, which means to ignore all failed mini-batches during the parallel job.|
+|ParallelRunConfig.output_action|parallel_run_function.append_row_to |Aggregate all returns from each run of a mini-batch and output them into this file. It may reference one of the outputs of the parallel job by using the expression ${{outputs.<output_name>}}.|
+|ParallelRunConfig.node_count|parallel_run_function.instance_count |Optional number of instances or nodes used by the compute target. Defaults to 1.|
+|ParallelRunConfig.process_count_per_node|parallel_run_function.max_concurrency_per_instance |The max parallelism that each compute instance has. |
+|ParallelRunConfig.mini_batch_size|parallel_run_function.mini_batch_size |Define the size of each mini-batch to split the input.<br><br>If the input_data is a folder or set of files, this number defines the file count for each mini-batch. For example, 10, 100.<br><br>If the input_data is tabular data from `mltable`, this number defines the proximate physical size for each mini-batch. The default unit is Byte and the value could accept string like 100 kb, 100 mb.|
+|ParallelRunConfig.source_directory|parallel_run_function.task.code |A local or remote path pointing at source code.|
+|ParallelRunConfig.description|parallel_run_function.description |A friendly description of the parallel job.|
+|ParallelRunConfig.logging_level|parallel_run_function.logging_level |A string of the logging level name, which is defined in 'logging'. Possible values are 'WARNING', 'INFO', and 'DEBUG'. (optional, default value is 'INFO'.) This value could be set through PipelineParameter. |
+|ParallelRunConfig.run_invocation_timeout|parallel_run_function.retry_settings.timeout |The timeout in seconds for executing custom run() function. If the execution time is higher than this threshold, the mini-batch will be aborted, and marked as a failed mini-batch to trigger retry.|
+|ParallelRunConfig.run_max_try|parallel_run_function.retry_settings.max_retries |The number of retries when mini-batch is failed or timeout. If all retries are failed, the mini-batch will be marked as failed to be counted by mini_batch_error_threshold calculation.|
+|ParallelRunConfig.append_row_file_name |parallel_run_function.append_row_to | Combined with `append_row_to` setting.|
+|ParallelRunConfig.allowed_failed_count|parallel_run_function.mini_batch_error_threshold |The number of failed mini-batches that could be ignored in this parallel job. If the count of failed mini-batches is higher than this threshold, the parallel job will be marked as failed.<br><br>"-1" is the default value, which means to ignore all failed mini-batches during the parallel job.|
+|ParallelRunConfig.allowed_failed_percent|parallel_run_function.task.program_arguments set <br>`--allowed_failed_percent`|Similar to "allowed_failed_count", but this setting uses the percent of failed mini-batches instead of the mini-batch failure count.<br><br>The range of this setting is [0, 100]. "100" is the default value, which means to ignore all failed mini-batches during the parallel job. See the sketch after this table for how to pass this argument.|
+|ParallelRunConfig.partition_keys| _Under development._ | |
+|ParallelRunConfig.environment_variables|parallel_run_function.environment_variables |A dictionary of environment variables names and values. These environment variables are set on the process where user script is being executed.|
+|ParallelRunStep.name|parallel_run_function.name |Name of the parallel job or component created.|
+|ParallelRunStep.inputs|parallel_run_function.inputs|A dict of inputs used by this parallel.|
+|--|parallel_run_function.input_data| Declare the data to be split and processed in parallel.|
+|ParallelRunStep.output|parallel_run_function.outputs|The outputs of this parallel job.|
+|ParallelRunStep.side_inputs|parallel_run_function.inputs|Defined together with `inputs`.|
+|ParallelRunStep.arguments|parallel_run_function.task.program_arguments|The arguments of the parallel task.|
+|ParallelRunStep.allow_reuse|parallel_run_function.is_deterministic|Specify whether the parallel job will return the same output given the same input.|
+
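+For settings that only map to `parallel_run_function.task.program_arguments` (such as `allowed_failed_percent`), the value is passed as a command-line argument of the parallel task. A minimal sketch, based on the task definition shown earlier (argument values are illustrative):
+
+```python
+from azure.ai.ml.parallel import RunFunction
+
+# the reserved --allowed_failed_percent argument is appended to the task's program_arguments
+task = RunFunction(
+    code="./src",
+    entry_script="file_batch_inference.py",
+    program_arguments="--allowed_failed_percent 10 --job_output_path ${{outputs.job_output_path}}",
+    environment="azureml:AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:1",
+)
+```
+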
+## Next steps
+
+For more information, see the documentation here:
+
+* [Parallel run step SDK v1 examples](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/machine-learning-pipelines/parallel-run)
+* [Parallel job SDK v2 examples](https://github.com/Azure/azureml-examples/blob/main/sdk/jobs/pipelines/1g_pipeline_with_parallel_nodes/pipeline_with_parallel_nodes.ipynb)
machine-learning Migrate To V2 Execution Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-pipeline.md
+
+ Title: Migrate pipelines from SDK v1 to SDK v2
+
+description: Migrate pipelines from v1 to v2 of Azure Machine Learning SDK
++++++ Last updated : 09/16/2022++++
+# Migrate pipelines from SDK v1 to SDK v2
+
+In SDK v2, "pipelines" are consolidated into jobs.
+
+A job has a type. Most jobs are command jobs that run a `command`, like `python main.py`. What runs in a job is agnostic to any programming language, so you can run `bash` scripts, invoke `python` interpreters, run a bunch of `curl` commands, or anything else.
+
+A `pipeline` is another type of job, which defines child jobs that may have input/output relationships, forming a directed acyclic graph (DAG).
+
+To migrate, you'll need to change your code for defining and submitting the pipelines to SDK v2. What you run _within_ the child job doesn't need to be migrated to SDK v2. However, it's recommended to remove any code specific to Azure ML from your model training scripts. This separation allows for an easier transition between local and cloud and is considered best practice for mature MLOps. In practice, this means removing `azureml.*` lines of code. Model logging and tracking code should be replaced with MLflow. For more information, see [how to use MLflow in v2](how-to-use-mlflow-cli-runs.md).
+
+This article gives a comparison of scenario(s) in SDK v1 and SDK v2. In the following examples, we'll build three steps (train, score and evaluate) into a dummy pipeline job. This demonstrates how to build pipeline jobs using SDK v1 and SDK v2, and how to consume data and transfer data between steps.
+
+## Run a pipeline
+
+* SDK v1
+
+ ```python
+ # import required libraries
+ import os
+ import azureml.core
+ from azureml.core import (
+ Workspace,
+ Dataset,
+ Datastore,
+ ComputeTarget,
+ Experiment,
+ ScriptRunConfig,
+ )
+ from azureml.pipeline.steps import PythonScriptStep
+ from azureml.pipeline.core import Pipeline
+
+ # check core SDK version number
+ print("Azure ML SDK Version: ", azureml.core.VERSION)
+
+ # load workspace
+ workspace = Workspace.from_config()
+ print(
+ "Workspace name: " + workspace.name,
+ "Azure region: " + workspace.location,
+ "Subscription id: " + workspace.subscription_id,
+ "Resource group: " + workspace.resource_group,
+ sep="\n",
+ )
+
+ # create an ML experiment
+ experiment = Experiment(workspace=workspace, name="train_score_eval_pipeline")
+
+ # create a directory
+ script_folder = "./src"
+
+ # create compute
+ from azureml.core.compute import ComputeTarget, AmlCompute
+ from azureml.core.compute_target import ComputeTargetException
+
+ # Choose a name for your CPU cluster
+ amlcompute_cluster_name = "cpu-cluster"
+
+ # Verify that cluster does not exist already
+ try:
+ aml_compute = ComputeTarget(workspace=workspace, name=amlcompute_cluster_name)
+ print('Found existing cluster, use it.')
+ except ComputeTargetException:
+ compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',
+ max_nodes=4)
        aml_compute = ComputeTarget.create(workspace, amlcompute_cluster_name, compute_config)
+
+ aml_compute.wait_for_completion(show_output=True)
+
+ # define data set
+ data_urls = ["wasbs://demo@dprepdata.blob.core.windows.net/Titanic.csv"]
+ input_ds = Dataset.File.from_files(data_urls)
+
+ # define steps in pipeline
+ from azureml.data import OutputFileDatasetConfig
+ model_output = OutputFileDatasetConfig('model_output')
+ train_step = PythonScriptStep(
+ name="train step",
+ script_name="train.py",
+ arguments=['--training_data', input_ds.as_named_input('training_data').as_mount() ,'--max_epocs', 5, '--learning_rate', 0.1,'--model_output', model_output],
+ source_directory=script_folder,
+ compute_target=aml_compute,
+ allow_reuse=True,
+ )
+
+ score_output = OutputFileDatasetConfig('score_output')
+ score_step = PythonScriptStep(
+ name="score step",
+ script_name="score.py",
+ arguments=['--model_input',model_output.as_input('model_input'), '--test_data', input_ds.as_named_input('test_data').as_mount(), '--score_output', score_output],
+ source_directory=script_folder,
+ compute_target=aml_compute,
+ allow_reuse=True,
+ )
+
+ eval_output = OutputFileDatasetConfig('eval_output')
+ eval_step = PythonScriptStep(
+ name="eval step",
+ script_name="eval.py",
+ arguments=['--scoring_result',score_output.as_input('scoring_result'), '--eval_output', eval_output],
+ source_directory=script_folder,
+ compute_target=aml_compute,
+ allow_reuse=True,
+ )
+
+ # built pipeline
+ from azureml.pipeline.core import Pipeline
+
+ pipeline_steps = [train_step, score_step, eval_step]
+
+ pipeline = Pipeline(workspace = workspace, steps=pipeline_steps)
+ print("Pipeline is built.")
+
+ pipeline_run = experiment.submit(pipeline, regenerate_outputs=False)
+
+ print("Pipeline submitted for execution.")
+
+ ```
+
+* SDK v2. [Full sample link](https://github.com/Azure/azureml-examples/blob/main/sdk/jobs/pipelines/1b_pipeline_with_python_function_components/pipeline_with_python_function_components.ipynb)
+
+ ```python
+ # import required libraries
+ from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential
+
+ from azure.ai.ml import MLClient, Input
+ from azure.ai.ml.dsl import pipeline
+
+ try:
+ credential = DefaultAzureCredential()
+ # Check if given credential can get token successfully.
+ credential.get_token("https://management.azure.com/.default")
+ except Exception as ex:
+ # Fall back to InteractiveBrowserCredential in case DefaultAzureCredential not work
+ credential = InteractiveBrowserCredential()
+
+ # Get a handle to workspace
+ ml_client = MLClient.from_config(credential=credential)
+
+ # Retrieve an already attached Azure Machine Learning Compute.
+ cluster_name = "cpu-cluster"
+ print(ml_client.compute.get(cluster_name))
+
+ # Import components that are defined with python function
+ with open("src/components.py") as fin:
+ print(fin.read())
+
+ # You need to install mldesigner package to use command_component decorator.
+ # Option 1: install directly
+ # !pip install mldesigner
+
+ # Option 2: install as an extra dependency of azure-ai-ml
+ # !pip install azure-ai-ml[designer]
+
+ # import the components as functions
+ from src.components import train_model, score_data, eval_model
+
+ cluster_name = "cpu-cluster"
+ # define a pipeline with component
+ @pipeline(default_compute=cluster_name)
+ def pipeline_with_python_function_components(input_data, test_data, learning_rate):
+ """E2E dummy train-score-eval pipeline with components defined via python function components"""
+
+ # Call component obj as function: apply given inputs & parameters to create a node in pipeline
+ train_with_sample_data = train_model(
+ training_data=input_data, max_epochs=5, learning_rate=learning_rate
+ )
+
+ score_with_sample_data = score_data(
+ model_input=train_with_sample_data.outputs.model_output, test_data=test_data
+ )
+
+ eval_with_sample_data = eval_model(
+ scoring_result=score_with_sample_data.outputs.score_output
+ )
+
+ # Return: pipeline outputs
+ return {
+ "eval_output": eval_with_sample_data.outputs.eval_output,
+ "model_output": train_with_sample_data.outputs.model_output,
+ }
+
+
+ pipeline_job = pipeline_with_python_function_components(
+ input_data=Input(
+ path="wasbs://demo@dprepdata.blob.core.windows.net/Titanic.csv", type="uri_file"
+ ),
+ test_data=Input(
+ path="wasbs://demo@dprepdata.blob.core.windows.net/Titanic.csv", type="uri_file"
+ ),
+ learning_rate=0.1,
+ )
+
+ # submit job to workspace
+ pipeline_job = ml_client.jobs.create_or_update(
+ pipeline_job, experiment_name="train_score_eval_pipeline"
+ )
+ ```
+
+## Mapping of key functionality in SDK v1 and SDK v2
+
+|Functionality in SDK v1|Rough mapping in SDK v2|
+|-|-|
+|[azureml.pipeline.core.Pipeline](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline?view=azure-ml-py&preserve-view=true)|[azure.ai.ml.dsl.pipeline](/python/api/azure-ai-ml/azure.ai.ml.dsl#azure-ai-ml-dsl-pipeline)|
+|[OutputDatasetConfig](/python/api/azureml-core/azureml.data.output_dataset_config.outputdatasetconfig?view=azure-ml-py&preserve-view=true)|[Output](/python/api/azure-ai-ml/azure.ai.ml.output)|
+|[dataset as_mount](/python/api/azureml-core/azureml.data.filedataset?view=azure-ml-py#azureml-data-filedataset-as-mount&preserve-view=true)|[Input](/python/api/azure-ai-ml/azure.ai.ml.input)|
+
+## Step and job/component type mapping
+
+|step in SDK v1| job type in SDK v2| component type in SDK v2|
+|--|-|-|
+|`adla_step`|None|None|
+|`automl_step`|`automl` job|`automl` component|
+|`azurebatch_step`| None| None|
+|`command_step`| `command` job|`command` component|
+|`data_transfer_step`| coming soon | coming soon|
+|`databricks_step`| coming soon|coming soon|
+|`estimator_step`| command job|`command` component|
+|`hyper_drive_step`|`sweep` job| `sweep` component|
+|`kusto_step`| None|None|
+|`module_step`|None|command component|
+|`mpi_step`| command job|command component|
+|`parallel_run_step`|`Parallel` job| `Parallel` component|
+|`python_script_step`| `command` job|command component|
+|`r_script_step`| `command` job|`command` component|
+|`synapse_spark_step`| coming soon|coming soon|
+
+## Related documents
+
+For more information, see the documentation here:
+
+* [steps in SDK v1](/python/api/azureml-pipeline-steps/azureml.pipeline.steps?view=azure-ml-py&preserve-view=true)
+* [Create and run machine learning pipelines using components with the Azure Machine Learning SDK v2 (Preview)](how-to-create-component-pipeline-python.md)
+* [Build a simple ML pipeline for image classification (SDK v1)](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/using-pipelines/image-classification.ipynb)
+* [OutputDatasetConfig](/python/api/azureml-core/azureml.data.output_dataset_config.outputdatasetconfig?view=azure-ml-py)
+* [`mldesigner`](https://pypi.org/project/mldesigner/)
machine-learning Migrate To V2 Local Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-local-runs.md
+
+ Title: Migrate local runs from SDK v1 to SDK v2
+
+description: Migrate local runs from v1 to v2 of Azure Machine Learning SDK
++++++ Last updated : 09/16/2022++++
+# Migrate local runs from SDK v1 to SDK v2
+
+Local runs are similar in both V1 and V2. Use the "local" string when setting the compute target in either version.
+
+This article gives a comparison of scenario(s) in SDK v1 and SDK v2.
+
+### Submit a local run
+
+* SDK v1
+
+ ```python
+ from azureml.core import Workspace, Experiment, Environment, ScriptRunConfig
+
+ # connect to the workspace
+ ws = Workspace.from_config()
+
+ # define and configure the experiment
+ experiment = Experiment(workspace=ws, name='day1-experiment-train')
+ config = ScriptRunConfig(source_directory='./src',
+ script='train.py',
+ compute_target='local')
+
+ # set up pytorch environment
+ env = Environment.from_conda_specification(
+ name='pytorch-env',
+ file_path='pytorch-env.yml')
+ config.run_config.environment = env
+
+ run = experiment.submit(config)
+
+ aml_url = run.get_portal_url()
+ print(aml_url)
+ ```
+
+* SDK v2
+
+ ```python
+ #import required libraries
+ from azure.ai.ml import MLClient, command
+ from azure.ai.ml.entities import Environment
+ from azure.identity import DefaultAzureCredential
+
+ #connect to the workspace
+ ml_client = MLClient.from_config(DefaultAzureCredential())
+
+ # set up pytorch environment
+ env = Environment(
+ image='mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04',
+ conda_file='pytorch-env.yml',
+ name='pytorch-env'
+ )
+
+ # define the command
+ command_job = command(
+ code='./src',
        command='python train.py',
+ environment=env,
+ compute='local',
+ )
+
+ returned_job = ml_client.jobs.create_or_update(command_job)
+ returned_job
+ ```
+
+## Mapping of key functionality in SDK v1 and SDK v2
+
+|Functionality in SDK v1|Rough mapping in SDK v2|
+|-|-|
+|[experiment.submit](/python/api/azureml-core/azureml.core.experiment.experiment#azureml-core-experiment-experiment-submit)|[MLClient.jobs.create_or_update](/python/api/azure-ai-ml/azure.ai.ml.mlclient#azure-ai-ml-mlclient-create-or-update)|
+
+## Next steps
+
+* [Train models with Azure Machine Learning](concept-train-machine-learning-model.md)
machine-learning Migrate To V2 Resource Datastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-resource-datastore.md
+
+ Title: Migrate datastore management from SDK v1 to SDK v2
+
+description: Migrate datastore management from v1 to v2 of Azure Machine Learning SDK
++++++ Last updated : 09/16/2022++++
+# Migrate datastore management from SDK v1 to SDK v2
+
+Azure Machine Learning datastores securely keep the connection information to your data storage on Azure, so you don't have to code it in your scripts. The v2 datastore concept remains mostly unchanged compared with v1. The difference is that SQL-like data sources are no longer supported via AzureML datastores; instead, they'll be supported via AzureML data import and export functionalities.
+
+This article gives a comparison of scenario(s) in SDK v1 and SDK v2.
+
+## Create a datastore from an Azure Blob container via account_key
+
+* SDK v1
+
+ ```python
+ blob_datastore_name='azblobsdk' # Name of the datastore to workspace
+ container_name=os.getenv("BLOB_CONTAINER", "<my-container-name>") # Name of Azure blob container
+ account_name=os.getenv("BLOB_ACCOUNTNAME", "<my-account-name>") # Storage account name
+ account_key=os.getenv("BLOB_ACCOUNT_KEY", "<my-account-key>") # Storage account access key
+
+ blob_datastore = Datastore.register_azure_blob_container(workspace=ws,
+ datastore_name=blob_datastore_name,
+ container_name=container_name,
+ account_name=account_name,
+ account_key=account_key)
+ ```
++
+* SDK v2
+
+ ```python
+ from azure.ai.ml.entities import AzureBlobDatastore
+ from azure.ai.ml import MLClient
+
+ ml_client = MLClient.from_config()
+
+ store = AzureBlobDatastore(
+ name="blob-protocol-example",
+ description="Datastore pointing to a blob container using wasbs protocol.",
+ account_name="mytestblobstore",
+ container_name="data-container",
+ protocol="wasbs",
+ credentials={
+ "account_key": "XXXxxxXXXxXXXXxxXXXXXxXXXXXxXxxXxXXXxXXXxXXxxxXXxxXXXxXxXXXxxXxxXXXXxxxxxXXxxxxxxXXXxXXX"
+ },
+ )
+
+ ml_client.create_or_update(store)
+ ```
++
+## Create a datastore from an Azure Blob container via sas_token
+
+* SDK v1
+
+ ```python
+ blob_datastore_name='azblobsdk' # Name of the datastore to workspace
+ container_name=os.getenv("BLOB_CONTAINER", "<my-container-name>") # Name of Azure blob container
+ sas_token=os.getenv("BLOB_SAS_TOKEN", "<my-sas-token>") # Sas token
+
+ blob_datastore = Datastore.register_azure_blob_container(workspace=ws,
+ datastore_name=blob_datastore_name,
+ container_name=container_name,
+ sas_token=sas_token)
+ ```
+
+* SDK v2
+
+ ```python
+ from azure.ai.ml.entities import AzureBlobDatastore
+ from azure.ai.ml import MLClient
+
+ ml_client = MLClient.from_config()
+
+ store = AzureBlobDatastore(
+ name="blob-sas-example",
+ description="Datastore pointing to a blob container using SAS token.",
+ account_name="mytestblobstore",
+ container_name="data-container",
+ credentials={
+ "sas_token": "?xx=XXXX-XX-XX&xx=xxxx&xxx=xxx&xx=xxxxxxxxxxx&xx=XXXX-XX-XXXXX:XX:XXX&xx=XXXX-XX-XXXXX:XX:XXX&xxx=xxxxx&xxx=XXxXXXxxxxxXXXXXXXxXxxxXXXXXxxXXXXXxXXXXxXXXxXXxXX"
+ },
+ )
+
+ ml_client.create_or_update(store)
+ ```
+
+## Create a datastore from an Azure Blob container via identity-based authentication
+
+* SDK v1
+
+```python
+blob_datastore = Datastore.register_azure_blob_container(workspace=ws,
+ datastore_name='credentialless_blob',
+ container_name='my_container_name',
+ account_name='my_account_name')
+
+```
+
+* SDK v2
+
+ ```python
+ from azure.ai.ml.entities import AzureBlobDatastore
+ from azure.ai.ml import MLClient
+
+ ml_client = MLClient.from_config()
+
    # no credentials section: identity-based access is used when credentials are omitted
    store = AzureBlobDatastore(
        name="credentialless-blob-example",
        description="Datastore pointing to a blob container using identity-based access.",
        account_name="mytestblobstore",
        container_name="data-container",
    )
+
+ ml_client.create_or_update(store)
+ ```
+
+## Get datastores from your workspace
+
+* SDK v1
+
+ ```python
+ # Get a named datastore from the current workspace
+ datastore = Datastore.get(ws, datastore_name='your datastore name')
+ ```
+
+ ```python
+ # List all datastores registered in the current workspace
+ datastores = ws.datastores
+ for name, datastore in datastores.items():
+ print(name, datastore.datastore_type)
+ ```
+
+* SDK v2
+
+ ```python
+ from azure.ai.ml import MLClient
+ from azure.identity import DefaultAzureCredential
+
+ #Enter details of your AzureML workspace
+ subscription_id = '<SUBSCRIPTION_ID>'
+ resource_group = '<RESOURCE_GROUP>'
+ workspace_name = '<AZUREML_WORKSPACE_NAME>'
+
+ ml_client = MLClient(credential=DefaultAzureCredential(),
+ subscription_id=subscription_id,
+ resource_group_name=resource_group)
+
+ datastore = ml_client.datastores.get(datastore_name='your datastore name')
+ ```
+
+## Mapping of key functionality in SDK v1 and SDK v2
+
+|Storage types in SDK v1|Storage types in SDK v2|
+|--|-|
+|[azureml_blob_datastore](/python/api/azureml-core/azureml.data.azure_storage_datastore.azureblobdatastore?view=azure-ml-py&preserve-view=true)|[azureml_blob_datastore](/python/api/azure-ai-ml/azure.ai.ml.entities.azureblobdatastore?view=azure-python-preview&preserve-view=true)|
+|[azureml_data_lake_gen1_datastore](/python/api/azureml-core/azureml.data.azure_data_lake_datastore.azuredatalakedatastore?view=azure-ml-py&preserve-view=true)|[azureml_data_lake_gen1_datastore](/python/api/azure-ai-ml/azure.ai.ml.entities.azuredatalakegen1datastore?view=azure-python-preview&preserve-view=true)|
+|[azureml_data_lake_gen2_datastore](/python/api/azureml-core/azureml.data.azure_data_lake_datastore.azuredatalakegen2datastore?view=azure-ml-py&preserve-view=true)|[azureml_data_lake_gen2_datastore](/python/api/azure-ai-ml/azure.ai.ml.entities.azuredatalakegen2datastore?view=azure-python-preview&preserve-view=true)|
+|[azuremlml_sql_database_datastore](/python/api/azureml-core/azureml.data.azure_sql_database_datastore.azuresqldatabasedatastore?view=azure-ml-py&preserve-view=true)|Will be supported via import & export functionalities|
+|[azuremlml_my_sql_datastore](/python/api/azureml-core/azureml.data.azure_my_sql_datastore.azuremysqldatastore?view=azure-ml-py&preserve-view=true)|Will be supported via import & export functionalities|
+|[azuremlml_postgre_sql_datastore](/python/api/azureml-core/azureml.data.azure_postgre_sql_datastore.azurepostgresqldatastore?view=azure-ml-py&preserve-view=true)|Will be supported via import & export functionalities|
++
+## Next steps
+
+For more information, see:
+
+* [Create datastores](how-to-datastore.md?tabs=cli-identity-based-access%2Csdk-adls-sp%2Csdk-azfiles-sas%2Csdk-adlsgen1-sp)
+* [Read and write data in a job](how-to-read-write-data-v2.md)
+* [V2 datastore operations](/python/api/azure-ai-ml/azure.ai.ml.operations.datastoreoperations?view=azure-python-preview)
+
machine-learning Migrate To V2 Resource Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-resource-workspace.md
+
+ Title: Migrate workspace management from SDK v1 to SDK v2
+
+description: Migrate workspace management from v1 to v2 of Azure Machine Learning SDK
++++++ Last updated : 09/16/2022++++
+# Migrate workspace management from SDK v1 to SDK v2
+
+The workspace functionality remains unchanged with the V2 development platform. However, there are network-related changes to be aware of. For details, see [Network Isolation Change with Our New API Platform on Azure Resource Manager](how-to-configure-network-isolation-with-v2.md?tabs=python).
+
+This article gives a comparison of scenario(s) in SDK v1 and SDK v2.
+
+## Create a workspace
+
+* SDK v1
+
+ ```python
+ from azureml.core import Workspace
+
+ ws = Workspace.create(
+ name='my_workspace',
+ location='eastus',
        subscription_id='<SUBSCRIPTION_ID>',
        resource_group='<RESOURCE_GROUP>'
+ )
+ ```
+
+* SDK v2
+
+ ```python
+ from azure.ai.ml import MLClient
+ from azure.ai.ml.entities import Workspace
+ from azure.identity import DefaultAzureCredential
+
+ # specify the details of your subscription
+ subscription_id = "<SUBSCRIPTION_ID>"
+ resource_group = "<RESOURCE_GROUP>"
+
+ # get a handle to the subscription
+ ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group)
+
+ # specify the workspace details
+ ws = Workspace(
+ name="my_workspace",
+ location="eastus",
+ display_name="My workspace",
+ description="This example shows how to create a workspace",
+ tags=dict(purpose="demo"),
+ )
+
+ ml_client.workspaces.begin_create(ws)
+ ```
+
+## Create a workspace for use with Azure Private Link endpoints
+
+* SDK v1
+
+ ```python
    from azureml.core import Workspace, PrivateEndPointConfig
+
+ ws = Workspace.create(
+ name='my_workspace',
+ location='eastus',
        subscription_id='<SUBSCRIPTION_ID>',
        resource_group='<RESOURCE_GROUP>'
+ )
+
+ ple = PrivateEndPointConfig(
+ name='my_private_link_endpoint',
+ vnet_name='<VNET_NAME>',
+ vnet_subnet_name='<VNET_SUBNET_NAME>',
+ vnet_subscription_id='<SUBSCRIPTION_ID>',
+ vnet_resource_group='<RESOURCE_GROUP>'
+ )
+
+ ws.add_private_endpoint(ple, private_endpoint_auto_approval=True)
+ ```
+
+* SDK v2
+
+ ```python
+ from azure.ai.ml import MLClient
+ from azure.ai.ml.entities import Workspace
+ from azure.identity import DefaultAzureCredential
+
+ # specify the details of your subscription
+ subscription_id = "<SUBSCRIPTION_ID>"
+ resource_group = "<RESOURCE_GROUP>"
+
+ # get a handle to the subscription
+ ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group)
+
+ ws = Workspace(
+ name="private_link_endpoint_workspace,
+ location="eastus",
+ display_name="Private Link endpoint workspace",
+ description="When using private link, you must set the image_build_compute property to a cluster name to use for Docker image environment building. You can also specify whether the workspace should be accessible over the internet.",
+ image_build_compute="cpu-compute",
+ public_network_access="Disabled",
+ tags=dict(purpose="demonstration"),
+ )
+
+ ml_client.workspaces.begin_create(ws)
+ ```
+
+## Load/connect to workspace using parameters
+
+* SDK v1
+
+ ```python
+ from azureml.core import Workspace
+ ws = Workspace.from_config()
+
+ # specify the details of your subscription
+ subscription_id = "<SUBSCRIPTION_ID>"
+ resource_group = "<RESOURCE_GROUP>"
+
+ # get handle on the workspace
+ ws = Workspace.get(
+ subscription_id='<SUBSCRIPTION_ID>',
+ resource_group='<RESOURCE_GROUP>',
+ name='my_workspace',
+ )
+ ```
+
+* SDK v2
+
+ ```python
+ from azure.ai.ml import MLClient
+ from azure.ai.ml.entities import Workspace
+ from azure.identity import DefaultAzureCredential
+
+ # specify the details of your subscription
+ subscription_id = "<SUBSCRIPTION_ID>"
+ resource_group = "<RESOURCE_GROUP>"
+
+ # get handle on the workspace
+ ws = MLClient(
+ DefaultAzureCredential(),
+ subscription_id='<SUBSCRIPTION_ID>',
+ resource_group_name='<RESOURCE_GROUP>',
+ workspace_name='my_workspace'
+ )
+ ```
+
+## Load/connect to workspace using config file
+
+* SDK v1
+
+ ```python
+ from azureml.core import Workspace
+
+ ws = Workspace.from_config()
+ ws.get_details()
+ ```
+
+* SDK v2
+
+ ```python
+ from azure.ai.ml import MLClient
+ from azure.ai.ml.entities import Workspace
+ from azure.identity import DefaultAzureCredential
+
+ ws = MLClient.from_config(
+ DefaultAzureCredential()
+ )
+ ```
+
+## Mapping of key functionality in SDK v1 and SDK v2
+
+|Functionality in SDK v1|Rough mapping in SDK v2|
+|-|-|
+|[Workspace class (SDK v1)](/python/api/azureml-core/azureml.core.workspace.workspace)|[Workspace class (SDK v2)](/python/api/azure-ai-ml/azure.ai.ml.entities.workspace)|
+
+## Related documents
+
+For more information, see:
+
+* [What is a workspace?](concept-workspace.md)
security Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/iaas.md
In most infrastructure as a service (IaaS) scenarios, [Azure virtual machines (V
The first step in protecting your VMs is to ensure that only authorized users can set up new VMs and access VMs. > [!NOTE]
-> To improve the security of Linux VMs on Azure, you can integrate with Azure AD authentication. When you use [Azure AD authentication for Linux VMs](../../virtual-machines/linux/login-using-aad.md), you centrally control and enforce policies that allow or deny access to the VMs.
+> To improve the security of Linux VMs on Azure, you can integrate with Azure AD authentication. When you use [Azure AD authentication for Linux VMs](/azure-docs-archive-pr/virtual-machines/linux/login-using-aad), you centrally control and enforce policies that allow or deny access to the VMs.
> >
static-web-apps Key Vault Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/key-vault-secrets.md
When configuring custom authentication providers, you may want to store connection secrets in Azure Key Vault. This article demonstrates how to use a managed identity to grant Azure Static Web Apps access to Key Vault for custom authentication secrets.
+> [!NOTE]
+> Azure Serverless Functions do not support direct Key Vault integration. If you require Key Vault integration with your managed Function app, you will need to implement Key Vault access into your app's code.
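+
+If you implement Key Vault access in your function code, one option is the Key Vault secrets client library together with the app's identity. A rough sketch in Python (the vault URL and secret name are placeholders; equivalent client libraries exist for other languages):
+
+```python
+# Read a secret from Key Vault at runtime
+from azure.identity import DefaultAzureCredential
+from azure.keyvault.secrets import SecretClient
+
+client = SecretClient(vault_url="https://<my-key-vault-name>.vault.azure.net", credential=DefaultAzureCredential())
+secret_value = client.get_secret("<my-secret-name>").value
+```
+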
+ Security secrets require the following items to be in place. - Create a system-assigned identity in the Static Web Apps instance.
storage Multiple Identity Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/multiple-identity-scenarios.md
Previously updated : 08/01/2022 Last updated : 09/23/2022
# Configure passwordless connections between multiple Azure apps and services
-Applications often require secure connections between multiple Azure services simultaneously. For example, an enterprise Azure App Service instance might connect to several different storage accounts, an Azure SQL database instance, a service bus, and more.
+Applications often require secure connections between multiple Azure services simultaneously. For example, an enterprise Azure App Service instance might connect to several different storage accounts, an Azure SQL database instance, a service bus, and more.
[Managed identities](/azure/active-directory/managed-identities-azure-resources/overview) are the recommended authentication option for secure, passwordless connections between Azure resources. Developers do not have to manually track and manage many different secrets for managed identities, since most of these tasks are handled internally by Azure. This tutorial explores how to manage connections between multiple services using managed identities and the Azure Identity client library.
Applications often require secure connections between multiple Azure services si
Azure provides the following types of managed identities:
-* **System-assigned managed identities** are directly tied to a single Azure resource. When you enable a system-assigned managed identity on a service, Azure will create a linked identity and handle administrative tasks for that identity internally. When the Azure resource is deleted, the identity is also deleted.
-* **User-assigned managed identities** are independent identities that are created by an administrator and can be associated with one or more Azure resources. The lifecycle of the identity is independent of those resources.
+* **System-assigned managed identities** are directly tied to a single Azure resource. When you enable a system-assigned managed identity on a service, Azure will create a linked identity and handle administrative tasks for that identity internally. When the Azure resource is deleted, the identity is also deleted.
+* **User-assigned managed identities** are independent identities that are created by an administrator and can be associated with one or more Azure resources. The lifecycle of the identity is independent of those resources.
You can read more about best practices and when to use system-assigned identities versus user-assigned identities in the [identities best practice recommendations](/azure/active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations). ## Explore DefaultAzureCredential
-Managed identities are generally implemented in your application code through a class called `DefaultAzureCredential` from the `Azure.Identity` client library. `DefaultAzureCredential` supports multiple authentication methods and automatically determines which should be used at runtime. You can read more about this approach in the [DefaultAzureCredential overview](/dotnet/api/overview/azure/Identity-readme#defaultazurecredential).
+Managed identities are generally implemented in your application code through a class called `DefaultAzureCredential` from the `Azure.Identity` client library. `DefaultAzureCredential` supports multiple authentication methods and automatically determines which should be used at runtime. You can read more about this approach in the [DefaultAzureCredential overview](/dotnet/api/overview/azure/Identity-readme#defaultazurecredential).
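
The samples later in this article use C#, Java, and Spring. As a supplementary, hedged sketch of the same pattern in Python, the snippet below builds a Blob Storage client with `DefaultAzureCredential`; the `azure-identity` and `azure-storage-blob` packages and the storage account name are assumptions, not part of the original article.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# DefaultAzureCredential walks a chain of credential sources (environment,
# managed identity, Azure CLI, and so on) and uses the first one that succeeds.
credential = DefaultAzureCredential()

blob_service_client = BlobServiceClient(
    account_url="https://<your-storage-account>.blob.core.windows.net",
    credential=credential,
)
```

The same credential object can be passed to other Azure SDK clients, so one authentication mechanism serves every service the app connects to.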
## Connect an Azure hosted app to multiple Azure services
You have been tasked with connecting an existing app to multiple Azure services
This tutorial applies to the following architectures, though it can be adapted to many other scenarios as well through minimal configuration changes.
-The following steps demonstrate how to configure an app to use a system-assigned managed identity and your local development account to connect to multiple Azure Services.
+The following steps demonstrate how to configure an app to use a system-assigned managed identity and your local development account to connect to multiple Azure Services.
### Create a system-assigned managed identity
The following steps demonstrate how to configure an app to use a system-assigned
3) Toggle the **Status** setting to **On** to enable a system assigned managed identity for the service.
- :::image type="content" source="media/enable-system-assigned-identity.png" alt-text="A screenshot showing how to assign a system assigned managed identity." :::
+ :::image type="content" source="media/enable-system-assigned-identity.png" alt-text="Screenshot showing how to assign a system assigned managed identity." :::
### Assign roles to the managed identity for each connected service
-
+ 1) Navigate to the overview page of the storage account you would like to grant your identity access to. 3) Select **Access Control (IAM)** from the storage account navigation. 4) Choose **+ Add** and then **Add role assignment**.
- :::image type="content" source="media/assign-role-system-identity.png" alt-text="A screenshot showing how to assign a system-assigned identity." :::
+ :::image type="content" source="media/assign-role-system-identity.png" alt-text="Screenshot showing how to assign a system-assigned identity." :::
5) In the **Role** search box, search for *Storage Blob Data Contributor*, which grants permissions to perform read and write operations on blob data. You can assign whatever role is appropriate for your use case. Select the *Storage Blob Data Contributor* from the list and choose **Next**.
The following steps demonstrate how to configure an app to use a system-assigned
7) In the flyout, search for the managed identity you created by entering the name of your app service. Select the system assigned identity, and then choose **Select** to close the flyout menu.
- :::image type="content" source="media/migration-select-identity.png" alt-text="A screenshot showing how to select a system-assigned identity." :::
+ :::image type="content" source="media/migration-select-identity.png" alt-text="Screenshot showing how to select a system-assigned identity." :::
8) Select **Next** a couple times until you're able to select **Review + assign** to finish the role assignment.
The following steps demonstrate how to configure an app to use a system-assigned
#### Local development considerations
-You can also enable access to Azure resources for local development by assigning roles to a user account the same way you assigned roles to your managed identity.
+You can also enable access to Azure resources for local development by assigning roles to a user account the same way you assigned roles to your managed identity.
1) After assigning the **Storage Blob Data Contributor** role to your managed identity, under **Assign access to**, this time select **User, group or service principal**. Choose **+ Select members** to open the flyout menu again.
You can also enable access to Azure resources for local development by assigning
### Implement the application code
+#### [C#](#tab/csharp)
+ Inside of your project, add a reference to the `Azure.Identity` NuGet package. This library contains all of the necessary entities to implement `DefaultAzureCredential`. You can also add any other Azure libraries that are relevant to your app. For this example, the `Azure.Storage.Blobs` and `Azure.KeyVault.Keys` packages are added in order to connect to Blob Storage and Key Vault. ```dotnetcli
var serviceBusClient = new ServiceBusClient("<your-namespace>", new DefaultAzure
var sender = serviceBusClient.CreateSender("producttracking"); ```
+#### [Java](#tab/java)
+
+Inside your project, add the `azure-identity` dependency to your *pom.xml* file. This library contains all the necessary entities to implement `DefaultAzureCredential`. You can also add any other Azure dependencies that are relevant to your app. For this example, the `azure-storage-blob` and `azure-messaging-servicebus` dependencies are added in order to connect to Blob Storage and Service Bus.
+
+```xml
+<dependencyManagement>
+ <dependencies>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-sdk-bom</artifactId>
+ <version>1.2.5</version>
+ <type>pom</type>
+ <scope>import</scope>
+ </dependency>
+ </dependencies>
+</dependencyManagement>
+<dependencies>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ </dependency>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-storage-blob</artifactId>
+ </dependency>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-messaging-servicebus</artifactId>
+ </dependency>
+</dependencies>
+
+```
+
+In your project code, create instances of the necessary services your app will connect to. The following examples connect to Blob Storage and service bus using the corresponding SDK classes.
+
+```java
+class Demo {
+
+ public static void main(String[] args) {
+
+ DefaultAzureCredential defaultAzureCredential = new DefaultAzureCredentialBuilder().build();
+
+ BlobServiceClient blobServiceClient = new BlobServiceClientBuilder()
+ .endpoint("https://<your-storage-account>.blob.core.windows.net")
+ .credential(defaultAzureCredential)
+ .buildClient();
+
+ ServiceBusClientBuilder clientBuilder = new ServiceBusClientBuilder().credential(defaultAzureCredential);
+ ServiceBusSenderClient serviceBusSenderClient = clientBuilder.sender()
+ .queueName("producttracking")
+ .buildClient();
+ }
+
+}
+```
+
+#### [Spring](#tab/spring)
+
+Inside your project, you only need to add the dependencies for the services you use. For this example, the `spring-cloud-azure-starter-storage-blob` and `spring-cloud-azure-starter-servicebus` dependencies are added in order to connect to Blob Storage and Service Bus.
+
+```xml
+<dependencyManagement>
+ <dependencies>
+ <dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>spring-cloud-azure-dependencies</artifactId>
+ <version>4.5.0</version>
+ <type>pom</type>
+ <scope>import</scope>
+ </dependency>
+ </dependencies>
+</dependencyManagement>
+<dependencies>
+ <dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>spring-cloud-azure-starter-storage-blob</artifactId>
+ </dependency>
+ <dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>spring-cloud-azure-starter-servicebus</artifactId>
+ </dependency>
+</dependencies>
+```
+
+In your project code, configure and inject the service clients your app will connect to. The following examples configure Blob Storage and Service Bus through Spring Cloud Azure properties and inject the corresponding client beans.
+
+```yaml
+spring:
+ cloud:
+ azure:
+ servicebus:
+ namespace: <service-bus-name>
+ entity-name: <service-bus-entity-name>
+ entity-type: <service-bus-entity-type>
+ storage:
+ blob:
+ account-name: <storage-account-name>
+```
+
+```java
+@Service
+public class ExampleService {
+
+ @Autowired
+ private BlobServiceClient blobServiceClient;
+
+ @Autowired
+ private ServiceBusSenderClient serviceBusSenderClient;
+
+}
+```
+++ When this application code runs locally, `DefaultAzureCredential` will search down a credential chain for the first available credentials. If the `Managed_Identity_Client_ID` is null locally, it will automatically use the credentials from your local Azure CLI or Visual Studio sign-in. You can read more about this process in the [Azure Identity library overview](/dotnet/api/overview/azure/Identity-readme#defaultazurecredential). When the application is deployed to Azure, `DefaultAzureCredential` will automatically retrieve the `Managed_Identity_Client_ID` variable from the app service environment. That value becomes available when a managed identity is associated with your app.
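
As a hedged Python sketch of that fallback behavior (the environment variable name mirrors the article's `Managed_Identity_Client_ID`; the `azure-identity` and `azure-storage-blob` packages and the storage account name are assumptions):

```python
import os

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# Set as an app setting in Azure; typically unset on a developer machine,
# in which case DefaultAzureCredential falls back to your CLI or IDE sign-in.
client_id = os.environ.get("Managed_Identity_Client_ID")

credential = DefaultAzureCredential(managed_identity_client_id=client_id)

blob_service_client = BlobServiceClient(
    account_url="https://<your-storage-account>.blob.core.windows.net",
    credential=credential,
)
```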
This overall process ensures that your app can run securely locally and in Azure
Although the apps in the previous example all shared the same service access requirements, real environments are often more nuanced. Consider a scenario where multiple apps all connect to the same storage accounts, but two of the apps also access different services or databases. To configure this setup in your code, make sure your application registers separate services to connect to each storage account or database. Make sure to pull in the correct managed identity client IDs for each service when configuring `DefaultAzureCredential`. The following code example configures the following service connections:+ * Two connections to separate storage accounts using a shared user-assigned managed identity * A connection to Azure Cosmos DB and Azure SQL services using a second shared user-assigned managed identity
+### [C#](#tab/csharp)
+ ```csharp // Get the first user-assigned managed identity ID to connect to shared storage var clientIDstorage = Environment.GetEnvironmentVariable("Managed_Identity_Client_ID_Storage");
BlobServiceClient blobServiceClient = new BlobServiceClient(
BlobServiceClient blobServiceClient2 = new BlobServiceClient( new Uri("https://<contract-storage-account>.blob.core.windows.net"), new DefaultAzureCredential()
-{
+ {
ManagedIdentityClientId = clientIDstorage });
using (SqlConnection conn = new SqlConnection(ConnectionString1))
```
+### [Java](#tab/java)
+
+Add the following to your *pom.xml* file:
+
+```xml
+<dependencyManagement>
+ <dependencies>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-sdk-bom</artifactId>
+ <version>1.2.5</version>
+ <type>pom</type>
+ <scope>import</scope>
+ </dependency>
+ </dependencies>
+</dependencyManagement>
+<dependencies>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ </dependency>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-storage-blob</artifactId>
+ </dependency>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-cosmos</artifactId>
+ </dependency>
+ <dependency>
+ <groupId>com.microsoft.sqlserver</groupId>
+ <artifactId>mssql-jdbc</artifactId>
+ <version>11.2.1.jre17</version>
+ </dependency>
+</dependencies>
+```
+
+Add the following to your code:
+
+```java
+class Demo {
+
+ public static void main(String[] args) {
+ // Get the first user-assigned managed identity ID to connect to shared storage
+ String clientIdStorage = System.getenv("Managed_Identity_Client_ID_Storage");
+
+ // Get the DefaultAzureCredential from clientIdStorage
+ DefaultAzureCredential storageCredential =
+ new DefaultAzureCredentialBuilder().managedIdentityClientId(clientIdStorage).build();
+
+ // First blob storage client that uses a managed identity
+ BlobServiceClient blobServiceClient = new BlobServiceClientBuilder()
+ .endpoint("https://<receipt-storage-account>.blob.core.windows.net")
+ .credential(storageCredential)
+ .buildClient();
+
+ // Second blob storage client that uses a managed identity
+ BlobServiceClient blobServiceClient2 = new BlobServiceClientBuilder()
+ .endpoint("https://<contract-storage-account>.blob.core.windows.net")
+ .credential(storageCredential)
+ .buildClient();
+
+ // Get the second user-assigned managed identity ID to connect to shared databases
+ String clientIdDatabase = System.getenv("Managed_Identity_Client_ID_Databases");
+
+ // Create a Cosmos DB client
+ CosmosClient cosmosClient = new CosmosClientBuilder()
+ .endpoint("https://<cosmos-db-account>.documents.azure.com:443/")
+ .credential(new DefaultAzureCredentialBuilder().managedIdentityClientId(clientIdDatabase).build())
+ .buildClient();
+
+ // Open a connection to Azure SQL using a managed identity
+ String connectionUrl = "jdbc:sqlserver://<azure-sql-hostname>.database.windows.net:1433;"
+ + "database=<database-name>;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database"
+ + ".windows.net;loginTimeout=30;Authentication=ActiveDirectoryMSI;";
+ try {
+ Connection connection = DriverManager.getConnection(connectionUrl);
+ Statement statement = connection.createStatement();
+ } catch (SQLException e) {
+ e.printStackTrace();
+ }
+ }
+}
+```
+
+### [Spring](#tab/spring)
+
+Add the following to your *pom.xml* file:
+
+```xml
+<dependencyManagement>
+ <dependencies>
+ <dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>spring-cloud-azure-dependencies</artifactId>
+ <version>4.5.0</version>
+ <type>pom</type>
+ <scope>import</scope>
+ </dependency>
+ </dependencies>
+</dependencyManagement>
+<dependencies>
+ <dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>spring-cloud-azure-starter-storage-blob</artifactId>
+ </dependency>
+ <dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>spring-cloud-azure-starter-cosmos</artifactId>
+ </dependency>
+ <dependency>
+ <groupId>com.microsoft.sqlserver</groupId>
+ <artifactId>mssql-jdbc</artifactId>
+ </dependency>
+ <dependency>
+ <groupId>org.springframework.boot</groupId>
+ <artifactId>spring-boot-starter-jdbc</artifactId>
+ </dependency>
+</dependencies>
+```
+
+Add the following to your *application.yml* file:
+
+```yaml
+spring:
+ cloud:
+ azure:
+ cosmos:
+ endpoint: https://<cosmos-db-account>.documents.azure.com:443/
+ credential:
+ client-id: <Managed_Identity_Client_ID_Databases>
+ managed-identity-enabled: true
+ storage:
+ blob:
+ endpoint: https://<contract-storage-account>.blob.core.windows.net
+ credential:
+ client-id: <Managed_Identity_Client_ID_Storage>
+ managed-identity-enabled: true
+ datasource:
+ url: jdbc:sqlserver://<azure-sql-hostname>.database.windows.net:1433;database=<database-name>;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;Authentication=ActiveDirectoryMSI;
+```
+
+Add the following to your code:
+
+> [!NOTE]
+> Spring Cloud Azure doesn't support configuring multiple clients of the same service, so the following code creates multiple beans for this situation.
+
+```java
+@Configuration
+public class AzureStorageConfiguration {
+
+ @Bean("secondBlobServiceClient")
+ public BlobServiceClient secondBlobServiceClient(BlobServiceClientBuilder builder) {
+ return builder.endpoint("https://<receipt-storage-account>.blob.core.windows.net").buildClient();
+ }
+
+ @Bean("firstBlobServiceClient")
+ public BlobServiceClient firstBlobServiceClient(BlobServiceClientBuilder builder) {
+ return builder.buildClient();
+ }
+}
+```
+
+```java
+@Service
+public class ExampleService {
+
+ @Autowired
+ @Qualifier("firstBlobServiceClient")
+ private BlobServiceClient blobServiceClient;
+
+ @Autowired
+ @Qualifier("secondBlobServiceClient")
+ private BlobServiceClient blobServiceClient2;
+
+ @Autowired
+ private CosmosClient cosmosClient;
+
+ @Autowired
+ private JdbcTemplate jdbcTemplate;
+
+}
+```
+++ You can also associate both a user-assigned managed identity and a system-assigned managed identity with a resource simultaneously. This can be useful in scenarios where all of the apps require access to the same shared services, but one of the apps also has a specific dependency on an additional service. Using a system-assigned identity also ensures that the identity tied to that specific app is deleted when the app is deleted, which can help keep your environment clean. These types of scenarios are explored in more depth in the [identities best practice recommendations](/azure/active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations).
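
As a supplementary, hedged Python sketch of the two-identity configuration shown in the C#, Java, and Spring examples above (the `azure-identity`, `azure-storage-blob`, and `azure-cosmos` packages are assumed, and the account names and environment variable names are placeholders mirroring those examples):

```python
import os

from azure.cosmos import CosmosClient
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# First user-assigned managed identity: shared access to the storage accounts.
storage_credential = DefaultAzureCredential(
    managed_identity_client_id=os.environ.get("Managed_Identity_Client_ID_Storage")
)

receipt_blob_client = BlobServiceClient(
    account_url="https://<receipt-storage-account>.blob.core.windows.net",
    credential=storage_credential,
)
contract_blob_client = BlobServiceClient(
    account_url="https://<contract-storage-account>.blob.core.windows.net",
    credential=storage_credential,
)

# Second user-assigned managed identity: shared access to the databases.
database_credential = DefaultAzureCredential(
    managed_identity_client_id=os.environ.get("Managed_Identity_Client_ID_Databases")
)

cosmos_client = CosmosClient(
    url="https://<cosmos-db-account>.documents.azure.com:443/",
    credential=database_credential,
)
```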
These types of scenarios are explored in more depth in the [identities best prac
In this tutorial, you learned how to migrate an application to passwordless connections. You can read the following resources to explore the concepts discussed in this article in more depth: -- For more information on authorizing access with managed identity, visit [Authorize access to blob data with managed identities for Azure resources](/azure/storage/blobs/authorize-managed-identity).--[Authorize with Azure roles](/azure/storage/blobs/authorize-access-azure-active-directory)-- To learn more about .NET Core, see [Get started with .NET in 10 minutes](https://dotnet.microsoft.com/learn/dotnet/hello-world-tutorial/intro).-- To learn more about authorizing from a web application, visit [Authorize from a native or web application](/azure/storage/common/storage-auth-aad-app)
+* For more information on authorizing access with managed identity, visit [Authorize access to blob data with managed identities for Azure resources](/azure/storage/blobs/authorize-managed-identity).
+* [Authorize with Azure roles](/azure/storage/blobs/authorize-access-azure-active-directory)
+* To learn more about .NET Core, see [Get started with .NET in 10 minutes](https://dotnet.microsoft.com/learn/dotnet/hello-world-tutorial/intro).
+* To learn more about authorizing from a web application, visit [Authorize from a native or web application](/azure/storage/common/storage-auth-aad-app).
virtual-machines Disk Encryption Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-linux.md
You can add a new data disk using [az vm disk attach](add-disk.md), or [through
### Enable encryption on a newly added disk with Azure CLI
- If the VM was previously encrypted with "All" then the --volume-type parameter should remain "All". All includes both OS and data disks. If the VM was previously encrypted with a volume type of "OS", then the --volume-type parameter should be changed to "All" so that both the OS and the new data disk will be included. If the VM was encrypted with only the volume type of "Data", then it can remain "Data" as demonstrated below. Adding and attaching a new data disk to a VM is not sufficient preparation for encryption. The newly attached disk must also be formatted and properly mounted within the VM prior to enabling encryption. On Linux the disk must be mounted in /etc/fstab with a [persistent block device name](../troubleshooting/troubleshoot-device-names-problems.md).
+ If the VM was previously encrypted with "All" then the --volume-type parameter should remain "All". All includes both OS and data disks. If the VM was previously encrypted with a volume type of "OS", then the --volume-type parameter should be changed to "All" so that both the OS and the new data disk will be included. If the VM was encrypted with only the volume type of "Data", then it can remain "Data" as demonstrated below. Adding and attaching a new data disk to a VM is not sufficient preparation for encryption. The newly attached disk must also be formatted and properly mounted within the VM prior to enabling encryption. On Linux the disk must be mounted in /etc/fstab with a [persistent block device name](/azure-docs-test-baseline-pr/virtual-machines/linux/troubleshoot-device-names-problems).
In contrast to PowerShell syntax, the CLI does not require the user to provide a unique sequence version when enabling encryption. The CLI automatically generates and uses its own unique sequence version value.
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md
properties: {
- **The stagingResourceGroup property is specified with a resource group that exists** If the `stagingResourceGroup` property is specified with a resource group that does exist, then the Image Builder service will check to make sure the resource group isn't associated with another image template, is empty (no resources inside), in the same region as the image template, and has either "Contributor" or "Owner" RBAC applied to the identity assigned to the Azure Image Builder image template resource. If any of the aforementioned requirements aren't met, an error will be thrown. The staging resource group will have the following tags added to it: `usedBy`, `imageTemplateName`, `imageTemplateResourceGroupName`. Pre-existing tags aren't deleted.
+
+> [!IMPORTANT]
+> When you specify a pre-existing resource group and VNet to the Azure Image Builder service with a Windows source image, you'll need to assign the Contributor role on the resource group to the service principal corresponding to Azure Image Builder's first-party app. For the CLI command and portal instructions on how to assign the Contributor role on the resource group, see [Troubleshoot VM Azure Image Builder: Authorization error creating disk](https://learn.microsoft.com/azure/virtual-machines/linux/image-builder-troubleshoot#authorization-error-creating-disk).
- **The stagingResourceGroup property is specified with a resource group that doesn't exist**
virtual-machines Np Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/np-series.md
VM Generation Support: Generation 1<br>
**Q:** How to request quota for NP VMs?
-**A:** Please follow this page [Increase limits by VM series](../azure-portal/supportability/per-vm-quota-requests.md). NP VMs are available in East US, West US2, West Europe, SouthEast Asia, and SouthCentral US.
+**A:** See [Increase VM-family vCPU quotas](../azure-portal/supportability/per-vm-quota-requests.md). NP VMs are available in East US, West US2, West Europe, SouthEast Asia, and SouthCentral US.
**Q:** What version of Vitis should I use?
-**A:** Xilinx recommends [Vitis 2021.1](https://www.xilinx.com/products/design-tools/vitis/vitis-platform.html), you can also use the Development VM marketplace options (Vitis 2021.1 Development VM for Ubuntu 18.04, Ubuntu 20.04, and CentOS 7.8)
+**A:** Xilinx recommends [Vitis 2022.1](https://www.xilinx.com/products/design-tools/vitis/vitis-platform.html). You can also use the Development VM marketplace options (Vitis 2022.1 Development VM for Ubuntu 18.04, Ubuntu 20.04, and CentOS 7.8).
**Q:** Do I need to use NP VMs to develop my solution?
-**A:** No, you can develop on-premises and deploy to the cloud. Please make sure to follow the [attestation documentation](./field-programmable-gate-arrays-attestation.md) to deploy on NP VMs.
+**A:** No, you can develop on-premises and deploy to the cloud. Make sure to follow the [attestation documentation](./field-programmable-gate-arrays-attestation.md) to deploy on NP VMs.
+
+**Q:** What shell version is supported and how can I get the development files?
+
+**A:** The FPGAs in Azure NP VMs support Xilinx Shell 2.1 (gen3x16-xdma-shell_2.1). See the Xilinx page [Xilinx/Azure with Alveo U250](https://www.xilinx.com/microsoft-azure.html) to get the development shell files.
**Q:** Which file returned from attestation should I use when programming my FPGA in an NP VM?
-**A:** Attestation returns two xclbins, **design.bit.xclbin** and **design.azure.xclbin**. Please use **design.azure.xclbin**.
+**A:** Attestation returns two xclbins, **design.bit.xclbin** and **design.azure.xclbin**. Use **design.azure.xclbin**.
**Q:** Where should I get all the XRT / Platform files?
-**A:** Please visit Xilinx's [Microsoft-Azure](https://www.xilinx.com/microsoft-azure.html) site for all files.
+**A:** Visit Xilinx's [Microsoft-Azure](https://www.xilinx.com/microsoft-azure.html) site for all files.
**Q:** What Version of XRT should I use?
-**A:** xrt_202110.2.11.680
+**A:** xrt_202210.2.13.479
**Q:** What is the target deployment platform?
VM Generation Support: Generation 1<br>
**A:** Xilinx and Microsoft have validated Ubuntu 18.04 LTS, Ubuntu 20.04 LTS, and CentOS 7.8.
- Xilinx has created the following marketplace images to simplify the deployment of these VMs.
-
-Xilinx Alveo U250 2021.1 Deployment VM ΓÇô Ubuntu18.04
-
-Xilinx Alveo U250 2021.1 Deployment VM ΓÇô Ubuntu20.04
-
-Xilinx Alveo U250 2021.1 Deployment VM ΓÇô CentOS7.8
+>Xilinx has created the following marketplace images to simplify the deployment of these VMs:
+>
+>- Xilinx Alveo U250 2022.1 Deployment VM [Ubuntu18.04](https://portal.azure.com/#create/xilinx.vitis2022_1_ubuntu1804_development_imagevitis2022_1_ubuntu1804)
+>
+>- Xilinx Alveo U250 2022.1 Deployment VM [Ubuntu20.04](https://portal.azure.com/#create/xilinx.vitis2022_1_ubuntu2004_development_imagevitis2022_1_ubuntu2004)
+>
+>- Xilinx Alveo U250 2022.1 Deployment VM [CentOS7.8](https://portal.azure.com/#create/xilinx.vitis2022_1_centos78_development_imagevitis2022_1_centos78)
**Q:** Can I deploy my Own Ubuntu / CentOS VMs and install XRT / Deployment Target Platform?
Xilinx Alveo U250 2021.1 Deployment VM – CentOS7.8
**Q:** If I deploy my own Ubuntu18.04 VM then what are the required packages and steps?
-**A:** Use Kernel 4.15 per [Xilinx XRT documentation](https://www.xilinx.com/support/documentation/sw_manuals/xilinx2021_1/ug1451-xrt-release-notes.pdf)
-
-Install the following packages.
-- xrt_202110.2.11.680_18.04-amd64-xrt.deb
-
-- xrt_202110.2.11.680_18.04-amd64-azure.deb
-
-- xilinx-u250-gen3x16-xdma-platform-2.1-3_all_18.04.deb.tar.gz
-
-- xilinx-u250-gen3x16-xdma-validate_2.1-3005608.1_all.deb -
-**Q:** On Ubuntu, after rebooting my VM I can't find my FPGA(s):
+**A:** Follow the guidance in the [Xilinx XRT documentation](https://docs.xilinx.com/r/en-US/ug1451-xrt-release-notes/XRT-Operating-System-Support).
-**A:** Please verify that your kernel hasn't been upgraded (uname -a). If so, please downgrade to kernel 4.1X.
+>Install the following packages.
+>- xrt_202210.2.13.479_18.04-amd64-xrt.deb
+>
+>- xrt_202210.2.13.479_18.04-amd64-azure.deb
+>
+>- xilinx-u250-gen3x16-xdma-platform-2.1-3_all_18.04.deb.tar.gz
+>
+>- xilinx-u250-gen3x16-xdma-validate_2.1-3005608.1_all.deb
**Q:** If I deploy my own Ubuntu20.04 VM then what are the required packages and steps?
-**A:** Use Kernel 5.4 per [Xilinx XRT documentation](https://www.xilinx.com/support/documentation/sw_manuals/xilinx2021_1/ug1451-xrt-release-notes.pdf)
-
-Install the following packages.
-- xrt_202110.2.11.680_20.04-amd64-xrt.deb
+**A:** Follow the guidance in the [Xilinx XRT documentation](https://docs.xilinx.com/r/en-US/ug1451-xrt-release-notes/XRT-Operating-System-Support).
-- xrt_202110.2.11.680_20.04-amd64-azure.deb
-
-- xilinx-u250-gen3x16-xdma-platform-2.1-3_all_18.04.deb.tar.gz
-
-- xilinx-u250-gen3x16-xdma-validate_2.1-3005608.1_all.deb -
+>Install the following packages.
+>- xrt_202210.2.13.479_20.04-amd64-xrt.deb
+>
+>- xrt_202210.2.13.479_20.04-amd64-azure.deb
+>
+>- xilinx-u250-gen3x16-xdma-platform-2.1-3_all_18.04.deb.tar.gz
+>
+>- xilinx-u250-gen3x16-xdma-validate_2.1-3005608.1_all.deb
**Q:** If I deploy my own CentOS7.8 VM then what are the required packages and steps?
-**A:** Use Kernel version: 3.10.0-1160.15.2.el7.x86_64
+**A:** Follow the guidance in the [Xilinx XRT documentation](https://docs.xilinx.com/r/en-US/ug1451-xrt-release-notes/XRT-Operating-System-Support).
- Install the following packages.
-
-
-
-
+ >Install the following packages.
+ >- xrt_202210.2.13.479_7.8.2003-x86_64-xrt.rpm
+ >
+ >- xrt_202210.2.13.479_7.8.2003-x86_64-azure.rpm
+ >
+ >- xilinx-u250-gen3x16-xdma-platform-2.1-3.noarch.rpm.tar.gz
+ >
+ >- xilinx-u250-gen3x16-xdma-validate-2.1-3005608.1.noarch.rpm
-**Q:** What are the differences between OnPrem and NP VMs?
+**Q:** What are the differences between on-premises FPGAs and NP VMs?
**A:** <br>
Install the following packages.
<br> On Azure NP VMs, only the role endpoint (Device ID 5005), which uses the XOCL driver, is present.
-OnPrem FPGA, both the management endpoint (Device ID 5004) and role endpoint (Device ID 5005), which use the XCLMGMT and XOCL drivers respectively, are present.
+On on-premises FPGAs, both the management endpoint (Device ID 5004) and role endpoint (Device ID 5005), which use the XCLMGMT and XOCL drivers respectively, are present.
<br> <b>- Regarding XRT: </b> <br>
-On Azure NP VMs, the XDMA 2.1 platform only supports Host_Mem(SB) and DDR data retention features.
+On Azure NP VMs, the XDMA 2.1 platform only supports Host_Mem(SB).
<br>
-To enable Host_Mem(SB) (up to 1 Gb RAM): sudo xbutil host_mem --enable --size 1g
+To enable Host_Mem(SB) (up to 1-GB RAM): sudo xbutil host_mem --enable --size 1g
<br> To disable Host_Mem(SB): sudo xbutil host_mem --disable <br>
To disable Host_Mem(SB): sudo xbutil host_mem --disable
<br> Starting on XRT2021.1:
-OnPrem FPGA in Linux exposes
+An on-premises FPGA in Linux exposes
[M2M data transfer](https://xilinx.github.io/XRT/master/html/m2m.html). <br>
-This feature is not supported in Azure NP VMs.
+This feature isn't supported in Azure NP VMs.
**Q:** Can I run xbmgmt commands?
virtual-machines Automation Configure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-devops.md
description: Configure your Azure DevOps Services for the SAP Deployment Automat
Previously updated : 08/30/2022 Last updated : 09/25/2022
You can use the following script to do a basic installation of Azure Devops Serv
Log in to Azure Cloud Shell ```bash export ADO_ORGANIZATION=https://dev.azure.com/<yourorganization>
- export ADO_PROJECT=SAP-Deployment-Automation
+ export ADO_PROJECT='SAP Deployment Automation'
wget https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/scripts/create_devops_artifacts.sh -O devops.sh chmod +x ./devops.sh ./devops.sh
Validate that the project has been created by navigating to the Azure DevOps por
You can finalize the Azure DevOps configuration by running the following scripts on your local workstation. Open a PowerShell Console and define the environment variables. Replace the bracketed values with the actual values. > [!IMPORTANT]
-> Run the following steps on your local workstation, also make sure that you have logged on to Azure using az login first.
+> Run the following steps on your local workstation. Make sure that you have signed in to Azure using `az login` first, and ensure that you have the latest Azure CLI installed by running the `az upgrade` command.
```powershell $Env:ADO_ORGANIZATION="https://dev.azure.com/<yourorganization>"
- $Env:ADO_PROJECT="<yourProject>"
- $Env:YourPrefix="<yourPrefix>"
+ $Env:ADO_PROJECT="SAP Deployment Automation"
$Env:ControlPlaneSubscriptionID="<YourControlPlaneSubscriptionID>" $Env:ControlPlaneSubscriptionName="<YourControlPlaneSubscriptionName>"+ $Env:DevSubscriptionID="<YourDevSubscriptionID>" $Env:DevSubscriptionName="<YourDevSubscriptionName>" ``` > [!NOTE]
-> The ControlPlaneSubscriptionID and DevSubscriptionID can use the same subscriptionID.
+> The ControlPlaneSubscriptionID and DevSubscriptionID can use the same subscriptionID.
+>
+> You can use the environment variable $Env:SDAF_APP_NAME for an existing application registration, $Env:SDAF_MGMT_SPN_NAME for an existing service principal for the control plane, and $Env:SDAF_DEV_SPN_NAME for an existing service principal for the workload zone. For the names, use the display names of the existing resources.
+ Once the variables are defined, run the following script to create the service principals and the application registration.
Once the variables are defined run the following script to create the service pr
Invoke-WebRequest -Uri https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/scripts/update_devops_credentials.ps1 -OutFile .\configureDevOps.ps1 ; .\configureDevOps.ps1 ```
+> [!NOTE]
+> In PowerShell, navigate to a folder where you have write permissions before running the `Invoke-WebRequest` command.
### Create a sample Control Plane configuration
Create the SAP configuration and software installation pipeline by choosing _New
Save the pipeline. To see the Save option, select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'SAP configuration and software installation' by choosing 'Rename/Move' from the three-dot menu on the right.
-## Configuration Web App pipeline
-
-Create the Configuration Web App pipeline by choosing _New Pipeline_ from the Pipelines section, select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipelines YAML File. Specify the pipeline with the following settings:
-
-| Setting | Value |
-| - | -- |
-| Branch | main |
-| Path | `deploy/pipelines/21-deploy-web-app.yaml` |
-| Name | Configuration Web App |
-
-Save the Pipeline, to see the Save option select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'Configuration Web App' by choosing 'Rename/Move' from the three-dot menu on the right.
- ## Deployment removal pipeline Create the deployment removal pipeline by choosing _New Pipeline_ from the Pipelines section, select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipelines YAML File. Specify the pipeline with the following settings:
Create the deployment removal ARM pipeline by choosing _New Pipeline_ from the P
| Path | `deploy/pipelines/11-remover-arm-fallback.yaml` | | Name | Deployment removal using ARM |
-Save the Pipeline, to see the Save option select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'Deployment removal using ARM' by choosing 'Rename/Move' from the three-dot menu on the right.
+Save the pipeline. To see the Save option, select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'Deployment removal using ARM processor' by choosing 'Rename/Move' from the three-dot menu on the right.
> [!NOTE] > Only use this pipeline as last resort, removing just the resource groups will leave remnants that may complicate re-deployments.
The pipelines use a custom task to perform cleanup activities post deployment. T
## Variable definitions
-The deployment pipelines are configured to use a set of predefined parameter values. In Azure DevOps the variables are defined using variable groups.
+The deployment pipelines are configured to use a set of predefined parameter values defined using variable groups.
### Common variables
Create a new variable group 'SDAF-MGMT' for the control plane environment using
| FENCING_SPN_TENANT | 'Service principal tenant ID' for the fencing agent. | Required for highly available deployments using a service principal for fencing agent. | | PAT | `<Personal Access Token>` | Use the Personal Token defined in the previous step | | POOL | `<Agent Pool name>` | The Agent pool to use for this environment |
+| | | |
| APP_REGISTRATION_APP_ID | 'App registration application ID' | Required if deploying the web app | | WEB_APP_CLIENT_SECRET | 'App registration password' | Required if deploying the web app |
+| | | |
+| SDAF_GENERAL_GROUP_ID | The group ID for the SDAF-General group | The ID can be retrieved from the URL parameter 'variableGroupId' when accessing the variable group using a browser. For example: 'variableGroupId=8'. |
+| WORKLOADZONE_PIPELINE_ID | The ID for the 'SAP workload zone deployment' pipeline | The ID can be retrieved from the URL parameter 'definitionId' from the pipeline page in Azure DevOps. For example: 'definitionId=31'. |
+| SYSTEM_PIPELINE_ID | The ID for the 'SAP system deployment (infrastructure)' pipeline | The ID can be retrieved from the URL parameter 'definitionId' from the pipeline page in Azure DevOps. For example: 'definitionId=32'. |
Save the variables.
Save the variables.
> > When using the web app, ensure that the Build Service has at least Contribute permissions. >
-> You can use the clone functionality to create the next environment variable group.
+> You can use the clone functionality to create the next environment variable group. APP_REGISTRATION_APP_ID, WEB_APP_CLIENT_SECRET, SDAF_GENERAL_GROUP_ID, WORKLOADZONE_PIPELINE_ID and SYSTEM_PIPELINE_ID are only needed for the SDAF-MGMT group.
+ ## Create a service connection